Comment Re:Endurance (Score 1) 71

Every time someone gives numbers like these, I look at my laptop's uptime and disk write counters and see what they say. Apparently I've written an average of about 13GB/day since my last reboot. This machine has a 256GB SSD, so if the write endurance scales linearly with the size as your numbers imply (assumes near-perfect wear levelling), this would give it a 24TB limit. I'd reach that limit in just over 5 years, which is a bit longer than the typical time that I use a laptop as my primary machine. It's probably adequate - I feel very nervous using hard disks that are over that age. I'd feel a lot happier with something a bit further away from the 5-year mark though...
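For concreteness, here's the back-of-the-envelope arithmetic behind that estimate as a small Python sketch. The 24TB total-write figure is just the parent's endurance numbers scaled linearly to a 256GB drive (and assumes near-perfect wear levelling), not a manufacturer spec:

    # Rough SSD lifetime estimate, assuming write endurance scales linearly
    # with capacity (i.e. near-perfect wear levelling). The 24TB figure is
    # the parent's numbers scaled to a 256GB drive, not a spec sheet.
    write_limit_gb = 24 * 1000      # assumed total write endurance, in GB
    daily_writes_gb = 13            # observed average writes per day, in GB

    days = write_limit_gb / daily_writes_gb
    print(f"{days:.0f} days, or about {days / 365:.1f} years")
    # -> roughly 1846 days, a little over 5 years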

Comment Re:Incredible (Score 3, Interesting) 297

Oracle is expensive, but if it were really overpriced then you'd see lots of cheaper alternatives. For a lot of workloads, something like PostgreSQL will get the job done for a fraction of the price. When you really need something at the high end, however, Oracle or a small handful of other companies will charge you similar amounts. The real problem for a company like Oracle is the same as the problem for SGI. In the '90s, a database with a few GBs of data was something you needed Oracle (or similar) and a lot of hardware for. Now, a cheap commodity machine can keep the whole thing in RAM for read-only queries and can write to an SSD (or a few in RAID-1) for a few thousand dollars, including the time it takes someone to set it up. The number of companies whose data is of a size where only something like an Oracle DB will do is increasingly small: at the very high end, you have companies like Google and Facebook that can't use any off-the-shelf solution, and at the other end you have companies that can get away with cheap commodity hardware and an open source RDBMS.

This is why companies like IBM and Oracle are focussing heavily on business applications and vertical integration. They may be expensive, but there's a whole class of medium-sized enterprises for whom it's a lot cheaper to periodically give a huge pile of money to Oracle than it is to have a large in-house IT staff.

Comment Re:impossible (Score 1) 297

Companies have no incentive to invest in infrastructure if most of the benefits will be reaped by other companies. If one company owns an entire campus, town, or island, then they are generally good at improving the infrastructure. If such an area is owned by a diverse set of companies and individuals, then good infrastructure is rarely an emergent phenomenon, unless some organisation is responsible for collecting money to pay for it and for providing it. This organisation is traditionally referred to as a government...

Comment Re:Price (Score 1) 172

Even for sequential reads, SSDs can be an improvement. My laptop's SSD can easily handle 200MB/s sequential reads, and you'd need more than one spinning disk to handle that. And a lot of things that seem like sequential reads at a high level turn out not to be. Netflix's streaming boxes, for example, sound like a poster child for sequential reads, but once you factor in the number of clients connected to each one, you end up with a large number of 1MB random reads, which means your IOPS numbers translate directly to throughput.
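A quick sketch of why IOPS ends up determining throughput once the workload degenerates into 1MB random reads (the IOPS figures below are illustrative assumptions, not measurements of any particular drive):

    # With many clients streaming from one box, each request is effectively
    # a random 1MB read, so delivered throughput is just the random IOPS the
    # device sustains at that block size, times the block size.
    CHUNK_MB = 1.0  # per-request read size (illustrative)

    def throughput_mb_s(random_iops: float) -> float:
        """Aggregate throughput for a device sustaining `random_iops` 1MB reads/s."""
        return random_iops * CHUNK_MB

    # Hypothetical IOPS figures, not measurements:
    for label, iops in [("spinning disk", 100), ("SATA SSD", 400)]:
        print(f"{label}: ~{throughput_mb_s(iops):.0f} MB/s")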

Spinning disks are still best where capacity is more important than access times. For example, hosting a lot of VMs where each one is typically accessing a small amount of live data (which can be cached in RAM or SSD) but has several GBs of inactive data.

Comment Re:SAS SSD (Score 2) 172

SAS doesn't really get you anything useful with an SSD. The extra chaining isn't that important, because it's easy to get enough SATA sockets to put one in each drive bay. SAS has no mSATA equivalent for denser storage, and if you really need the extra speed then why not go all the way and get something like a FusionIO card that hangs directly off the PCIe bus?

Comment Re:Summary of your post (Score 2) 372

I compile large projects on a regular basis. We have one machine with 12 cores (24 threads) and 256GB of RAM, so I tried running builds entirely to and from a RAM drive. The speed difference between that and using a mid-range SSD was too small to measure (at anything from -j12 up to -j64), and the difference in performance between an SSD and a RAM drive is significantly greater than the difference between any two SSDs, so the choice of SSD matters even less. In contrast, the difference between using a hard disk and an SSD is easily a factor of 2 in terms of build speed, and often more.

Comment Re:Yes (Score 1) 372

Video editing is typically done in a nondestructive fashion, so you do one big copy to get the initial data on, but after that it's comparatively small transactions. It's been almost 10 years since I did any, but I think the basic approach is still the same. You grab the data from the camera (easier now - back then FireWire was essential because you were getting DV footage from tape with no buffering in the camera, so you needed isochronous transfer; now flash costs about as little as tapes did). DV footage was 10GB/hour, which was a bit painful to edit with 1GB of RAM, but for a modern system with 32+GB of RAM it's nothing. HD footage for consumer editing is about the same data rate. For pro stuff, I believe about 40GB/hour is still common, but even that fits nicely in 64GB of RAM.
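As a rough sanity check on those figures, here's the arithmetic (using the per-hour rates quoted above, not current camera specs):

    # How many hours of footage fit in RAM, using the rough per-hour figures above.
    rates_gb_per_hour = {"DV": 10, "consumer HD": 10, "pro HD": 40}

    for ram_gb in (32, 64):
        for fmt, rate in rates_gb_per_hour.items():
            print(f"{ram_gb}GB RAM: ~{ram_gb / rate:.1f} hours of {fmt}")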

You're then going to be streaming it through some filters (typically on the GPU, but sometimes on the CPU) and writing the results out to cached render files. These are fairly small (order of 100MB or so) files containing short composited sequences. When you play, you're doing a lot of random seeks to get all of these and play them in sequence (or just cache them in RAM - with 64GB that's quite feasible, with 128GB it's easy).

Finally, you'll write out the whole rendered sequence. Your cached pre-renders might be at lower quality than this, so you might not use them for the final step, in which case you do have something like a simple copy with some processing in the middle.

Comment Re:Phone-based ransom-ware? (Score 4, Interesting) 321

Ah, starting with an ad hominem, good job.

No, your plan isn't completely unworkable, but unless you are completely confident in your random number generator (possible, but hard), you have the potential for a really expensive recall when someone works it out. With 10 digits, you have about 33 bits of entropy. That's not a trivial search space, but it may be possible to brute force if it's something you can do over the local network. If you can do 1000 guesses/second, it will probably take about 1-2 months; at 10,000/second, you can do it in a week. Pretty obvious network traffic, though. If, however, your random number generator is a lot less random than you think, then you may end up with only 16 bits of effective entropy in this kind of scheme (random number generator errors in the past have resulted in a lot less than half the expected entropy). In that case, at 1000/second you could probably brute force it in about half a minute, and definitely do it in slightly over a minute.
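For reference, a small sketch of the search-time arithmetic above (worst case tries the whole keyspace; the expected case is half that):

    # Brute-force time for a numeric code at a given guess rate.
    def search_times(keyspace: int, guesses_per_sec: float):
        worst = keyspace / guesses_per_sec
        return worst / 2, worst           # (expected, worst) in seconds

    for rate in (1_000, 10_000):
        expected, worst = search_times(10 ** 10, rate)
        print(f"{rate}/s: ~{expected / 86400:.0f} days expected, {worst / 86400:.0f} days worst case")

    # A broken RNG leaving only ~16 bits of effective entropy:
    expected, worst = search_times(2 ** 16, 1_000)
    print(f"16 bits at 1000/s: ~{expected:.0f}s expected, {worst:.0f}s worst case")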

And that's assuming the only flaw is in the random number generator. A more common error in implementing this kind of system would be a timing error in checking the code. If the time taken to process the key is related to the number of digits that you got right, then you can easily target a phone to disable, even with a strong random number generator.
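To illustrate the kind of timing leak being described - this is a generic Python sketch, not any vendor's actual unlock code - a check that bails out on the first wrong digit leaks how much of the prefix was right, while a constant-time comparison (here via hmac.compare_digest) does not:

    import hmac

    def leaky_check(candidate: str, secret: str) -> bool:
        # Returns as soon as a digit mismatches, so the time taken grows with
        # the length of the correct prefix - an attacker who can measure that
        # can recover the code one digit at a time.
        if len(candidate) != len(secret):
            return False
        for a, b in zip(candidate, secret):
            if a != b:
                return False
        return True

    def constant_time_check(candidate: str, secret: str) -> bool:
        # hmac.compare_digest examines every character regardless of mismatches,
        # so the comparison time doesn't depend on how much of the code is right.
        return hmac.compare_digest(candidate.encode(), secret.encode())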

Sure, it's possible to do it right. It's just a lot easier to do it wrong. There's only one way of doing it right and there are hundreds of ways of doing it wrong...

Comment Re:Phone-based ransom-ware? (Score 2) 321

If the pin is 10 digits then "they" are wasting their time

Assuming that they are generated by a strong random number generator. Of course, there are no recent examples of random number generators having a lot less entropy than was believed (or required for the application). Well, except for that whole chip-and-pin thing. And the Debian OpenSSL packages. And...

Comment Re:But, But... (Score 1) 282

You seem to forget there's one more phone available, likely at a reduced price

But not in the same market. Its IMEI will be blacklisted, so it won't be useable in the country in which it is stolen and often not anywhere where the manufacturer cares about sales. You could argue that Apple (for example) gets the same benefit from thefts as Microsoft does from piracy in emerging markets: stolen phones get a generation of people accustomed to using Apple devices, priming demand so that when the economy has grown to the level where a significant number of people can afford new iPhones, Apple can just start selling them.

Comment Re: But, But... (Score 1) 282

I'm not sure about the USA, but when you steal a phone in most of Europe its IMEI is blacklisted and it can no longer be used on any of the networks. Thieves get around this by exporting the phones (well, the small-time thieves sell to someone else who will export them) to countries with networks that don't participate in this scheme. The phones are then sold for a very small fraction of their retail cost. The people who buy them are not people who would be able to afford a new smartphone.

Comment Re:Oil and nuclear are separate markets (Score 1) 319

That sentence hurt my head, but even granting what I think you mean, it's irrelevant. In the USA, for example, 82% of the population lives in cities and suburbs and so could have most of their transportation needs met by mass transit, with the gaps filled by taxis or ZipCar-like schemes for occasional use.

Comment Re:It's actually surprisingly cheap... (Score 1, Offtopic) 311

I've only spent a few months in the USA, but I don't remember any restaurants I saw offering all-you-can-drink including alcoholic beverages along with a fixed price meal, and yet I recall this being fairly common in Tokyo. Or are you deliberately misreading the grandparent so that you can call him a retard?
