Comment Re:Wow, just wow. (Score 3, Insightful) 406

There's no hypocrisy if your distinction is one of scale. I regard censorship as bad only when it affects an individual's ability to speak freely. There is no problem with a single newspaper refusing to carry something, as long as there are other newspapers willing to run it or some other (relatively easy) mechanism for publication. There is a problem if a government or an industry body says 'no one may run this story'. There's a difference between saying 'you may not post this opinion on my blog' and saying 'you may not post this opinion on any blog'. The latter is dangerous censorship; the former is exercising free speech - the thing that rules about censorship are supposed to protect. It only becomes a problem when everyone with the infrastructure to host blogs says 'you may not post this on a blog that I run', at which point there should be government intervention.

Comment Re:Translation: (Score 1) 111

There are lots of IP companies that no one has a problem with. There are basically two business models for IP companies:
  • File or buy a load of patents and then, the next time someone independently invents something you've patented, ask for royalties and sue them if you don't get them.
  • Design things of value and sell the rights to use the designs to companies that would end up paying more if they developed something in house.

There are a load of companies in the second category that are very profitable and usually respected. It's the ones in the first category that give them all a bad name.

Comment Re:Endurance (Score 1) 71

That's a bit closer to what I was expecting. Last time I did these calculations, I came up with a figure of something like 100 years at my usage pattern. Seeing this drop to 5 years is somewhat alarming, but not too far off the trends we've seen with decreasing numbers of rewrites per cell in modern flash.

Comment Re:It's... OK. (Score 1) 161

As for PCs you can program your decoder in CUDA or OpenCL so "hardware support" is not very important.

Mobile GPUs are also programmable, but without knowing the details of the algorithms involved it's hard to say what kind of speedup you'll get from a GPU. In general, later generations of video codecs require inferring more from larger areas, and so are less amenable to the access patterns that a GPU's memory controller is optimised for. Just doing something on the GPU isn't an automatic speedup, and until we see real implementations it's hard to say exactly how much better it will be.

Comment Re:Endurance (Score 1) 71

Every time someone gives numbers like these, I look at my laptop's uptime and disk write counters and see what they say. Apparently I've written an average of about 13GB/day since my last reboot. This machine has a 256GB SSD, so if the write endurance scales linearly with the size as your numbers imply (assuming near-perfect wear levelling), this would give it a 24TB limit. I'd reach that limit in just over 5 years, which is a bit longer than the typical time that I use a laptop as my primary machine. It's probably adequate - I feel very nervous using hard disks that are over that age. I'd feel a lot happier with something a bit further away from the 5-year mark though...
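The arithmetic above can be checked in a few lines. This is just the comment's own numbers (13GB/day of writes against an assumed 24TB total-write limit for a 256GB drive, with near-perfect wear levelling taken for granted):

```python
# Back-of-the-envelope SSD endurance estimate using the figures from
# the comment above. The 24TB limit is the assumed scaled endurance
# for a 256GB drive, not a measured spec.
GB = 10**9
TB = 10**12

daily_writes = 13 * GB     # observed average writes per day
write_limit = 24 * TB      # assumed total-write endurance

days = write_limit / daily_writes
years = days / 365
print(f"{years:.1f} years")  # just over 5 years
```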

Comment Re:Incredible (Score 3, Interesting) 297

Oracle is expensive, but if it were really overpriced then you'd see lots of cheaper alternatives. For a lot of workloads, something like PostgreSQL will get the job done for a fraction of the price. When you really need something at the high end, however, Oracle and a small handful of other companies will all charge you similar amounts. The real problem for a company like Oracle is the same as the problem for SGI. In the '90s, a database with a few GBs of data was something you needed Oracle (or similar) and a lot of hardware for. Now, a cheap commodity machine can keep the whole thing in RAM for read-only queries and can write to an SSD (or a few in RAID-1) for a few thousand dollars, including the time it takes someone to set it up. The number of companies whose data is at a scale that actually requires an Oracle DB is increasingly small: at the very high end, you have companies like Google and Facebook that can't use any off-the-shelf solution, and at the other you have companies that can get away with cheap commodity hardware and an open source RDBMS.

This is why companies like IBM and Oracle are focussing heavily on business applications and vertical integration. They may be expensive, but there's a whole class of medium-sized enterprises for whom it's a lot cheaper to periodically give a huge pile of money to Oracle than it is to maintain a large in-house IT staff.

Comment Re:impossible (Score 1) 297

Companies have no incentive to invest in infrastructure if most of the benefits will be reaped by other companies. If one company owns an entire campus, town, or island, then they are generally good at improving the infrastructure. If such an area is owned by a diverse set of companies and individuals, then good infrastructure is rarely an emergent phenomenon, unless some organisation is responsible for collecting money to pay for it and for providing it. This organisation is traditionally referred to as a government...

Comment Re:Price (Score 1) 172

Even for sequential reads, SSDs can be an improvement. My laptop's SSD can easily handle 200MB/s sequential reads, and you'd need more than one spinning disk to handle that. And a lot of things that seem like sequential reads at a high level turn out not to be. Netflix's streaming boxes, for example, sound like a poster child for sequential reads, but once you factor in the number of clients connected to each one, you end up with a large number of 1MB random reads, which means your IOPS numbers translate directly to throughput.
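A toy model makes the IOPS-to-throughput point concrete. All numbers here are illustrative assumptions (typical spinning-disk seek times and transfer rates), not Netflix's actual figures:

```python
# Toy model: why interleaved 1MB random reads make IOPS, not sequential
# bandwidth, the limiting factor on a spinning disk. Numbers are
# illustrative assumptions, not measured figures.
MB = 10**6

seek_time = 0.010          # ~10ms average seek + rotational latency
transfer_rate = 150 * MB   # sequential rate once the head is in place
read_size = 1 * MB         # per-client chunk when many streams interleave

time_per_read = seek_time + read_size / transfer_rate  # ~16.7ms
iops = 1 / time_per_read                               # ~60 reads/second
throughput = iops * read_size                          # ~60 MB/s

print(round(throughput / MB), "MB/s")
```

With these assumptions the disk delivers roughly 60MB/s, well under its 150MB/s sequential rate, and throughput scales directly with how many 1MB reads per second the device can serve.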

Spinning disks are still best where capacity is more important than access times. For example, hosting a lot of VMs where each one is typically accessing a small amount of live data (which can be cached in RAM or SSD) but has several GBs of inactive data.

Comment Re:SAS SSD (Score 2) 172

SAS doesn't really get you anything useful with an SSD. The extra chaining isn't that important, because it's easy to get enough SATA sockets to put one in each drive bay. There's no mSATA equivalent for denser storage, and if you really need the extra speed then why not go all the way and get something like FusionIO cards that hang directly off the PCIe bus?

Comment Re:Summary of your post (Score 2) 372

I compile large projects on a regular basis. We have one machine with 12 cores (24 threads) and 256GB of RAM, so I tried running builds entirely to and from a RAM drive. The speed difference between that and using a mid-range SSD was too small to measure (from -j12 up to -j64). The difference in performance between any two SSDs is significantly greater than the difference between an SSD and a RAM drive. In contrast, the difference between using a hard disk and an SSD is easily a factor of 2 in terms of build speed, and often more.

Comment Re:Yes (Score 1) 372

Video editing is typically done in a nondestructive fashion, so you do a big copy to get the initial data on, but after that it's comparatively small transactions. It's been almost 10 years since I did any, but I think the basic approach is still the same. You grab the data from the camera (easier now - back then FireWire was essential because you were getting DV footage from tape with no buffering in the camera, so you needed isochronous transfer; now flash costs about as little as tapes did). DV footage was 10GB/hour, which was a bit painful to edit with 1GB of RAM, but for a modern system with 32+GB of RAM it's nothing. HD footage for consumer editing is about the same data rate. For pro stuff, I believe about 40GB/hour is still common, but even that fits nicely in 64GB of RAM.
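The data-rate arithmetic above is easy to sanity-check: using the rough rates quoted (DV at ~10GB/hour, pro HD at ~40GB/hour), here is how many hours of footage fit in various RAM sizes:

```python
# Hours of footage per RAM size, using the rough data rates quoted in
# the comment above (DV ~10GB/hour, pro HD ~40GB/hour).
GB = 10**9

dv_rate = 10 * GB    # DV / consumer HD
pro_rate = 40 * GB   # pro HD

for ram_gb in (1, 32, 64):
    ram = ram_gb * GB
    print(f"{ram_gb}GB RAM: {ram / dv_rate:.1f}h DV, {ram / pro_rate:.2f}h pro HD")
```

With 1GB of RAM you could hold about 6 minutes of DV (hence the pain), while 64GB comfortably holds over an hour and a half of pro-rate footage.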

You're then going to be streaming it through some filters (typically on the GPU, but sometimes on the CPU) and writing the results out to cached render files. These are fairly small files (on the order of 100MB) containing short composited sequences. When you play, you're doing a lot of random seeks to get all of these and play them in sequence (or you just cache them in RAM - with 64GB that's quite feasible, and with 128GB it's easy).

Finally, you'll write out the whole rendered sequence. Your cached pre-renders might be at lower quality than this, so you might not use them for the final step, in which case you do have something like a simple copy with some processing in the middle.

Comment Re:Phone-based ransom-ware? (Score 4, Interesting) 321

Ah, starting with an ad hominem, good job.

No, your plan isn't completely unworkable, but unless you are completely confident in your random number generator (possible, but hard), you have the potential for a really expensive recall when someone works it out. With 10 digits, you have about 33 bits of entropy. That's not a trivial search space, but it may be possible to brute force if it's something you can do over the local network. If you can do 1,000 guesses/second, it will probably take about 1-2 months; at 10,000/second, you can do it in a week. That's pretty obvious network traffic, though. If, however, your random number generator is a lot less random than you think, then you may end up with only 16 bits of entropy (random number generator flaws in the past have resulted in a lot less than half the expected entropy). In that case, at 1,000/second you could probably brute force it in about half a minute, and definitely do it in slightly over a minute.
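The entropy and search-time figures above work out as follows (the guess rates are the hypothetical ones from the comment, not measurements of any real device):

```python
# The brute-force arithmetic from the comment above. Guess rates are
# hypothetical; on average you hit the right code halfway through the space.
import math

keyspace = 10**10                       # 10 decimal digits
print(f"{math.log2(keyspace):.1f} bits")  # ~33.2 bits of entropy

def expected_time_s(keyspace, guesses_per_second):
    return (keyspace / 2) / guesses_per_second

print(expected_time_s(keyspace, 1_000) / 86400, "days")    # ~58: "1-2 months"
print(expected_time_s(keyspace, 10_000) / 86400, "days")   # ~5.8: "about a week"

weak = 2**16                            # a flawed RNG leaving only 16 bits
print(expected_time_s(weak, 1_000), "s")  # ~33s expected, ~66s worst case
```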

And that's assuming the only flaw is in the random number generator. A more common error in implementing this kind of system would be a timing error in checking the code. If the time taken to process the key is related to the number of digits that you got right, then you can easily target a phone to disable, even with a strong random number generator.
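The timing leak described above is easy to illustrate. This sketch contrasts a naive digit-by-digit check, which returns early and so leaks how many leading digits matched, with the standard constant-time fix (the function names are mine, for illustration):

```python
# Illustration of the timing side channel described above, and the fix.
import hmac

def leaky_check(entered: str, secret: str) -> bool:
    # BAD: bails out at the first wrong digit, so response time grows
    # with the length of the correct prefix - an attacker can recover
    # the code one digit at a time.
    for a, b in zip(entered, secret):
        if a != b:
            return False
    return len(entered) == len(secret)

def constant_time_check(entered: str, secret: str) -> bool:
    # GOOD: hmac.compare_digest examines every byte regardless of
    # where the first mismatch occurs.
    return hmac.compare_digest(entered.encode(), secret.encode())
```

Both return the same answers; the difference is only in how long the wrong answers take, which is exactly the signal a timing attack exploits.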

Sure, it's possible to do it right. It's just a lot easier to do it wrong. There's only one way of doing it right and there are hundreds of ways of doing it wrong...

Comment Re:Phone-based ransom-ware? (Score 2) 321

If the pin is 10 digits then "they" are wasting their time

Assuming that they are generated by a strong random number generator. Of course, there are no recent examples of random number generators having a lot less entropy than was believed (or required for the application). Well, except for that whole chip-and-pin thing. And the Debian OpenSSL packages. And...
