Comment Re:Already happened before (Score 1) 171
No superpower is useless for picking up chicks.
If you were really serious about security, you'd be using sha256 at least.
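For the sake of illustration, here's what that looks like with Python's standard `hashlib` (the inputs and variable names are mine; the point is just the digest-size difference between the broken-for-security SHA-1 and SHA-256):

```python
import hashlib

# SHA-1 (160-bit) is considered broken for security purposes;
# SHA-256 (256-bit) is the usual minimum today.
weak = hashlib.sha1(b"hello").hexdigest()     # 40 hex chars
strong = hashlib.sha256(b"hello").hexdigest() # 64 hex chars

print(weak)
print(strong)
```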
While the data within an encrypted volume should be indistinguishable from randomness, the metadata headers are quite distinguishable. It's pretty obvious that something is a LUKS volume; what you can't tell is anything about the data inside it.
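Concretely, the LUKS on-disk format begins with a fixed 6-byte magic, so "is this a LUKS volume?" is a one-line check. A minimal sketch (the function and device path are mine, for illustration):

```python
# The LUKS header starts with the magic bytes b"LUKS\xba\xbe";
# everything past the header should look like uniform random bytes.
LUKS_MAGIC = b"LUKS\xba\xbe"

def looks_like_luks(header: bytes) -> bool:
    """Return True if the given leading bytes carry the LUKS magic."""
    return header.startswith(LUKS_MAGIC)
```

Against a real disk you'd read the first sector of, say, `/dev/sdb1` and pass it in; the payload itself reveals nothing, but the header gives the game away immediately.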
...is that most users of low-end CPUs won't notice the difference going from 2 threads to 4, or turning on extra cache. They'll just notice their Windows 7 system getting slow, as Windows systems are wont to do, and then pay $50 only to find out that it's still just as slow, because it did nothing for their memory-starved, I/O-bound, single-threaded workload.
Pi is conjectured (though not yet proven) to be normal, meaning all binary strings of a given length would occur with equal asymptotic frequency, which would make it an excellent source of fair pseudorandom bits. And there are plenty of applications for which 2 quadrillion pseudorandom bits is grossly insufficient.
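You can poke at that conjecture empirically. Here's a sketch using Gibbons' unbounded spigot algorithm (pure Python, decimal digits rather than bits) to tally digit frequencies:

```python
from collections import Counter

def pi_digits(n):
    """First n decimal digits of pi via Gibbons' unbounded spigot algorithm."""
    q, r, t, k, d, l = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < d * t:
            # Next digit is settled; emit it and rescale.
            out.append(d)
            q, r, d = 10 * q, 10 * (r - d * t), (10 * (3 * q + r)) // t - 10 * d
        else:
            # Not enough precision yet; consume another term of the series.
            q, r, t, k, d, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
    return out

freq = Counter(pi_digits(500))
print(freq)  # each decimal digit should appear roughly 50 times out of 500
```

Over short runs the counts are only roughly equal, as you'd expect from any finite sample; the conjecture is about the asymptotic frequencies.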
It would get you nowhere near that. A substantial fraction of any mainframe architecture's instruction set is emulated in software. The actual MIPS ratings are way below the MHz ratings, whereas on most superscalar architectures, MIPS exceeds MHz.
Once you've paid that penalty as well as the qemu penalty, you're getting down to somewhere in the Doom/Quake I range, with no hardware acceleration.
While T-Mobile's towers may be capable of 21 Mbps HSPA+, the G2 itself can only do 14.4 Mbps, according to the fine print on T-Mobile's teaser site. Of course, you'll get nowhere near this in real life, but if you have a 7.2 Mbps HSPA device, and you're expecting it to be 3x as fast as whatever you get in real life on that, you'll be disappointed to only get 2x that, at best.
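The arithmetic, as a trivial sanity check (numbers from the comment; the variable names are mine, all in Mbps):

```python
tower_hspa_plus = 21.0  # what T-Mobile's HSPA+ towers advertise
g2_modem = 14.4         # what the G2's modem actually supports, per the fine print
older_device = 7.2      # a typical 7.2 Mbps HSPA handset

print(tower_hspa_plus / older_device)  # what the marketing implies
print(g2_modem / older_device)         # the actual ceiling relative to the old device
```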
Given that the Z architecture doesn't even have PCI, that would be a no.
There's a first time for everything. When I was at Red Hat, a customer (maybe you?) experienced a SAN-wide outage due to an error, caused by a rare hardware failure mode, that the vendor's engineers told me in private they had never seen before. It was one of the more reputable SAN vendors, and they worked with us on a kernel patch to recover from that error more intelligently. There's now a patch in the Linux kernel to gracefully recover from an error that has only been seen once outside of a hardware lab.
I've also talked to plenty of engineers and support people who had simply never heard of a particular problem before, because their companies lacked sufficiently well-organized support and bug tracking systems, and couldn't hold on to their experienced employees long enough to have someone around who knew what was going on the next time the problem came up.
In the world of enterprise computing, the law of large numbers is working against you. Some vendors understand this, and treat each novel failure as an opportunity to harden the product further. You usually pay a premium for this, but it's worth it. Others just swap the bad board and update their resumes. It sounds like NG went with the lowest bidder.
Given that Blizzard monitors local weather in places where they have data centers, to be aware of potential power supply and cooling issues before the alarms go off, I'm going to take a shot in the dark and guess their SANs use redundant controllers.
http://www.crunchgear.com/2009/09/18/blizzard-reveals-some-technical-data-about-world-of-warcraft/
According to TV, NCIS can hack into your motherboard and reprogram the hard drive to act as a GPS receiver, in under two hours.
I'm not kidding. For a filesystem that's only going to hold a handful of very large data files, transported by sneakernet, there's not much benefit to journalling, directory-structure optimizations, POSIX permissions, etc. You just want something that's marginally more structured than writing data directly to the raw block device, and FAT32 is the lowest common denominator (so long as no single file exceeds its 4 GiB limit).
I'm pretty sure I remember reading a previous
In all seriousness, given that it's being presented at a TeX conference, I highly doubt it's something so fundamental as P vs. NP. Since we're all flailing wildly at possible answers, I'm going to put my money on an average-case polynomial solution to an NP-complete problem. These already exist, but the average case is very fragile and rarely survives reduction to another NP-complete problem. Perhaps he's found one for one of the more popular and useful NP-complete problems.
Who's running the pool?
Always draw your curves, then plot your reading.