Indeed, nspluginwrapper is the only good way to run Flash at any word size. Why would you want to run Flash in-process, regardless of 32/64 compatibility? nspluginwrapper makes the sun shine and the birds sing and the grass grow. It's awesome.
Actually, there were some last-minute commits that screwed up those of us with Intel "Ironlake" graphics. Users of the ThinkPad X201 unfortunately need to pass kernel parameters to the live CD, and must patch their kernels to make the installed system work.
That is a recipe for tragedy. The operating system itself upgrades perfectly well, but the GConf schemas are subtly incompatible and the GNOME people couldn't care less about solving this problem. If you're going from Hardy to Lucid, I highly recommend a nuke-and-pave install, then copying your homedir from a backup without any of the dotfiles.
My laptop had been upgraded through every Ubuntu release since Hardy and showed a great deal of mysterious behavior; all of it disappeared when I reinstalled and got rid of my dotfiles.
I never read Tom's anymore, but maybe I'll start. I appreciate that they tracked down the cause of a performance regression between Hardy and Lucid. The only other site that routinely benchmarks Linux distributions is Phoronix, and those guys are prone to throwing inexplicable, unrepeatable results out there with no explanation, in ever-growing numbers. This benchmark from Tom's is much more useful.
In fact it's *exactly* the same API used for its competitor, Amazon S3. If you use boto to access AWS, you'll be right at home using boto to access Google Storage.
Any app on the BlackBerry requires user intervention before it's allowed to fetch URLs, open raw sockets, read email, dial the phone, get your location, manipulate the address book, or do any other damned thing. And 90% of the APIs require the developer to be vetted through the app signing process. It actually seems much less vulnerable to trojans and spyware than a PC.
Double true. Last time I looked at btrfs it was also thousands of times slower than ext4 (no joke). It's not ready for public consumption.
The answer is "yes". Transfer switches often fail and are rarely tested. This is also true of other power equipment. If it's rarely used, the probability of it working in an emergency is somewhat low.
In this case, however, the transfer switch worked fine but had been misconfigured by Amazon technicians. According to their status email from yesterday (posted in their AWS status RSS feed), the outage occurred because one transfer switch had not been loaded with the same configuration as the rest of the transfer switches in the datacenter. The "failed" switch performed as configured and powered down.
When you buy a ThinkPad new in the box, it comes with a little bag of replacement pointer tips in various styles. I prefer the original dot texture, but I do hate that it collects filth easily.
Alternate theory: you're a total fucking idiot.
Not just quieter, but using just enough acceleration to get the head there in time for the data to come around also reduces power consumption and -- back on topic -- minimizes contributions to environmental vibration.
One of the things you should take away from all these papers is that "enterprise" disks have hardware and software that compensates for this type of thing, while "consumer" disks don't. If you fill a rack full of Seagate Savvio 15K disks and another rack full of Seagate Barracuda XT 2TB disks, you'll find that the latter suffers mightily from neighbor vibration while the former handles crowding much better.
Wrong. Vibration impacts seek times because the head has to settle over the track. If the track pitch is wider, the head has a bigger target and can settle sooner, but the capacity is lower. If the tracks are narrower and closer together, the head takes longer to settle, but the capacity is higher. In general, disks of a given diameter with fewer tracks will be less affected by environmental vibration.
Adobe Reader has supported byte-range HTTP practically forever. It would have downloaded just the first page and then fetched the rest on demand.
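A byte-range fetch is nothing exotic; it's one extra request header. A minimal sketch (the URL is hypothetical, and any range-aware server will do):

```python
import urllib.request

# Ask for only the first 64 KiB of a (hypothetical) PDF.
req = urllib.request.Request(
    "http://example.com/manual.pdf",
    headers={"Range": "bytes=0-65535"},
)

# A range-aware server answers "206 Partial Content" with just that slice,
# which is how a viewer can render page one before the whole file arrives.
# resp = urllib.request.urlopen(req)  # not executed here; needs a live server
```

The viewer then issues further Range requests as you page through the document, instead of pulling the entire file up front.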
Adding features does not necessarily increase functionality -- it just makes the manuals thicker.