Even if the ingredients are listed, the preparation process and relative amounts aren't. There's more to a "recipe" than the list of ingredients!
It's not the computer, either. My old Mini (the Austin kind, not the BMW kind) could be made to move without using the accelerator too. It didn't have a computer. It barely knew what electricity was.
Correction: The CPU cache line size of modern CPUs is 64 bytes. This means that any random RAM access will load 64 bytes (as a single read). The CPU is then capable of extracting 1-, 2-, 4-, 8-, or even 16-byte (SSE2 vector loads/stores) sections of that into registers, as a single operation if the data is aligned to its size, or as a few micro-instructions if not. This is no different between 32-bit and 64-bit CPUs.
For #1, RAM is always read/written in multiples of a cache line, which I believe is 128 bits in most modern CPUs, and is independent of the native data size of the CPU (32 or 64 bits).
I know some about
It's full of errors, especially the spiel about alignment. In 64-bit mode you don't have to align everything to 64 bits for best performance, only 64-bit-sized values (including memory pointers). The example 16-bit value actually only needs 16-bit alignment for best performance, which is no different to the 32-bit version of the program.
2: The increase in the memory use of pointers doesn't explain Windows x64's extra 300MB of memory use. My bet is on it loading both 64-bit and 32-bit versions of a bunch of libraries in order to support various components of Windows that are still 32-bit (as well as any 32-bit software you run).
3: Saying that a 64-bit version of a program won't be faster... Two things actually favour it being faster: 64-bit mode exposes more and larger registers, and also guarantees that certain instruction set enhancements exist (SSE2). The latter especially is a huge speedup if you take advantage of it.
No, boiling is the substance itself becoming gaseous.
What you really want is failure rate within first N years of operation, which you can't calculate from the MTBF figures.
Actually, MTBF is defined as the failure rate in the "constant failure rate" stage of a component's lifetime, which lies between the initial failure stage (aka infant mortality) and the wear-out failure stage. So it's actually the failure rate over the regular lifetime of the component (typically 1-5 years).
MTBF is not the failure rate of a single disk; it's the average failure rate across a population of disks. If you have a type of disk with a 100,000-hour MTBF and use 100 of them (whether in a RAID array, a cluster, or 100 individual desktops in a company), then you will (roughly) replace one disk due to failure every 1,000 hours (100,000-hour MTBF / 100 disks), or about 42 days.
It doesn't try to pretend that a single disk lasts 100,000 hours. That's stupid.
"Buy land. They've stopped making it." -- Mark Twain