parent++ (I'm not saying much more than the parent post has already said.)
We've more or less hit the limits of useful gains from increasing pipeline depth (and thus clock frequency) or increasing Instruction Level Parallelism (which gives you superscalar/multiple dispatch per clock cycle). The silicon required to do the bookkeeping starts costing more than you gain by simply rolling back to a simpler core and having more of them - which is precisely what has happened. As of about 2007, clock rates were generally down from their peak, with increased throughput coming from the addition of multiple cores.
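(A toy sketch of why ILP runs out, if it helps - this is illustrative C I made up for the comment, not anyone's benchmark. The first loop is a serial dependency chain, so no amount of dispatch width helps; the second has independent iterations a superscalar core can overlap.)

    /* Illustration: dependency chains defeat wide dispatch;
     * independent operations don't. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* Serial chain: each add needs the previous sum, so
         * iteration i can't start until i-1 retires. */
        for (int i = 0; i < N; i++)
            sum += a[i];

        /* Independent: c[i] doesn't depend on c[i-1], so a
         * multiple-issue core can dispatch several per cycle. */
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];

        printf("%f %f\n", sum, c[0]);
        return 0;
    }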
Multiple cores - full cores with FP and everything! - are useful for Task Level Parallelism, which can be difficult to achieve on a single job but is a very nice fit for many server loads (like web serving) where individual threads have very little interaction. Desktops will no doubt inherit many-core (8+) CPUs from the server world, but I'd guess that we'll actually see desktop CPUs shrinking - requiring less power and following the laptop power curves. There may even be a more pronounced separation between the "power desktop user" who uses their CPU for intensive graphic rendering (i.e. a graphics workstation or gamer machine) and everyone else (who ends up with a mere 4- or 8-core machine that requires little or no active cooling).
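(Minimal sketch of what I mean by TLP, assuming POSIX threads - handle_request is a stand-in I invented, not a real server: each "request" runs on its own thread with no shared state, which is exactly why these loads spread across cores so naturally.)

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 8

    /* Each worker is independent: no locks, no interaction with peers. */
    static void *handle_request(void *arg) {
        long id = (long)arg;
        /* ... parse, look up, render a response ... */
        printf("request %ld served\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&workers[i], NULL, handle_request, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }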
Servers will continue to pack more and more cores with more and more memory. The bandwidth bottleneck is RAM, not disk, as was claimed in one comment (any serious server setup uses a variety of strategies to serve most content from RAM and only touches disk for persistence or for the long tail of rarely requested content). This also means they'll have more NICs, and there will be pressure to push network speeds up to keep the CPU and RAM busy.
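(The serve-from-RAM pattern in miniature, again just a sketch with made-up names - the fixed-size table and "store.log" file are assumptions for the example. Reads never touch disk; writes go to RAM plus an append-only log purely for persistence.)

    #include <stdio.h>
    #include <string.h>

    #define SLOTS 1024

    static char keys[SLOTS][64];
    static char vals[SLOTS][256];
    static int used = 0;

    /* Write path: update RAM, then append to a log so the data
     * survives a crash. Disk is only ever written, never read. */
    static void put(const char *k, const char *v) {
        if (used >= SLOTS) return; /* table full; real systems evict */
        snprintf(keys[used], sizeof keys[used], "%s", k);
        snprintf(vals[used], sizeof vals[used], "%s", v);
        used++;
        FILE *log = fopen("store.log", "a");
        if (log) { fprintf(log, "%s=%s\n", k, v); fclose(log); }
    }

    /* Read path: pure RAM lookup. */
    static const char *get(const char *k) {
        for (int i = used - 1; i >= 0; i--)
            if (strcmp(keys[i], k) == 0) return vals[i];
        return NULL;
    }

    int main(void) {
        put("/index.html", "<html>hello</html>");
        const char *page = get("/index.html");
        printf("%s\n", page ? page : "(miss)");
        return 0;
    }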
The reference book on this sort of thing (and apologies for anything I got wrong) is "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson. Very readable and amazingly comprehensive.
Let's be clear here: Google "will be able to moderate the content of its book scans". There is not yet any indication that Google *will* do anything bad or evil with its moderation powers. And you'd have to be mad to think that any non-government entity could go live with a service that didn't allow itself some editorial control.
Let's say you have these rights and publish everything you can get your hands on - and you don't reserve any editorial rights. Eventually you publish the back editions of Playboy - bam! - your site load goes up tenfold and your servers start folding at the knees. What do you do? Well, you can't take them down! Isn't that censorship? Or if the US government comes to you and tells you to take something down? Or you publish "The Old Man's Guide to Pedophilia - Now With Street Addresses" - can't take that down! Or the "Bumper List of Presidents of the World and Movie Stars' Phone Numbers". Or "Tax Statements of the U.S.A. 2008". In some kind of ideal world it may be that all these things should remain uncensored, but that isn't the world we live in.
In the real world you have to have control of the service you are running for all sorts of horrible technical and political reasons. You would have to be hopelessly naive to believe otherwise.
By all means complain after it turns out that Google is being evil. Complain about the basic idea (scanning in-copyright but out-of-print books). But complaining about "censorship" without any evidence of poor editorial behaviour? For fuck's sake.
"In matters of principle, stand like a rock; in matters of taste, swim with the current." -- Thomas Jefferson