Agree with 1984, Brave New World, The Road, and many others above, but no one has mentioned Stanislaw Lem. Memoirs Found in a Bathtub is pretty dark. The Futurological Congress has a veneer of psychedelic humor in it, but the underlying sentiment is quite grim. Then there's Solaris, so grim they had to film it twice.
This is certainly true, but only half the issue. Wikipedia is justly distrusted because, at any given moment, an article may have been subtly vandalized, astroturfed, tilted in tone, or just plain wrong. Far more important is the ludicrous idea, central to Wikipedia, that any given editor is as likely to be accurate as any other, regardless of knowledge or experience; that any editor may edit anonymously; and that any system for establishing identity, real-world reputation, and (crucially) expertise (even if it is only expertise in interpreting the citations) is anathema.
This gets you teenagers arguing with math PhDs about math, and zealots and partisans of all stripes arguing with everyone. Expertise is central to the concept of an encyclopedia, and Wikipedia and its community thoroughly reject and repudiate it. This may indeed be well-adapted to some things, but writing a true encyclopedia is not one of them. As someone once said, on Wikipedia, twenty teenage idiots and one expert are indistinguishable from twenty-one teenage idiots.
Wikipedia is a big old pile of trivia, opinion, gossip, libel, and misinformation. That it is sometimes correct is happenstance, not planning.
If you can't measure it, you can't manage it. You haven't taken the first and most essential step in analyzing your problem: measuring it. Is your problem caused by network failure? By power? By software failure? Hardware? If hardware, by server hardware, disks, or something else?
If software, by OS, database, or application software? All of these have different solutions. Going to the cloud won't solve a network failure; it will make things worse. Going to the cloud may improve persistent hardware failures, but the MTBF of most decent hardware is pretty good, so are you sure you have clean power and a good (cool, clean) environment?
If your software or system is crashing, then that's its own problem.
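The "measure it first" advice above can be sketched in a few lines: before picking a fix, tally your downtime by cause and see which class actually dominates. The incident records here are hypothetical placeholders, not real data.

```python
# Hedged sketch: tally (hypothetical) outage incidents by cause, so the
# dominant failure class is known before any remedy is chosen.
from collections import Counter

incidents = [
    {"cause": "power",        "minutes_down": 120},
    {"cause": "network",      "minutes_down": 45},
    {"cause": "software/app", "minutes_down": 30},
    {"cause": "network",      "minutes_down": 15},
]

downtime = Counter()
for inc in incidents:
    downtime[inc["cause"]] += inc["minutes_down"]

# Moving to the cloud helps some of these causes and worsens others, so
# the ranking below is what should drive the decision.
for cause, minutes in downtime.most_common():
    print(f"{cause}: {minutes} min")
```

With these made-up numbers, power dominates, and the right fix (a UPS, say) has nothing to do with the cloud.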
This strategy will protect you to varying degrees against fire, natural disaster, failure of digital media, bankruptcy of online services, bit rot, password loss, and just about everything else, but it's a lot of work.
I make these "yearbooks" once a year, plus a book for significant birthdays and anniversaries, major travel, and other big events. I store on two photo services and an online backup service, and I have local online copies on RAID, a backup on another RAID, and a third RAID at a separate physical location, updated monthly via rsync.
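The monthly offsite step above is just an rsync invocation; here is a hedged sketch that assembles one. The source and destination paths are hypothetical placeholders, and the command defaults to a dry run so nothing is deleted by accident.

```python
# Hedged sketch of a monthly offsite sync; paths are hypothetical.
def build_rsync_cmd(src: str, dest: str, dry_run: bool = True) -> list:
    """Assemble an archive-mode rsync command; rehearse-only by default."""
    cmd = ["rsync", "--archive", "--delete"]
    if dry_run:
        cmd.append("--dry-run")  # drop this flag to actually sync
    return cmd + [src, dest]

print(" ".join(build_rsync_cmd("/mnt/photos/", "offsite:/mnt/raid/photos/")))
```

Running the real thing means passing `dry_run=False` and handing the list to `subprocess.run`; `--delete` keeps the mirror exact, which is what you want when the offsite copy is the third of three.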
Unless you are interested in a pretty small class of problems, the inherent parallelism of most applications continues to be somewhere in the range 2.1 to 2.5 (i.e., you can speed them up by a little over 2x with the addition of more processors). Thus, in most real-world applications, most of those cores, or vector units, or any other "supercomputer" features will go unused.
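The ceiling described above is Amdahl's law: if only a fraction p of the work parallelizes, speedup on n cores is 1 / ((1 - p) + p/n), and no number of cores can beat 1 / (1 - p). The p = 0.6 used here is an illustrative assumption chosen to land in the 2.1x-2.5x range cited above.

```python
# Hedged illustration of Amdahl's law, the reason extra cores stop
# helping. p is the (assumed) parallelizable fraction of the workload.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.6, the ceiling is 1 / 0.4 = 2.5x no matter how many cores.
for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.6, n):.2f}x")
```

Note that even the jump from 4 to 1024 cores buys well under a full extra 1x of speedup under this assumption, which is the point of the comment.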
Anyone here who observes a quad-core chip running any particular load at anywhere close to 4x the speed of a single core should write a paper about it, because that has been the holy grail of parallel computing for going on 40 years now.
That Intel thinks this is a solution is sadly typical -- the problem is a software one, not a hardware one, and they do not know how to solve it.
When speculation has done its worst, two plus two still equals four. -- S. Johnson