I have seen both Dell RAID-5 and Sun RAID-6 arrays fail with 3+ simultaneous disk failures each. Google ran a petabyte sort benchmark in 2008 (6 hours to sort 10 trillion 100-byte records) and was not at all surprised to have at least one hard drive failure on every attempt (4+ drive failures per day at that run length). I have seen enterprise tape systems fail to read their data (hopefully there was redundancy, but I don't know). I have seen backup systems hit major performance glitches and fail to restore within their needed time frame. Facebook, for example, has only a few seconds to recover from a failed server before customers get angry, and has built systems to handle it because that's what it takes to provide a good service. The major players who are succeeding and profiting at giving away free services to hundreds of millions assume that all data storage will fail regularly, and plan accordingly.
A little primer for those of us who haven't kept up with new storage technologies since the '90s.
Google deals with enough data that they cannot consider any of your technologies reliable enough. Five years ago they were already processing 20 PB of data every single day with MapReduce, and if you have to buy enough systems, even the best RAID-6 SAN systems will break regularly. Statistically, a small chance of failure, repeated often enough, becomes a virtual guarantee of failure. Google generally doesn't bother with expensive technologies like SANs and RAID, or even with enterprise drives (spinning disks, anyway -- they probably use enterprise PCIe flash). You can make what you want of the enterprise drive decision, but I'm pretty sure I've read from at least a couple of sources that enterprise drives are just as prone to failure as regular drives. The major differences are warranty and firmware (e.g. supporting RAID-friendly reads). Numerous sources have substantiated that the manufacturers' MTBF numbers are pure marketing fiction. Enterprise drives probably do boast a lower error rate, but I have not seen a comparison, only reports that the published figures are off by several orders of magnitude.
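To put a number on that near-certainty, here is a back-of-the-envelope sketch of the arithmetic. The 3% annual per-drive failure rate is purely my illustrative assumption, not a measured figure:

```python
# Chance of at least one drive failure in a year across a fleet,
# assuming independent failures at an illustrative 3% annual rate.

def p_any_failure(per_drive_rate, drives):
    """P(at least one of `drives` fails) = 1 - P(none fail)."""
    return 1.0 - (1.0 - per_drive_rate) ** drives

for n in (1, 10, 100, 10_000):
    print(f"{n:>6} drives: {p_any_failure(0.03, n):.4f}")
# At 100 drives the yearly chance is already ~95%;
# at 10,000 drives it is effectively 1.0.
```

That's the whole argument in one line: buy enough of anything and "rare" stops meaning much.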
What Google does instead is avoid redundancy within the machine and take the "redundant array" to a whole new level: a Redundant Array of Inexpensive Servers. Multiple copies of the data are written to different servers in different cabinets, and a checksum is stored with each data block. Every time the data is read, the checksum is verified. This way a single read tells you whether you have bitrot, and a single good read of another copy corrects it; you no longer have to keep comparing 3 copies of the data to correct bitrot. The Hadoop project copied this with its HDFS, and many other large-scale technologies have followed suit.
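If you want to see the idea in miniature, here is a toy sketch of that write/verify/repair cycle. Real systems like GFS/HDFS place the replicas on different machines and racks; here each "server" is just an in-memory dict, and everything else is my simplification:

```python
# Toy sketch of per-block checksumming with replicas, in the spirit
# of GFS/HDFS. Each "server" is an in-memory dict for illustration.
import hashlib

REPLICAS = 3

def write_block(servers, key, data):
    checksum = hashlib.sha256(data).hexdigest()
    for server in servers[:REPLICAS]:            # one copy per "server"
        server[key] = (data, checksum)

def read_block(servers, key):
    good = None
    for server in servers:
        data, checksum = server.get(key, (None, None))
        if data is not None and hashlib.sha256(data).hexdigest() == checksum:
            good = data                          # one good read is enough
            break
    if good is None:
        raise IOError(f"all replicas of {key!r} are corrupt or missing")
    # Re-write any replica whose checksum no longer verifies.
    for server in servers:
        if key in server:
            data, checksum = server[key]
            if hashlib.sha256(data).hexdigest() != checksum:
                server[key] = (good, hashlib.sha256(good).hexdigest())
    return good

servers = [{}, {}, {}]
write_block(servers, "blk-0", b"some data")
_, checksum = servers[1]["blk-0"]
servers[1]["blk-0"] = (b"some dbta", checksum)      # simulate bitrot
assert read_block(servers, "blk-0") == b"some data" # detected and repaired
assert servers[1]["blk-0"][0] == b"some data"
```

The point of storing the checksum next to the block is that verification costs one read, and repair costs one more; no quorum comparison needed.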
At a desktop level, ZFS, Btrfs and (I think) Windows Storage Spaces do something similar, combining RAID technology (0/1/5/6, maybe 1E) with checksums inside the file system. If a drive fails, or even if a checksum simply doesn't verify, the file system can have redundancy to rebuild from automatically, giving you a better data guarantee than any RAID card I have seen. If the journaling is done correctly, it shouldn't be susceptible to losing data from a power loss either, and home battery backups aren't too expensive anyway.

The OP was asking specifically about bitrot. A lot of UREs (uncorrectable read errors) get labeled and treated as bitrot, but it sounds like data he had previously verified is now corrupt, i.e. actual rot (not that the cause matters much once the blocks are corrupt). Bitrot happens more frequently when you don't have the stringent environmental controls in your home that a data center has, and I have personally seen it with only tens of GB of my data.
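Incidentally, that kind of "previously verified, now corrupt" detection doesn't need special hardware; a manifest of file hashes does it at the file level. A minimal sketch (the manifest filename and hash choice are mine, not anything the OP described):

```python
# Minimal file-level rot detection: record SHA-256 hashes once,
# then re-verify later. Manifest name and layout are illustrative.
import hashlib, json, pathlib, sys

MANIFEST = "checksums.json"

def hash_file(path):
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(root):
    manifest = {str(p): hash_file(p)
                for p in pathlib.Path(root).rglob("*") if p.is_file()}
    pathlib.Path(MANIFEST).write_text(json.dumps(manifest, indent=2))

def verify(root):
    manifest = json.loads(pathlib.Path(MANIFEST).read_text())
    for name, expected in manifest.items():
        p = pathlib.Path(name)
        if not p.exists():
            print(f"MISSING {name}")
        elif hash_file(p) != expected:
            print(f"CORRUPT {name}")    # rot, a URE, or a modification

if __name__ == "__main__":
    {"record": record, "verify": verify}[sys.argv[1]](sys.argv[2])
```

Run `record` once after you've verified the data, then `verify` on a schedule; ZFS effectively does the same thing per block, continuously, with the redundancy to fix what it finds.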
In my experience, data that is backed up and archived isn't a prime target for user error or gross negligence regarding backups; the user here is definitely experiencing some sort of URE. In this case, a proper file system is quite important for protecting the data. I would recommend setting up a multi-drive NAS using ZFS (there is probably a distro built for this) and a second set of drives to archive data to, where you cannot delete the contents. Then you need to scrub the data regularly to correct errors, and you might want to check for SMART errors (scan errors and reallocation counts) as well. You should be able to get this information emailed to you, but it probably requires a little scripting, along the lines of the sketch below. Storage Spaces on Windows might be an alternative, but I know little about it; if nothing else, scripting everything will be a pain on Windows unless Microsoft has built it in as a feature. If you want to protect against a disaster in your house, you may be able to synchronize the data to another ZFS array in another location, preferably with an automatic solution over the Internet.
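The "little scripting" might look something like this. The pool name `tank`, the device paths, and the local mail setup are all assumptions on my part; `zpool` and `smartctl` (from smartmontools) are the real tools:

```python
# Rough sketch of a scheduled scrub + SMART report for a ZFS NAS.
# Pool name, device list, and local MTA are assumptions; adjust them.
import subprocess, smtplib
from email.message import EmailMessage

POOL = "tank"                                       # hypothetical pool
DEVICES = ["/dev/ada0", "/dev/ada1", "/dev/ada2"]   # hypothetical disks
MAIL_TO = "you@example.com"

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Kick off a scrub; it runs in the background, so the status gathered
# below reflects the previous scrub -- schedule this script accordingly.
subprocess.run(["zpool", "scrub", POOL])

report = [run(["zpool", "status", "-v", POOL])]     # checksum/repair counts
for dev in DEVICES:
    # -H: overall health, -A: attributes (reallocated sectors, etc.)
    report.append(run(["smartctl", "-H", "-A", dev]))

msg = EmailMessage()
msg["Subject"] = f"NAS report for pool {POOL}"
msg["From"] = MAIL_TO
msg["To"] = MAIL_TO
msg.set_content("\n".join(report))
with smtplib.SMTP("localhost") as s:                # assumes a local MTA
    s.send_message(msg)
```

Cron it weekly or so; the important part is that the scrub actually touches every block, so latent errors get found while the redundancy to fix them still exists.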
An online solution would handle all of this for you, but I do not expect it to be cheap. With some document-retrieval trade-offs (retrievals take hours), Amazon Glacier would probably be the cheapest at $20/month. If you don't mind a maximum photo size of 2048x2048 (800x800 without Google+) and a maximum video length of 15 minutes, then photos and videos within those limits don't count towards your storage quota on Google Picasa.