There you go again. Acting like you know what you're talking about, but you don't. ZFS and BTRFS have ...
Exactly dick to do with what I said. The filesystem doesn't matter. The operating system doesn't even matter.
Modern drives don't store the bits that you feed them exactly as you give them. Instead, they use CRC and error correcting codes, so they
... Which again counts for exactly dick. I'm talking about infrastructure and architecture, while you're blubbering on about the hardware.
Which, I guess, is better than getting a corrupted picture. Ideally, a RAID would be able to recreate the missing block, but I can't find any reference to a RAID doing that.
That's because you have no experience as a network administrator in a professional environment. If you did, you'd know that's the very thing RAID was designed to do: recover from hardware failure, which includes sectors becoming unreadable. You are clearly confused both about which level of abstraction is being discussed (architecture versus hardware) and about the different failure modes each of these solutions presents. Bit rot is a physical process that occurs in all magnetic media, and at a sufficiently small scale it can also affect non-persistent storage such as RAM.
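To make the point concrete, here's a minimal sketch of how single-parity recovery works, assuming a simple RAID-5-style stripe where parity is the bytewise XOR of the data blocks (the block contents and stripe width are made up for illustration):

# Sketch of single-parity (RAID-5-style) block reconstruction.
# Assumption: a stripe of N data blocks plus one parity block,
# parity computed as the bytewise XOR of the data blocks.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks and their parity (normally spread across drives).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# The drive holding block 1 reports an unreadable sector ("bit rot").
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]

# The controller rebuilds the missing block from the survivors plus parity.
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost_index]
print("recovered:", rebuilt)

Any one missing block in the stripe can be regenerated this way, which is exactly the "recreate the missing block" behavior being asked about.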
It surely doesn't help that modern computers have many gigabytes of memory, but almost none have ECC on that memory.
That's because ECC adds an extra layer of complexity to solve a problem that doesn't occur very often in computers, and when it does, the most severe consequence is usually that the computer crashes or behaves abnormally. For residential, and even most commercial uses, ECC memory just isn't needed. But for a select few use cases where data integrity is absolutely critical -- say, nuclear power plants, air traffic control systems, certain types of hospital equipment, or financial processing systems -- the added cost is justified because those systems need high availability and high reliability. It's also used in certain aerospace applications, because the physical mechanism that causes bit rot -- high-energy radiation -- increases quite a bit at higher altitudes and by several orders of magnitude in space; put something in geostationary orbit and it takes the full brunt of solar radiation with no mitigation. Correcting for memory problems in these situations is better done at the hardware level; hence ECC memory.
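If it's not clear how hardware can "correct" a flipped bit, here's a toy sketch of single-bit error correction using a Hamming(7,4) code -- the same basic idea ECC DIMMs use, though real modules run a wider SECDED code over 64 data bits at a time, and this 4-bit version is only illustrative:

# Toy Hamming(7,4) encoder/corrector: 4 data bits, 3 parity bits.
def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):                     # c = received 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based position of the bad bit
    if pos:
        c[pos - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # a stray high-energy particle flips one bit
assert correct(word) == [1, 0, 1, 1]

The memory controller does this transparently on every access, which is why a single flipped bit on an ECC system is a logged event rather than a crash or silent corruption.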
Your consumer-grade computer's memory is a piece of shit. It's made with commodity capacitors and ICs that are stamped out in bulk for super cheap. And, big surprise -- super cheap doesn't mean super reliable. But we don't need super reliability -- when our system shows obvious signs of a failing memory stick, we just drive to the store, plunk down a $20 and abscond with a new one. Problem solved.
I'm not optimistic about the long-term storage of electronic data.
That's because, as previously pointed out, your experience comes from consumer-grade hardware whose design considerations you don't fully understand. NASA has had great success in the long-term storage of magnetic media -- in fact, there was an article not long ago about how they had to reverse-engineer equipment designed during the 1960s for the Apollo program to recover data from tape reels, because they no longer had the original equipment it was recorded on. They described how the tapes themselves had become brittle and the iron oxide would actually peel off in chunks during reading, much like paint peeling off a house, but they were able to recover the data anyway. The technology we have today is far more sophisticated and, unlike old tape technology, doesn't require physical contact with the source media to read it. There are companies like OnTrack that specialize in data recovery from hard drives and boast a remarkable success rate... albeit a very expensive proposition for the home user.
So while your experiences with your personal home equipment may have led you to not be optimistic, my professional experience with industry-grade equipment suggests that, if you follow best practices regarding data storage and disaster recovery, you can ensure reliability far beyond what the OP requires for a reasonable cost.
The single biggest cause of data loss is user error, followed by not having a backup, or not validating the backup, prior to the disaster. Not bit rot. Not even weedy hardware, which comes in a distant third place.
I have yet, in my professional career, to encounter a data loss event that wasn't due either to human error or to gross negligence regarding data backups. I am aware of a few near-misses, however, due to early implementations of RAID5 -- typically the drives are all purchased at the same time, are of the same make and model, and thus tend to degrade at about the same rate. While RAID5 promises recovery from a single drive failure, designers at the time did not consider that the times between failures would not be randomly distributed over a drive's life expectancy -- multiple drives in a RAID5 often fail within days or weeks of each other. Hot-spare drives can mitigate this problem, but in a large array the rebuild time can exceed the time before the second drive fails. This is why most IT professionals now recommend RAID6: multiple simultaneous drive failures happen much more often than originally predicted. But this is not to say my optimism for long-term storage has been in any way affected; that is simply industry experience and the inevitable gap between theory and practice -- between knowledge and experience. You have knowledge, but I have experience. My experience has given me the optimism your knowledge has been unable to give you.
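For a sense of scale, here's a back-of-envelope sketch of the rebuild-window risk. Every number below is an assumption for illustration, not a vendor spec, and it deliberately uses an idealized independent-failure model -- real same-batch, same-age drives are correlated, so the true risk is higher than this floor:

import math

# Illustrative assumptions only.
drives_remaining = 7        # 8-drive array, one drive already failed
mtbf_hours = 100_000        # assumed per-drive MTBF under rebuild stress
rebuild_hours = 48          # assumed rebuild window for a large array

# Naive exponential model with independent failures.
p_one_survives = math.exp(-rebuild_hours / mtbf_hours)
p_second_failure = 1 - p_one_survives ** drives_remaining

print(f"P(second failure during rebuild) >= {p_second_failure:.2%}")

Even this optimistic model gives a nonzero chance on every rebuild, and once you account for correlated wear the odds get ugly fast -- which is exactly why dual-parity RAID6 became the standard recommendation.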