It's not the only solution of its type, but it is, imo, the best:
It is perfect for your kind of situation: long-term, reliable, efficient storage of lots of data that seldom changes. Think of it as offline RAID backup - it works like RAID, but it computes parity during your backup runs, "offline".
The beauty of it, imo, is that it is not file system dependent. It works with NTFS, EXT2, HFS, whatever. It works on Linux, Windows, Macs, whatever. You don't need special controllers, and your hard drives do not have to be matched to each other. You can even mix drives on different buses (some on USB, some on SATA, whatever).
It doesn't mess with your data at all - your files are stored normally and can be accessed normally, so there is no difference between using it and not using it in day-to-day operation, and no performance impact at all. It only does anything during backup runs, and even then it is very lightweight if your data doesn't change drastically from day to day. You just schedule it to run on a regular basis and it does its thing. It detects and recovers from bit rot in much the same way as ZFS, although you need double parity or more for real protection against multiple drive failures - with two parity drives, for example, you can lose any two drives and still rebuild. You can be as paranoid as you want; it just takes more storage to be more paranoid.
It isn't good for frequently changing data, and it isn't so great for huge numbers of small files either. The initial parity generation takes a long time if you have lots of data. You have to be comfortable with command-line usage and you have to have some way to schedule jobs. Those issues aside, for things like media libraries and archival storage, it is easily the least painful, most effective solution I have ever used. And it's free to boot (and open source).
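FWIW, here's roughly what the scheduling side looks like in practice - just a couple of cron entries, one nightly parity update and one weekly verify pass. The command names, flags, and paths below are placeholders (I'm not quoting the tool's actual syntax), so treat this as a sketch of the shape of the setup rather than something to copy-paste:

```
# /etc/cron.d/parity-backup  (illustrative only - command names and flags are
# placeholders, not the tool's real syntax; check its docs)

# Nightly at 03:00: update parity to cover whatever changed during the day.
0 3 * * *  root  /usr/local/bin/paritytool sync  --conf /etc/paritytool.conf >> /var/log/paritytool.log 2>&1

# Weekly on Sunday at 05:00: re-read part of the array and check it against
# parity, which is what catches bit rot before it spreads.
0 5 * * 0  root  /usr/local/bin/paritytool scrub --conf /etc/paritytool.conf >> /var/log/paritytool.log 2>&1
```

The nightly/weekly split is just my preference: the sync stays cheap because it never has much new data to chew through, and the slower verify pass runs when nothing else is touching the drives.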