Without parity checking, you simply aren't addressing bit rot. Period. It could be Raid 9 Million(tm), but if all it's doing is copying the data and never comparing it, bit rot will still proceed apace, silently eating your data. But let's say you're a good administrator who has enabled parity. Great! There's still a problem, though: parity cannot restore data that has become corrupted due to bit rot -- it is a detection-only mechanism.
This is incorrect for Reed-Solomon based RAID (level 6 and beyond, such as RAIDZ3). RAID6 can correct bit rot on a single disk, and in general, with t parity disks, an RS code can correct floor(t/2) errors at unknown locations. Every RS-based RAID system I've seen stores the codeword across devices using symbols from GF(2^8), meaning that up to an entire byte per stripe could be corrupted by bit rot at a given logical address and still be corrected. All the details are on Wikipedia. That said, not all RAID-6+ implementations actually check the parity when reading, and I have no idea how many can solve the error locator polynomial for each RS code to identify and correct bit rot at multiple unknown locations in different codes, versus just handling known bulk erasures (e.g. failed disks).
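To make the "correction, not just detection" point concrete, here's a minimal sketch of the single-error case in classic RAID-6: P is plain XOR parity, Q is a GF(2^8) weighted sum with generator g = 2 (the field polynomial 0x11d is the one Linux md uses). Given stored P and Q and one silently corrupted data byte, the two syndromes both locate the bad byte and recover its value. This is a toy with one byte per "disk", not anyone's real implementation, and it assumes the corruption hit a data byte rather than P or Q itself:

```python
# GF(2^8) log/antilog tables, polynomial 0x11d, generator 2.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # doubled table avoids a mod in gf_mul

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def pq(data):
    """RAID-6 parity for one stripe: P = XOR, Q = sum of g^i * d_i in GF(2^8)."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(EXP[i], d)
    return p, q

def correct_single(data, p, q):
    """Locate and fix a single corrupted data byte using stored P and Q."""
    p2, q2 = pq(data)
    sp, sq = p ^ p2, q ^ q2        # syndromes
    if sp == 0 and sq == 0:
        return list(data)          # stripe is clean
    # One bad byte e at index z gives sp = e and sq = g^z * e,
    # so z = log(sq) - log(sp) and the fix is data[z] ^= sp.
    z = (LOG[sq] - LOG[sp]) % 255
    fixed = list(data)
    fixed[z] ^= sp
    return fixed

stripe = [0x12, 0x34, 0x56, 0x78]  # one byte per data "disk"
p, q = pq(stripe)
rotted = stripe[:]
rotted[2] ^= 0x40                  # silent bit flip on disk 2
assert correct_single(rotted, p, q) == stripe
```

With only P you'd know *something* was wrong (sp != 0) but not where; Q is what turns detection into location plus correction.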
Now that I've explained all the ways that you're wrong, let me say that bit rot is probably not the cause of the OP's problems. In fact, USB devices are well known for corrupting filesystems through spontaneous disconnects, power loss events, etc., and this is simply what can be expected in a typical residential environment. Even a RAID configuration in a residential environment isn't invulnerable to the "write hole" problem -- where data is partially committed to disk and then the array suffers a power loss event.
Any proper file system will have a large enough transaction/intent log that it can be replayed to repair partial data/metadata writes from power failure, the RAID write hole, and the like. Most file systems in use are not proper, of course, but at least a few are available.
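The replay idea can be sketched in a few lines. This is a toy write-ahead journal, not how any particular filesystem lays out its log: the intent record becomes durable before the in-place write, so a crash in the window between the two (the write-hole scenario) leaves a logged-but-uncheckpointed entry that replay simply redoes at mount:

```python
# Toy write-ahead intent log. Invariant: the journal entry is durable
# before the in-place write, so a torn transaction can always be redone.
journal = []   # the durable log
disk = {}      # the "array": block number -> payload

def write_block(block_no, payload, crash_before_apply=False):
    entry = {"block": block_no, "data": payload, "done": False}
    journal.append(entry)            # 1. intent hits the log first
    if crash_before_apply:
        return                       # simulated power loss mid-transaction
    disk[block_no] = payload         # 2. the actual in-place write
    entry["done"] = True             # 3. checkpoint: entry may be reclaimed

def replay():
    """What a journaling filesystem does at mount after an unclean shutdown."""
    for entry in journal:
        if not entry["done"]:        # torn transaction: redo it idempotently
            disk[entry["block"]] = entry["data"]
            entry["done"] = True

write_block(7, b"metadata v2", crash_before_apply=True)
assert 7 not in disk                 # the write hole: logged but never applied
replay()
assert disk[7] == b"metadata v2"     # log replay closed the hole
```

The key property is idempotence: replaying a transaction that did complete is harmless, so the filesystem never has to know exactly where the crash landed.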