As someone with hundreds of terabytes stored in ZFS, I couldn't agree more. In most cases, if ZFS kicks out a drive because it's convinced the drive is writing bad blocks, I believe it. And in most cases (if it's a Seagate drive) SeaTools backs me up on this: several times a quick SeaTools check has said the drive is fine, but it never fails that a full scan of the drive eventually throws an error.
I've found damaged SAS cables, JBOD enclosures with dodgy bridges, etc. because of ZFS.
With all that said: now that you've gone out and bought a small PC, stuffed four 4 TB drives into it, and set it up as RAID 10 using ZFS, you need to ask the next question. What's more likely: that two drives fail simultaneously, or that my house gets hit with a {flood, lightning, fire, thieves, etc.}?
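For reference, the "RAID 10" layout in ZFS is a pool of striped mirrors. A minimal sketch of setting one up over four drives might look like this (the pool name `tank` and the `/dev/sdX` device names are placeholders for whatever your system enumerates):

```shell
# Create a pool named "tank" as two mirrored pairs striped together --
# the ZFS equivalent of RAID 10. Device names here are placeholders.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Verify the layout, then start a scrub to baseline the drives.
zpool status tank
zpool scrub tank
```

Note that this layout survives one drive failure per mirror pair, but losing both drives in the same pair loses the pool, which is exactly why the "two drives vs. house fire" question matters.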
Honestly, I'd build two of these devices: keep one for local backups, put the other at a buddy's house, and do remote backups from your local device.
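The off-site replication piece is straightforward with ZFS snapshots and send/receive. A rough sketch, where `tank/data`, `backuphost`, the destination pool `backup`, and the snapshot names are all hypothetical:

```shell
# Take a local snapshot and replicate the whole dataset to the
# buddy-house box over SSH (-u keeps the received dataset unmounted).
zfs snapshot tank/data@nightly-2024-01-01
zfs send tank/data@nightly-2024-01-01 | ssh backuphost zfs receive -u backup/data

# Subsequent runs only send the delta between the two snapshots,
# so nightly replication stays cheap.
zfs snapshot tank/data@nightly-2024-01-02
zfs send -i @nightly-2024-01-01 tank/data@nightly-2024-01-02 | \
  ssh backuphost zfs receive -u backup/data
```

Because send/receive preserves snapshots on the remote side, the buddy-house box doubles as history, not just a mirror of the current state.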