So "p" is the probability of a drive being down at any given time. Say a failed drive takes a day to replace, and a drive has a 5% chance of dying in a given year. Then p ≈ 0.05 / 365 ≈ 1.4e-4.
For RAID6 with 8 drives, you can lose any 2 drives; failure means 3 or more down at once. The leading term is C(8,3) × p^3 ≈ 56 × 2.6e-12 ≈ 1.4e-10. That's out past nine nines.
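The RAID6 number above can be checked with a few lines of Python; this is just the naive independent-failure model from the text, summing the full binomial tail rather than only the leading term (the k=3 term dominates anyway):

```python
from math import comb

p = 0.05 / 365  # ~1.4e-4: 5%/year failure rate, 1-day replacement window

# RAID6 with 8 drives survives any 2 simultaneous failures,
# so the array is lost only when 3 or more drives are down at once.
raid6_fail = sum(comb(8, k) * p**k * (1 - p) ** (8 - k) for k in range(3, 9))

print(f"p = {p:.2e}")            # ~1.4e-04
print(f"RAID6 = {raid6_fail:.2e}")  # ~1.4e-10
```

The full tail sum and the single C(8,3)p³ term agree to within a fraction of a percent at these probabilities, which is why the back-of-envelope version in the text holds up.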
It would take 6 sets of mirrors to get the same usable space. Each mirror pair has a failure probability of p^2 ≈ 1.9e-8. Striped over the mirrors, every set has to stay up: success = (1 - p^2)^6, so failure = 1 - (1 - p^2)^6 ≈ 1.1e-7. Way easier to calculate without a binomial coefficient, by the way.
Technically, the mirrors are about 3 orders of magnitude more likely to fail, but the odds are still ridiculously good. Fill a 4U with 22 drives as 11 mirror pairs (leaving some bays free for hot-swapping) and failure ≈ 2e-7. Statistically, neither of these is going to happen: you just won't see two drives go down together by random chance.
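The striped-mirror math generalizes to any number of pairs; a quick sketch of the same naive model, parameterized on pair count:

```python
p = 0.05 / 365  # ~1.4e-4, same per-drive down probability as above


def striped_mirrors_fail(pairs: int, p: float) -> float:
    """Array is lost if ANY mirror pair loses both of its drives at once."""
    return 1 - (1 - p**2) ** pairs


print(striped_mirrors_fail(6, p))   # ~1.1e-7 (6 pairs, matches RAID6 capacity)
print(striped_mirrors_fail(11, p))  # ~2.1e-7 (22-drive 4U chassis)
```

Since p² is tiny, 1 - (1 - p²)^n ≈ n·p², so the failure probability grows roughly linearly with the number of pairs.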
People already know this. There are much more advanced models that account for what happens after you've already lost a single drive, and of course it gets non-linearly worse. But to keep it simple, going back to the naive model: for the RAID6 with 7 remaining drives, the failure probability during the resilver window is up to C(7,2) × p^2 ≈ 4e-7. The degraded mirror sits at a comparatively "huge" failure = p = 1.4e-4 during its resilver, but that resilver is brief, predictable, and low-impact on the system. My stance is that probabilities at that level stay in the less-important category compared to many other factors in a risk analysis.
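The degraded-state comparison can be sketched the same way; again this is the naive independent-failure model, not one of the more advanced rebuild-aware models mentioned above:

```python
from math import comb

p = 0.05 / 365  # ~1.4e-4 per-drive down probability

# Degraded RAID6: 7 drives remain and one more failure is survivable,
# so data loss during the resilver needs 2 of the remaining 7 down at once.
raid6_degraded = comb(7, 2) * p**2 * (1 - p) ** 5

# Degraded mirror: the lone surviving drive of the pair must hold on,
# so the loss probability is just p.
mirror_degraded = p

print(f"RAID6 degraded:  {raid6_degraded:.1e}")   # ~3.9e-7
print(f"mirror degraded: {mirror_degraded:.1e}")  # ~1.4e-4
```

This makes the trade-off in the text concrete: the degraded mirror is roughly 350x more exposed than the degraded RAID6, but only for the duration of a short, single-disk resilver.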