You can forgo having a real UPS on your live servers too, but that doesn't mean it's a good idea.
You can have all your production servers be z10 mainframes too, but that doesn't mean it's a good (or cheap) idea.
The RAID5 write hole: a system crash (or power loss) between the data and parity updates results in loss of redundancy and eventual data corruption.
It's easy to have pairs of RAID1 drives in a RAID0: no RAID5, no RAID5 write hole.
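Something along these lines with mdadm (a rough sketch; the device and partition names are just placeholders):

    # mirror pairs, then stripe across the mirrors
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

Or just let md do it in one step with --level=10 over the four partitions.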
If your boot drive fails in a manner that allows access to the boot sector but blocks access to the kernel image on Drive0, the system will not boot.
Why would /boot not contain the kernel you need to boot? A mirrored /boot is the automatic default setup if you choose SW RAID in, e.g., RHEL.
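Roughly, the installer puts /boot on a small RAID1 and the bootloader goes on both members, so either disk alone can still load the kernel (again, device names are illustrative):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0        # this becomes /boot
    grub-install /dev/sda
    grub-install /dev/sdb     # second copy of the bootloader, same mirrored /boot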
Array health monitoring does not display red lights on failed drives, as it does on an integrated RAID controller.
Bullshit. In fact, it's not too hard to set up software RAID so that when you drop a drive from the array, the red LED above that drive starts flashing until the tech replaces it. Hot swap is also possible.
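One way to do it, assuming an enclosure with an SGPIO/SES-capable backplane and the ledmon package installed (device names are just examples):

    mdadm --manage /dev/md0 --fail /dev/sdc --remove /dev/sdc
    ledctl failure=/dev/sdc    # blink the fault LED on that slot until the drive is swapped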
Integrated RAID devices typically integrate with system monitoring software and can send proper alerts to admins via SNMP and e-mail, in a manner that works with common production-grade monitoring solutions. On a system running mdadm, there is no method of doing so, short of cobbling together an ad-hoc script, which would be error-prone.
Riiight, mdadm etc. doesn't integrate with SNMP.
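For the record, mdadm has monitoring built in; a sketch (the address and the handler script path are placeholders, the script would be where you fire an SNMP trap or hand off to your monitoring system):

    # /etc/mdadm.conf
    MAILADDR admin@example.com
    PROGRAM /usr/local/bin/raid-event-handler

    # or run the monitor daemon directly
    mdadm --monitor --scan --daemonise --mail=admin@example.com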
Of course, the "HW" RAID is much more expensive and operates like a black box, sometimes leaving you totally screwed if the hardware dies (especially the cheaper solutions, which I don't think you were advocating, but which are what people tend to use instead of SW RAID after reading rants like yours).
I'm not saying SW RAID should always be used; sometimes the extra cost really is worth it, just as it is with PostgreSQL vs. Oracle ... but to dismiss it out of hand like you do is insane, IMNSHO.