But mdadm *does* beat at least some of the enterprise $700-$1500 ones as well. My LSI MegaRAID SAS 9261-8i cost me about $900 (the battery alone was around $300) and it's slower than snot.
I was raking in 800 MB/s sequential with mdadm on an empty 8-disk RAID-50 using a bunch of $30 "cheapy" SATA HBAs, but when I switched the exact same drives to hardware raid, the most I could get was 250 MB/s (seq) on an empty array and 160 MB/s at 85% full. Not to mention the random read I/O of 1 MB/s (yes, one MB per second -- not a typo). This is after spending a few weeks optimizing things: stripe-aligned partitions, block-aligned stripe sizes, and both controller and disk cache enabled, the latter of which I'd prefer to have turned off (even with a battery).
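For the curious, the mdadm side of that setup was nothing exotic. Roughly speaking (device names, chunk size, and filesystem numbers below are placeholders, not my exact config), a RAID-50 is just two RAID-5 legs striped together with RAID-0, plus telling the filesystem about the stripe geometry so writes stay aligned:

    # Two 4-disk RAID-5 legs (sdb..sdi are placeholder device names)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[b-e]
    mdadm --create /dev/md1 --level=5 --raid-devices=4 --chunk=64 /dev/sd[f-i]

    # Stripe the two legs together to get the "50"
    mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=64 /dev/md0 /dev/md1

    # Let ext4 know the stripe geometry: stride = chunk / 4 KiB block,
    # stripe-width = stride * data disks (16 and 96 here assume the 64 KiB
    # chunk and 6 data disks from the commands above)
    mkfs.ext4 -E stride=16,stripe-width=96 /dev/md2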
I certainly won't make that mistake again. Of course, it's partly my fault for buying something without waiting for reviews (several other Newegg buyers found it to be ludicrously slow as well), but I thought that after all these years it was a sure bet that *anyone* could turn out a decent hardware raid card if you gave them over a grand. Apparently not. And I really should have researched the RAID-5 write hole more before blowing $1200 on a supposed fix for the problem, when a much better solution is to just use RAID-6 and a write intent bitmap (or ZFS).
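If you're wondering what that last bit looks like in practice, it's basically a one-liner with mdadm (again, device names are placeholders):

    # 8-disk RAID-6 with an internal write-intent bitmap; the bitmap means a
    # resync after an unclean shutdown only touches recently-dirtied regions
    # instead of rebuilding the whole array
    mdadm --create /dev/md0 --level=6 --raid-devices=8 --bitmap=internal /dev/sd[b-i]

    # Or bolt a bitmap onto an existing array without recreating it
    mdadm --grow /dev/md0 --bitmap=internal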
Of course, I'm not trying to say *all* hardware raid cards are bad. I'm sure that most of them are just fine. But I just don't see any benefit to them anymore. Linux has mdadm, *BSD/Solaris have ZFS. The only reason for hardware raid is if your operating system's software raid implementation is completely brain-damaged. In other words, it's for Windows.