Re:Reflections
Way late to answer, but you'll probably be notified.
Typical consumer drives are intended for relatively low-heat, low-vibration environments. The firmware is typically optimized for desktop access patterns and will automatically slow or stop the motor to save power. The drive assembly itself is quite a bit different -- lower-quality bearings, less isolation on the heads (protection from vibration). Datacenters are hot, noisy, and vibrate badly. Consumer drives fail in that environment at a much higher rate.
Enterprise firmware is typically tuned very differently -- for different access patterns, power usage, and so on.
The same model consumer drive, over different revisions, may have different capacities. In a RAID-1 config, if the replacement drive, or the drive you buy to create the mirror, is a few hundred sectors smaller, there's no joy and no mirror. If I remember correctly, some consumer-targeted RAID controllers actually reserve a bit of the disk and don't present it, to try to protect against that particular problem. I ran into that a lot in the past, not as much recently, but it still happens. Hell, back in the mid-90s I had that happen with enterprise SCSI drives that weren't vetted through a vendor that pushed it -- same model of Barracuda from a random cheap-ass vendor (Dirt Cheap Drives, if I remember correctly), different capacities. Ruined my bloody weekend.
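If you want to avoid my ruined weekend, a thirty-second sanity check before you build the mirror goes a long way. Here's a rough sketch, assuming Linux block devices you can open read-only as root -- the device paths are made up, and none of this is from the original post:

```python
#!/usr/bin/env python3
# Rough sketch: compare the sizes of two block devices before mirroring them.
# Device paths below are hypothetical examples, not real hardware.
import os

def device_size_bytes(path):
    # On Linux, seeking to the end of a block device returns its size in bytes.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

existing = "/dev/sdb"      # hypothetical: the drive already holding data
replacement = "/dev/sdc"   # hypothetical: the new "same model" drive

a, b = device_size_bytes(existing), device_size_bytes(replacement)
print(f"{existing}: {a} bytes, {replacement}: {b} bytes")
if b < a:
    print("Replacement is smaller -- the mirror won't build at full size.")
```

Same idea as what those consumer RAID controllers do by holding back a slice of the disk, just done by hand before you commit.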
Going outside the facts, and moving to the artificial reality of vendor contracts, HP or Oracle may well respond that they won't support something until you pull the consumer-grade shit out of the machine.
Now, after all of that -- I do use consumer drives in servers when it's worth it, and when I can afford the risk. My backup media servers (NetBackup) are Sun x4500s with 48 internal disks -- those disks have been swapped out for cheap-ass WD 2TBs, giving close to 100TB of available space. The disk is managed by ZFS with single-parity RAIDZ and is used for staging backups before pushing to tape to move offsite (weekly/monthly), and for duplicated storage of short-term backups (daily).
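For the curious, carving a pile of cheap disks into single-parity RAIDZ vdevs looks roughly like this. This is a sketch only -- the pool name, Solaris-style device names, and the 8-disk vdev width are my assumptions for illustration, not the actual x4500 layout:

```python
#!/usr/bin/env python3
# Sketch: build a single-parity RAIDZ staging pool out of many cheap disks.
# Pool name, device names, and vdev width are illustrative assumptions.
import subprocess

DRY_RUN = True                              # flip to False to actually create the pool

disks = [f"c0t{i}d0" for i in range(48)]    # hypothetical Solaris-style device names
vdev_width = 8                              # assumed raidz1 vdev size

cmd = ["zpool", "create", "backupstage"]
for i in range(0, len(disks), vdev_width):
    cmd += ["raidz1"] + disks[i:i + vdev_width]

if DRY_RUN:
    print(" ".join(cmd))                    # review the command before running it
else:
    subprocess.run(cmd, check=True)
```

Each raidz1 vdev gives up one disk to parity, so narrower vdevs buy you more redundancy at the cost of usable space; for a staging area that gets re-created from tape anyway, single parity is an acceptable trade.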
I'll use it for scratch space, and I'll use it when I can afford to lose it (or at least lose access until I rebuild and restore). If there's data I care about on there, it's typically on ZFS, so block-level checksums are done and I'll at least know the data is bad instead of getting silent corruption.
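The way that "at least know it's bad" shows up in practice is a scrub plus a look at the checksum counters. A minimal sketch, assuming a hypothetical pool name -- not my actual setup:

```python
#!/usr/bin/env python3
# Sketch: kick off a scrub and check whether ZFS is reporting bad data.
# Pool name is a made-up example.
import subprocess

pool = "backupstage"   # hypothetical pool name

# Scrubs run in the background; check status again once it finishes.
subprocess.run(["zpool", "scrub", pool], check=True)

status = subprocess.run(["zpool", "status", pool],
                        capture_output=True, text=True, check=True).stdout
print(status)
if "No known data errors" not in status:
    print("ZFS is reporting errors -- check the CKSUM column and the errors line.")
```

Cheap disks lie; ZFS at least catches them lying.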
I've got shit to work with for budget (public higher education), and the cheapest reasonable "enterprise-like" disk I can get runs us about $400/TB usable -- a Dell MD3200 SAS-connected array with dual controllers (four hosts redundant, eight non-redundant; it's really an OEM'd LSI Engenio). Best I can do for disk on the SAN is more like $600/TB (Nexsan, Dell/LSI MD32xx), and those prices aren't for a single-TB purchase. Most SAN-connected disk is still in the $1000/TB range and higher. Those prices include support (NBD response, usually) for three years or so. The other constraint is that I want the vendor to exist in a few years and have some track record, and I need to be able to get it past purchasing, which usually means a state- or university-level contract -- I've had to support some random shit bought from HPC vendors, usually OEM'd Infortrend or similar, and I don't want to deal with that shit ever again.
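To be clear, those are $/TB *usable*, after parity and spares, with support rolled in. The back-of-envelope math is nothing fancy -- every number below is a made-up example, not an actual quote:

```python
#!/usr/bin/env python3
# Back-of-envelope $/TB-usable math. All numbers are illustrative, not real quotes.
drives = 12                   # hypothetical drive count in the shelf
raw_tb_per_drive = 2          # hypothetical drive size
parity_drives = 2             # e.g. dual-parity overhead
hot_spares = 1
price_with_support = 7200.0   # hypothetical array price incl. 3-year support

usable_tb = (drives - parity_drives - hot_spares) * raw_tb_per_drive
print(f"usable: {usable_tb} TB, cost: ${price_with_support / usable_tb:.0f}/TB")
```

Run the raw $/TB of bare consumer drives through the same math and you see why the temptation to cheat is always there.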
It's all about the application and level of risk that's acceptable for that app/system. I'll never stick shit disk on a SAN to use with a VMware cluster, but I will happily throw a pair of cheap disks in a standalone ESX server that's running developer VMs or testing. Prod systems need to be expensive shit, sadly, to avoid giving the vendor an excuse (I'm looking at you, Oracle).
The speed advantage of enterprise drives matters once in a great while, too -- but more RAM is usually more effective, and cheaper.