Semi-OT: A word of friendly warning:
A couple of months ago (year?) I bricked a 120GB Intel 520 w/ the latest firmware (not sure
if it was 400i) w/ ext4 on Fedora x86_64. (Second bricked SSD in 12 months.)
A *very* short power outage slipped past my APC UPS and bricked the SSD.
Amazingly enough, the outage didn't crash the machine - which
continued working off the main HDD software RAID array.
Luckily for me, I rather distrust SSDs (see below) and use them only as a fast
cache of sorts, so I lost a couple of hours of work at most.
IMHO SSDs have one huge drawback: unlike HDDs, which can be partially
recovered from more-or-less any type of damage by recovering data
around bad sectors or replacing a fried controller board, SSDs' complex
write scheme and the resulting complex firmware usually mean that any type of
damage or firmware error completely bricks the drive, leaving more or less zero
chance of getting the data back.
On top of that, we (as in all of us) have 40+ years' worth of
experience in predicting the life cycle (and death) of HDDs. There's
far less information about the life cycle of SSDs.
Case in point: a couple of days after this incident a family member lost one of his HDDs.
Unlike with my dead SSDs, with some work I managed to recover 95% (or more) of his files.
Don't get me wrong: SSDs will replace HDDs in the end - but in the meantime, I'd keep SSDs for non-critical tasks.