- NTFS and ext3 are journaling filesystems; FAT12/16/32 and ext2 are not.
- FAT12/16/32 have a central structure (the FAT): damage it and your data is lost. ext2 and ext3 store their metadata redundantly.
- RAID is no replacement for backups.
- A real hardware RAID is expensive, and appears as a single disk to both BIOS and OS. Its on-disk metadata is proprietary, i.e. if your HW RAID controller dies, you need exactly the same controller again to get access to your data. HW RAID works with every OS, because it appears to be a single disk (typically SCSI). Booting from complex RAID configurations is no problem, as each RAID appears to be a single disk. The RAID controller is a small computer of its own, taking care of the calculations required for non-trivial RAID levels, of switching to hot-standby disks, and of detecting broken disks.
- A software RAID is cheap as dirt; every single disk of the RAID appears in the BIOS and in the lower levels of the OS. The on-disk metadata depends only on the OS, so you can mix controllers as you like. A broken controller is no problem: replace it with any controller that has the same connectors and your data is back. Booting can be a problem, because the BIOS does not know anything about the RAID; usually, booting is only possible from RAID-0 and RAID-1. Booting another OS is problematic, because there is no standard for software RAIDs. Linux may be able to work with Windows RAID volumes, but Windows can't work with Linux RAID volumes. Calculation and monitoring are done by the host CPU.
- A host RAID is nearly as cheap; the only difference from a software RAID is that the BIOS decides about the on-disk metadata. Special drivers for each supported OS know the structure of the metadata, but they don't allow you to use other controllers in the same RAID. A broken controller is a problem, because the drivers will refuse to work with other controllers. Booting is no problem, because the BIOS knows about the RAID.
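On Linux, the software RAIDs compared above are created with mdadm. The device names below are placeholders I made up for illustration (they are not from my setup); the commands need root and disks whose contents you are willing to destroy, so treat this as a sketch, not a recipe:

```shell
# Sketch: a 2-disk software RAID-1 plus a 4-disk RAID-5 with mdadm.
# /dev/sdb../dev/sdg are placeholder device names -- substitute your
# own. mdadm will overwrite whatever is on them. Run as root.

# RAID-1 for the OS: a plain mirror, either disk works alone.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# RAID-5 for the data: capacity of three disks, survives one failure.
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/sdd /dev/sde /dev/sdf /dev/sdg

# The kernel does not care which bus the member disks sit on,
# which is why mixing IDE, USB, SATA and SCSI in one array works.
mkfs.ext3 /dev/md1
cat /proc/mdstat    # watch the initial sync progress
```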
I prefer pure software RAIDs, for a simple reason: they do not depend on the available hardware. If one controller dies, switch to another one: other brand, other type, other drivers, and the RAID still works. If you insist, you can even mix an IDE drive, a USB drive, a SATA drive, and a SCSI drive into a single RAID. Try that with a hardware or host RAID. Some people have even built RAIDs out of floppy disks or USB sticks (not for permanent use, of course).
My faithful old Linux home server runs two RAIDs, both in software: a RAID-1 for the OS (remember: the BIOS does not know about the RAID) and a RAID-5 for the data. The RAID-1 used to run on old SCA drives, but recently I switched to two small IDE drives due to unrecoverable SCA cabling problems. The RAID-5 is composed of four IDE drives, connected to two IDE controllers, each disk on its own IDE cable. An external USB disk is used to back up my data, rotating through 10 days. All filesystems are ext3, all disks are monitored using SMART, and all RAIDs are monitored. If anything goes wrong, I get an e-mail from the monitoring software.
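That kind of monitoring can be wired up with stock tools: mdadm's monitor mode for the arrays and smartd for the disks. The mail address, array names, and device below are placeholders, not my actual configuration; a fragment to show the shape of it:

```
# /etc/mdadm/mdadm.conf -- mdadm --monitor mails on degraded/failed arrays
MAILADDR admin@example.com
ARRAY /dev/md0
ARRAY /dev/md1

# /etc/smartd.conf -- check SMART attributes, run scheduled self-tests,
# and mail on trouble (short test daily at 02:00, long test Saturdays)
/dev/sda -a -s (S/../.././02|L/../../6/03) -m admin@example.com
```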
Until recently, one of the controllers was an el-cheapo non-RAID controller, and the other was a donated, expensive, well-known-brand, RAID-capable controller running in non-RAID mode. The latter decided to randomly take some time off on the job: it either disconnected from the PCI bus or disturbed it, causing panics in the OS above. Only pure luck protected me from data loss. I ripped it out of the machine, kicked it into the trash bin, rewired the RAID to use two disks per IDE cable, and verified and reconstructed my data. Some days later, another el-cheapo non-RAID IDE controller arrived, of the same brand, model, and type as the one already sitting in the next PCI slot. I rewired the RAID again to one disk per cable, and everything was fine again.
For a new small business or home server, I would use nearly the same setup again: two software RAIDs, one for the OS and one for the data. Upgrading the OS is just fun when you can simply copy the entire known-good OS to a backup directory on the data RAID. I would use SATA instead of IDE, simply because it is the current standard. I would use the second-largest disks available for the data RAID and the cheapest disks available for the OS RAID, like I did some years ago. SATA has a dedicated cable for each disk, so there is no longer any need to avoid disks configured as slaves, and one controller with four SATA ports is sufficient for a RAID-5. I would pick a mainboard with lots of SATA ports, gigabit Ethernet, and a mid-range CPU. For backup purposes, I would use one or two hot-pluggable, external SATA drives.
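A rotation like the 10-day one I mentioned takes only a few lines of shell. The version below is a self-contained demo using throwaway temp directories so it can run anywhere; in real use DATA_DIR would be the data RAID's mount point, BACKUP_ROOT the external disk, and the cp would become rsync -a --delete:

```shell
# Demo of a 10-slot daily backup rotation. The two paths are temp
# dirs for illustration only -- point them at the real mounts in use.
DATA_DIR=$(mktemp -d)      # stand-in for the data RAID
BACKUP_ROOT=$(mktemp -d)   # stand-in for the external backup disk
echo "important data" > "$DATA_DIR/file.txt"

# Day-of-year modulo 10 selects the slot, so each slot is overwritten
# again ten days later. Leading zeros are stripped so the shell does
# not try to read e.g. "099" as an octal number.
slot=$(( $(date +%j | sed 's/^0*//') % 10 ))
dest="$BACKUP_ROOT/day-$slot"

mkdir -p "$dest"
cp -a "$DATA_DIR/." "$dest/"   # in production: rsync -a --delete
echo "backed up to $dest"
```

Run from cron once a day, each slot holds the state of the data as of that day, giving ten days of history on one disk.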
For a new, smaller home server, I would use only two disks in a software RAID-1 configuration, still split into a small OS and a large data partition, a low-power CPU (Intel Atom or VIA C7), gigabit Ethernet, and a USB 2.0 or eSATA external backup disk.