There are two uses for SSDs in a ZFS pool. The first is L2ARC. The ARC (adaptive replacement cache) is a combined LRU/LFU cache that keeps blocks in memory (and also does some prefetching). With L2ARC, you have a second layer of cache in an SSD. This speeds up reads a lot. Data that is either recently or frequently used will end up in the L2ARC, so these reads will be satisfied from the flash without touching the disk. The bigger the L2ARC the better, although once it approaches your working set size you'll see diminishing returns from making it bigger.
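For example (the pool name tank and the FreeBSD-style device name are both hypothetical; adjust for your system), adding an L2ARC device is a one-liner:

    # Attach an SSD to the pool as a cache (L2ARC) device
    zpool add tank cache /dev/ada1
    # It should now appear under a 'cache' heading
    zpool status tank

Losing a cache device is harmless, by the way: it only ever holds copies of blocks that are also in the pool.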
The second use is as a log device. The ZIL is the ZFS Intent Log, which is effectively a journal. Transaction groups are written there first so that the filesystem is always in a consistent state. It's usually on the same disk as the storage, which means that writes can involve a lot of seeks. With the ZIL on a separate drive (SSD or otherwise), you take those writes, and their seeks, off the data disks. Because you can generally write to a ZFS pool significantly faster than to a single disk, putting the ZIL on an SSD stops it becoming a bottleneck. The rule of thumb for the size of the log device is that it should be as big as the maximum amount of data that can be written to your pool in 10 seconds: if you can do 100MB/s of writes, you want about 1GB of log device.
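Something like this, again with hypothetical pool and device names:

    # Attach a dedicated log device for the ZIL
    zpool add tank log /dev/ada2
    # Or mirror it, if you don't want to risk losing the last few
    # seconds of synchronous writes when the SSD dies:
    # zpool add tank log mirror /dev/ada2 /dev/ada3

So for the 100MB/s example, a ~1GB partition on the SSD is plenty.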
Once a zfs filesystem is created, that's it. No resize support.
Minor correction: Once a ZFS pool is created, that's it. Filesystems within it are dynamically sized. You can add disks to a pool, but not to a RAID set. You can also replace the disks in a RAID set with larger ones, one at a time, and have the pool grow to fill them. You can't, however, replace them with smaller ones.
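Roughly (pool and device names are hypothetical):

    # Grow the pool by adding another disk as a new top-level vdev
    zpool add tank /dev/da4
    # Or swap a RAID-set member for a larger disk; do this for each
    # member in turn and the set grows once the last one resilvers
    zpool replace tank da0 da5

On implementations that have the autoexpand pool property, you'll also need zpool set autoexpand=on (or an explicit zpool online -e) before the extra space shows up.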
Ideally, in something like ZFS you'd want background defragmentation. When you read a file that hasn't been modified for a while into the ARC, you'd make a note. When it's about to be flushed unmodified, if there is some spare write capacity you'd write the entire file out contiguously and then update the block pointers to use the new version.
That said, defragmentation is intrinsically incompatible with deduplication: it is not possible for multiple files that all refer to some of the same blocks to each be contiguous on disk. It's also not much of a problem if you've got a decent-sized L2ARC, as random reads from the disk are then fairly rare.
That depends on the reason for the failure. If it's because there's a little bit of dust on the platter, or a manufacturing defect in the substrate, then it's very unlikely. If it's because of a bug in the controller or a poor design of the head actuator, then it's very likely.
This is why the recommendation, if you care about reliability more than performance, is to use drives from different manufacturers in the array. It's also why disks cost a lot more from NetApp than if you buy them directly: they're the same commodity drives, but NetApp tests batches, discards the least reliable ones, and ensures that you don't have two disks from the same production run in the same array. You're still getting the same drives you can buy elsewhere for a fraction of the price, but you're getting more diversity.
ZFS doesn't have ECC, but it does checksum each block, so it can detect per-block errors. If you have valuable data, you can set the copies property to some value greater than 1 for that dataset, and ZFS will ensure that each block is duplicated on the disk, so if one copy fails a checksum the other will be used to recover it. If you have three disks, you can use RAID-Z, which loses you 1/3 of the space (not 1/2) and allows any single-disk failure to be recovered. Running zpool scrub makes it validate all of the data; any block that fails its checksum is reconstructed from the other two disks.
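Concretely (pool, disk, and dataset names are all hypothetical):

    # Single-parity RAID-Z across three disks
    zpool create tank raidz /dev/da0 /dev/da1 /dev/da2
    # Keep two copies of every block in a particularly valuable dataset
    zfs set copies=2 tank/important
    # Walk every block, verify it against its checksum, repair as needed
    zpool scrub tank
    zpool status -v tank    # scrub progress plus any errors found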
The reason it doesn't use ECC is that ECC doesn't mesh well with the failure modes of disks. ECC is used in RAM because when it gets hot, is hit by a cosmic ray, or whatever, it is common for a single bit to flip (in a single direction, which makes the error correction easier). In a disk, you typically have an entire block fail, not a single bit. Modern disks use multiple levels of encoding internally, so the smallest failure that is even theoretically possible might be a single byte (or nibble) in a block. And since the failure isn't biased, you'd need a fairly large amount of space for the correction codes. A better approach would be for the filesystem to generate something like a Reed-Solomon code block for every n blocks that are written. This would allow single-block errors to be recovered, as long as the other blocks are okay. The downside of this approach is that the error-correcting block would need to be rewritten whenever any of the blocks it covers is modified, so a single-block write would end up triggering a lot of reads, and that would hurt performance. For ZFS, this might actually be fairly easy to add: it uses a CoW structure, so block overwrites are relatively rare (although erasing a lot of data would require a lot of code blocks to be recalculated), and blocks are written out in transaction groups, so including an error-correction block at the end of each group might be a fairly simple modification.
In those parts of the country where there was rioting, there is a tendency for the police to smash people's heads and shoot them in their beds.
Can you cite a single instance of the police shooting someone in their bed?
But basically no individuals are equipped to leverage Android on their own.
Applications are the easy bit. See F-Droid. The hard part is getting device drivers for your hardware...
Never test for an error condition you don't know how to handle. -- Steinbach