Comment Re:google doens't need to stir up dissent (Score 1) 74
I just want to point out, *everyone* does not like this law; just like *everyone* does not like handguns.
Well, the G in GNOME stands for GNU and it is part of the GNU Project.
If only you could remember the keyboard commands to use them though...
The file system can do quite a bit if it actually does consistency checks on the data when reading it. ZFS does this and will alert you if the contents of a file have changed since they were last written, allowing you to restore a good copy from backup and verify that it is still valid.
Then it's much simpler. This ECC issue has absolutely nothing to do with ZFS. You should use ECC RAM if you are doing any form of disk IO, no matter which file system you're using, or you risk data loss.
HFS+ was just an extension to HFS, which goes back to the System 2 days. HFS suffered from a number of limitations which made it unsuitable for volumes larger than 2 GB.
That depends on your shell. Bash works that way, but zsh does not; at least not by default as far as I know.
I see what you mean now, but I must say that I really don't agree with these non-ECC horror stories. You have much bigger problems if you have memory corruption.
The ones you mention are American companies and thus do not have to follow European law.
ZFS does not require ECC memory more than any other file system. I have no idea where you got that from.
So how is FreeBSD able to license ZFS by simply importing it into the source tree and Apple is not?
How could the RAM be responsible for damaging a file between the time it was written to disk and when it was read from disk?
The point is that there are good file systems that can detect when the storage unit fails, give you an alert and allow you to restore the file from a good backup. Without this feature the corrupted file will just get backed up like any other file and eventually replace the good backup.
At least you would know that the file was corrupted, so that you could restore it from a good backup.
The problem with bit rot is that backups don't help. The corrupted file goes into the backup and eventually replaces the good copy, depending on retention policy. You need a file system that checksums every data block, so that it can detect a corrupted block on read and flag the file as corrupted, letting you restore it from a good backup.
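The checksum-on-read idea can be sketched in a few lines. This is only a toy illustration, not how ZFS actually does it (ZFS stores Fletcher or SHA-256 checksums in a Merkle tree of block pointers); the 4 KB block size and the helper names here are my own assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; ZFS actually uses variable record sizes

def write_with_checksums(data: bytes):
    """Split data into blocks and store a checksum alongside each block."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [(b, hashlib.sha256(b).hexdigest()) for b in blocks]

def read_and_verify(stored):
    """Recompute each block's checksum on read; report any mismatches."""
    corrupted = []
    for i, (block, checksum) in enumerate(stored):
        if hashlib.sha256(block).hexdigest() != checksum:
            corrupted.append(i)
    return corrupted

# Simulate bit rot: flip one bit in the second stored block.
stored = write_with_checksums(b"important data" * 1000)
block, checksum = stored[1]
stored[1] = (bytes([block[0] ^ 0x01]) + block[1:], checksum)

print(read_and_verify(stored))  # the damaged block index is reported
```

A plain file system would hand back the flipped bytes without complaint, and the backup job would dutifully copy them; with per-block checksums the mismatch is caught on read, which is exactly the signal you need to go pull the good copy from backup.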
Suggest you just sit there and wait till life gets easier.