I'm sure it will, but I'm afraid that doesn't mean it was practical for Apple to integrate into OS X, or that it fit the use cases they needed for many desktop scenarios.
Um, the technical work was already done. It could have shipped with Snow Leopard. Again, the reason it didn't has nothing to do with the technical feasibility of it.
Ultimately, the only way to deal with silent data corruption or 'bit rot' is to have multiple layers of redundancy for your data, which ZFS has and deals with. No desktop Mac can ever have that.
Why? Because you say so?
Anyone who thinks that is anywhere near practical to deal with on a desktop system is an idiot.
While I may be an idiot, you have to convince me that ZFS is not practical for a desktop. Again, just because you say so is not reason enough. I stand by my statement that ZFS is the only file system with enough benefit to make me explicitly choose it when building servers. You may argue that there's a difference between a server and a desktop, but those really are nothing more than abstract concepts. A file system that has too much overhead for my desktop has too much overhead for my servers. Performance matters. ZFS may not be the fastest, but it is no slouch either, and the other benefits it brings to the table far outweigh minuscule performance concerns.
By no stretch of the imagination does ZFS handle this 'magically'. There is a severe price to be paid.
What exactly is this severe price? Can you spell it out? Exactly? As Arthur C. Clarke put it, "Any sufficiently advanced technology is indistinguishable from magic." In that respect, yes, I will say it is magic, because it is head and shoulders more advanced than anything else I've had the pleasure of working with. File systems have not seen this kind of improvement in decades.
I'm afraid that hardware, bad-sector, and disk issues are far, far more prevalent problems than data corruption at an OS level... but I'm afraid it's just not a primary concern for everyone else or for those developing desktop operating systems.
How do you know? It's not a significant problem until the data you need is unavailable when you need it. At home my own modest media library sits on just 500 GB with no guarantee that any of it will still be whole in six months. Sure, I back it up. Routinely. But until you access a file you don't know whether it's been corrupted. And how long has it been corrupted? Do your backups go back far enough to compensate? Yes, you can checksum everything routinely and maintain a database of checksums to detect file changes, as the sketch below shows.

Part of the beauty of ZFS is that it does this with everything you put in it, at the block level, and it validates the checksum every time you read the data. If a block fails the check, it not only sends you the valid block from the mirrored copy (you do have redundancy, right? Even ZFS won't save you if you only have one copy) but also replaces the bad block with a copy of the good one.
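For a sense of what that manual routine actually involves, here's a minimal Python sketch; the script and its checksums.json database are hypothetical stand-ins, not a real tool. It hashes every file, remembers the digests, and flags any file whose bytes changed while its modification time did not, which is the signature of silent corruption rather than an ordinary edit.

#!/usr/bin/env python3
"""Minimal sketch of manually doing what ZFS does automatically:
record a SHA-256 digest per file and flag files whose contents
changed without a corresponding mtime change (likely bit rot)."""
import hashlib
import json
import os
import sys

DB_PATH = "checksums.json"  # hypothetical checksum database location

def sha256_of(path, bufsize=1 << 20):
    """Hash a file in chunks so large media files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def scan(root):
    db = {}
    if os.path.exists(DB_PATH):
        with open(DB_PATH) as f:
            db = json.load(f)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            digest = sha256_of(path)
            old = db.get(path)
            if old and old["digest"] != digest and old["mtime"] == mtime:
                # Bytes changed but the file was never written to:
                # that's silent corruption, not a normal edit.
                print(f"POSSIBLE CORRUPTION: {path}", file=sys.stderr)
            db[path] = {"digest": digest, "mtime": mtime}
    with open(DB_PATH, "w") as f:
        json.dump(db, f, indent=2)

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")

Note everything this sketch can't do: it only catches rot on the next scan rather than on every read, it can't tell you which copy is good, and it can't repair anything. ZFS folds all of that into the read path at the block level.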
Storage capacity is skyrocketing, and going to backup to fix problems is a real problem in itself. Are the tapes on-site? Do we have to go to the vault to find an uncorrupted copy? Did the media pool get recycled, leaving no uncorrupted copy at all? Do you enjoy explaining to an executive why the data they want is unavailable despite spending millions on enterprise-class storage and backup solutions? The problems of enterprise storage are becoming problems for home users. I have three terabytes of storage just to back up my home system in a replication layout I'm okay with, but I really would have loved the protection ZFS offers against bit rot to top it off. Stick your head in the sand if you want; I consider my data and its availability a little more important. ZFS handles it elegantly, in the background, with a negligible performance hit.
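To make "elegantly, in the background" concrete, here's a toy Python model of the self-healing read described above; it's an illustration of the idea, not actual ZFS internals. Every block carries an out-of-band checksum, every read verifies it, and a mismatch triggers a read from the mirror plus an in-place repair of the bad copy.

"""Toy model (not real ZFS code) of a self-healing mirrored read:
each block has a checksum; a read verifies it, and on a mismatch
the block is fetched from the mirror and the bad copy rewritten."""
import hashlib

BLOCK = 4096  # illustrative block size

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MirroredStore:
    """Two byte-arrays standing in for the two disks of a mirror."""

    def __init__(self, nblocks: int):
        self.disks = [bytearray(nblocks * BLOCK), bytearray(nblocks * BLOCK)]
        self.sums = {}  # block number -> checksum, kept out-of-band

    def write(self, n: int, data: bytes):
        data = data.ljust(BLOCK, b"\0")
        for disk in self.disks:
            disk[n * BLOCK:(n + 1) * BLOCK] = data
        self.sums[n] = checksum(data)

    def read(self, n: int) -> bytes:
        for i, disk in enumerate(self.disks):
            data = bytes(disk[n * BLOCK:(n + 1) * BLOCK])
            if checksum(data) == self.sums[n]:
                if i > 0:  # earlier copy was bad: heal it from this one
                    self.disks[0][n * BLOCK:(n + 1) * BLOCK] = data
                    print(f"block {n}: repaired disk 0 from disk {i}")
                return data
        raise IOError(f"block {n}: all copies corrupt; restore from backup")

# Simulate bit rot on one disk and watch the read repair it.
store = MirroredStore(8)
store.write(3, b"important data")
store.disks[0][3 * BLOCK] ^= 0xFF  # flip bits on disk 0 only
assert store.read(3).rstrip(b"\0") == b"important data"

Run it and the deliberately corrupted block comes back intact, with disk 0 silently repaired along the way. The real thing does this per block across an entire pool, which is also why a single-copy pool can detect rot but not heal it.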