Quotas we barely ever use anymore; to the extent they exist, they tend to be integrated into applications where there's a point to them. Wasting employee time is extremely expensive compared to disk. Most systems already support snapshots on multiple levels, from the OS/LVM and virtualization layers down to the SAN/NAS. ACLs: in 30 years of managing probably 10k Unix systems, I have run across a handful of situations where they would have been useful, and exactly zero where the cost/benefit ratio would have made them economically viable. Most modern filesystems support them, but for applications that need that access granularity, the functionality tends to end up in the application or database layers. Minor cache improvements pale in comparison to simply throwing the entire performance-demanding application onto flash-only, FusionIO, or NVMe disks.
Boot environments like that have been done in various ways for as long as I can remember. The earliest were basically diskless NFS clients, where you could simply copy the filesystem and run the update on the copy. After that, any simple disk mirror could be split off and cloned for a snapshot system to work with. Thankfully, such functionality is approaching irrelevance as well, as application design is growing up enough to build redundancy into the application layer, so you can take any number of servers offline at any time. Snapping a root filesystem isn't exactly necessary when the application is built to live on a server instance that will disappear and be replaced by a fresh, disposable instantiated image on the next reboot...
And yes, "most features" pretty much means RAID, compression, caching, deduplication, snapshots, etc.
It's not that it's a bad filesystem; it's quite a good one. But the problems it solves are becoming legacy issues of diminishing relevance in an industry where the discussion is shifting toward whether there will be an OS as we know it underpinning the application infrastructure at all.