No. Because it doesn't violate it. And being a monolithic blob is the least of the criticisms which we could make about systemd, when there's an entire book's worth of bad design in there. ZFS was designed by competent and expert professionals, rather than unprofessional prima donnas, and it shows.
It's a fundamentally different design to traditional UNIX filesystems and disk management, but that doesn't automatically make it a monolithic blob. Is Linux LVM a monolithic blob? That's the level your question is at, as well as being flamebait.
Internally, ZFS is layered similarly to a Linux raid/lvm/filesystem setup. In the Linux case, you would have raw block devices managed by hardware or software RAID, with LVM using these devices as physical volumes. LVM would then provide logical volumes upon which you could create filesystems.
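As a rough sketch of that traditional stack (device and volume names here are placeholders, and everything needs root), it looks something like:

```shell
# Software RAID device (e.g. an md mirror) used as an LVM physical volume
pvcreate /dev/md0

# Physical volumes are aggregated into a volume group
vgcreate vg0 /dev/md0

# Carve a logical volume out of the group...
lvcreate -n data -L 100G vg0

# ...and only then create a filesystem on it, as a separate step
mkfs.ext4 /dev/vg0/data
```

Note that each layer hands an intermediate block device to the one above it, which is exactly the point of contrast with ZFS below.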
With ZFS, you would have block devices aggregated into "vdevs", which would be the equivalent of RAID0/1/5/6 RAID sets. These are the equivalent of LVM physical volumes. Next, you would use one or more vdevs to create a "zpool", which would be the equivalent of an LVM volume group. Finally, you would create datasets in the pool, which are the equivalent of a logical volume plus a filesystem, or a "zvol", which is the equivalent of a logical volume: a raw block device. So it's cleanly and logically layered. It's using plain block devices as the backing store, just as any UNIX filesystem does, but it's not creating intermediate block devices as LVM does--it's managing that internally.
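The same stack in ZFS collapses into a couple of commands (disk names and the pool name "tank" are placeholders; requires root and ZFS installed):

```shell
# Aggregate four disks into two mirror vdevs; the pool spans both vdevs,
# roughly: RAID sets + volume group in one step
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# A dataset: logical volume plus filesystem in one, mounted immediately
zfs create tank/home

# A zvol: a raw block device carved from the pool, like an LVM LV
zfs create -V 10G tank/vol1
```

The layers are still there conceptually--vdev, pool, dataset--but ZFS manages the plumbing between them internally instead of exposing intermediate block devices.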
The layering is pretty much the same--it's a well-separated design. What's different is that ZFS has knowledge of all the layers and can use that to do things much more efficiently and much more robustly. For example, when doing a RAID rebuild ("resilver") it only needs to resync the parts of the disk that actually have data on them, which can dramatically reduce the statistical likelihood of encountering an unrecoverable error. A dumb RAID setup doesn't know that, and will fail if it encounters an error during a full rebuild; ZFS will succeed if those errors were in areas that weren't in use. It can also be instructed to keep more than one copy of important data, which gives it an even higher chance of rebuilding in the face of corruption.

There are a whole pile of other benefits as well, but as an admin the main benefit is that it's a dream to manage on a day-to-day basis, and you can even delegate management of sub-datasets to other users and groups, so they can snapshot their own data at will, send and recv data, create new datasets, etc. The design is clean, well thought out, and brings features which are completely missing from anything else.
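To make the last two points concrete, here is a sketch of the redundancy and delegation features (pool, dataset, user, and host names are all hypothetical; `zfs allow` and the `copies` property are standard ZFS, but the exact permission list an admin grants would vary):

```shell
# Keep two copies of every block in this dataset, on top of any
# vdev-level redundancy -- extra insurance for important data
zfs set copies=2 tank/important

# Delegate rights on a sub-dataset to user alice: she can now
# snapshot, send/receive, and create datasets under it without root
zfs allow alice snapshot,send,receive,create,mount tank/home/alice

# alice, as herself, snapshots her own data at will...
zfs snapshot tank/home/alice@before-upgrade

# ...and replicates it elsewhere with send/recv
zfs send tank/home/alice@before-upgrade | ssh backuphost zfs receive backup/alice
```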