There are a lot of comments asking "why ZFS?", but most of them miss the real killer feature of ZFS (and ZFS on Linux): the ability to efficiently use tiered storage.
See, the storage industry currently has a fundamental problem. Flash storage, aka SSDs, is hideously expensive per gigabyte. Magnetic storage, aka HDDs, is hideously slow in terms of IOPS. For difficult workloads that have to balance server purchase price, high IOPS, and large quantities of local storage, Tiered Storage is the only real option.
Tiered storage lets you buy both: (1) relatively inexpensive, high-write-endurance but fairly low-capacity SSDs -- usually on the order of 128 to 512 GB, depending on the size of the HDDs behind them; and (2) relatively inexpensive, high-capacity but slow HDDs -- usually 8 TB or larger -- and combine them into one logical block device that *behaves* as if it were an SSD with many terabytes of storage. You get about 98% of the IOPS performance of the SSDs, while all the data ultimately persists to the HDDs behind the scenes. This is remarkably good for large databases and file storage servers; building all of that capacity out of datacenter-grade SSDs instead would run you about $1000 more per terabyte, assuming RAID-1 redundancy.
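To make the layering concrete, here's a minimal sketch of the first step, with hypothetical pool and device names: a plain pool of mirrored HDDs, onto which the SSD tiers get added below.

    # Create a pool named "tank" (hypothetical) from two mirrored pairs of HDDs.
    # /dev/sd* names are placeholders; prefer stable /dev/disk/by-id paths in practice.
    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd

    # Verify the layout.
    zpool status tank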
With ZFS, you can put the ZFS Intent Log (ZIL) -- the log that absorbs synchronous writes on their way to the HDDs -- on a dedicated SSD partition, mirrored (ZFS's equivalent of RAID-1) for data safety. I size that partition at about 25-50% of the SSD, depending on how write-intensive the workload is, though in truth the ZIL only ever has to hold a few seconds' worth of in-flight writes. ZFS batches incoming writes in RAM into transaction groups and flushes them to the HDDs as large sequential writes, converting what could be thousands of small random IOPS (from a database, for instance) into a few dozen HDD IOPS; the SSD copy of the log is only ever read back after a crash. This lets your storage array absorb bursts of random writes at hundreds of megabytes per second, which get reorganized and streamed out to the HDDs sequentially in a way that maximizes their throughput. And program-level calls to sync() or fsync() can legitimately return as soon as the data hits the ZIL, even while the writes are still pending to the HDDs, because the data is genuinely on persistent storage that will survive a power outage.
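In zpool terms that dedicated ZIL device is a "log" vdev, usually called a SLOG. A sketch of adding one, reusing the hypothetical pool and NVMe partition names from above:

    # Add a mirrored SLOG on two SSD partitions (hypothetical names).
    # The mirror matters: losing an unmirrored SLOG in a crash can cost you
    # the last few seconds of synchronous writes.
    zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1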
You can also give ZFS an L2ARC (Level 2 Adaptive Replacement Cache), which is basically a *read* cache for data on the HDDs, sitting on an SSD partition. For my servers, I set up the L2ARC to consume about 75% of the SSDs' space because my workload doesn't see very large bursty writes, but those with a much heavier write workload will want to shift that ratio toward the ZIL and away from the L2ARC.
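Adding the L2ARC is one more zpool command, this time with a "cache" vdev (hypothetical device names again). ZFS deliberately won't mirror cache devices: the L2ARC holds nothing that isn't already on the HDDs, so losing it only costs you warm data.

    # Add L2ARC read-cache devices; they stripe rather than mirror.
    zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2

    # Watch per-vdev throughput, including the cache devices, every 5 seconds.
    zpool iostat -v tank 5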
Once again, like the ZIL, the point of the L2ARC is to take load off the HDDs and reduce the IOPS demanded of them. IBM's original ARC research also showed the algorithm generally outperforming plain LRU -- the style of eviction the Linux page cache approximates for other filesystems -- on common caching workloads, and there's a "level 1" ARC in RAM, too. It's adjustable in size, so you can tune whether you want ARC to suck up lots of RAM or leave more of it for application data.
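On ZFS on Linux, that in-RAM cap is the zfs_arc_max module parameter. A quick sketch (the 8 GiB value is just an example):

    # Cap the ARC at 8 GiB at runtime (value is in bytes; needs root).
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

    # Make it persistent across reboots:
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf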
For those who would run HDDs alone and use RAM buffers to insulate the storage from high IOPS, RAM has three major limitations: one, it's volatile, so it can't safely cache writes for very long; two, using RAM for filesystem caching competes with applications that want that RAM for their own purposes; and three, RAM is very expensive. It's also much easier to expand a server's storage than its RAM: if you've hit the maximum RAM your motherboard supports, you'd have to buy an entire new system to get more, whereas with storage you can usually just attach another drive or pair of drives, or worst case drop another SATA, SAS, or NVMe controller into a spare PCIe slot. Long story short, a much smaller system can carry terabytes upon terabytes of HDD or even SSD storage, while servers with a couple terabytes of RAM are absolutely enormous, come at a massive cost premium, and require special planning for power, rack space, and system administration -- none of which is usually needed to add a few storage devices.
So, if you want the best of all worlds -- relatively inexpensive commodity hardware (a single-socket Xeon, or even desktop-grade hardware like Threadripper) with excellent performance on workloads like databases, game servers, and anything else that demands a lot of small writes -- your most affordable path is Tiered Storage.
You would think Linux would have a stable, mature, well-tested, highly optimized in-tree filesystem for handling Tiered Storage properly, but it doesn't. Not at all. None of the options built from Btrfs, XFS, Ext4, LVM2, MD, and family come close to the performance and feature set of ZFS with tiered storage. Not to mention that the closest feature competitor, Btrfs, remains such a boondoggle stability-wise that Red Hat is dropping it as a supported filesystem in RHEL. Red Hat also no longer has engineers working on it -- though if it were stable, they wouldn't need any.
I will continue to use ZFS on Linux (at my own peril? Fine.) until Linux offers an in-kernel alternative that matches its performance, feature set, and maturity. LLNL had the right idea -- they knew what they were doing when they poured so many dollars into the development of ZoL. They needed a tool that didn't exist, so they built one.
And no, running a Solaris or BSD kernel probably isn't a viable alternative when almost all software is designed and tested for Linux, and the Linux compatibility layers on BSD and illumos are sketchy at best.
For Linux laptops and home gamers, XFS on a single HDD or SSD is fine, and even if you have a system with both HDDs and SSDs, tiered storage probably isn't of major benefit to you, because you don't have a workload that justifies it. A lot of people do, though, and they're either paying through the nose for way more SSD storage than they need, way more RAM than they need, way bigger servers than they need... or they're smart and they use ZoL.