I'm late to this party, but for the benefit of anyone who stumbles upon this thread by some quirk of Google's future: the views expressed above are not reliable. It's not apparent that the author knows much of anything about the ZIL or the SLOG. There are trade-offs involved with ZFS, no question. But none of them are anywhere near as inane as this post would have it.
If the vast majority of your workload is synchronous writes, you do have to provide a SLOG with as much write bandwidth as the rest of your pool. Except during recovery, the SLOG is pretty much sequential write-only (not a demanding case for any enterprise-grade write-optimized storage device). These log writes take place concurrently with the synchronous requests they serve (latency matters); meanwhile, under ZFS, the primary pool still batches the same writes into transaction groups every few seconds. The comment about ZFS being unable to use RAM for write caching is simply incoherent.
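To make concrete what "synchronous write" means here, a minimal POSIX-level sketch in plain Python (nothing ZFS-specific, and the timings are whatever your test machine gives you): every fsync() must reach stable storage before it returns, and on ZFS that round trip is exactly the path the ZIL, and a SLOG if you have one, is there to serve.

```python
import os
import tempfile
import time

# Illustrative only: each fsync() forces the written data to stable
# storage before returning. A pool full of these is the "mostly
# synchronous" workload discussed above; a fast SLOG shortens the
# latency of each fsync() round trip.
fd, path = tempfile.mkstemp()
try:
    t0 = time.perf_counter()
    for _ in range(100):
        os.write(fd, b"x" * 4096)
        os.fsync(fd)  # per-call latency is set by the log device
    elapsed = time.perf_counter() - t0
    print(f"100 fsync'd 4 KiB writes in {elapsed:.3f}s")
finally:
    os.close(fd)
    os.remove(path)
```

Asynchronous writes skip the fsync() entirely, which is why ZFS is free to batch them in RAM into transaction groups.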
What ZFS can't do, for synchronous writes, is use RAM for write coalescing: eliminating writes to stable storage (the SLOG, in this scenario) when one write immediately replaces another (a fairly common traffic pattern). But of course: no file system can do this unless its RAM counts as stable storage, and if you even have such RAM, it's usually a device requiring I/O traffic to access, the same as any other persistent storage device.
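A toy sketch of what coalescing buys (illustrative only, not ZFS code, and the class name is invented for this example): overwrites that arrive before a flush simply replace each other in RAM, so only the final version ever costs an I/O. Synchronous semantics forbid exactly this deferral, because each write must be on stable storage before the caller is told it succeeded.

```python
# Hypothetical write buffer: later writes to the same offset replace
# earlier ones in RAM, so a flush only pays for the surviving versions.
# This is what async batching (e.g. ZFS transaction groups) permits and
# what synchronous semantics rule out.
class CoalescingBuffer:
    def __init__(self):
        self.pending = {}       # offset -> latest data for that offset
        self.flushed_writes = 0

    def write(self, offset, data):
        self.pending[offset] = data  # overwrite coalesces in place

    def flush(self):
        self.flushed_writes += len(self.pending)
        self.pending.clear()

buf = CoalescingBuffer()
for _ in range(10):             # ten back-to-back overwrites of block 0
    buf.write(0, b"new data")
buf.flush()
print(buf.flushed_writes)       # -> 1: nine of the ten writes never hit disk
```

Under synchronous rules, all ten writes would have had to reach the SLOG individually before their callers could proceed.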
What hardware RAID potentially buys you is combining the persistent RAM and the persistent storage onto a single device channel, letting the OS kill two birds with a single I/O write operation. And for this, you buy yourself a really, really complex layer of extra device firmware, which historically has been far from bug-free. Your surface area of failure increases enormously (though you do get a finger to point, at a vendor who is extremely well heeled; all that internal firmware testing is baked into the price with a healthy insurance multiple, should the worst come to pass).
Do you really need synchronous write coalescing? A basic Xeon these days has 40 lanes of PCIe 3.0. Does that look like a rate-limiting resource on sustained synchronous write traffic to your storage pool? I wish. And if it does, I'm pretty sure your first response is this: more sockets, please.
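The arithmetic behind that claim, assuming PCIe 3.0's published figures of 8 GT/s per lane with 128b/130b line encoding:

```python
# Back-of-envelope aggregate bandwidth for 40 lanes of PCIe 3.0.
GT_PER_LANE = 8e9        # PCIe 3.0: 8 gigatransfers/s per lane
ENCODING = 128 / 130     # 128b/130b encoding overhead
LANES = 40

bytes_per_lane = GT_PER_LANE * ENCODING / 8   # bits -> bytes
total_gb_s = bytes_per_lane * LANES / 1e9

print(f"{total_gb_s:.1f} GB/s aggregate")     # -> 39.4 GB/s, per direction
```

Roughly 39 GB/s in each direction before protocol overhead, which is why sustained synchronous write traffic is very unlikely to be bottlenecked on lanes rather than on the storage devices themselves.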
As it happens, there are giant industry plans afoot to add a non-volatile memory type into the system memory hierarchy. ZFS will like this—a lot—should any of this chortling evil land-grab vapour come to pass.