Sure thing: You need lots of RAM when running deduplication, because the dedup table (the block hashes) is kept in the ARC. Estimates run from 1 to 5 GB of RAM per TB of storage you have deduplication enabled on. I'm running it on a NAS4Free box (FreeBSD) with 16 GB of memory. The 2 TB I have deduped works fine, but if memory gets tight for the ARC, those lookups go to spinning disks; you don't want that. When working with ZFS, my personal rule of thumb is to use an SSD for the ZIL and L2ARC. Your memory might still get tight, but at least you'll be falling back to an SSD and not one of the conventional drives in your system.
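If you want numbers for your own data before committing, ZFS can estimate the dedup table for you, and adding the SSD as log/cache devices is one command each. The pool name "tank" and the partition names below are just placeholders:

    # simulate dedup on an existing pool and print an estimated DDT size / dedup ratio
    zdb -S tank
    # on a pool that already has dedup enabled, show the DDT histogram
    zpool status -D tank
    # put the SSD to work as ZIL (log) and L2ARC (cache); partitions are examples
    zpool add tank log ada1p1
    zpool add tank cache ada1p2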
My sysctls are tweaked with:
vfs.zfs.txg.timeout = 2 (commits transaction groups more often, which smooths out bursty writes)
vfs.zfs.l2arc_noprefetch = 0 (allows prefetched/read-ahead data to be cached in the L2ARC as well)
vfs.zfs.l2arc_write_boost and vfs.zfs.l2arc_write_max are also set well above their defaults (YMMV with these; test on your own rig)
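On FreeBSD those all live in /etc/sysctl.conf. A minimal sketch; the two write values are examples of "well above default", not recommendations, so start there and measure:

    # /etc/sysctl.conf -- ZFS tuning for a dedup + L2ARC setup
    vfs.zfs.txg.timeout=2                 # commit txgs more often to smooth bursty writes
    vfs.zfs.l2arc_noprefetch=0            # let prefetched data land in the L2ARC too
    vfs.zfs.l2arc_write_max=67108864      # 64 MB/s steady-state L2ARC fill (example value)
    vfs.zfs.l2arc_write_boost=134217728   # 128 MB/s while the L2ARC warms up (example value)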
vfs.zfs.arc_max is set in loader.conf to cap the ARC and keep some memory free for the rest of the system. Even though ZFS is pretty good at handing memory back, the cap helps in my situation.
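That one is a boot-time tunable, so it goes in /boot/loader.conf rather than sysctl.conf. The value below is only an illustration for a 16 GB box; pick whatever leaves room for everything else the machine runs:

    # /boot/loader.conf
    vfs.zfs.arc_max="8G"   # example cap on ARC size, applied at boot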
FWIW, I tried ZFS on Linux and found it very slow, and it would choke under high load. Even a SATA expander to a small 4-disk chassis running RAIDz1 would cause the pool to detach under heavy I/O. I've been sticking with FreeBSD for any ZFS work for a few years now. ZFS is pretty good running stock with defaults, but when you play with deduplication, it can benefit from some gentle hand-holding.
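If you do go down the tuning path, watch the counters so you know whether the knobs are actually helping. On FreeBSD the ARC and L2ARC stats are all exposed as sysctls:

    # how big is the ARC right now vs. its cap?
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
    # is the L2ARC actually absorbing misses?
    sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses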