Additionally unfounded. Given that BSD sources can be downloaded and modified, with the changes never seeing the light of day, the loss of information is virtually guaranteed. Not to say it doesn't happen with the GPL, but under the GPL it's actually a legal risk to allow it to happen.
Take a look at the donors list for the FreeBSD Foundation and see how many of them are big companies (e.g. NetApp, Juniper) that ship proprietary products built on FreeBSD, yet still contribute back changes. Then look at companies like Google, which build their infrastructure on Linux but keep a lot of their changes private. The GPL doesn't force them to give anything back unless they distribute the modified version, and they don't distribute the modified Linux that they run on their servers. It's only a legal risk if you are distributing the software, but given that 90% of all developers work on in-house software that is never intended for distribution, the GPL only ever forces the 10% of potential developers working on commodity off-the-shelf software to release code, and they are the ones least likely to touch the GPL in the first place.
Over the years, I've worked with companies that have maintained private forks of GPL'd projects, because they don't want the potential liability of distributing things under the GPL. When they take some of our BSDL code, however, they'll push back patches because there's no possible legal obligation arising from their doing so, and it's cheaper to have all of their changes upstream than maintain a private fork. I've also worked with companies that have done a clean-room reimplementation of a project rather than touch the GPL (in many cases, it's remained private, in some they've released it under a permissive license).
That's well and good, until you realize that a typical email server usually has an MTA (postfix, courier, sendmail, whatever), some sort of spam trap/filter (in addition to external ones), maybe a means to handle distribution lists more efficiently, SASL auth (postfix typically handles that nowadays, but...), and probably some sort of webmail thingy. That's way more than "one app".
And in the deployment scenarios that this is intended for, each one of those would be running in a separate VM. If you have lots more incoming mail, you might spin up more spam filter instances dynamically. You'd probably only have a single persistent VM for the storage, but everything else would be scaled dynamically.
There is a big difference between getting a single process exploited (maybe just one of the httpd workers) and having a full system breach...
There really isn't on most cloud systems. You compromise the web server, and now you've got the credentials to access the db server. That's far more important than anything on the local filesystem. Sniffing all traffic going to the system? There isn't any traffic going to anything other than the (single) running app. And even with a compromised kernel, you can't put the interface in promiscuous mode because the paravirtualised device doesn't support it.
So the question is whether you'd rather have a slimmed-down FreeBSD kernel in your TCB or a full-featured Linux kernel and GNU userland. If you have an OS where you can spin up new instances in a second, that makes it possible to compartmentalise your system much more than if starting up a new VM takes a minute. It also makes scaling easier.
There are two uses for SSDs in a ZFS pool. The first is the L2ARC. The ARC (adaptive replacement cache) is a combined LRU/LFU cache that keeps blocks in memory (and also does some prefetching). With an L2ARC, you have a second layer of cache on an SSD. This speeds up reads a lot: data that is recently or frequently used ends up in the L2ARC, so those reads are satisfied from flash without touching the disk. The bigger the L2ARC the better, although in practice, once it approaches your working-set size, making it bigger yields diminishing returns.
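As a concrete sketch: assuming a pool named tank and an SSD that shows up as ada1 (both names are placeholders, substitute your own), attaching an L2ARC device is a one-liner:

```shell
# Placeholder names: pool "tank", SSD device "ada1".
# Add the SSD to the pool as an L2ARC cache device:
zpool add tank cache ada1

# Per-vdev statistics, including how much of the cache device is in use:
zpool iostat -v tank
```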
The second use is as a log device. The ZIL is the ZFS Intent Log, which is effectively a journal. Transaction groups are written there first so that the filesystem is always in a consistent state. It's usually on the same disks as the storage, which means that writes can involve a lot of seeks. With the ZIL on a separate drive (SSD or otherwise), you reduce the number of seeks required. Because you can generally write to a ZFS pool significantly faster than to a single disk, putting the ZIL on an SSD stops it from becoming a bottleneck. The rule of thumb for the size of the log device is that it should be as big as the maximum amount of data that can be written to your pool in 10 seconds: if you can do 100MB/s of writes, you want about 1GB of log device.
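The sizing rule of thumb is just throughput times a 10-second window. A minimal sketch (the 100MB/s figure is the example from the text; plug in your own measured write rate):

```shell
# Rule of thumb: log device should hold ~10 seconds of sustained writes.
write_mb_per_s=100          # assumed sustained write throughput, MB/s
window_s=10                 # seconds of writes the SLOG should absorb

slog_mb=$(( write_mb_per_s * window_s ))
echo "suggested log device size: ${slog_mb} MB"   # 100 MB/s -> 1000 MB, i.e. ~1GB
```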
Once a zfs filesystem is created that's it. No resize support
Minor correction: Once a ZFS pool is created, that's it. Filesystems are dynamically sized. You can also add disks to a pool, but not to a RAID set. You can also replace disks in a RAID set with larger ones and have the pool grow to fill them. You can't, however, replace them with smaller ones.
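The operations above look roughly like this (pool name tank and all device names are hypothetical placeholders):

```shell
# Filesystems are dynamically sized; a quota is just a property you can change:
zfs set quota=200G tank/home

# You can add a new vdev (here a mirror) to an existing pool:
zpool add tank mirror da2 da3

# Replace a disk with a larger one, and let the pool grow to fill it:
zpool set autoexpand=on tank
zpool replace tank da0 da4
```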
"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"