Don't forget the (accurate) presumption that securing one device well (the NAT device) is a lot easier than securing all the internal devices equally well.
Your post was right, except for the word 'accurate'. Perimeter defences on networks really don't work, especially when you have things like Cisco phones, HP printers, and Dell management interfaces, with known and unpatched security vulnerabilities all connected to the network. It just needs one person to bring a compromised machine inside the perimeter and an attacker has full control (and good luck getting rid of them once they've got their botnet control software running on your printer). With a lot of people using mobile phones on their home WiFi, the perimeter isn't even a perimeter anymore, because the phone also allows connections over the mobile network.
Capsicum, POSIX and NFS4 ACLs are all about adding complexity to allow for greater administrative policy enforcement
This is almost true for ACLs. ACLs are no more expressive than standard UNIX permissions, but they make the same policies significantly simpler to implement: you no longer need to create a group for every set of people who want to share something. This lets you leave your default at share-nothing and explicitly share the things that you need to share, with the people you need to share them with. The code for implementing ACLs is significantly less complex than the workarounds that you need in their absence if you want the same level of access control, and if you don't want the same level of access control, it's because you're fine with leaving things more widely readable than they need to be. Neither of these attitudes is good for security.
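To make that concrete, here's a sketch of the no-new-group workflow (the file and user names are made up, and it assumes a filesystem mounted with ACL support):

```shell
# Start from a share-nothing default: only the owner can read the file.
chmod 600 report.txt

# Grant read access to the one extra user who needs it.
# With plain UNIX permissions this would require an admin to create
# a group containing exactly {owner, bob}; with ACLs it's one command.
setfacl -m u:bob:r report.txt

# Inspect the resulting ACL; everyone else still gets nothing.
getfacl report.txt

# Revoke it just as easily when bob no longer needs access.
setfacl -x u:bob:r report.txt
```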
Capsicum is definitely not about adding complexity. The implementation adds an extra bitmask check on file accesses and restricts system calls to a whitelisted set. The total code changes in the kernel are very small and easy to audit (and have been audited by several groups). The changes required in userspace code are far more significant. The sandboxing in Chromium, for example, is six times more lines of code on OpenBSD using chroot() than it is on FreeBSD using Capsicum, and offers less isolation: the renderer processes on OpenBSD can create network sockets, so an image in an email that exploits a libpng or libjpeg vulnerability can phone home and send copies of all of your emails if you use webmail on OpenBSD; with Capsicum it can't. The privilege separation code in OpenSSH is also cleaner and easier to audit when it uses Capsicum.
In OpenBSD, security means that you eliminate bugs so that the most basic promise is held true.
In FreeBSD, we care about mitigation. Useful software is never bug free, no matter how simple you make it. The goal is to ensure that once an attacker finds a bug, they can't use it to exploit the system. That doesn't mean 'they can't get root', because on a huge number of modern systems, from single-user laptops to single-service VMs, getting ambient authority for a single user can mean the same as getting root, when it comes to having access to the data that the user cares about. Jails, Capsicum, and so on are all about enforcing the principle of least privilege, so when a bug is discovered the attacker only gets control of a sandbox with no access to the rest of the user's data. This used to be something that OpenBSD people cared about.
OpenBSD's goal is security above all else.
This phrasing implies that FreeBSD doesn't care about security, which is somewhat misleading. Take a look at the number of papers at Oakland that use FreeBSD as a base, and the number of those whose results we've subsequently accepted into the base system.
For example, FreeBSD has a modular mandatory access control framework with pluggable policies. This is used by some downstream projects for things like the OS X / iOS sandboxing subsystem and the JunOS code signing infrastructure. It's also used for the type enforcement mechanism in FreeBSD and a few other things.
FreeBSD has had jails for a long time, which are an easy way of creating a completely isolated environment (distinct root user, filesystem hierarchy, and so on) sharing the same kernel, so much cheaper than a VM. With ZFS and cheap clones, it's easy to keep a stock FreeBSD install around, clone it, and have a new jail up and running in a few seconds. Poudriere (French for 'powder keg'), the package-building program, uses this mechanism to create a pristine environment for building each package, allowing untrusted intermediate binaries to be run on the build server without giving them any access to the host environment.
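A sketch of that clone-and-jail workflow (the pool, dataset, and jail names here are made up; assumes root on a FreeBSD system with ZFS):

```shell
# Keep a pristine base install around as a snapshot...
zfs snapshot tank/jails/base@clean

# ...and clone it for a new jail. The clone is copy-on-write,
# so this completes in seconds and initially uses almost no space.
zfs clone tank/jails/base@clean tank/jails/build1

# Start a jail rooted in the clone, with its own hostname and address.
# (command= must come last; everything after it is the command to run.)
jail -c name=build1 path=/tank/jails/build1 \
    host.hostname=build1.example.org ip4.addr=10.0.0.2 \
    command=/bin/sh /etc/rc

# When the untrusted build is done, throw the whole environment away.
jail -r build1
zfs destroy tank/jails/build1
```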
In FreeBSD 10, Capsicum is enabled by default in the kernel, and a number of base system utilities are sandboxed out of the box. Capsicum sandboxing means that things run without the ability to create file descriptors and must ask a more-privileged daemon to pass them the ones that they need. For example, tcpdump (which doesn't have the best security record in the world and is often used to inspect malicious network traffic) runs in compartmentalised mode and just delegates DNS lookups to a daemon that runs outside of the sandbox. The daemon only accepts records in fixed formats (IP addresses) and returns a string, so it has a very small attack surface.
FreeBSD supports both POSIX and NFSv4 ACLs, for people who want finer-grained filesystem security. With 10 (and, I think, 9.2?), the audit log daemon is complemented by auditdistd, which distributes cryptographically signed audit logs across machines, so you can detect suspicious activity even when an attacker on one machine tries to scrub its local logs.
The binary packages that we're distributing are signed, by a key that is distributed via a different mechanism to the packages, and the pkg audit command allows you to display known vulnerabilities in any of your installed third-party packages.
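For example, checking your installed packages against the vulnerability database is one command:

```shell
# Fetch the latest vulnerability database (-F) and report any
# installed third-party packages with known vulnerabilities.
pkg audit -F
```

This is cheap enough to run from a nightly cron job, which the default periodic(8) configuration will do for you.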
ZFS is pretty solid now, but it had some interesting teething problems. The rule of thumb that filesystems people give me is that it takes ten years for a new filesystem to become solid. ZFS is now nine years old, so it's very close...
In early versions, there was a problem where any write, including a delete, would initially cause the space usage to grow (a delete would then shrink it again), because every change is copy-on-write: the new blocks must be written before the old ones can be freed. If you let a pool (almost) completely fill up, you could get into a state where there wasn't enough free space to perform the copy-on-write deletion, and the only way to recover was to copy all of the data off, recreate the pool, and copy it back. You didn't lose data, but you did lose the ability to write to the pool. There have also been some cases where data loss (including of the entire pool) could happen, although those were far rarer.
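A common defensive trick from that era (the dataset names here are made up) was to park a small reservation that you could release in an emergency:

```shell
# Create a dataset that is never otherwise used and reserve space in it,
# so the pool can never be filled completely by the other datasets.
zfs create tank/spare
zfs set reservation=1G tank/spare

# If the pool ever fills up, release the reservation to regain enough
# free space for copy-on-write deletes to succeed again.
zfs set reservation=none tank/spare
```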
For the P3, I'd recommend using freebsd-update and pkg, unless you really need a custom kernel. You can also do make toolchain on a faster machine, copy your obj tree across, and use the XDEV stuff if you really need to build kernel and world on it.
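Roughly, the binary-only route looks like this (assuming a stock GENERIC kernel and the official package repository, so nothing has to be compiled on the slow machine):

```shell
# Binary updates to the base system:
freebsd-update fetch
freebsd-update install

# Binary updates to third-party packages:
pkg update
pkg upgrade
```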
The en0 becoming xn0 thing surprised me too, when I switched from a GENERIC kernel to a XENHVM one on 9.0. With 10.0, I think we're compiling the Xen HVM drivers into the GENERIC kernel, so you'll get the new devices. In the Xen block device drivers, I think there's some extra magic so that they'll appear with a different device node name if the device was previously used with the emulated devices, but that isn't present in the network drivers, which I think is a shame.
For the Geode, it shouldn't be an issue since September. Prior to that, clang would emit long nops for some things that would break the Geode.
Some guys at Spectra Logic and Citrix have been pushing Xen PVH support into FreeBSD recently. It didn't make it for 10.0, but hopefully should be in 10.1. In PVH mode, a guest boots as if it is a PV guest, with Xen's entry point and event channels instead of interrupts, but then uses the hardware page tables and either PCI pass-through devices or PV devices. This is important, because PVH guests can also run as dom0, if they implement the management interfaces (which are mostly userspace and shared across platforms). Getting PVH working in domU is much more effort than going from a working domU PVH to a working dom0 PVH.
That said, with bhyve and the new vps work (shared kernel like jails, but with some nifty features like live migration between hosts), there's a lot less of a reason for me to care about Xen, in relation to FreeBSD.
The distinction between Type 1 and Type 2 hypervisors doesn't really make sense anymore, and is largely marketing. For example, Xen running on a system with no IOMMU is still a Type 1 hypervisor, but the whole of the dom0 kernel and a chunk of its userspace are in the trusted computing base (TCB), where a compromise can extend to every other domain. In the case of Linux's KVM and bhyve, the host kernel (and anything that can poke kernel memory, or exploit kernel vulnerabilities) are in the TCB.
The distinction makes more sense for certain ARM, SPARC, and PowerPC hypervisors, where the hardware handles most of the virtualisation (devices all support multiple isolated command queues and run behind an IOMMU), so a small Type 1 hypervisor can be a few tens of thousands of lines of code and that's the entire shared TCB. Guests run their own device drivers, the hypervisor just handles the control plane.
The important distinction is between hypervisors that do software partitioning of devices (either via emulation or paravirtualised devices) and ones that don't. Xen, KVM, and bhyve all fall into the former category and so have a TCB that's several MB of object code.
Dinosaurs aren't extinct. They've just learned to hide in the trees.