Don't forget the (accurate) presumption that securing one device well (the NAT device) is a lot easier than securing all of the internal devices equally well.
Your post was right, except for the word 'accurate'. Perimeter defences on networks really don't work, especially when you have things like Cisco phones, HP printers, and Dell management interfaces, with known and unpatched security vulnerabilities, all connected to the network. It only takes one person bringing a compromised machine inside the perimeter for an attacker to gain full control (and good luck getting rid of them once their botnet control software is running on your printer). With so many people using mobile phones on their home WiFi, the perimeter isn't even a perimeter any more, because the phone also allows connections over the mobile network.
Some people have a HUGE problem with its collection and storage by greedy, sleazy, single-minded corporations.
Greedy, yes. Sleazy... maybe; highly probable. Single-minded? In Google's case, I doubt it: they're too intelligent a bunch.
Yep, they'll pay the Russian mob under the table to do it illegally, through a subsidiary that funnels the money back to Google corporate. American companies do this all the time for tax evasion and patent lawsuits. They sign a settlement contract agreeing not to sue any more over additional patents; the patent troll then looks at those patents, opens up a secret shell subsidiary, and sues under that for the rest.
The ad companies will cry foul and have websites display messages about how the evil socialist EU regime is taking this website away: please email X to tell them to reverse this law, etc.
Since they're injecting malware and adware into Chrome by buying extensions, and are now circumventing Adblock Plus and making JavaScript fail to load when they detect blockers, I wouldn't put this past them.
Capsicum, POSIX and NFS4 ACLs are all about adding complexity to allow for greater administrative policy enforcement
This is almost true for ACLs. ACLs are no more expressive than standard UNIX permissions plus groups, but they are significantly simpler for implementing the same thing: you no longer need to create a group for every set of people who want to share things. This lets you leave your default at share-nothing and explicitly share the things that you need to share with the people that you need to share them with. The code for implementing ACLs is significantly less complex than the workarounds that you need in their absence if you want the same level of access control; and if you don't want the same level of access control, it's because you're fine with leaving things more widely readable than they need to be. Neither of those attitudes is good for security.
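As a toy sketch (plain Python, not any real filesystem's implementation; all names here are invented for illustration), the difference looks like this: with classic permissions, sharing a file with an arbitrary set of users means an administrator minting a group for exactly that set, while an ACL lets the owner list the users directly on the file:

```python
# Toy model contrasting classic owner/group/other checks with a
# per-file ACL. Dictionaries stand in for inode metadata.

def unix_readable(user, user_groups, f):
    """Classic UNIX check: owner class, then group class, then other."""
    if user == f["owner"]:
        return f["mode"][0] == "r"
    if f["group"] in user_groups:
        return f["mode"][1] == "r"
    return f["mode"][2] == "r"

def acl_readable(user, f):
    """ACL check: explicit per-user grants, default share-nothing."""
    return user in f.get("acl", ())

# To let alice and carol (and nobody else) read this file with plain
# UNIX permissions, an admin would have to create a group containing
# exactly {alice, carol}; with an ACL, the owner just lists them.
report = {"owner": "bob", "group": "staff", "mode": "r--",
          "acl": ("bob", "alice", "carol")}
```

The point the sketch makes is the administrative one from above: every distinct sharing set under plain UNIX permissions costs a group creation (usually a privileged operation), whereas the ACL default stays share-nothing and each grant is explicit.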
Capsicum is definitely not about adding complexity. The implementation adds an extra bitmask check on file accesses and restricts system calls to a whitelisted set. The total code changes in the kernel are very small and easy to audit (and have been audited by several groups). The code changes in userspace are far more significant. The sandboxing in Chromium, for example, takes six times as many lines of code on OpenBSD using chroot() as it does on FreeBSD using Capsicum, and offers less isolation (for example, the renderer processes on OpenBSD can create network sockets, so an image in an email that exploits a libpng or libjpeg vulnerability can phone home and send copies of all of your emails if you use webmail on OpenBSD; with Capsicum it can't). The privilege-separation code in OpenSSH is also cleaner and easier to audit when it uses Capsicum.
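The "extra bitmask check" can be sketched in a few lines. This is a toy model in Python, not the real Capsicum API (the CAP_* names only mirror its flavour); the real check lives in the FreeBSD kernel on every descriptor access:

```python
# Toy model of a per-file-descriptor capability rights mask.
# The names are invented to echo Capsicum; this is not its API.
CAP_READ, CAP_WRITE, CAP_SEEK = 0x1, 0x2, 0x4

class CapFD:
    def __init__(self, rights):
        # In Capsicum, rights on a descriptor can only ever be
        # narrowed after this point, never widened.
        self.rights = rights

    def check(self, needed):
        # The extra check on each access: the requested rights must
        # be a subset of the descriptor's mask.
        if needed & ~self.rights:
            raise PermissionError("capability rights violated")

fd = CapFD(CAP_READ | CAP_SEEK)
fd.check(CAP_READ)        # permitted: READ is in the mask
# fd.check(CAP_WRITE)     # would raise PermissionError
```

A single subset test on a bitmask is cheap and easy to reason about, which is the sense in which the kernel-side change is small and auditable.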
In OpenBSD, security means eliminating bugs, so that the system's most basic promises hold true.
In FreeBSD, we care about mitigation. Useful software is never bug-free, no matter how simple you make it. The goal is to ensure that once an attacker finds a bug, they can't use it to exploit the system. That doesn't mean 'they can't get root': on a huge number of modern systems, from single-user laptops to single-service VMs, getting the ambient authority of a single user is as good as getting root when it comes to access to the data that the user cares about. Jails, Capsicum, and so on are all about enforcing the principle of least privilege, so that when a bug is discovered the attacker only gets control of a sandbox with no access to the rest of the user's data. This used to be something that OpenBSD people cared about.
OpenBSD's goal is security above all else.
This phrasing implies that FreeBSD doesn't care about security, which is somewhat misleading. Take a look at the number of papers at Oakland (IEEE S&P) that use FreeBSD as a base, and the number of those that we've incorporated into the base system over time.
For example, FreeBSD has a modular mandatory access control framework with pluggable policies. This is used by some downstream projects for things like the OS X / iOS sandboxing subsystem and the JunOS code signing infrastructure. It's also used for the type enforcement mechanism in FreeBSD and a few other things.
FreeBSD has had jails for a long time, which are an easy way of creating a completely isolated environment (distinct root user, filesystem hierarchy, and so on) sharing the same kernel (so much cheaper than a VM). With ZFS and cheap clones, it's easy to keep a stock FreeBSD install around, clone it, and have a new jail up and running in a few seconds. Poudriere (French for powder keg), the package-building program, uses this mechanism to create a pristine environment for building each package, allowing untrusted intermediate binaries to be run on the build server without giving them any access to the host environment.
In FreeBSD 10, Capsicum is enabled by default in the kernel, and a number of base system utilities are sandboxed out of the box. Capsicum sandboxing means that processes run without the ability to create new file descriptors and must ask a more-privileged daemon to pass them the ones that they need. For example, tcpdump (which doesn't have the best security record in the world and is often used to inspect malicious network traffic) runs in compartmentalised mode and just delegates DNS lookups to a daemon that runs outside of the sandbox. The daemon only accepts records in fixed formats (IP addresses) and returns a string, so it has a very small attack surface.
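The broker pattern behind that tcpdump example can be sketched portably. This is a conceptual Python stand-in, not FreeBSD's actual helper: the lookup table is fake, and the point is that the only thing crossing the sandbox boundary is a strictly validated hostname in one direction and a plain address string in the other:

```python
import re

# Fake lookup table standing in for real DNS resolution; the helper
# daemon in FreeBSD does a real lookup outside the sandbox.
_FAKE_DNS = {"localhost": "127.0.0.1"}

# Strict hostname shape: the narrow, fixed request format is what
# keeps the privileged side's attack surface small.
_HOSTNAME = re.compile(r"[A-Za-z0-9]([A-Za-z0-9.-]{0,251}[A-Za-z0-9])?")

def broker_resolve(request: str) -> str:
    """Privileged helper: validate the request, return only a string."""
    if not isinstance(request, str) or not _HOSTNAME.fullmatch(request):
        raise ValueError("malformed request rejected at the boundary")
    addr = _FAKE_DNS.get(request)
    if addr is None:
        raise KeyError("unknown host")
    return addr
```

Even if the sandboxed process is fully compromised, all it can do through this channel is ask well-formed name-lookup questions; it cannot smuggle arbitrary data or syscalls to the privileged side.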
FreeBSD supports both POSIX and NFSv4 ACLs, for people who want finer-grained filesystem security. With 10 (and, I think, 9.2?), the audit log daemon is complemented by auditdistd, which allows you to distribute cryptographically signed audit logs across machines, allowing you to detect suspicious activity.
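The value of cryptographically signed log distribution can be illustrated with a small sketch. To be clear, this is not auditdistd's actual protocol or format (the key handling and record shape here are invented); it just shows why a signature makes shipped-off logs tamper-evident:

```python
import hmac
import hashlib

# Invented shared key for illustration; a real deployment would
# provision and protect key material properly.
KEY = b"shared-secret-between-audit-hosts"

def sign_record(record: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag to one audit record."""
    return hmac.new(KEY, record, hashlib.sha256).hexdigest().encode()

def verify_record(record: bytes, tag: bytes) -> bool:
    """Constant-time check that a received record is untampered."""
    return hmac.compare_digest(sign_record(record), tag)
```

An attacker who compromises the originating machine after the fact can delete its local logs, but can't forge or silently alter the signed copies already distributed to other machines without the key.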
The binary packages that we're distributing are signed by a key that is distributed via a different mechanism from the packages themselves, and the pkg audit command lets you display known vulnerabilities in any of your installed third-party packages.
ZFS is pretty solid now, but it had some interesting teething problems. The rule of thumb that filesystems people tell me is that it takes ten years for a new filesystem to become solid. ZFS is now nine years old, so it's very close...
In early versions, there was a problem where any write, including a delete, would first cause the filesystem to grow (for a delete, it would then shrink). If you let your filesystem (almost) completely fill up, you could get into a state where there wasn't enough space to perform the copy-on-write deletion, and the only way to recover was to copy all of the data off, recreate the pool, and copy it back on. You didn't lose data, but you did lose the ability to write to the pool. There have also been some cases where data loss (including of the entire pool) could happen, although those are far rarer.
For the P3, I'd recommend using freebsd-update and pkg, unless you really need a custom kernel. You can also do make toolchain on a faster machine, copy your obj tree across, and use the XDEV stuff if you really need to build kernel and world on it.
The en0 becoming xn0 thing surprised me too, when I switched from a GENERIC kernel to a XENHVM one on 9.0. With 10.0, I think we're compiling the Xen HVM drivers into the GENERIC kernel, so you'll get the new devices. In the Xen block device drivers, I think there's some extra magic so that they'll appear with a different device node name if the device was previously used with the emulated devices, but that isn't present in the network drivers, which I think is a shame.
For the Geode, it shouldn't have been an issue since September. Prior to that, clang would emit long NOP instructions in some cases, which would break on the Geode.
Old programmers never die, they just hit account block limit.