Let's start with the premise of TFA, which cites the Ars article that was covered here a few days ago and was complete nonsense about the new random number infrastructure in FreeBSD. We are not moving away from using the hardware random number generator directly; we have never used the hardware random number generator directly. The new code that the Ars article was talking about allows the PRNG to be switched easily. In 10 we're shipping both Fortuna and Yarrow, and the infrastructure allows more to be added. The code has been reviewed by two cryptographers that I know of, and possibly others. Neither the old nor the new implementation is vulnerable to the attack against random number generators that was published a couple of months ago (Linux was the subject of the paper; I'm not sure whether OpenBSD was vulnerable).
If Theo is going to make such remarks as this, he should think more carefully first:
"Basically, it is 10 years of FreeBSD stupidity. They don't know a thing about security. They even ignore relevant research in all fields, not just from us, but from everyone."
He'd be advised to take a look at the proceedings of the IEEE Symposium on Security and Privacy over the last 10 years and see how many papers describe techniques that were both originally implemented on FreeBSD and are now part of the default install. Let's take a look at the two systems from a security perspective. Both FreeBSD and OpenBSD use SSP and a non-executable stack by default, so I'll skip those. To begin with, OpenBSD features missing from FreeBSD:
W^X enforcement. Definitely a nice idea, but it breaks some things (JITs mostly). The default memory map in FreeBSD is W^X, but it is possible to explicitly mmap() memory both writeable and executable. It's generally considered a bad idea though, and we don't ship any code that allows it. We permit third-party code to shoot itself in the foot if it really wants to and provide mitigation techniques to reduce the risk.
Then there's ASLR. This is a pretty nice technique, which is currently not implemented on FreeBSD. We do support PIE, so it would not be a horrendously difficult thing to add, but current implementations (including OpenBSD) use a surprisingly small amount of entropy in the address layout and so don't provide as much mitigation as you'd hope (which, of course, Theo knows, because he's very familiar with 'relevant research'). This is especially true on 32-bit systems.
And that's it for OpenBSD. Well, unless you want to count , but since that's vulnerable to a timing attack (still not fixed) that was published at the USENIX Workshop on Offensive Technologies, and Theo is aware of all 'relevant research' in security, it can't really still be there.
Now let's look at FreeBSD security mechanisms:
First up, jails. Jails are somewhere between a chroot and a VM: a shared kernel, but all of the global namespaces (filesystems, IP addresses, users) are separated, so you can completely isolate a service, such as a web server, from the rest of the system. Scripts like ezjail in the ports tree make it easy to set up lightweight service jails.
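For a flavour of how little configuration a service jail needs, here is a minimal sketch of an /etc/jail.conf entry (the jail name, path, and address are made up for illustration):

```
# /etc/jail.conf -- hypothetical service jail
www {
    path = "/usr/jails/www";
    host.hostname = "www.example.org";
    ip4.addr = 192.0.2.10;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With an entry like that in place, the jail is started with jail -c www and listed with jls.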
Then there's the MAC framework, which allows modular access control policies. This is used by a couple of FreeBSD derivatives: JunOS uses it to implement code signing, and OS X and iOS use it for application sandboxing. You can also use it for traditional type-enforcement policies (as in SELinux) and a variety of other things.
And then there's Capsicum, which adds a capability model on top of the traditional UNIX file handle model. A process can call cap_enter() and then can't create any new file descriptors; it can only use the ones it already has and receive others from a more-privileged process. A number of things in the base system now run in sandboxed mode by default, so a compromise in any of them is contained.
There's also some simple stuff, such as support for both POSIX and NFSv4 ACLs (OpenBSD has no ACL support). If you really care about security, then you probably also care about checking that your system is secure. FreeBSD provides auditd (which records auditing events from the kernel) and auditdistd (which distributes them in a cryptographically-secure, tamper-proof way to other machines). OpenBSD provides nothing comparable.
I could also talk about the fact that FreeBSD signs packages and distributes the signatures via a different mechanism from the package retrieval, whereas OpenBSD still distributes packages via FTP with no signatures. Or how FreeBSD provides the pkg audit command (run as part of the daily security check and emailed to root by default) to check for known vulnerabilities in installed packages, and provides updated packages throughout the life of a release, whereas OpenBSD requires you to compile everything from ports if there's a security advisory (great on that 200MHz Soekris board that's your firewall). But since that infrastructure is quite new in FreeBSD, I won't talk too much about it.
I'd take issue with your second point. All binary updates using freebsd-update are signed and that mechanism is used to distribute the signing keys for packages. When you do 'pkg install' on a recent FreeBSD system, it will bail if the packages don't match the signature. We also have a revocation system in place that allows us to easily revoke keys if the package building system is compromised. We just received a large grant from Google to work on package transparency, a mechanism akin to certificate transparency that allows you to validate not just that your packages are signed, but that they're the same packages everyone else is getting. We do have deterministic builds for the base system (they're needed for the binary update mechanism to work), but not currently for ports - that's something we're working on though, as it's a prerequisite for package transparency.
The authoritative repository is svn, but there are numerous git mirrors, and we did use them to validate svn after the compromise last year. svn is actually not that hard to audit, but cvs (which OpenBSD uses) is a nightmare - we gave up trying to audit it and just re-exported the cvs mirror from svn.
Did you know that with some kind of explosive (preferably one that you can remotely detonate) and some coins (easily available) you can probably kill or severely injure a lot more people than you can with a firearm? The ensuing explosion is like a frag grenade, except you can make it a lot bigger and more lethal. Bonus points for triggering it in a cafeteria or some other kind of eating place with lots of people.
As another poster said, this requires a lot more premeditation. A nail bomb is pretty easy to assemble with ingredients that are readily available in most industrialised nations, but doing so requires (at the very least) a few hours of work. If you want timed detonation, that's more thought and planning, and you need to be quite calm while building the bomb or you're likely to just blow your hands off. Most people, by the time they've even got as far as thinking through how they'd go about blowing up their school will have calmed down enough to realise that it's a bad idea.
In contrast, if a gun is readily available then you just need to pick it up and, while still angry, go back to the school and start shooting. You don't need to think hard about what you're doing. That's one of the rationales behind laws that require a 24-hour or 7-day period between ordering a gun and getting it: if you want to kill someone in cold blood, you'll find a way of doing it with or without a gun, but if you're thinking of doing it because you're stressed or angry, then there's a good chance you'll have changed your mind by the end of the cooling-off period. Of course, if you go ahead and buy the gun, there's nothing to stop you the next time you get stressed...
Oh, and it's not once a year, it's once a year on average, over four years. So if you work on a big project for 2-3 years and then get a flurry of papers out at the end, then that's fine too.
Now you've cost us a good book!
Wait, I thought TFA was about Charles Stross?