Slide 28 -- I'm not particularly clear on why you would want ASLR or DEP to be configurable -- that just opens another avenue of attack. It should be always on, for every process, all the time, to be meaningfully effective.
It's unlikely that any consumer OS will ship with these protections always on. By default, both OS X and Windows 7 apply ASLR and NX protections only to binaries that opt in. The difference is that on Windows you can force these protections onto binaries built by legacy compilers and linkers. That will often crash the process, but in an enterprise environment you might prefer crashing old programs to letting somebody run Firefox 2, for example. This would be a simple fix for OS X, and I wouldn't be shocked if Apple slipped it into a future patch quietly as a sysctl.
Slide 38 -- you keep calling the attack on the Keychain credential store a "brute force," but it isn't -- it's a simple social engineering attack to get a password. Unfortunately the Keychain keeps (encrypted) passwords in the clear rather than hashes only, but this is so users don't forget their passwords.
There are a couple of issues getting mixed together here. One way to escalate privilege from a sandboxed, low-rights process would be a social engineering attack using an escalation prompt, as we showed. The keychain offers another option, because the key that encrypts it is derived solely from the user's password. The keychain file is readable from inside the sandbox, so an attacker can copy it and send it off-site for a brute-force attack. The derivation is non-trivial to brute-force (1,000 rounds of seeded MD5) but is not out of reach for state-sponsored attackers, especially when the user has a weak password. So the keychain is useful to us not only as a repository of network passwords, but as a decryption oracle that can be cracked off-site (in a basement in Beijing, say, cough...).
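To make the oracle concrete, here is a minimal sketch of an off-site brute force against a keychain-style KDF. The 1,000 rounds of seeded MD5 match the description above, but the function names, salt handling, and file layout are illustrative, not Apple's actual format:

```python
import hashlib

def derive_key(password: str, salt: bytes, rounds: int = 1000) -> bytes:
    """Iterated, salted ("seeded") MD5 -- illustrative of the keychain KDF."""
    digest = salt + password.encode()
    for _ in range(rounds):
        digest = hashlib.md5(digest).digest()
    return digest

def offline_brute_force(target_key: bytes, salt: bytes, wordlist):
    """Once the keychain file has been exfiltrated, candidate passwords
    can be tried at full hardware speed -- no OS rate limiting, no
    lockout, no audit trail on the victim's machine."""
    for candidate in wordlist:
        if derive_key(candidate, salt) == target_key:
            return candidate
    return None
```

The point is not the specific algorithm but the threat model: anything an attacker can carry away and attack offline is only as strong as the weakest password that feeds it.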
Our recommendation to Apple was to derive the user's keying material partly from the user's password and partly from a machine-specific key stored somewhere only root can read. This would at least prevent low-rights and sandboxed processes from using the keychain as an offline oracle, although it would likely break compatibility with downlevel versions of Migration Assistant.
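One way that recommendation could look, sketched with standard primitives (HMAC-SHA256 here stands in for whatever KDF Apple might actually choose; the names are hypothetical):

```python
import hashlib
import hmac

def mixed_key(user_password: str, machine_key: bytes) -> bytes:
    """Derive the keychain key from BOTH the user's password and a
    machine-specific secret readable only by root. An attacker who
    exfiltrates just the keychain file from a sandboxed process no
    longer has enough material to brute-force it offline."""
    user_material = hashlib.sha256(user_password.encode()).digest()
    return hmac.new(machine_key, user_material, hashlib.sha256).digest()
```

The design choice is what matters: because the machine key never leaves the box, a stolen keychain file alone is useless, and the brute force has to happen on the victim's machine where it can be rate-limited and observed.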
Slide 53 -- "Modify existing binaries and services, which breaks signing but is generally not noticed" -- maybe in your shop, pal, not mine.
How do you regularly check whether system binaries have been modified? Do you use Tripwire? There is no equivalent technology built into OS X, so we pointed out that one way to persist malware is to modify parts of the system that are already running. This is in no way an OS X-specific issue, although the lack of kernel extension signing makes it a bit more of a problem than on Windows. (That said, state-sponsored hackers have already demonstrated a propensity for stealing Authenticode certificates from hardware makers, so driver signing isn't a huge help on Windows either.)
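For readers without Tripwire, a minimal sketch of the kind of baseline-and-verify check we mean (SHA-256 over a list of binaries; the helper names are ours, and a real tool would also have to protect the baseline itself from tampering):

```python
import hashlib
import os

def hash_file(path: str) -> str:
    """SHA-256 of a file's on-disk contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record known-good hashes for a set of system binaries."""
    return {p: hash_file(p) for p in paths}

def check_baseline(baseline):
    """Return binaries that have changed (or vanished) since baselining."""
    modified = []
    for path, expected in baseline.items():
        if not os.path.exists(path) or hash_file(path) != expected:
            modified.append(path)
    return modified
```

Note the limitation implied in the slide: this only catches modifications to files on disk. A backdoor patched into a process that is already running never touches the baseline.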
Slide 76 -- "Run your computers as little islands on a hostile network" -- FTFY
I disagree with this correction and with your summary of our work. Our conclusion is that Apple has evened the score with Windows on anti-exploit technologies and has made it much easier for its ISVs to use the OS's sandboxing capabilities. We also concluded that it is possible to build a secure, managed Windows network that uses integrated authentication mechanisms to provide access to network services, although most organizations will not be ready to take the back-compat hit required to do it correctly. Finally, we concluded that it is currently impossible to build a secure network using OS X and OS X Server, and that any use of Apple-proprietary protocols makes credential-stealing and network-escalation attacks easier than they should be.
The TL;DR is that Apple machines are more secure alone, and Windows machines are more secure when connected and managed.