As to the codebase needing auditing, we had both svn and git mirrors that allowed the entire history to be checked. We also had copies of checksums of releases and so all of these things were verified. Bringing CVS back online took a bit longer, as CVS easily let us verify the top of the tree, but not the history. I think we ended up regenerating the entire CVS history from svn, and took the opportunity to officially remove support for CVS.
Are there still vulnerabilities? Almost certainly. Any codebase more than a few dozen lines long will contain bugs, and some of them are exploitable if you're sufficiently clever. That's why a lot of the focus in 10.0 has been on mitigation techniques. The auditdistd framework lets you easily deploy auditing for an entire site. Capsicum makes it relatively easy to compartmentalise applications, and several system daemons use it out of the box. So do some of the standard filter utilities: for example, even if you run uniq as root, once it has finished parsing its command-line arguments it can no longer access any files on your system except the ones you told it to read.
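For anyone curious what that pattern looks like in code, here is a minimal sketch of a Capsicum-sandboxed filter on FreeBSD. The file name, the exact rights, and the error handling are illustrative, not taken from the real uniq(1) source:

    /*
     * A minimal sketch of the filter-utility pattern described above, on a
     * FreeBSD system with Capsicum.  The file name and the exact rights are
     * illustrative, not taken from the real uniq(1) source.
     */
    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
            char buf[4096];
            ssize_t n;

            if (argc != 2)
                    errx(1, "usage: %s file", argv[0]);

            /* Open the one file we were told to read. */
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0)
                    err(1, "open %s", argv[1]);

            /* Restrict that descriptor to read-only operations. */
            cap_rights_t rights;
            cap_rights_init(&rights, CAP_READ, CAP_FSTAT);
            if (cap_rights_limit(fd, &rights) < 0)
                    err(1, "cap_rights_limit");

            /*
             * Enter capability mode.  From here on, even when running as
             * root, the process cannot open anything else on the system; it
             * can only use the descriptors it already holds.
             */
            if (cap_enter() < 0)
                    err(1, "cap_enter");

            /* The actual filtering loop would go here. */
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    write(STDOUT_FILENO, buf, (size_t)n);
            return (0);
    }

The ordering is the whole trick: acquire everything you need while you still have ambient authority, then drop into capability mode before touching untrusted input.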
They've literally deprecated fork, because they can't be bothered to make it work reliably with Core Framework
fork() deserves to be deprecated. The API originates with old machines that could hold a single process in core at a time. When you wanted to switch processes, you wrote the current process out and read the new one in. In this context, fork was the cheapest possible way of creating a new process, because you just wrote out the current process, tweaked the process control block, and continued executing. On a modern machine, it requires lots of TLB churn as you mark the entire process as copy-on-write (including TLB shootdowns, which require IPIs, in a multithreaded program running on multiple cores). And then, in most cases, it's immediately followed by exec(), so the process you've just created is replaced by another one and you go through the whole sequence again to undo the CoW markings.
Not only is fork() a ludicrously inefficient way of creating a process on a modern machine, it's also incredibly difficult to use correctly. When you fork(), the whole address space and all of your file descriptors are copied, but only the calling thread survives in the child, so any locks that other threads were holding stay locked there forever. You need pthread_atfork() handlers for every subsystem that takes locks or does buffered I/O, and the child can safely do very little between the fork() and the exec(). You also need to ensure that you close any file descriptors that you don't want propagated to the child, which is nontrivial if you have other threads opening and closing files in the background (O_CLOEXEC helps here, but do you remember to use it everywhere?).
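To make that concrete, here is a minimal sketch of the fork()-then-exec() pattern on a POSIX system. The spawned program (/bin/ls) and the log file path are just placeholders:

    /*
     * A minimal sketch of the fork()+exec() pattern discussed above.  The
     * spawned program (/bin/ls) and the log file path are placeholders.
     */
    #include <err.h>
    #include <fcntl.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
            /*
             * Open descriptors with O_CLOEXEC so they are closed
             * automatically across the exec(), even if another thread
             * forks between the open() and any later fcntl() call.
             */
            int logfd = open("/tmp/example.log",
                O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0600);
            if (logfd < 0)
                    err(1, "open");

            pid_t pid = fork();
            if (pid < 0)
                    err(1, "fork");
            if (pid == 0) {
                    /*
                     * Child: in a multithreaded parent, only
                     * async-signal-safe calls are safe between fork()
                     * and exec().
                     */
                    execl("/bin/ls", "ls", "-l", (char *)NULL);
                    _exit(127);     /* exec failed */
            }

            /* Parent: reap the child; logfd stays open here. */
            int status;
            waitpid(pid, &status, 0);
            close(logfd);
            return (0);
    }

Even this toy version only works cleanly because every descriptor was opened with O_CLOEXEC; miss one anywhere in a large program and it leaks into every child you spawn.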
Oh, and posix_spawn() isn't much better. It's designed to be possible to implement on top of existing APIs and so ends up being largely useless without non-standard additions. It doesn't, for example, provide a mechanism to say 'close all file descriptors in the child, except for these ones'.
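For comparison, here is a minimal sketch using posix_spawn() with file actions, again on a POSIX system; the descriptors being closed and the spawned program are illustrative. It shows the limitation just described: every unwanted descriptor has to be enumerated by hand, because there is no portable "close everything except these" action:

    /*
     * A minimal sketch of posix_spawn() with file actions.  The spawned
     * program and the descriptors being closed are illustrative.
     */
    #include <err.h>
    #include <fcntl.h>
    #include <spawn.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int
    main(void)
    {
            /* Stand-ins for descriptors a real program would have open. */
            int a = open("/dev/null", O_RDONLY);
            int b = open("/dev/null", O_RDONLY);
            if (a < 0 || b < 0)
                    err(1, "open");

            posix_spawn_file_actions_t fa;
            posix_spawn_file_actions_init(&fa);

            /*
             * Individual descriptors can be closed or dup2'd in the child,
             * but there is no way to say "close everything else": every
             * descriptor you do not want to leak must be listed explicitly.
             */
            posix_spawn_file_actions_addclose(&fa, a);
            posix_spawn_file_actions_addclose(&fa, b);

            char *child_argv[] = { "ls", "-l", NULL };
            pid_t pid;
            int error = posix_spawn(&pid, "/bin/ls", &fa, NULL,
                child_argv, environ);
            if (error != 0)
                    errx(1, "posix_spawn failed with error %d", error);

            int status;
            waitpid(pid, &status, 0);
            posix_spawn_file_actions_destroy(&fa);
            return (0);
    }
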
When an HDD fails, you can still get the data off of it. It's expensive, but it can be done.
At current prices, you can buy several TB of flash for the cost of recovery on a single HDD (which may or may not succeed, depending on the failure mode). If your data is important enough to you to even consider that, then you should probably be backing it up regularly...
2: Flash will always be 10x the cost of hard drives. In other words, Flash won't overtake hard drives on price.
That's assuming that hard drives keep getting bigger and cheaper. The amount of R&D money required for each generation of improvement (in most technology) goes up, but the spending for HDDs has gone down as manufacturers see that they're hitting diminishing returns. The number of people who will pay for 4TB disks is lower than the number who will pay for 2TB, which is lower than the number who will pay for 1TB, and so on. For a lot of users, even 500GB is more than they will need for the lifetime of a disk.
The minimum cost of building an SSD is lower than the minimum cost of building an HDD. Currently, the smallest disks I can find are about 300GB, and they cost about as much as a 64GB SSD. If you bring an 8TB disk to market now, you're betting that enough people will buy it at a premium price to recoup your R&D expenses before SSDs (Flash or some other technology) pass it in capacity. But a lot of the people who traditionally bought the high-end disks are now buying SSDs, and they care more about latency and throughput than capacity. If you show them an insanely expensive disk that gives 10x the capacity of the current best, most of them will say 'meh,' but if you show them a drive that can do 10x the IOPS then they'll ask how much and how soon you can deliver. That gives a big incentive to concentrate R&D spending on SSDs.