Would you argue that if a Microsoft (or other vendor) SSL implementation were used by most of the world's web servers, this would have been less likely to happen? As far as I know, there's no reason to think that any other implementation, open or closed, would be any more immune to such problems. There is little or no evidence that closed-source software is generally more reliable, or that substantial effort goes into auditing it.
If you're arguing that it's bad that such a high percentage of the world's web servers use the same software, I might agree, but that is completely orthogonal to whether that software is open or closed.
Heartbleed is a perfect example of why software should be written in "safe" languages, which can protect against buffer overruns and over-reads, rather than in unsafe languages like C and C++.
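To make that concrete, here is a minimal sketch of the Heartbleed pattern (hypothetical names, not OpenSSL's actual code): the peer supplies a payload and a claimed length, and the server echoes back that many bytes. In C, a memcpy trusts the claimed length and reads past the real payload into adjacent memory; in a bounds-checked language such as Rust, the same mistake is caught at run time instead of leaking data.

    // Minimal sketch of a Heartbleed-style echo handler (hypothetical, not OpenSSL code).
    // Rust's slice indexing is bounds-checked, so an oversized claim panics
    // instead of silently reading and returning adjacent memory.
    fn echo_payload(payload: &[u8], claimed_len: usize) -> Vec<u8> {
        payload[..claimed_len].to_vec() // panics if claimed_len > payload.len()
    }

    fn main() {
        let payload = b"bird";                    // real payload: 4 bytes
        let _ok = echo_payload(payload, 4);       // fine
        let _bad = echo_payload(payload, 64_000); // caught at run time, not exploitable
    }

The equivalent memcpy(out, payload, claimed_len) in C compiles and runs without complaint; that is exactly the class of bug the run-time checks exist to catch.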
Of course, the problem is that if you try to distribute open-source software written in a safe language, everyone bitches and whines about how they don't have a compiler for that language, and how run-time checking slows the software down by 10%. Personally, I'd rather have more reliable software that runs 10% slower than less reliable software that runs faster. It's also crazy to turn off the run-time checks "after the software is debugged", as if the debugging process ever succeeded in finding all the bugs. As C.A.R. Hoare famously observed in 1973, "What would we think of a sailing enthusiast who wears his lifejacket when training on dry land, but takes it off as soon as he goes to sea?"
The "with enough eyes" argument, and "if programmers were just more careful" arguments don't justify continued widespread use of unsafe languages. Granted, safe languages don't eliminate all bugs, but they eliminate or negate the exploit value of huge classes of bugs that are not just theoretical, but are being exploited all the time.
I keep hoping that after enough vulnerabilities based on buffer overruns, bad pointer arithmetic, and the like are reported and cost people real money, things will change; but if Heartbleed doesn't make a good enough case for that, I despair of it ever happening.
The ability to undo a problematic install should be mandatory, but in too many instances it is not. That's because software developers always operate under the assumption that the latest version is the greatest version, when it may not be. This is especially true in the smartphone and tablet world: there is no rollback to be had for anything on iOS or Android. Until the day comes when software developers start releasing perfectly functioning, error-free code, we need the ability to go backwards with all software.
Asking around among our tech-savvy friends, though, no one has a good answer to the question 'How would you back up 20 TB of data?' It's not like you can just plug in an external drive, and any cloud service would be terribly expensive. Blu-Ray discs can hold a lot of data, but that's a lot of time (and money) spent burning discs you'll likely never need. Tape drives are another possibility, but are they right for this kind of problem? I don't know. There might be something else out there, but I still have no feasible solution.
So I ask fellow Slashdotters: for a home user, how do you back up 20 TB of data? Even Amazon Glacier is pretty pricey for that much data.
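To put rough numbers on "pretty pricey", here is a back-of-the-envelope sketch; the per-GB rate and disc capacity below are assumptions for illustration, not current quotes.

    // Back-of-the-envelope cost sketch; the figures are assumptions, check current pricing.
    fn main() {
        let data_gb = 20_000.0;          // 20 TB
        let glacier_per_gb_month = 0.01; // assumed ~$0.01/GB-month
        let bluray_capacity_gb = 25.0;   // single-layer BD-R

        println!("Glacier: ~${:.0}/month", data_gb * glacier_per_gb_month);       // ~$200/month
        println!("Blu-Ray: ~{:.0} discs", (data_gb / bluray_capacity_gb).ceil()); // ~800 discs
    }

Under those assumptions that's on the order of $200 a month just to park the data, or roughly 800 discs to burn and catalog, which is why neither of the "obvious" options scales to 20 TB for a home user.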
So basically, if they start building the uranium enrichment plants now, they might have a working nuke in 10 to 20 years.
There's an existence proof that it can be done in four years, if someone is willing to devote sufficient resources to it.
...and again and again, while I wait in the queue.