Heh. Haha. BWAHAHAHAHAHAHAH!
Not really. Most of the thousands of projects out there have at most a few core developers, plus outsiders who occasionally submit patches to scratch specific itches. There is no "A team" scouring all open source for vulnerabilities; we know that from the simple fact that such vulnerabilities most certainly do exist as innocent bugs, and no such team has reported them.
To illustrate the point: the Linux kernel is developed by armies of smart people, yet an automated tool found a laundry list of shit that had been around for years and that nobody noticed.
First, from the very report that you linked to:
The results show that the number of defects detected by the Coverity analysis system has decreased from over 2000 to less than 1000 while, during the same period of time, the source code has quadrupled in size and the power of Coverity's detection capabilities has increased markedly. We conclude using this data that the Linux kernel is a robust, secure system that has matured significantly.
You want a real eye opener? Check out Coverity's current press release:
Code quality for open source software continues to mirror that of proprietary software, and both continue to surpass the accepted industry standard for good software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality. Coverity's analysis found an average defect density of 0.69 for open source software projects that leverage the Coverity Scan service, and an average defect density of 0.68 for proprietary code developed by Coverity enterprise customers. Both have better quality as compared to the accepted industry standard defect density for good quality software of 1.0. This marks the second consecutive year that both open source code and proprietary code scanned by Coverity have achieved defect density below 1.0.
Linux remains a benchmark for quality. Since the original Coverity Scan report in 2008, scanned versions of Linux have consistently achieved a defect density of less than 1.0, and versions scanned in 2011 and 2012 demonstrated a defect density below 0.7. In 2011, Coverity scanned more than 6.8 million lines of Linux code and found a defect density of 0.62. In 2012, Coverity scanned more than 7.4 million lines of Linux code and found a defect density of 0.66. At the time of this report, Coverity scanned 7.6 million lines of code in Linux 3.8 and found a defect density of 0.59.
While static analysis has long been cited for its potential to improve code quality, there have been two significant barriers to its adoption by development organizations: high false positive rates and a lack of actionable guidance to help developers easily fix defects. Coverity has eliminated both of these obstacles. The 2012 Scan Report demonstrated a false positive rate for Coverity static analysis of just 9.7 percent in open source projects. Additionally, the 2012 report noted more than 21,000 defects were fixed in open source code, more than the combined total of defects fixed from 2008 to 2011.
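To put those density numbers in perspective, defect density converts directly into absolute defect counts. Here's a quick back-of-the-envelope calculation from the Linux figures quoted above (a rough sketch only; the real Scan reports count just the defect classes Coverity's checkers look for):

```python
# Defect density = defects per 1,000 lines of code, so the implied
# absolute defect count is simply (lines / 1000) * density.
def estimated_defects(lines_of_code: int, density: float) -> int:
    """Rough defect count implied by a given defect density."""
    return round(lines_of_code / 1000 * density)

# Figures quoted from the Coverity Scan press release:
print(estimated_defects(6_800_000, 0.62))  # 2011 Linux scan -> 4216
print(estimated_defects(7_400_000, 0.66))  # 2012 Linux scan -> 4884
print(estimated_defects(7_600_000, 0.59))  # Linux 3.8 scan  -> 4484
```

So even a "benchmark for quality" codebase at density 0.59 still implies thousands of latent defects, which is exactly the point about innocent bugs sitting unreported.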
The real conclusion that you should draw is twofold. First, if you're relying on software whose developers aren't running static code analysis, you're probably relying on insecure code.
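If "static analysis" sounds exotic, it just means inspecting source code for defect patterns without ever running it. Here's a toy illustration using Python's standard ast module to flag one classic bug pattern (a mutable default argument); commercial tools like Coverity do vastly deeper interprocedural analysis, but the principle is the same:

```python
import ast

# Hypothetical snippet under analysis, containing a classic defect:
CODE = """
def append_item(item, bucket=[]):   # bug: default list is shared across calls
    bucket.append(item)
    return bucket
"""

def find_mutable_defaults(source: str) -> list[str]:
    """Return the names of functions whose default values are mutable literals."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                # List/dict/set literals as defaults are evaluated once at
                # definition time, then shared by every call -- a latent bug.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(node.name)
    return findings

print(find_mutable_defaults(CODE))  # ['append_item']
```

Nothing in CODE ever executes; the checker reasons purely about the parse tree, which is why this class of tool can sweep millions of lines of a codebase like the kernel overnight.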
Second: Every. Single. App. Has. Bugs. The difference is that open source lets anyone do the analysis and fix the bugs. The same can't be said of any closed source package.
So, which is safer? The OSS app where everything is publicly discussed and bug fixes generally get acted on fast, or the closed source app whose vendor may be handing the known vulnerabilities off to the NSA or its equivalent in the country of your choice? I know which way I choose.