Bug Hunting Open-Source vs. Proprietary Software 244
PreacherTom writes "Coverity, working in conjunction with the Department of Homeland Security and Stanford University, analyzed the top 50 open-source software projects alongside proprietary software from over 100 different companies. The study found that no open-source project had fewer software defects per thousand lines of code than the best proprietary code; in fact, the analysis showed the proprietary code averaging less than one-fifth the defect rate. On the other hand, the open-source software was found to be of greater average overall quality. Not surprisingly, dissenting opinions have already emerged, claiming Coverity's scope was inadequate to support its conclusions."
Smell Microsoft? (Score:0, Interesting)
my open-source project was scanned by Coverity... (Score:4, Interesting)
Rather than brag (I won't say who I am or the name of my project), I'm just going to sit back and read all the defensive flames from self-appointed "security experts" whose open-source project didn't do so well. After all the flames from these "security experts" that I've endured, I'm going to enjoy watching them squirm.
It's karma.
It was about mission-critical software (Score:5, Interesting)
Basically, my own conclusion from reading the article was that it IS possible to write excellent software with very few bugs, if that is a top priority. The author also seems to say that while mission-critical software (which happens to be proprietary) is fortunately much better than the rest, among all the other non-mission-critical software, open source tends to be better than proprietary.
Not surprising, and quite encouraging...
meaningless, no data, and probably biased (Score:5, Interesting)
Furthermore, Coverity simply cannot accomplish what they claim to accomplish: there is no way of detecting "bugs" automatically--if there were, compilers would already be doing it. Coverity effectively does little more than compare code against a set of internal coding conventions; that can be useful if done right, but it is not a measure of code quality. Some completely correct code will rack up thousands of violations against their tool, while other code may contain thousands of bugs, none of which register. It is also likely that many of their customers are Windows-based and that Coverity is biased toward Windows coding conventions, producing more false positives on non-Windows code. Before publishing such comparisons, Coverity would first need to demonstrate that its tool does not contain such biases.
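The false-positive complaint is easy to make concrete. Here is a toy Python sketch of a convention-based checker of the kind described: it flags any dereference-like access not preceded by an explicit check, so it complains about code that is actually safe while staying silent on a genuine logic bug. Everything here is invented for illustration; it is not Coverity's algorithm or code from any scanned project.

```python
import re

def naive_checker(source: str) -> list[str]:
    """Toy convention checker: flag any line using obj.value unless an
    'if obj is None' guard appeared on an earlier line. This mimics a
    style rule, not real semantic analysis."""
    warnings = []
    guarded = set()
    for lineno, line in enumerate(source.splitlines(), start=1):
        guard = re.search(r"if\s+(\w+)\s+is\s+None", line)
        if guard:
            guarded.add(guard.group(1))
        for name in re.findall(r"(\w+)\.value", line):
            if name not in guarded:
                warnings.append(f"line {lineno}: possible None access on '{name}'")
    return warnings

# Safe code that the naive rule still flags (guard uses 'is not None'):
safe_code = """
if node is not None:
    total = node.value
"""

# A real off-by-one bug that violates no convention, so nothing is flagged:
buggy_code = """
for i in range(len(items) + 1):
    process(items[i])
"""

print(naive_checker(safe_code))   # false positive reported
print(naive_checker(buggy_code))  # genuine bug, no warning
```

The point of the sketch is only that rule-matching and correctness are different axes: the checker's output count measures conformance to its conventions, not the number of actual defects.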
Finally, and perhaps most importantly, the company isn't publishing its data, so nobody can verify or even evaluate their claims. Not only do they fail to publish their raw data (obviously, they can't do that for proprietary software), they also fail to list their summary statistics by vendor and project (which they could, but obviously won't do). They don't even give a summary statistic by class of application, class of organization, and code size. Their results are meaningless because they're not reproducible.
These numbers tell you nothing about FOSS code quality relative to commercial code quality. What they tell you is that Coverity apparently doesn't know how to do statistics, misrepresents what their product can do, and doesn't know how to report experimental results properly. Now, do you want to put your trust in such a company?
Re:Why is this surprising? (Score:3, Interesting)
Not always. Perhaps some companies do, but it is far from a universal practice. The more common practice is to whip it out as fast as possible and patch it later. Even if a company has a QA team, they are often just documenting the bugs found for future releases. Understaffed and politically managed development teams may take years to fix issues. This is very common, and I'd suggest it is the norm, in business-grade software.
Most seasoned systems administrators with a software development background know which companies tend to be better and worse. And it is no different with Open Source, some projects are better than others.
Quality of Programmers is critical... (Score:3, Interesting)
Which would you rather have: 100 monkeys programming on a project, or 10 skilled programmers?
More programmers and more 'eyes' on a project does not mean it is going to be inherently more bug-free. In fact, a group of bad programmers in the mix can cause severe harm to a project.
I'm not knocking Open Source, but people who just expect it to be better because more people have access to work on it have obviously not met as many programmers as I have.
There are a lot of programmers putting time into projects (yes, including Open Source) who have no business developing a VB application for a 10-year-old's kiddie game, yet they are taking part in large-scale coding projects that truly would be better off without them.
When working with XWindows years ago, I ran into a few people who scared the hell out of me and others. They had no vision or scope beyond the specific things they were trying to do, and would often come up with modifications or 'features' that broke more than they added to the project.
In the Windows world of 3rd-party developers, I have also found hundreds of people I wouldn't trust to develop Hello World, as they had no concept of or regard for security, Unicode, or many other things that would make an application fail when run on a non-English system or under a user account with administrator privileges.
You can even find many commercial products in the 3rd-party Windows world that have these same problems, yet are produced by big companies and are popular products.
I wish that all ideas could be welcomed into a project, with the people having the final say able to trump crap programmers and crap ideas that are detrimental to the project.
When you look at the Linux kernel or BSD, you can quickly understand why Linus and others don't want to hand the 'deciding' control to the masses; both of these core OSes would become crap within months of unregulated programmer additions.
Not quite (Score:5, Interesting)
Re:Lies, damned lies, and statistics (Score:3, Interesting)
The other thing that's obviously intentionally slanted here - many of the OSS projects on their list show zero bugs per 1000 lines. Obviously we can't do better than zero bugs, so saying "no open-source project we analyzed had fewer software defects (per thousand lines of code) than the top-of-the-line closed-source application" is pure spin.
We can just as easily say "15 of the top 60 OSS projects have zero bugs, but only *one* of the closed source projects could match that." Ultimately the numbers here are meaningless, so it's best to just not play this game.
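The defects-per-thousand-lines figures being argued over come down to simple division, which is exactly why a zero-defect project can never be beaten on this metric. A quick sketch, with project names and counts that are purely hypothetical (not figures from the Coverity scan):

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density: reported defects per 1,000 lines of code."""
    return defects / (lines_of_code / 1000)

# Hypothetical projects -- NOT data from the article or the scan.
projects = {
    "foo-lib":  (0, 120_000),   # zero reported defects: density is 0.0
    "bar-tool": (51, 85_000),
}

for name, (defects, loc) in projects.items():
    print(f"{name}: {defects_per_kloc(defects, loc):.2f} defects/KLOC")
```

Any project scoring 0.00 is tied for first place by definition, which is why "no open-source project had fewer defects than the best proprietary code" says nothing once several projects on both sides report zero.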
What would be more interesting to me is a metric of time-to-solution, after a bug has been reported. The current coverity scan isn't set up to measure that accurately, because it doesn't notify anyone on the project when a scan completes and finds any bugs. So unless you check their web site very frequently, you won't know what it found.
Re:just an example of how "buggy" OSS software. (Score:3, Interesting)