Bug Hunting Open-Source vs. Proprietary Software 244
PreacherTom writes "An analysis comparing the top 50 open-source software projects to proprietary software from over 100 different companies was conducted by Coverity, working in conjunction with the Department of Homeland Security and Stanford University. The study found that no open source project had fewer software defects than proprietary code. In fact, the analysis demonstrated that proprietary code is, on average, more than five times less buggy. On the other hand, the open-source software was found to be of greater average overall quality. Not surprisingly, dissenting opinions already exist, claiming Coverity's scope was inappropriate to their conclusions."
not to mention... (Score:2)
Number of Bugs vs Bug types (Score:5, Insightful)
The summary just says all bugs, which is not fair if the proprietary software has 5 times the number of critical or super-critical bugs.
Even worse. (Score:5, Insightful)
He refuses to accept that different projects have different requirements. When the project results in people dying if it fails, you spend a LOT more money and time finding all the "bugs".
When the worst that happens is that you don't see a web page, your money/time requirements are not so high.
Even so, from his findings, Open Source is, on average, better than the closed-source projects (not counting the closed-source projects that result in loss of life in the event of a failure).
He's an idiot for confusing the different requirements.
Re:Even worse. (Score:5, Insightful)
What this guy is trying to say (besides 'buy my software') is that open source can do better (the title of his article is "...what open-source developers can learn....."). He wants people to use stricter development practices; things like automatic testing, nightly builds, etc.
Furthermore, he is probably right: automatically testing code a la JUnit or CppUnit is a great idea when you are getting contributions from many different people. If that became common practice in the open-source world, the code quality would improve. He's not saying open source is bad, he's saying it could get better.
This guy is not an idiot, you just didn't understand his point.
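For anyone unfamiliar with the practice the parent describes, here's a rough sketch of what an automated regression test looks like. It uses plain Java asserts instead of JUnit so it stands alone, and countWords is a made-up example function, not from any real project:

```java
// Sketch of an automated regression test in the spirit of JUnit,
// using plain asserts so it is self-contained (run with: java -ea).
// countWords is a made-up function, not from any real project.
public class WordCountTest {
    static int countWords(String s) {
        if (s == null || s.trim().isEmpty()) return 0;
        return s.trim().split("\\s+").length;
    }

    public static void main(String[] args) {
        // Each assert pins down one expected behaviour; a nightly
        // build would run these automatically and flag regressions.
        assert countWords(null) == 0;
        assert countWords("   ") == 0;
        assert countWords("one two three") == 3;
        assert countWords(" padded  spacing ") == 2;
        System.out.println("all tests passed");
    }
}
```

When a contribution breaks an assumption like these, the nightly build catches it before users do, which is exactly the discipline the parent is advocating.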
Re: (Score:3, Insightful)
openoffice (Score:3, Informative)
since then many people have tried to clean it up, but it's hard and risky to clean such a big app
most projects have a coding style that everyone should follow, and many force you to comply if you want your code to be accepted
OpenOffice is a bad example. (Score:4, Informative)
It's too bad because it actually works kinda okay, but it's a real effort to get your hands dirty with.
Blender is also like that... it seems when a codebase has 'gotten around' it tends to pick up the bad habits of all the hands it's been through.
MySQL is in a bad state because it's really only developed by MySQL AB -- no one else is contributing to it so they have no reason to make it any more maintainable than it is. PostgreSQL, on the other hand, had the luxury of being the fruit of some academic research projects and was rewritten once or twice, so it's a little more maintainable.
Re: (Score:2)
He refuses to accept that different projects have different requirements. When the project results in people dying if it fails, you spend a LOT more money and time finding all the "bugs".
Yeah, a failure in apache could result in loss of life... if you were the sysadmin for a .com company back in 2000 and the webserver died just as the CEO was giving a demo to some potential investors.
Re: (Score:2)
The idiot here is not the author of the article...
Re: (Score:3, Insightful)
Please find and report bugs whenever possible. Fix some bugs if you can. This is the process that does make open source better in the long run.
Re:Number of Bugs vs Bug types (Score:4, Insightful)
However, as others have pointed out, they are comparing mission-critical software to non-mission-critical software. What should have been done (as has also been pointed out) is to cluster by use case or software field. So databases to databases, browsers to browsers, generic office usage to generic office usage, etc.
LetterRip
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Picking a random day and testing two pieces of software for bugs on that day is reasonably fair. The following week both products will be better, but unless you're going to test constantly, picking the latest release as of some day is the best you're going to do.
Re:not to mention... (Score:4, Insightful)
These days you often get studies claiming that proprietary software is less buggy than free software, but they miss some very significant points; the ones we used to respond to MozillaQuest articles still very much apply today:
Re: (Score:3, Insightful)
Coverity's study is based on their analysis of the code itself, not on bug reports, so the considerations you mention are not relevant.
Re: (Score:2)
So how did they test the proprietary software? (Score:4, Insightful)
Re:So how did they test the proprietary software? (Score:5, Informative)
I don't know if the closed source statistics are online somewhere, but these are the open source statistics.
http://scan.coverity.com/ [coverity.com]
and if you ask me the "Defect Reports / KLOC" is pretty low, and such software would normally be considered "good" software.
Re: (Score:2)
Yeah, all I saw was the open source statistics as well.
Without seeing how the closed source apps were analyzed, the only conclusion I can reach is that automated bug detection finds more bugs when you have access to the source code than when you don't. How surprising. Duh.
Re: (Score:2)
They were analyzed the exact same way. The results are of course not public.
Re: (Score:2)
Re: (Score:2)
Re:So how did they test? -- badly (Score:3, Informative)
A method known to have flaws. It raises a ton of false positives, things that might "look like" potential bugs but aren't because of the data flow. You have to do a data flow analysis to see if they really are bugs.
For example, not checking for buffer overflows when copying strings, etc, is usually considered a (potential) bug. Certainly it is when dealing with unknown input. However, in a function buried deep behind l
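To make the point about false positives concrete, here's a hedged illustration (the names are invented, not from any scanned project) of code a flow-insensitive checker flags even though the "bug" can never fire:

```java
// Illustration of a static-analysis false positive (invented example).
// A naive checker flags s.length() in internalLength() as a possible
// null dereference, but data-flow analysis shows the only caller
// filters out null first, so the "bug" can never occur.
public class FalsePositive {
    // Public entry point: validates input before delegating.
    public static int publicEntry(String s) {
        if (s == null) return -1;
        return internalLength(s);
    }

    // Buried helper: never sees null in practice, yet gets flagged
    // by any checker that ignores how it is actually reached.
    private static int internalLength(String s) {
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(publicEntry("hello")); // 5
        System.out.println(publicEntry(null));    // -1, helper never called
    }
}
```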
Re: (Score:2, Insightful)
I agree with your analysis - I've been on the fixing end of a lot of these kinds of reports, and have known that the flagged error can never occur, but the linting nazis insist that there must be zero warnings at any cost.
I thought Voyager was the ultimate in stable code, not the space shuttle?
Fa
Re: (Score:2)
Re: (Score:3, Informative)
A guess is that this is because much of emacs' functionality is implemented in elisp code, which is not part of the core program and so not included in the source line count, whereas most of vim is implemented directly.
Re: (Score:2)
So.... they've got a statistics page for their defect scanning tool. Which says that Subversion has 15 lines of code... umm, have they run their bug scanner against their own code? :)
Re: (Score:2)
I call FUD, even if he's got good intentions.
What's a bug? (Score:5, Insightful)
Well, what is a bug?
I doubt he'd send me a check if I told him that TeX doesn't have an easily accessible iconic user interface. No, his concept of a bug is a deviation from the specified functionality.
But what if that functionality is wrong or sucks?
Apple does really well at creating functionality that doesn't suck. They suffer from the same problems of deviations from the spec as much as anyone, but they manage to mold their spec around what users want. Microsoft, to some extent, does the same and they release products that conform to what users want (generally) because they change the spec as necessary when customers demand change.
If you are implementing towards a standard (like most OSS projects with any traction are wont to do), then you are necessarily restricted by what that spec says. If the spec says to do something inane, the standard-follower must implement it that way.
I don't really have a point here except to say that unless they say "this is what we mean by bug", there can be no way to really examine their results.
Re:What's a bug? (Score:4, Informative)
I think you're conflating two things. The check was (is?) for $50 or some such. The version number of the software is pi (or e) to whatever number of decimals, where each subsequent release adds a decimal place (becomes a closer approximation to the real thing.)
No, his concept of a bug is a deviation from the specified functionality.
That's the only reasonable definition of a bug in the software.
But what if that functionality is wrong or sucks?
Then that's a bug in the specification or in the requirements. I spent the better part of six months debugging the requirements on a major project once. Part of that was getting mutual agreement from three major customers, part of that was resolving internal inconsistencies in the requirements document, and part of that was a high level design process in parallel, to be sure we had a chance of actually satisfying the requirements.
Of course the end user (especially of off-the-shelf software) generally doesn't differentiate between a bug in the software vs a bug in the specification or requirements. The end user generally never sees the spec, and only has a vague idea of the requirements. (Sometimes worse than vague -- how many people do you know who use a spreadsheet for a database?)
(And to BadAnalogyGuy -- I'm not disagreeing, just amplifying.)
A bug can be many things (Score:4, Informative)
- This is the typical meaning behind the word "bug".
- That's where your TeX complaint would fall under. It's "by design" that it doesn't have an iconic user interface, but that doesn't mean it's something that shouldn't be addressed ever
- This is actually a result of the bug tracking system that we use. Rather than sending e-mail, which often gets lost, we often track work items as bugs. For example, "Need to turn off switch X on the test server when we get to milestone Y"
To further complicate things, there is a severity and priority attached to every bug. Severity is a measure of the impact the bug has on the customer/end-product. It can range from 1 (Bug crashes system) to 4 (Just a typo). Priority is a measure of the importance of the bug. It ranges from 0 (Bug blocks team from doing any further work, must fix now), to 3 (Trivial bug, fix if there is time). (I don't know why the ranges don't match, BTW, seems silly to me)
As anyone who works on large-scale projects probably knows, there are always a wide range of bugs, across all the pri/sev levels. To me, a simple count of all the bugs isn't terribly useful. A project could have a ton of bugs, but most of them being DCRs (which are knowingly going to be postponed till the next release) and/or low pri/sev bugs. Or maybe it's the beginning of the project and they're all known work items. Or a project could have only a few bugs, but with all of them being critical pri/sev ones.
So, whenever I see a report that simply talks about bug count, I take it with a huge grain of salt. If I had to guess (I skimmed the article), it seems like OSS projects have far more bugs, but perhaps lower pri/sev since the product itself has been evaluated as being higher quality. In the end, it's the quality that the customer really cares about.
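A toy model of the pri/sev scheme described above (the enum labels are my own guesses at the ranges given; nothing here comes from a real tracker):

```java
// Toy model of the severity/priority scheme described above.
// Severity runs 1..4 (crashes .. typo), priority 0..3 (blocking ..
// trivial); the enum labels are invented for illustration.
public class Bug {
    enum Severity { CRASHES_SYSTEM, MAJOR, MINOR, TYPO }       // sev 1..4
    enum Priority { BLOCKING, MUST_FIX, SHOULD_FIX, TRIVIAL }  // pri 0..3

    final String title;
    final Severity sev;
    final Priority pri;

    Bug(String title, Severity sev, Priority pri) {
        this.title = title;
        this.sev = sev;
        this.pri = pri;
    }

    // A raw bug count treats all of these the same; even a crude
    // filter like this is more honest than "N bugs total".
    boolean isCritical() {
        return sev == Severity.CRASHES_SYSTEM || pri == Priority.BLOCKING;
    }

    public static void main(String[] args) {
        Bug typo = new Bug("typo in help text", Severity.TYPO, Priority.TRIVIAL);
        Bug crash = new Bug("NPE on startup", Severity.CRASHES_SYSTEM, Priority.MUST_FIX);
        System.out.println(typo.isCritical());  // false
        System.out.println(crash.isCritical()); // true
    }
}
```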
Re: (Score:2, Informative)
Not quite (Score:5, Interesting)
old rant? (Score:2)
Isn't this an old rant? Sorry if I come out as a troll!
Horrible Comparisions (Score:5, Funny)
"Deanna Asks A Ninja: What is the circumference of a moose?!"
"It's Michael Palin with his face in a pie times Douglas Adams squared."
This answer makes as much sense as the article.
Except "Ask A Ninja" made more sense. And was more accurate. And more entertaining.
Can I just get a Ninja hit out on this guy or something, so these articles will not make it to slashdot anymore?
Exactly what constitutes a software bug? (Score:2, Insightful)
Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?
The simple answer to this is that they can't. That's the point behind hiring human codeslingers to write applications. Considering that most software bugs are logic bugs (off by one, etc) that can't be directly seen in the code without actually,
Re: (Score:3, Insightful)
Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?
Security holes. Coverity specializes in programmatic detection of buffer overflows.
On a related note, as a programmer, I find open source software much more valuable than closed source because WHEN (not if) I find a critical bug, I can usually
Re: (Score:3, Informative)
Security holes. Coverity specializes in programmatic detection of buffer overflows.
Oh, and I forgot some of the other obvious things you can check for: unreachable code, comparisons that always evaluate to true or false, possible uninitialized use of variables, global and/or heap storage of pointers to variables on the stack.... There are a lot of things that are usually unsafe to do and are usually bugs. It is usually too slow to check for this stuff during compilation, as it requires at least some d
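Two of those patterns in a made-up snippet: a comparison that always goes one way for a whole branch, and a dereference the compiler accepts but an analyzer would question.

```java
// Made-up snippet showing two of the patterns listed above, both of
// which javac compiles without complaint.
public class AnalyzerFodder {
    // The author almost certainly meant &&: with ||, any non-null s
    // returns true immediately (an always-true result for that
    // branch), and a null s falls through to s.length() and throws.
    static boolean nonEmptyBuggy(String s) {
        return s != null || s.length() > 0;
    }

    public static void main(String[] args) {
        System.out.println(nonEmptyBuggy(""));  // true, even though "" is empty
        try {
            nonEmptyBuggy(null);
        } catch (NullPointerException e) {
            System.out.println("NPE on null");  // the defect an analyzer flags
        }
    }
}
```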
Some automatic bug finding (Score:2)
Well first of all, you have to assume that all programmers even do the "standard debugging and compile-time checks". Even then, those checks are often hardly comprehensive. You can build so
Re: (Score:3, Informative)
Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?
BigDecimal one = new BigDecimal(1);
BigDecimal two = new BigDecimal(2);
one.add(two);
System.out.println(one);
Guess what's printed? Similar errors are made if you use methods on java.lang.String like replace(pattern, replacement, pos).
The simple answer to this is that they can't.
That's a ve
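Assuming the grandparent's Decimal stands for java.math.BigDecimal (the standard immutable type), the fix is to keep the value that add() returns:

```java
import java.math.BigDecimal;

// The immutability pitfall from the snippet above, assuming Decimal
// stands for java.math.BigDecimal: add() returns a new object and
// leaves the receiver untouched.
public class ImmutablePitfall {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal(1);
        BigDecimal two = new BigDecimal(2);

        one.add(two);            // result silently discarded
        System.out.println(one); // prints 1, not 3

        one = one.add(two);      // correct: keep the returned value
        System.out.println(one); // prints 3
    }
}
```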
Re: (Score:2)
All changes to code have the potential to introduce a bug. Sorry, but non-conformance with "best practices" is not a bug.
Re: (Score:2, Insightful)
Re: (Score:2)
A *LOT*. The existence of bugs such as buffer overflows PROVES that the "standard debugging and compile-time checks" are insufficient.
CCured found bugs in a couple different applications. Three separate programs in the SPECINT benchmark set (compress, ijpeg, and go) had bugs. In SPECINT! Probably one of the most heavily analyzed programs ever -- p
Re: (Score:2, Informative)
Another paper from UC Santa Barbara and (I think) the Technical Institute of Vienna used static analysis of compiled code (not even source!) to try to determine if a kernel module was malicious. (In this context, "malicious" means that it acts like a rootkit; in other words, if it modifies internal kernel
just an example of how "buggy" OSS software. (Score:3)
linux 2.6: 3,315,274 lines of code, 0.138 bugs / 1000 lines of code.
kde: 4,518,450 lines of code, 0.012 bugs / 1000 lines of code.
based on this I would say we are doing pretty good with open source.
but we shouldn't forget that this tool only scans coding errors, not coding logic.
wine for example only has 0.112 bugs / 1000 lines of code as well.
and we all know it by far doesn't always do what we want it to do.
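For a sense of scale, multiplying those quoted densities back into absolute counts (a rough back-of-the-envelope check using only the figures above):

```java
// Back-of-the-envelope check: turning the quoted defect densities
// into absolute counts, using only the figures from the comment above.
public class DefectCounts {
    static long defects(long lines, double perKloc) {
        return Math.round(lines / 1000.0 * perKloc);
    }

    public static void main(String[] args) {
        System.out.println("linux 2.6: " + defects(3_315_274, 0.138)); // ~458
        System.out.println("kde:       " + defects(4_518_450, 0.012)); // ~54
    }
}
```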
Re:just an example of how "buggy" OSS software. (Score:4, Funny)
> and we all know it by far doesn't always do what we want it to do.
Well duh! It is an implementation of the Windows API. And when considering how often the WinAPI does what you want, I think they have made a perfect copy.
Re:just an example of how "buggy" OSS software. (Score:4, Informative)
linux 2.6: 3,315,274 lines of code, 0.138 bugs / 1000 lines of code.
kde: 4,518,450 lines of code, 0.012 bugs / 1000 lines of code.
So far so good! But for contrast, I'll add this stat from TFChart:
Gnome: 31,596 lines of code, 1.931 bugs / 1000 lines of code.
Eeeep!!
(No wonder I prefer KDE
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Of course, if the problem is that Wine is not finished, then it should be labeled as a work in progress rather than a finished product.
Re: (Score:2)
Re: (Score:2)
Sounds like a pretty fair emulation of Windows to me.
my open-source project was scanned by Coverty... (Score:4, Interesting)
Rather than brag (I won't say who I am or the name of my project), I'm just going to sit back and read all the defensive flames from self-appointed "security experts" whose open-source project didn't do so well. After all the flames from these "security experts" that I've endured, I'm going to enjoy watching them squirm.
It's karma.
Why is this surprising? (Score:3, Insightful)
In the end, it comes down to black box vs. white box testing. Commercial software has a strong QA engineering component. Open Source software relies primarily on a black box testing approach.
Open source has MANY benefits and MANY advantages over commercial software. This just doesn't happen to be one of them, but unlike the commercial software, the bug fix cycle on open sourced stuff can be a LOT quicker, so it evens out in the end.
Re:Why is this surprising? (Score:5, Informative)
Says who? QA and testing cover the entire gamut, from formalized unit testing at every level to 'throw it at the beta testers and hope nothing breaks'. It's got nothing to do with 'proprietary' (not 'propriety') vs open source.
Where on Earth did you get that? Are you completely oblivious to all the testing methodologies and systems developed by the open source community? Here's a few for you to research: JUnit, Test::Unit, and Selenium.
Again with the generalizations! Commercial software development is, by definition, proprietary, so you don't know how they do it! They might tell you they have a 'strong QA engineering component' (whatever that means) but they could be full of shit!
Mod parent up! Good QA can come from anywhere (Score:2)
Commercial software has no "lock" on QA. Good fundamentals can be practiced anywhere. And I've certainly seen many testers coming from other commercial firms who have no idea what it takes to be a good tester. (Definitely instances of that in MS as well; we're a very large company.)
I think the only difference "commercial vs OSS" has on QA is perhaps the environ
how did they get access to the proprietary code (Score:2)
You're kidding, right? What about that US university booking system that wouldn't accept applications from 'overseas' students with addresses in the UK. Or the airline radio system that borked [socalscanner.com] every 2^32 milliseconds when a 32-bit buffer cycled round to zero.
"Open source software must rely on after-the-fact testing in the form of "this broke when I tried to do this"."
"Open Source software relies primarily on a black
Re: (Score:2)
In testing terms, "black box" and "white box" have specific meanings which aren't related to how open the source code is.
http://en.wikipedia.org/wiki/White_box_testing [wikipedia.org]
http://en.wikipedia.org/wiki/Black_box_testing [wikipedia.org]
Both forms of testing can be applied to both open and closed source projects (after all, all projects have source code, the only difference in this context is if the source code is available to the general public). However, it's
Re: (Score:3, Interesting)
Not always. Perhaps some companies do, but it is far from a universal practice. The more common practice is to whip it out as fast as possible and patch it later. Even if a company has QA, they are often just documenting the bugs found for future releases. Understaffed and politically managed developers may take years to fix issues. This is very common and I suggest the norm in business gr
Re: (Score:2)
I think the other replies to your post were pretty spot-on, so I'll just summarize them here:
Man that was funny! I laughed and laughed...and so did the Software and QA Engineers at work.
Re: (Score:2)
and an open source project is never under pressure to get something out the door?
Misquoting TFA (Score:5, Informative)
TFA says that no open source project is as good as the BEST of proprietary, but it also says that the AVERAGE open source is better than the AVERAGE proprietary.
Re: (Score:2, Informative)
Not quite... (Score:5, Insightful)
No, *popular* open-source software is 5x as buggy as *safety-critical* closed software. The linked dissenting opinion [fortytwo.ch] is at least partly right; they're comparing apples to oranges.
Maybe they should try comparing open- and closed-source software that's actually trying to solve the same problem? That'd be a bit more valid of a comparison...
That wouldn't work for him. (Score:2)
If you required that he match the apps/categories, then he wouldn't be able to match aircraft software to any Open Source project. Without the highly tested, life-critical proprietary apps, his case would collapse.
Which is why he only differentiates based upon "proprietary" or "open".
Re: (Score:3, Insightful)
And even more dissenting opinions (Score:2)
Re: (Score:2)
First, some kinds of software have harder requirements. Life-critical software will have fewer bugs. That is just caused by the fact that there is zero tolerance for the slightest possibility of a bug, so there is a hundred times more time invested in each line of code. Let's say it takes you two hours to write some code at reasonable quality. Your boss says: That's fine, but we must have a one hun
Actually (Score:2, Informative)
It was not an apples-to-apples comparison, more like apples to diamonds. Don't worry, just fix any real problems identified. Many of the bugs found are theoretical, not real. Many others are style questions. The experts will probably never quit arguing a
Open or Closed ? (Score:3, Insightful)
Open-source software is expensive if you want a commercial support contract (because you are asking a professional to spend a lot of time learning).
Closed-source software doesn't have the function that you want, and you cannot fix it to add the function that you want.
You pays your money and you takes your choice. You can always stick to pencil-and-paper, and not use this 'software' stuff at all, if you prefer.
Re: (Score:2)
Open-source software is expensive if you want a commercial support contract (because you are asking a professional to spend a lot of time learning).
How is this going to be different, say, for any new commercial or open source product a company has or is about to get? Learning in IT is a given (unless you want to be RIFed or outsourced, or only hire chair mushrooms). But in any case, learning challenges come with both open and closed source.
Closed-source software doesn't have the function that you want, and
It was about mission-critical software (Score:5, Interesting)
Basically, my own conclusion from reading the article was that it IS possible to write excellent software with very few bugs, if that is a top priority. And, that the author seems to say that while mission-critical software (which happens to be proprietary) is fortunately much better than the rest, among all that other non-mission-critical software, open source tends to be better than proprietary.
Not surprising, and quite encouraging...
Re: (Score:2)
Really, the software has no commercial value in itself
The software may have no commercial value, but the ability to support it certainly does. By open-sourcing it, anyone could set up a business selling and supporting the software, which would undermine the business plan of the company which wrote it in the first place - and you'd probably never get any significant community contributions because not many people in the community need to build a mission critical air tra
Re: (Score:2)
meaningless, no data, and probably biased (Score:5, Interesting)
Furthermore, Coverity simply cannot accomplish what they claim to accomplish: there is no way of detecting "bugs" automatically--if there were, compilers would already be doing it. Coverity effectively does little more than compare code against a set of internal coding conventions; that can be useful if it's done right, but it's not a measure of code quality. Some completely correct code will score thousands of violations against their tool, while other code may contain thousands of bugs, none of which register. Furthermore, it is likely that a lot of their customers are Windows based and that Coverity is biased towards Windows-based coding conventions, giving more false positives on non-Windows code. Before publishing such comparisons, Coverity first would need to demonstrate that their tool does not contain such biases.
Finally, and perhaps most importantly, the company isn't publishing its data, so nobody can verify or even evaluate their claims. Not only do they fail to publish their raw data (obviously, they can't do that for proprietary software), they also fail to list their summary statistics by vendor and project (which they could, but obviously won't do). They don't even give a summary statistic by class of application, class of organization, and code size. Their results are meaningless because they're not reproducible.
These numbers tell you nothing about FOSS code quality relative to commercial code quality. What they tell you is that Coverity apparently doesn't know how to do statistics, misrepresents what their product can do, and doesn't know how to report experimental results properly. Now, do you want to put your trust in such a company?
Re: (Score:2)
You can't detect bugs with 100% certainty by definition on any Turing machine, but you can certainly detect code which may result in unintended behaviour. Run lint against a bunch of source code and you'll see what I'm talking about.
Re: (Score:2)
Re: (Score:3, Informative)
It should, however, be remembered that coverity d
Nice way to generate publicity (Score:3, Insightful)
Now to compare every open source software application to aerospace software is really comparing apples to oranges. There is a big difference in the expected quality between an editor and an aerospace application. It's alright even if my editor crashes once in every 20 times I invoke it. Is that acceptable with an aeroplane?
I'm sure the folks at Coverity understand all this. But if they really speak what is right, they will not get all the eyeballs and publicity. In classic slashdot lingo:
1. Do something (anything) that involves open source and proprietary software
2. Make claims that sound outrageous / controversial
3. Profit! (with all the free publicity)
misleading summary (Score:2)
From the summary:
From the article:
Lies, damned lies, and statistics (Score:5, Insightful)
Re: (Score:3, Interesting)
The other thing that's obviously intentionally
Maybe I'm Misreading this (Score:2)
Now, if Open source is better at finding more subtle errors, even if it fixes them, doesn't his methodology penalize OS code against proprietary code where they didn't find and correct the error in the first place?
Pug
This is basically nuts (Score:3, Insightful)
What they have not done is compare comparable projects -- IE to Firefox, OpenOffice to MS Office, Windows to OS X to Linux-KDE. There is, as far as I know, no Open Source software product that is really intended for mission-critical applications -- I guess maybe SSH might qualify, but I don't see it in their list.
So, I think what we have here is a comparison of Apples to Turnips using a dubiously calibrated error-o-graph machine that uses an unknown technology to perform undefined tests on software.
Don't get me wrong. I sure as hell wouldn't run a nuclear power plant with Linux-X-Windows-whatever. Nor with Windows -- neither Windows 9x nor NT-based Windows. They don't meet my admittedly subjective standards of quality either. But if we waited for near-perfect software quality, we'd still be trying to get text mode right. Personally, I'd vote for that because I think building major structures on weak foundations will likely lead to big trouble a decade or three out, but I think I'd lose that vote about 93 to 1 with maybe 6 abstentions.
Quality of Programmers is critical... (Score:3, Interesting)
Which would you rather have 100 monkeys programming on a project or 10 skilled programmers?
More programmers and more 'eyes' on a project do not mean it is going to be inherently more bug-free. In fact, with a group of bad programmers in the mix, it can cause severe harm to a project.
I'm not knocking Open Source, but people who just expect it to be better because more people have access to work on it have obviously not met as many programmers as I have.
There are a lot of programmers putting time into projects (and yes, Open Source ones) who have no business developing a VB application for a 10-year-old's kiddie game, yet they are taking part in large-scale coding projects that truly would be better off without them working on it.
When working with XWindows years ago, I ran into a few people that scared the hell out of me and other people. They had no vision or scope past the specific things they were trying to do, and would often come up with modifications or 'features' that would break more than it added to the project.
In the Windows world of 3rd-party developers I have also found hundreds of people I wouldn't want developing Hello World, as they had no concept of or regard for security, Unicode, or many other things, so their applications would fail when run on a non-English system with a user having administrator privileges.
You can even find many commercial products in the 3rd-party Windows world that have these problems, yet are produced by big companies and are popular products.
I wish that all ideas would be welcomed into a project, but the people having the final say could trump crap programmers and crap ideas if they are detrimental to the project.
When you look at the Linux kernel or BSD, you can quickly understand why Linus and others don't want to hand 'deciding' control to the masses, or both of these core OSes would become crap within months of unregulated programmer additions.
Did they run it on its own source code? (Score:2)
Bugs in Open Source Code Are "Better" (Score:2)
Coverity is involved, so ignore the study (Score:3, Insightful)
Coverity, of course, knows that reports like this will be written up in exactly the way this summary was, clearly associating their company with the idea of enumerating the bugs in a piece of source code. While not illegal, this type of marketing is of course deceptive; while published papers describe the type of defects (or non-defects) actually detected, the overwhelming volume of commentary will reflect the broader, and incorrect, view that Coverity == bug-finder. It would be just as meaningful (which is to say, not very) to publish the number of lint warnings or missed opportunities to qualify pointer arguments with the const keyword, and neither would require an expensive piece of overhyped software.
Just Say No to Coverity's marketing gimmicks.
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
Re:How much is it really true? (Score:4, Informative)
No, they are not, at least not to the extent of an experienced developer.
Going through the code does not find bugs. Either you follow a formally correct approach, that is, a walkthrough or a code inspection, and then you may find bugs, or you only have a chance of finding the occasional off-by-one error in a loop or array index. Just by looking over code, as in your n00b approach, you only find suspicious pieces of code.
What now? You change it to be less suspicious? And then? You commit it? So you don't know if something elsewhere is breaking now because of your change? Ah
Testing means to DEFINE how individual pieces of code should behave and writing a test case exactly for that. Changing software and fixing bugs means to have tests, lots of tests, not eyeballs.
angel'o'sphere
P.S. That does not mean that formal walkthroughs / inspections don't work; they do!! But informal ones are only interesting for educational purposes.
Re: (Score:3, Informative)
Open source software is tested by a whole lot of people all over the world, and everyone is free to take the code and test it. On the other hand, proprietary software is tested by a far smaller number of individuals.
That sounds rather idealistic... The coverage on OSS varies a lot. Most is not tested much, and the testing is not systematic and analyzed, but ad hoc. And if a bug is found, many just shrug and think of it as buggy software, but don't do more about it. There is
Re: (Score:2)
And for software that can be considered closed and is now more or less open, at least open enough to follow your approach: Qt.
angel'o'sphere
P.S. Software is not tested by numbers of individuals but by test cases, which are instruments of measurement, like a balance or a folding rule.
Probably not. More likely Green Hills (Score:2)
Re: (Score:2, Insightful)
Or indirectly, or even mention them anywhere on the same page.
Just because something shows Open Source software in a bad light, don't automatically assume that Microsoft is behind it.
I don't think Microsoft actually cares that much about 99% of the Open Source software out there. They probably even use a whole bunch in-house.
There are a whole lot fewer conspiracies in the world than you think.
Re: (Score:2)
I think they should release it as an open source project, so that we all can see how many bugs *it* has.
Re: (Score:2)