Bug Hunting Open-Source vs. Proprietary Software 244

PreacherTom writes "An analysis comparing the top 50 open-source software projects to proprietary software from over 100 different companies was conducted by Coverity, working in conjunction with the Department of Homeland Security and Stanford University. The study found that no open source project had fewer software defects than proprietary code. In fact, the analysis demonstrated that proprietary code is, on average, more than five times less buggy. On the other hand, the open-source software was found to be of greater average overall quality. Not surprisingly, dissenting opinions already exist, claiming Coverity's scope was inappropriate to their conclusions."

  • Note that many of the bugs found by Coverity have already been fixed.
    • by Alien54 ( 180860 ) on Saturday October 07, 2006 @02:01PM (#16349381) Journal
      The problem is that there are different types of bugs. Things like a typo in a help file, or American spelling vs British spelling, vs a bug where the app crashes the system when installed on a machine with an early version of QuickTime, are classified differently.

      The summary just counts all bugs, which is not fair if the proprietary code has 5 times the number of critical or super-critical bugs.
      • Even worse. (Score:5, Insightful)

        by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday October 07, 2006 @02:32PM (#16349635)
        He's comparing "bugs" in a project such as Apache with "bugs" in the software controlling a jet engine on an airplane.

        He refuses to accept that different projects have different requirements. When the project results in people dying if it fails, you spend a LOT more money and time finding all the "bugs".

        When the worst that happens is that you don't see a web page, your money/time requirements are not so high.

        Even so, from his finding, Open Source is, on average, better than the closed source projects (not counting the closed source projects that result in loss-of-life in the event of a failure).

        He's an idiot for confusing the different requirements.
        • Re:Even worse. (Score:5, Insightful)

          by phantomfive ( 622387 ) on Saturday October 07, 2006 @04:17PM (#16350317) Journal
          Don't listen to the slashdot summary. It's terrible. The author is not against open source, he talks about the "brilliant open-source community."

          What this guy is trying to say (besides 'buy my software') is that open source can do better (the title of his article is "...what open-source developers can learn....."). He wants people to use stricter development practices; things like automatic testing, nightly builds, etc.

          Furthermore, he is probably right: automatically testing code a la JUnit or CppUnit is a great idea when you are getting contributions from many different people. If that became common practice in the open-source world, the code quality would improve. He's not saying open-source is bad; he's saying it could get better.
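
          To make the idea concrete, here is a minimal sketch of what "automatic testing" means, written in plain C with assert rather than JUnit/CppUnit; parse_port and its expected behaviour are made up for the example, not taken from any real project:

            #include <assert.h>
            #include <stdlib.h>

            /* Hypothetical function under test: parse a TCP port number,
               returning -1 for anything outside 1..65535. */
            static int parse_port(const char *s)
            {
                char *end;
                long v = strtol(s, &end, 10);
                if (*end != '\0' || v < 1 || v > 65535)
                    return -1;
                return (int)v;
            }

            int main(void)
            {
                /* Each assert is a tiny regression test; a nightly build
                   would run these automatically and flag any failure. */
                assert(parse_port("80") == 80);
                assert(parse_port("65535") == 65535);
                assert(parse_port("0") == -1);
                assert(parse_port("80x") == -1);
                return 0;
            }

          The point is less the framework than the habit: every contribution gets run against checks like these before it lands.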

          This guy is not an idiot, you just didn't understand his point.
        • by linuxci ( 3530 )
          He's comparing "bugs" in a project such as Apache with "bugs" in the software controlling a jet engine on an airplane.

          He refuses to accept that different projects have different requirements. When the project results in people dying if it fails, you spend a LOT more money and time finding all the "bugs".

          Yeah, a failure in Apache could result in loss of life if you were the sysadmin for a .com company back in 2000 and the webserver died just as the CEO was showing it off to some potential investors.

        • by rduke15 ( 721841 )
          He is not confusing anything. You obviously didn't read the article.

          The idiot here is not the author of the article...
        • Re: (Score:3, Insightful)

          by MikeFM ( 12491 )
          I think the obvious point is that open source is a process, and it is evidently working, since we have independent third parties donating help to find and fix these bugs in the open source software. Yes, you may find bugs in open source, but then you are finding and fixing them. It's only a matter of time before the open source code has fewer bugs.

          Please find and report bugs whenever possible. Fix some bugs if you can. This is the process that makes open source better in the long run.
      • by LetterRip ( 30937 ) on Saturday October 07, 2006 @02:42PM (#16349709)
        The Coverity scanner only checks for programming errors, i.e. things that cause crashes, etc.

        However, as others have pointed out, they are comparing mission-critical software to non-mission-critical software. What should have been done (as has also been pointed out) is to cluster by use case or software field: databases to databases, browsers to browsers, generic office usage to generic office usage, etc.

        LetterRip
      • Re: (Score:3, Insightful)

        I couldn't agree more. But it's really only interesting if they stop grandstanding and compare comparable products. In our case, Coverity shouldn't make any statements about Open versus Closed source unless they have some degree of comparable data for OpenLDAP versus Netscape/Red hat, Sun, IBM, CA, Novell, and Oracle (at a minimum) Directory Server products. Comparing the bug level in OpenLDAP to that of a Jet Engine control program is not only misleading (because they don't give you a measure of the cost p
    • by Derkec ( 463377 )
      And new ones have been added. In both the open-source and proprietary worlds, people make efforts to fix bugs, and as new features are added (and old bugs are squashed) new bugs are introduced.

      Picking a random day and testing two pieces of software for bugs on that day is reasonably fair. The following week both products will be better, but unless you're going to test constantly, picking the latest release as of some day is the best you're going to do.
    • by linuxci ( 3530 ) on Saturday October 07, 2006 @03:47PM (#16350161)
      I hate reports like this; there are so many reasons that bug counts don't prove anything. This all reminds me of the times MozillaQuest [mozillaquest.com] used to delight in posting Mozilla bug counts as a measure of quality (now MozillaQuest doesn't seem to mention Mozilla anymore, but a good parody of their Mozilla reporting is here [mozillaquestquest.com]).

      These days you often get studies claiming that proprietary software is less buggy than free software, but they miss some very significant points; the arguments we used in response to the MozillaQuest articles still apply very much today:

      • Free software projects very often have an open bug database, so it's easy to see how many open bugs are in a project; most proprietary software doesn't have an open bug database, so you have to trust the manufacturer and your own testing.
      • Not all bugs in open databases are really bugs. Some are requests for enhancement, some are duplicates and some are rants.
      • In some cases one person's bug may be another person's feature (e.g. if an application does something differently from the platform guidelines, some people may like this alternative behaviour, others will consider it a bug).
      • The profit motive: companies have a lot to lose by letting people know about bugs, while volunteer-led projects tend to want people to know about bugs in the hope that someone will help fix them (this is getting a bit blurred now that more and more organisations are making money off free software, but the fact remains that with proprietary software you can't fix the bugs yourself, so vendors gain nothing by telling you about them).
      Sorry if this is redundant, I'm working on call at the moment and was halfway through typing this when I had some work to do!
      • Re: (Score:3, Insightful)

        by belmolis ( 702863 )

        Coverity's study is based on their analysis of the code itself, not on bug reports, so the considerations you mention are not relevant.

        • by linuxci ( 3530 )
          I was guilty of just reading the summary and the comments there, and my immediate thought was 'MozillaQuest-style reporting', so that's what I started writing about. Anyway, the report is just as flawed when you talk about code analysis. With free software you can analyse any of the code out there without permission, whereas with proprietary software you need to be given access to the code in the first place and then you'd have to be given permission to publish the results of the analysis. I mea
  • by pembo13 ( 770295 ) on Saturday October 07, 2006 @01:47PM (#16349275) Homepage
    I scanned through the article; it didn't seem to mention how they tested the top proprietary software. I can well understand that there are a lot of bugs in open source code, since it is written by humans. But humans also write the proprietary code. How did they test it?
    • by msh104 ( 620136 ) on Saturday October 07, 2006 @01:55PM (#16349333)
      They tested it by using a program that systematically scans code for common errors.

      I don't know if the closed source statistics are online somewhere, but these are the open source statistics.
      http://scan.coverity.com/ [coverity.com]

      And if you ask me, the "Defect Reports / KLOC" figures are pretty low; such software would normally be considered "good" software.
      • Yeah, all I saw was the open source statistics as well.

        Without seeing how the closed source apps were analyzed, the only conclusion I can reach is that automated bug detection finds more bugs when you have access to the source code than when you don't. How surprising. Duh.

        • by chgros ( 690878 )
          Without seeing how the closed source apps were analyzed.
          They were analyzed the exact same way. The results are of course not public.
      • by pembo13 ( 770295 )
        The article was pretty clear on how they did the open source ones, but said nothing about how they did the closed source ones.
        • The same tool was used to scan the source code of all the software, whether open-source or proprietary. Obviously this requires cooperation from proprietary software developers.
      • They tested it by using a program that systematically scans code for common errors.

        A method known to have flaws. It raises a ton of false positives, things that might "look like" potential bugs but aren't because of the data flow. You have to do a data flow analysis to see if they really are bugs.

        For example, not checking for buffer overflows when copying strings, etc, is usually considered a (potential) bug. Certainly it is when dealing with unknown input. However, in a function buried deep behind l
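
        To make the false-positive point concrete, here is a made-up C fragment of the kind a naive scanner flags; the function names and the size constant are invented for the illustration, and the claim that callers validate their input is exactly the sort of fact only whole-program data-flow analysis can establish:

          #include <string.h>

          #define NAME_MAX_LEN 16

          /* A naive scanner flags this strcpy as a potential buffer overflow.
             Whole-program data-flow analysis would show that the only callers
             pass strings already known to fit, so the "bug" can never fire. */
          static void copy_name(char dst[NAME_MAX_LEN], const char *src)
          {
              strcpy(dst, src);            /* flagged: unbounded copy */
          }

          void set_defaults(char out[NAME_MAX_LEN])
          {
              copy_name(out, "default");   /* 8 bytes incl. NUL: always fits */
          }

          int main(void)
          {
              char name[NAME_MAX_LEN];
              set_defaults(name);
              return 0;
          }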
        • Re: (Score:2, Insightful)

          by fatphil ( 181876 )
          The '11 of the top 15' is also grossly misleading. There were twice as many proprietary projects as OSS projects - you'd expect them to take 10 out of the top 15 slots. Deviating by 1 from that is lost in the noise.

          I agree with your analysis - I've been on the fixing end of a lot of these kinds of reports, and have known that the flagged error can never occur, but the linting nazis insist that there must be zero warnings at any cost.

          I thought Voyager was the ultimate in stable code, not the space shuttle?

          Fa
      • by chrisv ( 12054 )

        So.... they've got a statistics page for their defect scanning tool. Which says that Subversion has 15 lines of code... umm, have they run their bug scanner against their own code? :)

      • If his scanner is well written, the specifications of the 'common errors' it looks for should be available for audit. If not, this 'evaluation' is null data; we don't know what he's scanning for, and as such, we can't verify that his results are reproducible or balanced.

        I call FUD, even if he's got good intentions.
  • What's a bug? (Score:5, Insightful)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Saturday October 07, 2006 @01:48PM (#16349279)
    Knuth used to have this great offer where he'd send you a check for pi or e or something if you managed to find a bug in his code.

    Well, what is a bug?

    I doubt he'd send me a check if I told him that TeX doesn't have an easily accessible iconic user interface. No, his concept of a bug is a deviation from the specified functionality.

    But what if that functionality is wrong or sucks?

    Apple does really well at creating functionality that doesn't suck. They suffer from the same problems of deviations from the spec as much as anyone, but they manage to mold their spec around what users want. Microsoft, to some extent, does the same and they release products that conform to what users want (generally) because they change the spec as necessary when customers demand change.

    If you are implementing towards a standard (like most OSS projects with any traction are wont to do), then you are necessarily restricted by what that spec says. If the spec says to do something inane, the standard-follower must implement it that way.

    I don't really have a point here except to say that unless they say "this is what we mean by bug", there can be no way to really examine their results.
    • Re:What's a bug? (Score:4, Informative)

      by AJWM ( 19027 ) on Saturday October 07, 2006 @02:35PM (#16349655) Homepage
      Knuth used to have this great offer where he'd send you a check for pi or e or something if you managed to find a bug in his code.

      I think you're conflating two things. The check was (is?) for $50 or some such. The version number of the software is pi (or e) to whatever number of decimals, where each subsequent release adds a decimal place (becomes a closer approximation to the real thing.)

      No, his concept of a bug is a deviation from the specified functionality.

      That's the only reasonable definition of a bug in the software.

      But what if that functionality is wrong or sucks?

      Then that's a bug in the specification or in the requirements. I spent the better part of six months debugging the requirements on a major project once. Part of that was getting mutual agreement from three major customers, part of that was resolving internal inconsistencies in the requirements document, and part of that was a high level design process in parallel, to be sure we had a chance of actually satisfying the requirements.

      Of course the end user (especially of off-the-shelf software) generally doesn't differentiate between a bug in the software vs a bug in the specification or requirements. The end user generally never sees the spec, and only has a vague idea of the requirements. (Sometimes worse than vague -- how many people do you know who use a spreadsheet for a database?)

      (And to BadAnalogyGuy -- I'm not disagreeing, just amplifying.)
    • by jchenx ( 267053 ) on Saturday October 07, 2006 @02:41PM (#16349703) Journal
      I work at MS. In my group (and I imagine it's the same in others), a bug can be many things. Here's what they typically are though:

      1. A product defect
        - This is the typical meaning behind the word "bug".
      2. DCR (Design Change Request)
        - That's where your TeX complaint would fall under. It's "by design" that it doesn't have an iconic user interface, but that doesn't mean it's something that shouldn't be addressed ever
      3. Work item
        - This is actually a result of the bug tracking system that we use. Rather than sending e-mail, which often gets lost, we often track work items as bugs. For example, "Need to turn off switch X on the test server when we get to milestone Y"

      To further complicate things, there is a severity and priority attached to every bug. Severity is a measure of the impact the bug has on the customer/end-product. It can range from 1 (Bug crashes system) to 4 (Just a typo). Priority is a measure of the importance of the bug. It ranges from 0 (Bug blocks team from doing any further work, must fix now), to 3 (Trivial bug, fix if there is time). (I don't know why the ranges don't match, BTW, seems silly to me)

      As anyone who works on large-scale projects probably knows, there is always a wide range of bugs across all the pri/sev levels. To me, a simple count of all the bugs isn't terribly useful. A project could have a ton of bugs, but with most of them being DCRs (which are knowingly going to be postponed till the next release) and/or low pri/sev bugs. Or maybe it's the beginning of the project and they're all known work items. Or a project could have only a few bugs, but with all of them being critical pri/sev ones.

      So, whenever I see a report that simply talks about bug count, I take it with a huge grain of salt. If I had to guess (I skimmed the article), it seems like OSS projects have far more bugs, but perhaps lower pri/sev since the product itself has been evaluated as being higher quality. In the end, it's the quality that the customer really cares about.
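
      For readers who haven't used a tracker like this, the record shape being described is roughly the sketch below (my own guess at a minimal schema, not MS's actual one); it also shows why a raw count hides the distinction that matters:

        #include <stdio.h>

        enum bug_kind { PRODUCT_DEFECT, DESIGN_CHANGE_REQUEST, WORK_ITEM };

        struct bug {
            enum bug_kind kind;
            int severity;   /* 1 = crashes the system ... 4 = just a typo          */
            int priority;   /* 0 = blocks all work    ... 3 = fix if there is time */
        };

        int main(void)
        {
            /* Four "bugs" on the books, but only one serious product defect. */
            struct bug db[] = {
                { PRODUCT_DEFECT,        1, 0 },
                { PRODUCT_DEFECT,        4, 3 },
                { DESIGN_CHANGE_REQUEST, 3, 3 },
                { WORK_ITEM,             4, 2 },
            };
            int total = 0, serious = 0;

            for (int i = 0; i < 4; i++) {
                total++;
                if (db[i].kind == PRODUCT_DEFECT && db[i].severity <= 2)
                    serious++;
            }
            printf("total bugs: %d, serious defects: %d\n", total, serious);
            return 0;
        }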
    • Re: (Score:2, Informative)

      by tawhaki ( 750181 )
      Knuth used to have this great offer where he'd send you a check for pi or e or something if you managed to find a bug in his code.
      It is a $2.56 check. The reasoning for that was that it is "an hexadecimal dollar".
      • Not quite (Score:5, Interesting)

        by The_Wilschon ( 782534 ) on Saturday October 07, 2006 @04:04PM (#16350231) Homepage
        Bugs (a.k.a. Entomology)

        Donald Knuth, a professor of computer science at Stanford University and the author of numerous books on computer science and the TeX composition system, rewards the first finder of each typo or computer program bug with a check based on the source and the age of the bug. Since his books go into numerous editions, he does have a chance to correct errors. Typos and other errors in books typically yield $2.56 each once a book is in print (pre-publication "bounty-hunter" photocopy editions are priced at $.25 per), and program bugs rise by powers of 2 each year from $1.28 or so to a maximum of $327.68. Knuth's name is so valued that very few of his checks - even the largest ones - are actually cashed, but instead framed. (Barbara Beeton states that her small collection has been worth far more in bragging rights than any equivalent cash in hand. She's also somewhat biased, being Knuth's official entomologist for the TeX system, but informal surveys of past check recipients have shown that this holds overwhelmingly for nearly everyone but starving students.) This probably won't be true for just anyone, but the relatively small expense can yield a very worthwhile improvement in accuracy.
        This is from the TeX users group site, at http://www.tug.org/whatis.html [tug.org].
  • In fact, the analysis demonstrated that proprietary code is, on average, more than five times less buggy.

    Isn't this an old rant? Sorry if I come out as a troll!

  • by Herkum01 ( 592704 ) on Saturday October 07, 2006 @01:55PM (#16349335)

    "Deanna Asks A Ninja: What is the circumference of a moose?!"

    "It's michael pailum with his face in a pie times douglas adams squared."

    This answer makes as much sense as the article.

    Except "Ask A Ninja" made more sense. And was more accurate. And more entertaining.

    Can I just get a Ninja hit out on this guy or something so these articles will not make it to Slashdot anymore?

  • Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?

    The simple answer to this is that they can't. That's the point behind hiring human codeslingers to write applications. Considering that most software bugs are logic bugs (off by one, etc) that can't be directly seen in the code without actually,

    • Re: (Score:3, Insightful)

      by dgatwood ( 11270 )

      Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?

      Security holes. Coverity specializes in programmatic detection of buffer overflows.

      On a related note, as a programmer, I find open source software much more valuable than closed source because WHEN (not if) I find a critical bug, I can usually

      • Re: (Score:3, Informative)

        by dgatwood ( 11270 )

        Security holes. Coverity specializes in programmatic detection of buffer overflows.

        Oh, and I forgot some of the other obvious things you can check for: unreachable code, comparisons that always evaluate to true or false, possible uninitialized use of variables, global and/or heap storage of pointers to variables on the stack.... There are a lot of things that are usually unsafe to do and are usually bugs. It is usually too slow to check for this stuff during compilation, as it requires at least some d
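
        For anyone who hasn't seen those categories in the wild, here is a contrived C function that packs several of them into a few lines (this is textbook checker fodder I made up, not anything from Coverity's report):

          #include <stdio.h>

          static int *checker_fodder(unsigned int n)
          {
              int local = 42;
              int uninitialized;
              static int *saved;

              if (n >= 0)                        /* always true: n is unsigned        */
                  printf("%d\n", uninitialized); /* possible use of uninitialized var */

              saved = &local;                    /* pointer to a stack variable kept in
                                                    static storage: dangles after return */
              return saved;

              printf("unreachable\n");           /* unreachable code after the return */
          }

          int main(void)
          {
              (void)checker_fodder(1u);
              return 0;
          }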

    • I figured I should chip in, since I'm a Dev that works in QA.

      Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?

      Well first of all, you have to assume that all programmers even do the "standard debugging and compile-time checks". Even then, those checks are often hardly comprehensive. You can build so

    • Re: (Score:3, Informative)


      Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks. If a computer can ascertain exactly what the programmer intended to do, why do we need programmers?


      BigDecimal one = new BigDecimal(1);
      BigDecimal two = new BigDecimal(2);

      one.add(two);              // return value is ignored; BigDecimal is immutable

      System.out.println(one);

      Guess what's printed? Similar errors are made if you use methods on java.lang.String like replace(pattern, replacement, pos).

      The simple answer to this is that they can't.
      Thats a ve
      • "That does not mean all found points are truely positive, but thy definitely are bad coding practice and my end in a bug later if the code gets changed."

        All changes to code have the potential to introduce a bug. Sorry, but non-conformance with "best practices" is not a bug.
        • Re: (Score:2, Insightful)

          by EvanED ( 569694 )
          There's a difference though between "a code change can break the code, despite the surrounding code's quality" and "the surrounding code is very brittle; tread lightly." Stuff like "check preconditions of functions, especially if they are on a module boundary" is important and should be present. It's the idea of failing early. If I pass you bad data and you fail in an assert, that's good; if I pass you bad data and you corrupt your internal invariants and continue running for a while, that's bad. Functions
    • by EvanED ( 569694 )
      Somebody please explain to me exactly what kind of software bug can be found by automatic scanning that isn't found by standard debugging and compile-time checks.

      A *LOT*. The existence of bugs such as buffer overflows PROVES that the "standard debugging and compile-time checks" are insufficient.

      CCured found bugs in a couple different applications. Three separate programs in the SPECINT benchmark set (compress, ijpeg, and go) had bugs. In SPECINT! Probably one of the most heavily analyzed programs ever -- p
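
      For the skeptics, this is the classic shape of such a bug, reduced to a toy (my example, not one of the SPECINT findings): an ordinary compile accepts it without complaint, which is exactly why bounds-checking instrumentation and static analysis keep finding this pattern in mature code.

        #include <stdio.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            char buf[8];

            if (argc < 2)
                return 1;

            /* Nothing checks that argv[1] fits in buf: any argument of 8 or
               more characters overflows the buffer.  The compiler cannot see
               it; run-time instrumentation or static analysis can. */
            strcpy(buf, argv[1]);

            printf("%s\n", buf);
            return 0;
        }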
      • Re: (Score:2, Informative)

        by EvanED ( 569694 )
        BTW, in full disclosure, CCured isn't a static analysis tool per se, and may have required running the programs to find the above bugs. But without CCured's instrumentation they would have gone undetected.

        Another paper from UC Santa Barbara and (I think) the Technical Institute of Vienna used static analysis of compiled code (not even source!) to try to determine if a kernel module was malicious. (In this context, "malicious" means that it acts like a rootkit; in other words, if it modifies internal kernel
  • by msh104 ( 620136 ) on Saturday October 07, 2006 @02:03PM (#16349387)
    As scanned by Coverity:

    Linux 2.6: 3,315,274 lines of code, 0.138 bugs / 1000 lines of code.
    KDE: 4,518,450 lines of code, 0.012 bugs / 1000 lines of code.

    Based on this I would say we are doing pretty well with open source.
    But we shouldn't forget that this tool only scans for coding errors, not for logic errors.

    Wine, for example, has only 0.112 bugs / 1000 lines of code as well.
    And we all know it far from always does what we want it to do. ;)
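
    For a sense of scale, here is the back-of-the-envelope arithmetic turning those densities into absolute counts (roughly 458 flagged defects in the kernel and about 54 in KDE), shown as a tiny C program using only the figures quoted above:

      #include <stdio.h>

      int main(void)
      {
          /* Lines of code and defects per KLOC, as quoted above. */
          struct { const char *name; double loc; double per_kloc; } p[] = {
              { "Linux 2.6", 3315274.0, 0.138 },
              { "KDE",       4518450.0, 0.012 },
          };

          for (int i = 0; i < 2; i++) {
              double defects = p[i].loc / 1000.0 * p[i].per_kloc;
              printf("%-10s ~%.0f defects flagged\n", p[i].name, defects);
          }
          return 0;
      }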
    • by Anonymous Coward on Saturday October 07, 2006 @02:24PM (#16349567)
      > wine for example only has 0.112 / 1000 lines of code as well.
      > and we all know it by far doesn't always do what we want it to do. ;)

      Well duh! It is an implementation of the Windows API. And when considering how often the WinAPI does what you want, I think they have made a perfect copy.
    • by Reziac ( 43301 ) * on Saturday October 07, 2006 @02:35PM (#16349651) Homepage Journal
      Quoth the poster:

      linux 2.6: 3,315,274 lines of code, 0.138 / 1000 lines of code.
      kde: 4,518,450 lines of code, 0.012 bugs / 1000 lines of code.

      So far so good! But for contrast, I'll add this stat from TFChart:

      Gnome: 31,596 lines of code, 1.931 bugs / 1000 lines of code.

      Eeeep!!

      (No wonder I prefer KDE :)

      • It may have more bugs, but at least it isn't a 4.5-million-line bloated piece of crap.

      • Re: (Score:3, Interesting)

        Perhaps their scanner can't detect bugs in C++ code as well as it can plain C.
        • Re: (Score:3, Interesting)

          by AVee ( 557523 )
          Or perhaps coding in C++ is just a far better idea than coding in plain C? It may be rare, but sometimes the new thing is just better than the old one...
    • Wine is more incomplete than buggy. It can run all sorts of outrageously complicated Windows software rather well (Microsoft Office and Lotus Notes, for example), but some other applications will use things that simply aren't implemented yet and thus you really end up with a hit or a miss situation without much in between.
      • This is the problem with a lack of formal requirements. If the requirement is to run all Windows applications properly, then not doing so is a bug. If the requirement is to run a specific subset of Windows applications, then it's not.

        Of course, if the problem is that Wine is not finished, then it should be labeled as a work in progress rather than a finished product.
    • by AJWM ( 19027 )
      wine for example [...] we all know it by far doesn't always do what we want it to do.

      Sounds like a pretty fair emulation of Windows to me. ;-)
  • by Anonymous Coward on Saturday October 07, 2006 @02:03PM (#16349389)
    ...and while it is on the list on the web page, I was happy to determine that most of the issues they found were false alarms. They found three real bugs, none of which were likely to bite, and even if they did bite, none is exploitable. Nonetheless, those bugs probably wouldn't have been found otherwise, so I was happy for the scan.

    Rather than brag (I won't say who I am or the name of my project), I'm just going to sit back and read all the defensive flames from self-appointed "security experts" whose open-source project didn't do so well. After all the flames from these "security experts" that I've endured, I'm going to enjoy watching them squirm.

    It's karma.
  • by Chairboy ( 88841 ) on Saturday October 07, 2006 @02:05PM (#16349403) Homepage
    Why does this surprise anyone? Propriety software traditionally undergoes a formalized, designed testing process. It's not perfect, but it's an ordered approach to boundary testing, design level implementation of quality, and more. Open source software must rely on after-the-fact testing in the form of "this broke when I tried to do this".

    In the end, it comes down to black box vs. white box testing. Commercial software has a strong QA engineering component. Open Source software relies primarily on a black box testing approach.

    Open source has MANY benefits and MANY advantages over commercial software. This just doesn't happen to be one of them, but unlike the commercial software, the bug fix cycle on open sourced stuff can be a LOT quicker, so it evens out in the end.
    • by tb3 ( 313150 ) on Saturday October 07, 2006 @02:38PM (#16349681) Homepage
      Are you nuts? Or are you just trying to see how many vapid over-generalizations you can jam into a single comment?

      Propriety software traditionally undergoes a formalized, designed testing process. It's not perfect, but it's an ordered approach to boundary testing, design level implementation of quality, and more.
      Says who? QA and testing cover the entire gamut, from formalized unit testing at every level to 'throw it at the beta testers and hope nothing breaks'. It's got nothing to do with 'proprietary' (not 'propriety') vs open source.

      Open source software must rely on after-the-fact testing in the form of "this broke when I tried to do this".
      Where on Earth did you get that? Are you completely oblivious to all the testing methodologies and systems developed by the open source community? Here are a few for you to research: JUnit, Test::Unit, and Selenium.

      Commercial software has a strong QA engineering component. Open Source software relies primarily on a black box testing approach.
      Again with the generalizations! Commercial software development is, by definition, proprietary, so you don't know how they do it! They might tell you they have a 'strong QA engineering component' (whatever that means) but they could be full of shit!

      • I happen to work in QA for a very commercial software firm (MS in fact, although it's in the games group). I agree whole-heartedly with your comments.

        Commercial software has no "lock" on QA. Good fundamentals can be practiced anywhere. And I've certainly seen many testers coming from other commercial firms that have no idea what it takes to be a good tester. (Definitely instances of that in MS as well; we're a very large company.)

        I think the only difference "commercial vs OSS" has on QA is perhaps the environ
    • "Propriety software traditionally undergoes a formalized, designed testing process"

      You're kidding, right? What about that US university booking system that wouldn't accept applications from 'overseas' students with addresses in the UK? Or the airline radio system that borked [socalscanner.com] every 2^32 milliseconds when a 32-bit buffer cycled round to zero?

      "Open source software must rely on after-the-fact testing in the form of "this broke when I tried to do this"."

      "Open Source software relies primarily on a black
    • Re: (Score:3, Interesting)

      by canuck57 ( 662392 )

      Why does this surprise anyone? Propriety software traditionally undergoes a formalized, designed testing process.

      Not always. Perhaps some companies do, but it is far from a universal practice. The more common practice is to whip it out as fast as possible and patch it later. Even if a company has QA, they are often just documenting the bugs found for future releases. Understaffed and politically managed developers may take years to fix issues. This is very common and, I suggest, the norm in business gr

    • Propriety software traditionally undergoes a formalized, designed testing process. It's not perfect, but it's an ordered approach to boundary testing, design level implementation of quality, and more. ... Commercial software has a strong QA engineering component.

      I think the other replies to your post were pretty spot-on, so I'll just summarize them here:

      Man that was funny! I laughed and laughed...and so did the Software and QA Engineers at work.

  • Misquoting TFA (Score:5, Informative)

    by Harmonious Botch ( 921977 ) on Saturday October 07, 2006 @02:07PM (#16349425) Homepage Journal
    While I appreciate that PreacherTom was good enough to bring this to us, the sentence "...no open source project had fewer software defects than proprietary code." just does not match TFA.

    TFA says that no open source project is as good as the BEST of proprietary, but it also says that the AVERAGE open source is better than the AVERAGE proprietary.
    • Re: (Score:2, Informative)

      "...no open source project had fewer software defects than proprietary code." just does not match TFA. AMANDA,emacs, ntp, OpenMotif, OpenPAM, Overdose, Postfix, ProFTPD, Samba, Subversion, tcl, Thunderbird, vim, XMMS all now with 0 defect reports/KLOC. That must match the best closed source software!
  • Not quite... (Score:5, Insightful)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Saturday October 07, 2006 @02:11PM (#16349449) Homepage Journal
    The study found that no open source project had fewer software defects than proprietary code. In fact, the analysis demonstrated that proprietary code is, on average, more than five times less buggy. On the other hand, the open-source software was found to be of greater average overall quality.

    No, *popular* open-source software is 5x as buggy as *safety-critical* closed software. The linked dissenting opinion [fortytwo.ch] is at least partly right; they're comparing apples to oranges.

    Maybe they should try comparing open- and closed-source software that's actually trying to solve the same problem? That'd be a bit more valid of a comparison...

    • He's selling a service/product ("bug" scanning).

      If you required that he match the apps/categories, then he wouldn't be able to match aircraft software to any Open Source project. Without the highly tested, life-critical proprietary apps, his case would collapse.

      Which is why he only differentiates based upon "proprietary" or "open".
  • Well, the article, with some of its arguments, covered one thing I was going to mention: there's a big difference between software to control jet engines or nuclear power plants and software to be used as an office suite or the like. Of course there will be quality differences between those, as bugs in one can likely kill people whereas bugs in the other probably won't. They have different levels of allowable bugs and required quality. The other thing, which was not mentioned in the second article, was this. What w
    • I'll add a dissenting opinion - not that these guys are wrong, but that whatever they are publishing is pointless.

      First, some kinds of software have harder requirements. Life-critical software will have fewer bugs. That is simply because there is zero tolerance for the slightest possibility of a bug, so a hundred times more time is invested in each line of code. Let's say it takes you two hours to write some code at reasonable quality. Your boss says: That's fine, but we must have a one hun
  • Actually (Score:2, Informative)

    The report (carried by Business Week) said that the proprietary software that beat out the open source stuff was avionics software or controls for reactors or other heavy industrial software. That stuff is all small, done in assembly, and extensively tested.

    It was not an apples-to-apples comparison, more like apples to diamonds. Don't worry; just fix any real problems identified. Many of the bugs found are theoretical, not real. Many others are style questions. The experts will probably never quit arguing a
  • Open or Closed ? (Score:3, Insightful)

    by quiberon2 ( 986274 ) on Saturday October 07, 2006 @02:16PM (#16349499)

    Open-source software is expensive if you want a commercial support contract (because you are asking a professional to spend a lot of time learning).

    Closed-source software doesn't have the function that you want, and you cannot fix it to add the function that you want.

    You pays your money and you takes your choice. You can always stick to pencil-and-paper, and not use this 'software' stuff at all, if you prefer.

    • Open-source software is expensive if you want a commercial support contract (because you are asking a professional to spend a lot of time learning).

      How is this going to be different say for any new commercial or open source product a company has or is about to get? Learning in I/T is a given (unless you want to be RIFed or outsourced) or only hire chair mushrooms. But in any case, learning challenges follow both open and closed source.

      Closed-source software doesn't have the function that you want, and

  • by rduke15 ( 721841 ) <rduke15@gTWAINmail.com minus author> on Saturday October 07, 2006 @02:16PM (#16349511)
    The article makes it quite clear that the proprietary software which is much better than open source is mission-critical software: a class of software where ensuring minimum bugs is a top priority, and also a class of software which mostly just does not exist in OSS. If you are an OSS developer, would you try to develop open source air traffic control software? And even if yes, how would you do it anyway?

    Basically, my own conclusion from reading the article was that it IS possible to write excellent software with very few bugs, if that is a top priority. And, that the author seems to say that while mission-critical software (which happens to be proprietary) is fortunately much better than the rest, among all that other non-mission-critical software, open source tends to be better than proprietary.

    Not surprising, and quite encouraging...
  • by oohshiny ( 998054 ) on Saturday October 07, 2006 @02:34PM (#16349643)
    The selection of programs from the two populations of programs (open source, proprietary) are not going to be comparable: vendors of proprietary software have a say over which code gets scanned, and they are going to select a different population of programs than the company selected for open source projects. This isn't a fixable problem: there is no way of doing this sort of study so that you can compare the two data sets. The best they could do is compare something like OpenOffice against Microsoft Office, or Apache against IIS.

    Furthermore, Coverity simply cannot accomplish what they claim to accomplish: there is no way of detecting "bugs" automatically--if there were, compilers would already be doing it. Coverity effectively does little more than compare code against a set of internal coding conventions; that can be useful if it's done right, but it's not a measure of code quality. Some completely correct code will score thousands of violations against their tool, while other code may contain thousands of bugs, none of which register. Furthermore, it is likely that a lot of their customers are Windows based and that Coverity is biased towards Windows-based coding conventions, giving more false positives on non-Windows code. Before publishing such comparisons, Coverity first would need to demonstrate that their tool does not contain such biases.

    Finally, and perhaps most importantly, the company isn't publishing its data, so nobody can verify or even evaluate their claims. Not only do they fail to publish their raw data (obviously, they can't do that for proprietary software), they also fail to list their summary statistics by vendor and project (which they could, but obviously won't do). They don't even give a summary statistic by class of application, class of organization, and code size. Their results are meaningless because they're not reproducible.

    These numbers tell you nothing about FOSS code quality relative to commercial code quality. What they tell you is that Coverity apparently doesn't know how to do statistics, misrepresents what their product can do, and doesn't know how to report experimental results properly. Now, do you want to put your trust in such a company?
    • by jimicus ( 737525 )
      Furthermore, Coverity simply cannot accomplish what they claim to accomplish: there is no way of detecting "bugs" automatically--if there were, compilers would already be doing it.

      You can't detect bugs with 100% certainty by definition on any Turing machine, but you can certainly detect code which may result in unintended behaviour. Run lint against a bunch of source code and you'll see what I'm talking about.
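
      If you've never run lint over a real tree, here is the flavour of thing it reports (a contrived example of mine, not from any of the projects discussed): perfectly legal C whose behaviour is almost certainly not what the author intended.

        #include <stdio.h>

        static int flag_is_set(int flags)
        {
            /* '=' instead of '==': this always assigns 2, so the test is
               always true and the caller's value of flags is ignored.
               A plain compile accepts it; lint-style tools warn about it. */
            if (flags = 0x02)
                return 1;
            return 0;
        }

        int main(void)
        {
            printf("%d\n", flag_is_set(0));   /* prints 1 even though no flag was set */
            return 0;
        }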
    • Or instead of guessing, you could read the article. They didn't keep the bugs secret; they showed them to the open source teams, and that has helped reduce the bugs. You can see what they have done with some projects at http://scan.coverity.com/ [coverity.com], and you can furthermore read the report by going to the link on the left-hand side of the page. Registration is required, but you will get a lot of other reports as well.
      • Re: (Score:3, Informative)

        by dkf ( 304284 )
          Speaking as a member of one of the OSS teams contacted: most of what Coverity found in our project were not actual bugs, but rather places where the software wasn't smart enough to guess the preconditions on a function right. So they were more like places where ill-advised maintenance might well have introduced a bug in the future. (Maybe the other spots were also like this, but we decided to clarify the code in all places anyway, so the Coverity problems were all cleansed.)

          It should, however, be remembered that Coverity d
  • by wannabgeek ( 323414 ) on Saturday October 07, 2006 @02:35PM (#16349649) Journal
    This is just smart marketing. Imagine they put up a survey that did not make any controversial claims (something like: open source and proprietary software are comparable); would that generate as much heat? Many more people hear about the company because more people talk about it now than if the survey had said something less controversial.

    Now to compare every open source software application to aerospace software is really comparing apples to oranges. There is a big difference in the expected quality between an editor and an aerospace application. It's alright even if my editor crashes once in every 20 times I invoke it. Is that acceptable with an aeroplane?

    I'm sure the folks at Coverity understand all this. But if they really speak what is right, they will not get all the eyeballs and publicity. In classic slashdot lingo:
    1. Do something (anything) that involves open source and proprietary software
    2. Make claims that sound outrageous / controversial
    3. Profit! (with all the free publicity)
  • From the summary:

    In fact, the analysis demonstrated that proprietary code is, on average, more than five times less buggy.

    From the article:

    In our research using automatic bug-hunting technology, no open-source project we analyzed had fewer software defects (per thousand lines of code) than the top-of-the-line closed-source application. That proprietary code, written for an aerospace company, is better than the best in open source--more than five times better, in fact. That company's software won't let

  • by Ibag ( 101144 ) on Saturday October 07, 2006 @02:43PM (#16349717)
    If you look at the summary, you come to the conclusion that proprietary software is five times less buggy than open source. It is also unclear how software can have five times as many bugs but be of higher quality. However, if you read the article, you find:

    In our research using automatic bug-hunting technology, no open-source project we analyzed had fewer software defects (per thousand lines of code) than the top-of-the-line closed-source application. That proprietary code, written for an aerospace company, is better than the best in open source--more than five times better, in fact. That company's software won't let you down when you're flying from New York to London.

    If we ignore that the automatic bug finding algorithms might not be a good measure for anything, we have a few issues with the summary. The richest American is twice as rich as the richest Swiss man. Does it follow that Americans are on average twice as rich as Swiss people? No. In the same way, the statement does not imply that the average open source software has five times as many bugs as the average proprietary software does. The coding practices of mission-critical apps like flight control systems are different from those of most of the industry, and it is almost wrong to lump them together with everything else.

    The problem with statistics is not that they give an inaccurate picture, or even that selecting the right statistics can give a skewed picture, but that people who don't appreciate what statistics actually give use them to form opinions, make decisions, and summarize articles. Statistics don't lie, but the people who misreport them do, even if they don't realize it.
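
    The Swiss analogy in numbers, with made-up figures, just to show why a statement about the single best item says nothing about the averages:

      #include <stdio.h>

      int main(void)
      {
          /* Invented wealth figures: group A's richest member is twice as
             rich as group B's, yet group B has the higher average. */
          double a[] = { 100.0,  1.0,  1.0,  1.0 };
          double b[] = {  50.0, 40.0, 40.0, 40.0 };
          double sum_a = 0, sum_b = 0, max_a = 0, max_b = 0;

          for (int i = 0; i < 4; i++) {
              sum_a += a[i]; sum_b += b[i];
              if (a[i] > max_a) max_a = a[i];
              if (b[i] > max_b) max_b = b[i];
          }
          printf("max:  A=%.0f  B=%.0f\n", max_a, max_b);           /* 100 vs 50      */
          printf("mean: A=%.2f  B=%.2f\n", sum_a / 4, sum_b / 4);   /* 25.75 vs 42.50 */
          return 0;
      }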
    • Re: (Score:3, Interesting)

      by hyc ( 241590 )
      Expanding your post... they're fairly specifically holding up a single piece of software in the aerospace industry, as the cream of the crop, and comparing it to everything else. That's what we call an outlier, skewing the results. A good analysis discards outliers and uses what's left. We already know that the "average quality" of the OSS projects is high; without that outlier it's probably no contest. (Just guessing, not having seen their closed-source data.)

      The other thing that's obviously intentionally
  • But as I read it - they didn't audit the proprietary code base - they counted the errors found, per 1,000 lines of code, and errors fixed.

    Now, if Open source is better at finding more subtle errors, even if it fixes them, doesn't his methodology penalize OS code against proprietary code where they didn't find and correct the error in the first place?

    Pug
  • by vtcodger ( 957785 ) on Saturday October 07, 2006 @03:31PM (#16350051)
    What they seem to have done is run a bunch of software through some sort of automatic bug checker that may or may not be a pile of manure, and identified the "best" product, which happens to be what the military would call mission-critical proprietary software. Then they proclaim that open source isn't as good (duh) and doesn't meet their high standards.

    What they have not done is compare comparable projects -- IE to Firefox, OpenOffice to MS Office, Windows to OS X to Linux-KDE. There is, as far as I know, no Open Source software product that is really intended for mission-critical applications -- I guess maybe SSH might qualify, but I don't see it in their list.

    So, I think what we have here is a comparison of Apples to Turnips using a dubiously calibrated error-o-graph machine that uses an unknown technology to perform undefined tests on software.

    Don't get me wrong. I sure as hell wouldn't run a nuclear power plant with Linux-X-Windows-whatever. Nor with Windows -- neither Windows 9x nor NT-based Windows. They don't meet my admittedly subjective standards of quality either. But if we waited for near-perfect software quality, we'd still be trying to get text mode right. Personally, I'd vote for that, because I think building major structures on weak foundations will likely lead to big trouble a decade or three out, but I think I'd lose that vote about 93 to 1 with maybe 6 abstentions.

  • by TheNetAvenger ( 624455 ) on Saturday October 07, 2006 @04:00PM (#16350215)
    Quality of Programmers is critical...

    Which would you rather have: 100 monkeys programming on a project, or 10 skilled programmers?

    More programmers and more 'eyes' on a project does not mean it is going to be inherently more bug free. In fact with a group of bad programmers in the mix, it can cause severe harm to a project.

    I'm not knocking Open Source, but people who just expect it to be better because more people have access to work on it have obviously not met as many programmers as I have.

    There are a lot of programmers putting time into projects (and yes, Open Source ones) who have no business developing a VB application for a 10-year-old's kiddie game, yet they are taking part in large-scale coding projects that truly would be better off without them working on it.

    When working with X Windows years ago, I ran into a few people that scared the hell out of me and other people. They had no vision or scope past the specific things they were trying to do, and would often come up with modifications or 'features' that broke more than they added to the project.

    In the Windows world of 3rd-party developers I have also found hundreds of people I wouldn't want to develop Hello World, as they had no concept of or regard for security, Unicode, or many other things that would fail when the applications were run on a non-English system with a user having administration privileges.

    You can even find many commercial products in the 3rd-party Windows world that have these same problems, yet are produced by big companies and are popular products.

    I wish that all ideas would be welcomed into a project, but the people having the final say could trump crap programmers and crap ideas if they are detrimental to the project.

    When you look at the Linux kernel or BSD, you can quickly understand why Linus and others don't want to hand the 'deciding' control to the masses, or both of these core OSes would become crap within months of unregulated programmer additions.

  • ...and if so, did it report any bugs?
  • This is the usual completely meaningless accounting, with a myriad of methodological flaws. You cannot make a general statement about bugginess of open-source vs. closed-source code. There are just too many variables to conduct a statistically meaningful study. The reason why open source code is better bug-wise is very simple: the user of the code can fix the bugs. As a developer, I hate to depend on closed-source code. Why? Because *every* moderately large code, open or closed, has bugs, and they invariab
  • by The Man ( 684 ) on Saturday October 07, 2006 @09:06PM (#16351895) Homepage
    The whole purpose of the study is to ingrain in the minds of readers the idea that Coverity makes software that can count the number of bugs in a piece of software, leading to the obvious conclusion that it can also identify them and therefore is an extremely valuable product for developers. Of course, this is not true. Coverity's products cannot tell you that your program includes an infinite loop (because it cannot solve the Halting Problem), they cannot tell you that your program will perform at a snail's pace (because the performance characteristics of a piece of software depend on the algorithms it uses, which cannot be reliably determined by examining code, as well as the performance characteristics of the machine on which they are run, which simply cannot be determined in any way by examining source), and they cannot tell you that your program is logically wrong (because they do not know what your program is supposed to do). These are, in the real world, the kinds of problems that occupy virtually all bug-fixing effort. Worse still, many of the problems that Coverity's products, like all other automated source checkers such as lint and gcc -Wextra, do report are in fact false alarms.

    Coverity, of course, knows that reports like this will be written up in exactly the way this summary was, clearly associating their company with the idea of enumerating the bugs in a piece of source code. While not illegal, this type of marketing is of course deceptive; while published papers describe the type of defects (or non-defects) actually detected, the overwhelming volume of commentary will reflect the broader, and incorrect, view that Coverity == bug-finder. It would be just as meaningful (which is to say, not very) to publish the number of lint warnings or missed opportunities to qualify pointer arguments with the const keyword, and neither would require an expensive piece of overhyped software.

    Just Say No to Coverity's marketing gimmicks.
