Firefox Analyzed for Bugs by Software

eldavojohn writes "According to a brief article on CNET, a company named Coverity announced that the Firefox project is using Coverity's software to detect flaws in Firefox's source code. Even more interesting is the DHS initiative funding Coverity to run this same bug-detection software on 40 open source projects." An interesting tidbit from the article: "Most of the 40 programs tested averaged less than one defect per thousand lines of code. The cleanest program was XMMS, a Unix-based multimedia application. It had only six bugs in its 116,899 lines of code, or 0.51 bugs per thousand lines of code. The buggiest program is the Advanced Maryland Automatic Network Disk Archiver, or AMANDA, a Linux backup application first developed at the University of Maryland. Coverity found 108 bugs in its 88,950 lines of code, or about 1.214 bugs per thousand lines of code." We've covered this before, only now the Firefox project is actually licensing the Coverity software and using it directly.
  • If this is the same (Score:3, Interesting)

    by Anonymous Coward on Saturday August 12, 2006 @11:53AM (#15894425)
    If this is the same as most automated testing software I've seen, it flags many things as bugs which aren't truly bugs. Accuracy on the automated testing tools I've been exposed to is around 40%.

  • Interesting... (Score:3, Interesting)

    by porkThreeWays ( 895269 ) on Saturday August 12, 2006 @11:58AM (#15894451)
    I find the AMANDA results interesting because AFAIK it hasn't received a code rewrite since the early '90s. I think an interesting study would be to compare older projects with ones that have been rewritten from the ground up, comparing the rate of new bugs introduced against those hidden in legacy code.
  • Re:Errr... (Score:5, Interesting)

    by twiddlingbits ( 707452 ) on Saturday August 12, 2006 @12:07PM (#15894500)
    Finding all POSSIBLE bugs in a software program means traversing all possible paths in the code with all possible inputs. That's a HUGE problem. You can "model" the code using logic equations and that helps some, but any errors in the conversion from code to logic equations invalidate the results. The DoD and NASA have spent many millions on solving this problem over the last 10-12 yrs. When I was at NASA we used several different tools (CodeSurfer, Purify, Lint, Polyspace as I recall) as each tool was better at one thing (e.g. memory leaks vs null pointer dereferences). The complete process took a couple of days to weeks, and then human eyes and expertise were still needed to remove false positives. A good site for all the tools out there, old & new, is http://spinroot.com/static/ [spinroot.com]. Looks like Coverity might be a good one to look into, as the best I had seen was CodeSurfer. All the good tools I have seen are commercial (NOT open source) and EXPENSIVE!! I'd love to see a decent open source tool to run as a first pass before applying the other tools. Another point is that these tools do STATIC analysis. Run-time analysis is a whole 'nother animal, but that area is improving with tools like DTrace in Solaris.
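    To make that concrete, here is a minimal hypothetical C sketch (function and buffer names invented) of two defect classes mentioned above that static analyzers are good at flagging: unchecked allocations leading to a possible null-pointer dereference, and a resource leak on an early-return path.

        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical example of defects a static analyzer typically reports. */
        char *read_first_line(const char *path)
        {
            FILE *fp = fopen(path, "r");    /* unchecked: fopen() may return NULL */
            char *buf = malloc(256);        /* unchecked: malloc() may return NULL */

            /* Possible null-pointer dereference: fgets() uses fp and writes
               through buf even if either call above failed. */
            if (fgets(buf, 256, fp) == NULL) {
                /* Resource leak: early return without fclose(fp) or free(buf). */
                return NULL;
            }

            fclose(fp);
            return buf;
        }

    A run-time tool such as Purify would generally only catch these defects if a test actually exercised the failing path, which is the static-versus-dynamic distinction made above.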
  • by Anonymous Coward on Saturday August 12, 2006 @12:09PM (#15894511)
    Amanda works on many Unix and unixoid operating systems; it's not a "Linux" backup system. It's used primarily for driving remote backups to big tape libraries; most /.-reading Linux users would never have systems large enough to justify its use. :-)

    Amanda IS, however, being very actively developed right now; lots of new features -> lots of new bugs. Another issue is that it has a componentized, plugin architecture, made of a few processes communicating over pipes and sockets. A failure in one component won't necessarily be a security risk or take the whole system down; it's extremely robust in normal operation in my experience, despite this "high bug count". Unlike XMMS, various contributed plugins (e.g. tape changer robot drivers) are redistributed in the source tarball but only used by very small numbers of people with outlandish hardware.
    I suspect if you included various XMMS plugins in the XMMS count, things would be different...

    None of that *really* excuses a high bug count - but what really pisses me off is Coverity's attitude of "we've found X bugs, but we're not going to tell you what they are or substantiate our claims, just FUD your project in various public fora." (Some of AMANDA is quite old code with a lot of strcpys, and I know that some automated security checkers will treat a strcpy as a "bug" even if it's safe.)
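    As an illustration of that strcpy complaint, here is a small hypothetical C sketch (struct and field names invented): the first copy can never overflow because the source is a fixed literal shorter than the destination, yet a checker that flags every strcpy() call will still report it.

        #include <string.h>

        struct tape_config {
            char device[64];
        };

        /* Provably safe: the literal is far shorter than the 64-byte buffer,
           but pattern-matching checkers flag the strcpy() anyway. */
        void set_default_device(struct tape_config *cfg)
        {
            strcpy(cfg->device, "/dev/nst0");
        }

        /* A bounded rewrite that silences such warnings without changing behaviour. */
        void set_default_device_bounded(struct tape_config *cfg)
        {
            strncpy(cfg->device, "/dev/nst0", sizeof cfg->device - 1);
            cfg->device[sizeof cfg->device - 1] = '\0';
        }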

  • by RebelWebmaster ( 628941 ) on Saturday August 12, 2006 @12:14PM (#15894539)
    Here are some links showing the bugs in the Bugzilla database that were turned up by Coverity:
    Open Coverity Bugs [mozilla.org]
    All Coverity Bugs [mozilla.org]
  • by bcrowell ( 177657 ) on Saturday August 12, 2006 @12:14PM (#15894540) Homepage

    You mean "who have brought the count of bugs that this tool can detect down to zero." I'm sure they will still have other bugs in their code and design.
    Yeah, if they could make a program that would detect all bugs in a program, it would violate Turing's proof that the halting problem is undecidable. [wikipedia.org]

    From the articles, it sounds like they're basically looking for mistakes that could lead to security flaws, e.g., buffer overflows. If AMANDA is particularly buggy by their metric (detectable bugs per thousand lines of code), it's probably because AMANDA doesn't interface to the web, so the people coding it knew that certain classes of buffer overflow "bugs" wouldn't be a problem, because they couldn't be exploited through an internet-facing interface. If you went back and ran this tool on Unix apps written in C in the 1980s, you'd probably find zillions of bugs, but that wouldn't indicate low quality; it would just mean that the programs weren't written for an internet-facing environment in the year 2006, when the internet has become a battle zone for evil spammers, botnets, etc. If the only way such a bug can show up is for the user to supply carefully tailored input, and the result is simply that the program dumps core, then that's not really a bug in a program that isn't facing the modern internet.
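    A minimal C sketch of that exploitability argument (the program itself is invented for illustration): the overflow below is only reachable if someone supplies a hostname longer than 31 bytes, so for a pre-internet local tool it is at worst a core dump, while for anything parsing attacker-supplied data it is a security hole.

        #include <stdio.h>
        #include <string.h>

        static void print_greeting(const char *hostname)
        {
            char buf[32];
            strcpy(buf, hostname);          /* overflows if strlen(hostname) >= 32 */
            printf("connecting to %s\n", buf);
        }

        int main(int argc, char **argv)
        {
            if (argc > 1)
                print_greeting(argv[1]);    /* dangerous only if this input is attacker-controlled */
            return 0;
        }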

  • Re:Errr... (Score:1, Interesting)

    by Anonymous Coward on Saturday August 12, 2006 @12:21PM (#15894570)
    I hope these Coverity guys aren't pompous enough to think that their tool can find ALL bugs in a program with... magic...

    I should hope not, as that is demonstrably false. For example, at one point the KDE project, with its I-don't-know-how-many millions of lines of code, had a Coverity rating of 0 open bugs, but I'm sure no one is silly enough to think that such a large and complex project has no bugs at all!

    Most static analysers look for very simple, easily machine-detectable, low-level imperfections which could conceivably lead to hard-to-spot bugs - not initialising a variable before it is used is probably the classic example of the kind of "bug" that would be detected by an analyser such as Coverity. I imagine Coverity is quite a lot more sophisticated than that, though :)
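    For reference, a minimal C version of that classic uninitialised-variable case (names invented): on one branch the variable is read before anything has been assigned to it, which is exactly the sort of low-level imperfection such analysers report.

        #include <stdio.h>

        static int parse_flag(const char *arg)
        {
            int value;                      /* never initialised */

            if (arg != NULL)
                value = (arg[0] == 'y');    /* assigned only on this branch */

            return value;                   /* read of uninitialised 'value' when arg == NULL */
        }

        int main(void)
        {
            printf("%d\n", parse_flag(NULL));   /* undefined behaviour an analyser would flag */
            return 0;
        }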

  • by Vlad_the_Inhaler ( 32958 ) on Saturday August 12, 2006 @12:24PM (#15894586)
    No major piece of software is ever bug free.

    I follow the news:linux.samba [linux.samba] newsgroup a bit. Various Samba features have shipped broken in recent releases.

    CIFSfs (it is replacing smbfs, and some Linux distributions have taken to disabling smbfs in the kernel to force people to switch) was broken in the newest major release. An intermediate release fixed that.
    'Valid Users' used with 'smbpasswd': that was broken in the intermediate release. The next intermediate release will fix that.

    No major piece of software is ever bug-free; at least the Samba guys are very responsive to error reports.
  • by Myria ( 562655 ) on Saturday August 12, 2006 @12:37PM (#15894628)
    Coverity sounds like a scam. It is not possible for a program to analyze another program and find all the bugs; see halting problem [wikipedia.org].

    I would find heuristic analysis annoying. I'd get tired of the program saying "fix this buffer overflow" 1000 times just because I use "strcpy" somewhere - even though I'm very careful and only use it when I know it can't overflow.

    I should write a program that searches for odd perfect numbers [wikipedia.org] and terminates if it finds one (see the sketch below). I wonder whether Coverity would say it is an infinite loop.

    Coverity sounds like scare tactics to make money by claiming to do the impossible. They won't even disclose what their algorithm is. I would never trust them, especially on closed-source programs. Firefox doesn't have that risk, but they are wasting money.

    Microsoft's PREfast is simpler but seems like a much more realistic solution: mark up your code to say how things are supposed to be used, and the compiler can decidably detect problems. I'd just get tired of typing 2 underscores a million times.

    Melissa
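    Here is a rough C sketch of the odd-perfect-number program described above (integer overflow is ignored for simplicity): it halts only if an odd perfect number exists, which is an open question in mathematics, so no analyzer can decide whether this loop terminates.

        #include <stdio.h>

        int main(void)
        {
            for (unsigned long long n = 3; ; n += 2) {          /* odd candidates only */
                unsigned long long sum = 0;
                for (unsigned long long d = 1; d <= n / 2; d++)
                    if (n % d == 0)
                        sum += d;                               /* sum of proper divisors */
                if (sum == n) {                                 /* n is perfect */
                    printf("odd perfect number: %llu\n", n);
                    return 0;                                   /* halts only in this case */
                }
            }
        }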
  • No rsync? (Score:3, Interesting)

    by ortholattice ( 175065 ) on Saturday August 12, 2006 @01:06PM (#15894756)
    Funny selection of programs; I don't see rsync on the list. From the article: DHS wants to reinforce the quality of open-source programs supporting the U.S. infrastructure. So, XMMS (an MP3 player) is more important to the U.S. infrastructure than rsync?
  • Re:Errr... (Score:5, Interesting)

    by twiddlingbits ( 707452 ) on Saturday August 12, 2006 @01:13PM (#15894799)
    I had some extensive conversations with the team at CodeSurfer, and they think the problem is NOT impossible, maybe more like polynomial time. The DOD was funding them (this was about 3 yrs ago) to try to develop a solution that worked for C/C++ and Ada. NASA wanted to tag along on the research but we were told it was "classified" and DOD only. It's rare for someone to turn down research money, so they must be on to something.
  • Re:Errr... (Score:3, Interesting)

    by twiddlingbits ( 707452 ) on Saturday August 12, 2006 @01:49PM (#15894954)
    Good programming practice says ANY function should give the same outputs ALL the time for the same inputs (i.e. if you put in a 2 today you get out a 4, and the same thing tomorrow). What you seem to be talking about are "side effects", where a global variable or input parameter is modified within the context of a function. Some programming languages DO allow you to change the value of a parameter within the function, and that result is passed back to the caller. In fact that's easy to do in C with pointers. It's harder to do in other languages. Either way, IMHO it's a horrible programming practice. The hardest thing I ever saw was a bunch of C programmers trying to learn how to code in Ada. All the "shortcuts" they used to use were removed by strong typing and strict rules. Testing OO code where you are changing the internal state of an object via one of its methods, or via another method (such as in C++), makes things a LOT harder to develop good tests for - and, I would suspect, good code analysis tools.
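    A minimal C sketch of that kind of side effect (names invented): the function changes its caller's variable through a pointer parameter, so the caller observes a different value after every call even with the "same" inputs.

        #include <stdio.h>

        /* Modifies the caller's variable in place through the pointer parameter. */
        static void add_interest(double *balance, double rate)
        {
            *balance *= (1.0 + rate);
        }

        int main(void)
        {
            double balance = 100.0;
            add_interest(&balance, 0.05);
            printf("%.2f\n", balance);      /* prints 105.00; balance changed as a side effect */
            return 0;
        }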
  • Types of bugs (Score:3, Interesting)

    by Dan East ( 318230 ) on Saturday August 12, 2006 @02:23PM (#15895050) Journal
    After looking at some of the results from the Firefox sources, I see that "bugs" include unreferenced variables and dead code that never executes.

    It looks like most of the real bugs consist of not checking return values, the worst being routines that act on an object allocated by another routine without checking for a null pointer.

    Dan East
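    A hypothetical C sketch of that pattern (routine names invented): the allocating routine's return value is never checked, so the routine acting on the object may dereference a null pointer.

        #include <stdlib.h>
        #include <string.h>

        static char *make_buffer(size_t len)
        {
            return malloc(len);             /* may return NULL */
        }

        static void fill_buffer(char *buf, size_t len)
        {
            memset(buf, 0, len);            /* crashes if buf is NULL */
        }

        void init(void)
        {
            char *buf = make_buffer(4096);  /* return value never checked for NULL */
            fill_buffer(buf, 4096);
        }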
  • by Anonymous Coward on Saturday August 12, 2006 @03:37PM (#15895278)
    For a company selling a software product, they seem stupidly protective of how much the damn thing is going to cost me to obtain. Try to find a price sheet on their website. It isn't there.

    The less up-front anybody is about costs, the less worthwhile their product usually is. And the more variable the cost usually is (i.e., as they figure out how much they can overcharge you). And no, I will not register with them for the "honor" of finding out more information. I'm guessing that it's something stupidly outrageous, since running their application on a bunch of open source programs cost $1.2 million - which anyone with a single copy and a free weekend probably could have done for themselves.

    They also don't disclose what their product actually does. So I'll join the other voices here in calling for an open-source alternative - one that has full disclosure about what the product is capable of and what it's going to cost you to use.
