Lies, damned lies, and statistics (Score 1) 226
Measures of defect density are meaningless, for the most part. I should know, because I have worked as a software metrics dude as a full-time job. Now, I must admit that I haven't read the article, but if he says that Linux has a higher defect density than other Unices, that can be accounted for quite easily.
Consider one difference between Linux and other unices, namely the definition of a "standard" distribution. There isn't really any such thing in Linux. As a result, we pretty much have an infinite number of systems, since any two programs that interoperate within the system can be classed as a sub-system. Ergo, we have a lot more problems with interoperability. If you want to bump up the defect density for propaganda reasons, you just count as many individual incompatibilities as you want, but treat the "area" as being fixed. So you can basically prove anything in terms of your numbers.
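To make the arithmetic concrete, here's a minimal sketch (all numbers and names are hypothetical, not from any real study): defect density is just defects divided by size, so if you count every pairwise incompatibility as a separate defect while holding the "area" fixed, the reported density balloons quadratically with the number of interoperating programs.

```python
# Hypothetical illustration: defect density = defect count / size (per KLOC).
# Counting each incompatibility between interoperating program pairs as its
# own defect, while treating the code "area" as fixed, inflates the metric.

def defect_density(defect_count, kloc):
    """Defects per thousand lines of code."""
    return defect_count / kloc

def pairwise_incompatibilities(n_programs, incompat_rate):
    """With n programs, the number of interoperating pairs grows as n*(n-1)/2."""
    pairs = n_programs * (n_programs - 1) // 2
    return int(pairs * incompat_rate)

KLOC = 500  # the "area", held fixed for propaganda purposes

for n in (10, 50, 100):
    defects = pairwise_incompatibilities(n, 0.1)
    print(n, defects, defect_density(defects, KLOC))
```

Same codebase, same rate of incompatibility per pair, yet the "density" grows without bound as you slice the system into more sub-systems. That's the trick.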
There's another difference here between Linux and other unices: users are expected to have a bit more common sense when it comes to ironing out the kinks. If there's a problem with one bit of software, they can often set it aside and work out how to fix it later. They can *still* have a system that works well and has a lot more features than the equivalent "other" unices.
Also, compare this with Microsoft's method of producing software. They don't give a damn about defect density. They realise too that it doesn't tell you anything. Instead, what they do is classify bugs according to their impact. Then they trade off testing/bug-fixing so that they only fix the major, high-impact stuff. Then they release what are effectively beta versions and let the customers find out the niggling errors that aren't too serious.
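The triage described above can be sketched roughly like this (severity labels and the release threshold are my invention for illustration, not anyone's actual process): classify every bug by impact, fix only the high-impact ones, and ship the rest for customers to find.

```python
# Hypothetical sketch of impact-based triage: only high-severity bugs get
# fixed before release; low-impact niggles ship and are found by customers.

bugs = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "minor"},
    {"id": 3, "severity": "major"},
    {"id": 4, "severity": "cosmetic"},
]

# Illustrative threshold: anything below "major" is left for the field.
FIX_BEFORE_RELEASE = {"critical", "major"}

fix_now = [b for b in bugs if b["severity"] in FIX_BEFORE_RELEASE]
ship_anyway = [b for b in bugs if b["severity"] not in FIX_BEFORE_RELEASE]

print([b["id"] for b in fix_now])      # high-impact bugs fixed pre-release
print([b["id"] for b in ship_anyway])  # effectively beta-tested by customers
```

Note that raw defect density never enters the decision at all, which is the point: a count that weights a cosmetic glitch the same as a data-loss bug tells you nothing about what to ship.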
This seems to be a new form of FUD tactic from a pro-Microsoft head. Since Linux is the enemy, simply pit Linux heads against Unix heads. It doesn't matter that the issue is irrelevant; it diverts attention from the real issues.
Beware of statisticians: numbers are an easy source of divisiveness.