Another Factor? (Score 2)
There are many different factors that weigh in on software quality. The article mentioned some of them:
1. Complexity: For many projects it isn't possible for a single person to understand the details of how every single part of the system works. This leaves the project with a number of possible sources for bugs.
2. Testing: It's impossible to test the entire range of possible inputs and compare them against expected outputs. Many real-world stimuli can't be predicted in testing, and often can't be dealt with.
There's another factor, which the article didn't mention:
3. Unlike other industries, there's no leeway. Either a product WORKS, or it doesn't. There's no such thing as kind-of-works. Take the automotive industry for example: cars still have showstopper bugs (cars have been recalled, as have most consumer products), but there are fewer possible causes of a showstopper bug, and automotive showstoppers are almost always safety related. If it turns out that a car has a part that is prone to leaking, the manufacturer waits until the consumer notices it and brings the car in to be fixed, because even while leaking, or while the engine is knocking, or while there is hesitation before acceleration, the car will continue to work acceptably. Consumers work around these bugs EVERY DAY, just like I work around the fact that my toaster's timer isn't quite exact, often pushing it down for a second run because the toast didn't get as brown as it usually does.
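A rough back-of-the-envelope calculation shows why exhaustive testing (point 2 above) is hopeless even for trivial code. The numbers here are illustrative, with a deliberately optimistic testing rate:

```python
# Exhaustively testing even a tiny function is out of reach.
# Consider a pure function of two 32-bit integers:
inputs = 2 ** 64            # every possible pair of 32-bit arguments
rate = 10 ** 9              # optimistic: one billion test cases per second
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (rate * seconds_per_year)
print(f"about {years:.0f} years to cover every input")  # about 585 years
```

And that's one function with two small arguments, with no internal state and no interaction with the rest of the system. Real programs are astronomically worse, which is why testing can only ever sample the input space.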
However, with a software product, every piece is in such a delicate balance that it takes only one thing going wrong for the error to propagate through the rest of the system, causing a crash. And often, the error propagation is completely unpredictable. This means that every part of the system has to work exactly as defined (with no allowance for random fluctuation or acceptable levels of correctness), when the slightest variation can throw every other piece of the system out of whack. In essence, every error can potentially destroy the entire system, no matter how trivial that error might be.
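To illustrate that kind of propagation, here's a toy Python sketch (the pipeline stages and names are invented for the example): a silent mistake in one stage passes through a second stage untouched and only surfaces as a crash in a third, far from the real bug.

```python
# Toy three-stage pipeline: the bug is in parse(), but the crash
# happens in total(), two stages away.
def parse(raw):
    # Bug: an unexpected input silently becomes None instead of failing here.
    return int(raw) if raw.isdigit() else None

def scale(values):
    # This stage happily passes the bad value along untouched.
    return [v for v in values]

def total(values):
    # The crash finally happens here: None + int raises TypeError.
    return sum(values)

records = ["10", "20", "oops", "30"]
try:
    total(scale([parse(r) for r in records]))
except TypeError as e:
    print(f"crash far from the real bug: {e}")
```

The stack trace points at `total()`, which is perfectly correct code. Nothing in the type of the data, the place of the crash, or the timing hints at the actual cause, which is exactly why a trivial slip in one corner of a system can take the whole thing down.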
This is why software companies release products with licenses that disclaim responsibility: they know they can't predict every possible usage situation. In places where such predictability is absolutely crucial (air-traffic control being the canonical example), products are written for a specifically defined environment, with a specific set of interactions. The focus is entirely on reliability in a single environment, at the cost of flexibility and features. In that situation, flexibility and features aren't necessary.
But in the consumer market, the story is quite different. Users want the utmost reliability, with flexible usage environments and all the features (yeah, everyone would accept a stripped-down product, but only if it were stripped down to exactly what they use). The bug situation is only exacerbated when the programmer has to worry about the actions of yet another software program affecting the operation of his or her own product, something which is not nearly as bothersome in a well-defined environment.
Not even automotive companies will warranty a car that has been used in a way not predicted by the manufacturer, nor will any other company if they don't consider the product's day-to-day use to be normal. It's just much harder to define what normal use is going to be in the software world. Ideally, it would be "used in exactly the same way, on the exact same machine, with the exact same setup as the developer's machine." But in the real world, that's never going to happen. That's one of the reasons big, single-purpose servers tend to be more stable than my machine at home: their usage environments are far better defined than that of the average PC user.