The Almighty Buck

Why Most Software Sucks 529

gregbaker writes "Shift Magazine has an interesting article on bugs. They look at why commercial software has so many bugs and attempt to figure out why the software industry doesn't seem to have the same quality standards as other industries." Every Slashdot reader who manages commercial software projects should print this 8-page article out, glue it to his or her bathroom mirror, and reread it every morning. But that's just my opinion - one probably shared by another 100 million+ disgusted computer users worldwide, whom the commercial software industry seems to think should happily eat whatever garbage it wants to throw at us.
This discussion has been archived. No new comments can be posted.

  • by Entity42 ( 67737 ) on Sunday October 03, 1999 @10:41AM (#1641195) Homepage
    You'll have to admit that no piece of software is ever free of bugs; they'll always be there, because it's impossible to predict conflicts between certain pieces of hardware, etc.

    The real problem is that many major manufacturers aim for a shipping date rather than a quality product.

    I'll probably get flame-baited for this, but 'that's the way things work these days.' I just hope OSS makes a big enough impact.
  • Just one more reason to keep software under the GPL. Though I must admit, companies developing software for Windows should be exempt from scorn over bugs. In all reality, how is a company supposed to put out a bug-free program to run on top of a bug-riddled OS?
  • Most software sucks because of Moore's law. Until researchers "hit a wall" in processing power, what possible incentive do people have to maintain/tweak/improve/optimize code when it's going to be scaled out of relevance in a year? I think the Moore's law marketing trend could stand to slow down just a little bit.
  • On the other hand, I wish that most users would have some appreciation for the vast complexity of even apparently "simple" pieces of software.

    You cannot have your cake and eat it too. If the software industry had the same stringent standards as other industries, 99% of software would not exist.

    If users weren't so rabidly feature-hungry, we wouldn't be in this mess. Unfortunately, users expect that each new version of software will have 23,456,236,263 (I counted) new features in it or they won't buy it.
  • Who's _really_ to blame?

    Is it the testers? No - they do what they can to get bugs fixed, but they can't control when the program goes to market.

    Is it the management who rushes the software out the door? Maybe. But what's their motivation to rush incomplete projects? The almighty stock price. Companies set targets for when software will be available, and then get slammed for inevitable delays. And I'm not talking about vaporware, like NT 5. I'm talking about real projects that are getting delayed for legitimate reasons. However, the big stockholders put tremendous pressure on the company to get products out there fast, because otherwise their shares go down. The board of directors, whose interests are those of the stockholders, puts pressure on management to rush products out the door. "What? Windows 98 is exactly the same as Win95? Ship it anyway - 98 is a bigger number than 95, so it must be better!" And from this we get the bug-infested software that we see so much of these days.

    So what's the solution? The public needs to realize that the software industry is very dynamic, and delays are inevitable in most cases. If the stockholders don't push projects out the door when they're not ready to go, the projects end up being better. And _that_ is good for the company, and for the end users.

    Just my $0.02

  • The Internet Tax Freedom Act sailed through the Senate on a nearly unanimous vote of ninety-six to two.

    Is that the bill recently proposed by Senator McCain, or the older thing? Yeah, I know, I'm clueless on this point, but hey, whatever. =P I don't keep up with U.S. law anymore.. heh.

  • they should have this information burned into their brains.

    i just do dopey little stuff when i program, but when i finally show it to someone, it's a matter of pride in my abilities and knowledge. if i was to release a program that had errors in it, i would look bad. most software companies should allow the time for extensive beta testing and let the programmers pore over the code.

    this is also where open source would help out tons. if you had to release the source, you would:

    1) have a MASSIVE base of programmers that could be working on it.
    2) be more likely to fix errors before releasing it. if M$ released the win9x source code, it would probably be more humorous than informative.
    3) see more healthy competition, because the company with the best code and the fewest errors is most likely to get the most customers.

    thus ends my rant.
  • by weloytty ( 53582 ) on Sunday October 03, 1999 @10:48AM (#1641205)
    That crappy software ships is, of course, no surprise. What I never can figure out is that in every article I see about this phenomenon, you see a line like this one:

    "Management hadn't given the programmers nearly enough time to do a decent job, and it showed."

    EVERY project I have worked on that turned into a disaster had this happen - at 3 different companies, too. And every project that worked and was on time happened because we did that old boring junk that no one likes to do:

    1) Write Specs
    2) Follow the spec.
    3) When the marketing department tries to add stuff, you say "Is it in the spec?" - "Sure, we can add it, but it is going to take X additional weeks".
    4) Test.

    Writing good software is not brain surgery. But you can't take shortcuts and expect everything to work fine.

    And while it is fun to slam management/marketing, programmers have to take blame too: lots of times we say "Yeah, it *WOULD* take a year for someone else to do it, but I am a programming genius. I can have it done in a month!".
  • First time I heard of this magazine... From the article I've just seen, it's very cool indeed. They even say the dreaded "f word" several times. I'll have to subscribe to it now :)
  • I admit, I actually held off posting on /. until I could get a first post. However, once that post was made, I felt cheap... cheap and dirty. I didn't want to waste time writing a meaningful post, because that leaves potential for someone to click "SUBMIT" before I do, and I couldn't have that. Besides, I later realized that I was moderated down so far that the glory was so short-lived it wasn't worth it.

    -I9mm-
  • by The Cunctator ( 15267 ) on Sunday October 03, 1999 @10:56AM (#1641209) Homepage
    I really believe some of the problem with both software development and all tech work in general is our need for paper. At my office, productivity, efficiency, and reliability are harmed by the fact that everyone has their own method of dealing with stuff, a lot of it being the constant printing out of web pages and other computer docs, because there's no one set-up that's good enough. I know it makes QA a nightmare, because you can't find the specs, or problems aren't reported in a standard way, etc. etc.

    Let's face it: the promise of HTML won't be realized until the whole Web is Slashdotized (not Slashdotted!). By that I mean that every page can be personalized - for that, effectively, is what these comment forums allow us to do. By the time this thread is fully played out and moderated, this thread [slashdot.org] will be more useful than the original article, because it will allow access to the article, and have insightful and useful comments and links highlighted. Can't really do that with paper.

    If you think about it, Slashdot is analogous to a QA system. Speaking of which, it might be cool to make a Slashdot-style interface to Bugzilla [mozilla.org]. Why shouldn't the whole Web, and by extension, the whole world, have a QA system? (I suspect some might argue that's the idea behind Everything [slashdot.org].)

    The Web has a ways to go before I can really feel it's cool. Which is why Mozilla [mozilla.org] could be the most important app ever.
  • by TheBeginner ( 30987 ) on Sunday October 03, 1999 @10:56AM (#1641210)
    There is one and really only one reason that commercial software is buggy: it's more profitable that way. Let's face it, no matter how much we whine and complain about the software being buggy, if it is even reasonably workable we buy it. And this is slashdot readers. Think about the general public. Most of them just assume that is the only way software can be written.

    I was talking to my sister recently (not a master computer user by any means) and she asked me why Windows crashes so often. I tried to explain about the bugs and conflicts between the various pieces of software she has, but she could not grasp the idea that a flawed product would be released intentionally.

    Why? Because in what other industry is it done? None. The fact is that consumers at large have just accepted the necessity of software bugs. Because of this, the software companies have little or no incentive to release a clean product.

    It would take much more money and many more man-hours to release a clean product, and that is time that it is simply not worth it to spend. This is capitalism at its high point. Often it works to bring a higher standard about, but in the case where the buggier product makes more money, it can do the reverse as well.

    A solution? I don't know if there really is one... perhaps make it so that a bug causes a computer explosion. Then, just like with the chips in our airplanes, companies would have to release quality software. Just a thought.

  • get the software right the first time, and people would lose a lot of jobs. if it's right, why would the software company pay them to make it better when it's already as good as it's gonna get?

    if there's bugs, the company can release Version X.X and get more money, to pay more people, to pad the exec's pockets a bit more, so they can have more bugs in the next issue, so that they can release....etc etc etc..

    heck, if windows was right the first time, why would we need win2k? if linux 1.0 had been perfect... why would we need 2.0.0.0.0.0.0.0.0.0.0?

    granted, it's not always about money. sometimes it's a simple oversight, or a situation that the testers/programmers didn't think about.


    Gorfin

  • I think you hit the nail on the head there,


    98 is a bigger number than 95, ship it.


    I often hear people tell newbies, when they're buying a computer, that the bigger the number, the better.
    There's an attitude rife throughout the PC world that you have to have the latest version of everything, no matter what the cost. Call it techno-lust, whatever. It's like the guy that bought a Discman a few years ago for 250 quid, and now they're giving them away.

    This kind of attitude is what drives this 'ship first, ask questions later' market. You should have seen my tutor's eyes light up when one of the guys in my class offered him a copy of the Win2K beta (he's a Microserf) - imagine how many bugs there'll be in that!


    People shouldn't have to accept that there'll be delays. People need to protest at having to put up with shoddy software, and maybe then the large corporations will put more effort into making better software.
  • Why would a company code at all? Just wait for your competition to write the program, then undercut them in price, since you don't have to recover development cost.
  • It all boils down to money, mr. eries.
    If all of these features didn't make users continually re-purchase the software (like "upgrading" to Windows 98, or 98 SE) it probably wouldn't be done.
    Almost all of the software that i use is free (and legally so!). But the common user wants software to be easy-to-use and helpful, (like "Clippy" in MS Office... Ugh.) so they would invest the money for these new features.
    They think that if they pay money for software, they'd be getting more performance than a free product; in most cases that is not true, as you probably know. People have to realise that in "CyberSpace" the old rules don't apply. Software is not really a tangible thing like a donut or a car is. Thus, the standard rules and laws can not apply very well.
    Just my 1 1

  • by barlowg ( 5396 ) on Sunday October 03, 1999 @11:01AM (#1641216) Homepage
    In 1975, Fred Brooks showed how many of the practices described in this article actually produce worse software and extend the time necessary to complete it, in his classic The Mythical Man-Month. However, management still does not seem to understand these basic concepts. Any software project, open or closed source, should heed Brooks's words. If you are a programmer or manage programmers, read this book!

    It seems a shame that most people in the industry have not read it and that most managers have little or no idea of how managing a software project differs from general management.
    --
    Gregory J. Barlow
    fight bloat. use blackbox [themes.org].

  • To me, an ideal project's timeline should be:
    • 33% design
    • 33% implementation
    • 33% testing
    And there are important constraints around this. For example, *no* features are to be added after the design phase, unless
    • it is absolutely critical, and
    • the requestors understand the implications for the schedule (i.e., more features = more implementation = more testing)
    In the real world, however, the PHBs financing the operation can't get this concept through their thick skulls. When this happens to me, I tell them my recommendations; it's their dime, and they can ignore it if they want to, but the reason they pay me (I hope) is that I know what I'm talking about. I've seen this happen before, and if something goes wrong, it's NMFP.

    They can feel free to buy more of my time to fix the problems they brought upon themselves.

  • Just like someone's signature i saw the other day....
    "Gates' Law: The speed of software halves every 18 months"
    :)

  • One of the main factors that has led to the current state of the software market, I would propose, is that it is, in fact, NOT a manufacturing-based industry, even though it is modelled after one. Think about it: when a company sells you a piece of software, IP laws and shrinkwrap licences and the hefty price you pay all lead you to believe that you are purchasing one "item". You have one "item" and cannot make more, or use that "item" in any manner for which it was not intended. If you wish to make this "item" available for others, you will be charged as if you now had several "items" (per-seat licensing fees).

    But that is not what software (is, should be, must become). It is a service, and needs to be modelled more appropriately after a service-based industry. I retain the services of a software company to help me do a certain task. They give me a piece of media worth 15 cents, but I am not paying for a thing. I am paying for the services of a company.

    Think about the difference in current markets set up like this: shouldn't software be more like getting a doctor or a plumber, instead of like buying a car or a hammer? Information, that which makes up this "thing" they want to sell me - it is not a "thing". It is just a service they provide, to help me serve web documents or print a document. If I do not like their service, I find some other, better provider.
  • 1) The stockholders don't like it when the ship date isn't met, because it drives the price down.

    2) Marketers don't like it when the ship date isn't met, because it means the product will not be released when people are most hyped about it.

    Would one possible answer be that software companies should be more vague about their ship dates? Announce specific dates later in the project so that you have a better idea of what problems will have to be dealt with.

    Though this would probably make marketing's job harder, it may keep the Wall St. boys happier.
  • Humans program computers. Computers want perfection. Humans are not perfect. Within a software project there are so many uncertain elements: the skill of the programmers, deadlines, the learning curves of new technologies.

    Unfortunately, software is not like building a house. You know the design of the house, and you know how to put one up, because you've done it before; you know what can happen - it falls down, or there are cracks in the walls, or other defects. Every software project is different, and more complex than building a house or than other engineering industries; there are always different problems and different complexities involved in each project. If a deadline is approaching, more often than not corners will be cut - part of the project not tested properly, or the software cut down in complexity - which can lead to bugs in the code.

    Until machines program computers (which will be possible, sooner rather than later), software will be buggy. There are, however, ways code can be made less buggy - for instance, automatic testing tools, so humans don't have to repeat the same boring process of testing functionality, which otherwise leads to cut corners, or to functionality tested incorrectly because the tester doesn't understand the application. Of course, such tools are only as good as the tester's use of them: poor automatic testing means poor application testing, and a higher probability of bugs. Testing tools also cannot test for the "odd behaviour" of users, so manual testing will still need to be done, and setting up auto testing tools takes lots of time.

    There are programming languages, of course, that make coding easier. But in the end it's a human that has to program, test, and debug, and the problem is that we are not perfect, and computers want perfection. You can't compare software to other industries; it's very much more complex.
  • Having been brought up in the Microsoft world, I've come to expect that Windows PCs are not reliable and that the 'release early and fix later' mentality is a valid one.

    But as I have got older, I have had two experiences which have changed that.

    The first was Linux/*nix. Here was an OS that was stable and didn't crash or need multiple reboots after every software installation.

    The second was working for the largest Telco in my country. When we had a software release, it had better be bug-free: if we had an outage for a couple of hours, it would cost the telco several hundred thousand, as the application was used nationwide by thousands of users, 24 hours a day, 7 days a week. The Telco demanded that the software be stable, and was happy to allow extensions of delivery dates to ensure that happened.

    How did these experiences affect me?

    First, Linux/*nix showed that it is possible to create stable software. Second, that if the customer wants a quality product, software developers will produce the goods.

    So IMHO, it's not the shareholders that I would blame, but the customers. If the customers kicked up a big enough stink and looked for alternatives, M$ share price would drop which would hurt it where it really hurts.

    But customers don't really have many alternatives, nor do many of them know that software can be stable. Maybe we are forever doomed to have buggy software?

  • When I was in high school, two alumni came to speak before the student body. The two students worked for Microsoft, and one of them was (at the time) the head of the Internet Explorer development team. He was talking about their upcoming release of IE5, and noted that they still had to fix some bugs before releasing, but that IE5 would ship with approximately 2,500 KNOWN bugs. He also said that this was a relatively low number of bugs, and that he was proud of his team for achieving such a high efficiency level.

    Isn't anyone else a little bothered by the fact that Microsoft, and presumably other Big Software companies, have convinced themselves that this is okay? They spend so much coding time adding bloated features with lots of bugs, rather than fixing the ones that are already there. Shameful.
  • The phenomenon of bad code is not unique to commercial software. Despite the popularity and success of some open source software, there is a lot of bad code out there. Anyone who can figure out the three letters 'gcc' seems to be able to post their code to sunsite or freshmeat.

    Despite having the source code, the amount of time and effort required to understand a large amount of code is overwhelming. How many programmers would volunteer to fix Windows bugs?

    Consider the lesson of the WWIV BBS software, which was an experiment in Open Source commercial software. The program was written by a C instructor, who distributed the code (although you had to pay). People would "learn to program" by changing the source code, and once the code worked, distributing it.

    The sad fact is, most people can't program. The classes don't really teach anything other than the syntax of the language, which they can then put on their resume and get the fast money.

    What I'd like to see is: First, quality standards for software. Software is the only form of engineering in which there is not some sort of standard of how many or what sort of bugs are acceptable.

    It's possible that C/C++ are not the best languages for Application development. Research has gone into developing new languages, such as Eiffel.

    Lastly, software quality ends at the programmer. Think before you vomit unmentionable code at the feet of the rest of us.

  • Could be any. It was just a generic example, I believe. Lots of games are rushed like that.
  • On (most versions, methinks; I don't remember 3.1, for instance) Windows, the three-finger salute is supposed to bring up a Windows dialog which will allow you to close an arbitrary program, or reboot the entire machine.
  • this article ignores what is clearly the worst part of UCITA: legitimisation of prohibiting reverse engineering of a product.

    The whole lack-of-a-warranty thing is nothing new in software, and i seriously doubt any company would try the PR disaster of setting up their program so that they can kill it remotely. But we should really be worried about anything that gives a company the power to prevent someone from using cleanroomed reverse engineering on their product.

    The big defense of software licenses is that hey, if you don't like it, you can take it back to the store. But in a world without reverse engineering, you have to face the possibility that at some point you'll wind up with a program where switching to a different program isn't an option, because the new program is prohibited from communicating with the old one, or prohibited from using the old one's files, and you'll be left with a large amount of data or work rendered useless.

    (A question: could UCITA apply to hardware as well as software, such that, say, Nintendo could put a no-reverse-engineering software licensing requirement on the N64, and then use that to prosecute anyone who exercises their right under the patent laws to cleanroomed reverse engineering? What if you didn't actually buy or open the product yourself, but just found it lying on the street or something? Do you still violate the license?)

    anyway, this article is completely right. software makers, especially those that make web browsers, pay a little too much attention to "features" and not quite enough to stability..

    -mcc
    why web browsers suck: http://home.earthlink.net/~mcclure111/cyberleary.html#discontent
  • by Kitsune Sushi ( 87987 ) on Sunday October 03, 1999 @11:16AM (#1641243)

    I'm going to kill whoever thought it was a good idea to have the ads reload every 10-15 seconds on the site that article is on. Grr.

    Talk to any programmer, tester or honest manager and they'll tell you a very different story about software. It is the unspoken scandal of digital culture, and it goes like this:
    Software is badly made. More than that, it is often horribly made. It is developed with the sort of irresponsible abandon that would be unconscionable if it were applied to bridge-building, car-making and possibly even plumbing. And the internet has only made matters worse by encouraging dot-com companies to rush products out ever faster, despite the fact that software is now more complex than ever. Desperate to ride stock-market hysteria and the sea of investment dollars for dubious projects and websites, software companies cram their wares through on shorter and shorter timelines, with no latitude for serious planning, testing or concern for quality. "It's ship first and ask questions later," says one weary programmer, a survivor of a database company.

    Hmm. Well, most glaringly, this kind of assumes that all programmers are found in the commercial sector. In simpler times, I would not have found this incorrect assumption to be all that surprising. In light of the recent thrust of GNU/Linux into the spotlight, it appears to be more of a gross oversight.

    On the other hand, does anyone here really think that every proprietary software product is a horrible piece of.. whatever? And no, Microsoft does not make all known proprietary software products, contrary to the belief of many conspiracy theorists who have spent too much time on board alien space ships.. ;) I would imagine that a great deal of commercial stuff is actually good and relatively bug-free. Not all of it, but just because all of it doesn't kick ass doesn't mean that all of it sucks, either.

    In this context, perhaps it's no wonder we face the stinging paradox of the computer age - that even though we have ever more way-kewl digital tools, our productivity has not budged an inch.

    Oh wait.. No wonder. The writer has obviously spent too much time around script kiddies and other lower network life. ;)

    "The average American would never buy an electric razor... as buggy and unreliable as a PC," says Bruce Brown, founder of BugNet, a Sumas, Washington-based firm that reports on bugs and provides fixes.

    Actually, I know a lot of idiots with electric razors that don't shave a person's face (or elsewhere) very well. I consider that to be "unreliable".

    Perhaps the most astonishing fact is that we, the customers, let software companies get away with products of such atrocious quality. We take every frozen screen as part of the package. We don't complain. And we sit back while, unregulated by government and cheered on by the stock market, software makers embark on a race to the bottom. They're almost all the way down.

    "We, the suckers who don't know any better.." But seriously, who doesn't complain? I'd like to meet the person who isn't at least mildly ticked upon being visited by a BSOD.

    Once a program goes over a few million lines of code, no one person can hold the structure neatly in their head.

    Encapsulation and modularization are your friends ..

    This is particularly true with operating systems, which have skyrocketed in size and developed thousands of byzantine byways. Windows 95 had eleven million lines of code; Windows 2000 is slated to have a mind-blowing twenty-nine million.

    ..too bad the people from Redmond don't make nice to these two potential buddies..

    On top of the software issues, there's the challenge of hardware. The explosion of cheap PCs has created thousands of different machines built in completely different ways. This one has a nVidia graphics chip; that one has a Turtle Beach soundcard; yet another has some memory sold out of the back of a van in Mexico. Such variability makes it hard for software to run smoothly on each box.

    GPL your drivers!!

    I don't know.. After reading this entire article, I was first surprised that the writer even knows the correct term for a BSOD, and also I just have to ask: Can anyone come up with any reason why free software would not be considered a Very Good Thing to inject into the software industry, as far as the average end-user is concerned? Obviously it's not quite so helpful to big business. ;)

  • So you would have us go back to ...shudder... COBOL? Excuse me while I go into spasms.

    Seriously though, OOP doesn't mysteriously generate bugs. It's poor programming practice that does it. Go take a look at Eiffel [loria.fr], an OOPL that forces you to program well. It even has a GNU compiler.
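
    For the curious: Eiffel's big idea is design by contract - routines declare 'require' and 'ensure' clauses that are checked every time the routine runs. A rough hand-rolled analogue in C (illustrative only; the names are made up, and assert() is a much blunter tool than real Eiffel contracts):

        #include <assert.h>
        #include <string.h>

        /* A C approximation of an Eiffel contract. Real Eiffel checks
           these clauses automatically and reports which one failed. */
        char *append_char(char *buf, size_t cap, char c)
        {
            size_t len;
            assert(buf != NULL);            /* require: a valid buffer      */
            len = strlen(buf);
            assert(len + 1 < cap);          /* require: room for c plus NUL */
            buf[len] = c;
            buf[len + 1] = '\0';
            assert(strlen(buf) == len + 1); /* ensure: exactly one char added */
            return buf;
        }

    The contract becomes part of the routine's interface, so a violation blows up right at the boundary where the mistake was made instead of silently corrupting data.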
  • So far the commercial offerings are pretty even with open source offerings. Everything crashes. Having a source tree to compile makes no difference if you can't navigate the 100,000 lines of code that make it up. If you're dealing with a niche market of users who are less interested in CS than they are in playing guitar, you won't get any feedback from the users. You're the only one who knows that source code well enough to hack it.

    If you're dealing with a niche market made up of CS majors you'll get cleaner Makefiles and configure scripts but that doesn't make the software any more reliable. Only when you deal with one or two fundamental, basic necessities of computer operation does the source code become useful.

    In 4 years of producing source code, the lion's share of complaints came from users who couldn't compile the source code, while binary releases were merely a matter of resetting LD_LIBRARY_PATH. It's a lot easier to give them a binary than to start an endless discussion on compiler flags.
  • If the software industry had the same stringent standards as other industries, 99% of software would not exist

    Ah, but the 1% that did would be quality stuff. The interesting question is, would we be better off with a smaller quantity of software, with higher quality?

    I think so. I know I don't need 1e6 features in my word processor, or web browser, or [insert application here]. Most people don't, and I'm sure a lot of people would trade unused features for stability.

    I also think that if some companies would start creating software with stringent quality control, avoiding feature-bloat and the accompanying bugs, they could make a name for themselves (with the appropriate marketing, of course). Thinking of buying M$ Bloatware 3000? Buy our app instead. It has all the features you actually use in the MS product, but ours actually works!
  • There's a two-part solution:

    A) Never publicly announce release dates until the product is in the final stages of testing. Even so, don't be specific until FedEx comes to pick up your GM.

    B) Internally, keep target dates for different stages of the project, and update them weekly to reflect your progress. Try to make them accurate estimations for your marketers to plan by, rather than deadlines for your developers to meet. Make sure your marketers understand that there may be unforeseen delays, so that they don't start hyping a product too soon.

  • by Kitsune Sushi ( 87987 ) on Sunday October 03, 1999 @11:24AM (#1641251)

    The grand majority of your project time should be spent in the design phases. The less time you spend on design, the more likely you're going to fuck up during implementation, and thus the more time you're going to spend doing testing. Design should consume about 50% to 70% of a project's time (or at least closer to those figures than 33%). After you have a full, working design down on paper, implementation shouldn't be all that hard to nail down quickly, unless your programmers really are quite clueless.

    The reason you should spend so much time on design is so that before you even write a single line of code, you know everything that the program is supposed to do. You figure out the best way to implement each feature, and whoosh! You're off. A lot of bugs are solved this way before you even go to your favorite text editor. Moral of this story? If you don't implement something incorrectly in the first place, you won't have to fix it later. It's a Good Thing. And most of your bugs will be typos and other assorted weirdness rather than critical design flaws. A change in design during implementation is much harder, and quite time consuming. You'll be much better off if you have an extremely clear view of the design beforehand.

    How much testing you do shouldn't really be set down as any predesignated percentage, AFAIC.. You test it until it's done being tested. Besides, how much time that will require depends entirely on your licensing and how you plan to test it. ;)

  • To those of us whose idea of an elegant OS is Unix, that paperclip seems really stupid, and we laugh at it. But not every user can sit down in front of an unfamiliar UI and get their work done.

    I like the paperclip. It saves me time. I can show a user how to use it to find the answers to their questions without asking me. Being able to type "How do I change the margins?" and get an answer is very useful if you need that sort of help. Think of it as a cute/annoying first-person graphical front-end to searchable help files.

    Believe it or not, Microsoft actually does spend some time researching UI intuitiveness.

  • by NickHolland ( 91075 ) on Sunday October 03, 1999 @11:37AM (#1641260)
    In the 1950s through the 1970s, people bought cars based on features and looks, not based on reliability and quality. There were a few cars that were well built and lasted, but for the most part, they rotted on the lots, while their flashy but unreliable competition sold.

    People used to line up to see the new models. People with money lined up to BUY the new models. Knowing full well it probably didn't start any better than the old one.

    It wasn't that Detroit was incapable of BUILDING a quality product; it was that the consumer was unwilling to buy the boring but solid product. It wasn't until the late '70s and early '80s that consumers suddenly demanded QUALITY over LOOKS from Detroit, and Detroit responded. (In the auto industry, the problem was the speed at which the change in consumer attitude took place. You can't change the manufacturing criteria from flash to quality overnight.. and no one was sure if this was a passing fad or a real trend.)

    The facts are, if the consumer demands something with their money, they will get it. Complaints don't mean a thing.

    Another example: Airlines. People complain about late flights, they complain about lousy service, but they book the cheapest flight. Duh! Leaving on time costs money! Great service costs money! If the consumer buys the cheapest product, they can complain until they are blue in the face, nothing will change, at least not for the better.

    It is simple economics. There is great demand for flashy software. There is little demand for quality software. While that is the case, that's what the consumer gets. The software industry better hope consumers don't change their mind on flash vs. quality overnight, like happened with the auto industry.

    I've been supporting bad software for many years now. I'm starting to detect a change in attitudes among business people (although I am probably guilty of contaminating the sample!). I think this is good.

    Consumer complaints don't enter into economic decisions. Dollars do.

    Nick.
  • It's used to kill a program that has crashed. For example, if you went here [1wh.com], you would have to use CAD to regain control of your desktop.
  • ok, what you say is true.. but not always.

    with some programs, all that matters is whether they get the task done, and not whether they do the task well. But not all. The perfect example of something that works the other way is a web browser: stability really _is_ more important than features there, because a browser isn't simply performing a certain task and then exiting; it's omnipresent, always running in the background, never being quit, since there's no way of knowing when it will be needed again.

    as someone on an OS with no memory protection (macintosh) i wind up with this problem amplified-- since any one program crashing automatically means i have to reboot. And MSIE causes more crashes than any other app for me, almost always while i'm just kinda checking on something on the web as i do something in another program. (and stability is one of the main reasons i switched from netscape to begin with.)

    which brings me to the most important thing: the ship-first-fix-later philosophy doesn't work unless you actually fix later. Meanwhile the web browser companies _never_ go back and do the fix-later part; they just ship, over and over, constantly adding new features and never considering the validity or stability of old ones. The proverbial feature freeze doesn't even happen _after_ the product is shipped.

    The point is, even if features are more important than stability, at some point stability should be at least a _consideration_.

    -mcc
    why web browsers suck: http://home.earthlink.net/~mcclure111/cyberleary.html#discontent

  • There are many different factors that weigh in on software quality. The article mentioned some of them:

    1. Complexity: For many projects it isn't possible for a single person to understand the details of how every single part of the system works. This leaves the project with a number of possible sources for bugs.

    2. Testing: It's impossible to test the entire range of possible inputs and compare to outputs. Many real-world stimuli can't be predicted in testing, and often can't be dealt with.

    There's another factor, which the article didn't mention:

    3. Unlike other industries, there's no leeway. Either a product WORKS, or it doesn't. There's no such thing as kind-of-works. Take the automotive industry, for example: cars still have showstopper bugs (cars have been recalled, as have most consumer products), but there are fewer possible causes of a showstopper bug, and automotive showstoppers are almost always safety-related. If it turns out that the car has a part that is prone to leaking, the manufacturer waits until the consumer notices it and brings it in to be fixed, because even while leaking, or while the engine is knocking, or while there is hesitation before acceleration, the car will continue to work acceptably. Consumers work around these bugs EVERY DAY - just like I work around the fact that my toaster's timer isn't quite exact, often pushing it down for a second run because the toast didn't get as brown as it usually does.

    However, with a software product, every piece of the product is in such a delicate balance that it takes only one thing going wrong for the error to propagate through the rest of the system, causing a crash. And often, the error propagation is completely unpredictable. This means that every part of the system has to work exactly as defined (with no allowance for random fluctuation or acceptable levels of correctness), when the slightest variation can throw every other piece of the system out of whack. In essence, every error can potentially destroy the entire system, no matter how trivial that error might be.
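
    A minimal C sketch of that propagation problem (hypothetical example - the point is the distance between the mistake and the crash):

        #include <stdio.h>

        /* The real fault is here: fopen() can return NULL,
           and nothing checks for it. */
        FILE *open_config(void)
        {
            return fopen("app.cfg", "r");
        }

        void load_settings(void)
        {
            FILE *f = open_config();
            char line[128];
            fgets(line, sizeof line, f);   /* the crash happens here,  */
        }                                  /* far from the actual bug  */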

    This is why software companies release products with licenses that disclaim responsibility. They know they can't predict every possible usage situation. In places where such predictability is absolutely crucial (air-traffic control being the canonical example) products are written for a specifically defined environment, with a specific set of interactions. The focus is entirely on reliability in a single environment, resulting in a loss of flexibility and features. In that situation, flexibility and features aren't necessary.

    But in the consumer market, the story is quite different. Users want the utmost reliability, with flexible usage environments and all the features (yeah, everyone would accept a stripped down product, but only if it were stripped down to what they use). The bug situation is only exacerbated when the programmer has to worry about the actions of yet another software program affecting the operation of his or her own product, something which is not as bothersome in a well-defined environment.

    Not even automotive companies will warranty a car which has been used in a way not predicted by the manufacturer, nor will any other company if they don't consider the product's day to day use to be normal. It's just much harder to define what normal use is going to be in the software world. Ideally, it would be "Used in exactly the same way, on the exact same machine, w/ the exact same setup as the developer's machine." But in the real world, that's never going to happen. That's one of the reasons big, single purpose servers tend to be more stable than my machine at home. The usage environments are far more well-defined than that of the average PC user.


  • I have developed on both NT and Windows98 (for either) and can tell you the OS can stop software development cold.

    ActiveX controls and DLL's... I can't tell you how many times a bug is introduced in a new version of Comctl32.dll but is then fixed in a later version. It is often extremely difficult to determine whether the bug lies in your code, the development environment you are using, or in the operating system. The more time you have to spend tracking down OS bugs, the more of your own bugs get through.

    There is also an extremely annoying bug in Windows 9X which causes the tab key to switch between programs rather than insert a tab (it seems the taskbar has grabbed the focus). If you use the tab key to autocomplete or to indent your code, this makes the system unusable. The only solution I've found to date is to reboot.

    So with NT/98 I waste my time debugging and rebooting when I should be coding and bugfixing.

    Doug
  • And I owe it all to you!

    If you honestly thought that I meant that the entire bulk of Windows was compiled from a single source file, I don't think you have any right whatsoever to call anyone ignorant.. except, perhaps, for yourself.

    Anyway, I'd say something more useful, but you're obviously trying to craft a troll. I kind of got the feeling right off the bat, but I thought this post [slashdot.org] was a complete riot. Sorry, you'll have to do better than that to sucker me in. ;)

  • The open source movement doesn't have enough presence to be fully recognized by the article. Aside from that, open source software is STILL plagued by these problems.

    Red Hat can't go a week without having a security problem found. It recently IPO'd; I only expect this to get worse.

    Both KDE and Gnome have some serious problems, partially due to the small number of developers on the projects trying to do a big job.

    The Linux kernel (ugh..) also has some very serious problems. Though not as bad as the more popular Windows, it's not as good as it could be.

    In conclusion, GPLing doesn't make for a better product; it just makes people feel better about themselves. As Linux becomes more commercialized, these problems will only get worse.

    I still fully believe that the best Linux kernel & library set was around the 1.2 kernel releases. I don't see this changing for quite a while.

    OS/2 was quick at fixing bugs, but it shipped with a lot of them. Microsoft obviously has bugs (notice how it performs better if everything on the machine is M$, though?). Macs die too.. don't forget that. Linux takes a little longer to kill, but just give it a load over about 25 and watch it plummet.

    BeOS will be my next try... I hear it's great for multimedia stuff, but not to expect much in the office suite department.

    Yes, yes... Linux is still more stable than some of the others. I just wish the hardware that was reported as being supported was truly (stably) supported. SBLive, anyone?
  • It's the worst kind of bug... an API that invites errors. Where it takes a dozen lines of code just for the equivalent of if(pid = fork()), the chances of an error are quite high.
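
    For anyone who hasn't written for both systems, the contrast looks roughly like this (Unix side in C; the Win32 side is summarized in a comment rather than spelled out):

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();    /* one call; the classic pitfall is */
            if (pid == 0) {        /* typing '=' where '==' was meant  */
                _exit(0);          /* child */
            } else if (pid > 0) {
                wait(NULL);        /* parent */
            } else {
                perror("fork");    /* failure */
            }
            /* The Win32 equivalent, CreateProcess(), takes ten
               parameters plus STARTUPINFO and PROCESS_INFORMATION
               structs that must be set up first - roughly the "dozen
               lines" referred to above. */
            return 0;
        }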

    The undocumented APIs that MS uses in its software make matters worse. In order to offer comparable features, the competition must also use the undocumented API, only it has to reverse engineer it first, and it never knows which ones will disappear, or when.

    Windows also has a lot of leaks (otherwise, how could one app crashing take the whole system out?). That not only maximises the damage, but also complicates testing. If software cores, you know it's got a bug (and often, where it is). If software renders the system fragile so that it will crash hours later, who knows?

    Windows could be blamed for not keeping third-party software on a tighter leash, but not be blamed for the fault itself.

    When a worker falls from a catwalk with no railing, the employer that failed to have a railing is the one that gets the blame. Software should work the same way.

    The only time I have ever managed to crash any *nix system during development was doing driver development. Otherwise: core dump, load the debugger.

  • Compare Microsoft's resources with those of the Linux hackers: M$ has a thousand times more money, M$ has access to every piece of hardware on the market plus early access to all the new designs still under development, for every Linux hacker there are ten M$ programmers and two dozen support staff. According to traditional capitalist theory, a programmer at M$, with his potential to become a millionaire from stock options, should be vastly more motivated than a Linux hacker working for free.

    So why doesn't Linux suck half as bad as those products described in that article? Could it be that capitalism ruins everything? When a Linux hacker, or even a tenth-rate amateur hacker like me, produces a piece of code, we do it because we want a working program. When M$ does whatever you'd call what it does, it couldn't care less about how well the program itself works; the one and only consideration is, "how much money will this thing bring us?"

    And it isn't just software. It's the pesticide-laden, hormone-laced food you eat and the polluted air you breathe; it's the way you waste five or ten hours of your lives each week sitting in traffic jams, because "the system" requires everyone to be at their work station at 8:00 AM sharp; and when you drag yourself home from work, it's the idiocy you're offered as "entertainment" on TV. See, food, water and air don't count for anything; the hours of your life are valueless; art is nothing but another commodity - the only fundamentally important thing, the one criterion by which all human effort is planned and judged in this society, is money, money, money.

    The lust for money, accurately described in the Gospel as "the root of all evil," dictates every facet of American life, including even immaterial things like religion - nowadays, seemingly, a wholly owned subsidiary of the more pro-business of America's two pro-business political parties. Life in the U.S.A. is primarily controlled by the very rich and by corporations, and every day that goes by sees the corporate stranglehold over your life and mine, even down to the most insanely minute detail (e.g. what files do you have on your hard drive? the NSA has got to know! what did you smoke last weekend? piss in this cup so the boss man can check!) only tighten.

    Karl Marx spelled out a potential solution a century and a half ago. But ninety percent of the people reading this post, having had their brains marinated in anti-socialist, pro-capitalist propaganda their whole lives, will be so spooked by the recital of that dread name, that they won't ever bother to even consider the possibilities of a different form of government than the absolute, unlimited reign of capital.

    No! instead of that, we all must have faith in the all-beneficent "invisible hand" of capital! Everything will come out for the best in the end! Sure it will.

    Yours WDK - WKiernan@concentric.net

  • That sucked, my apologies for not using preview :-) Here it is in better form:

    There are different kinds of design; you can't just use one "x%" statistic to describe the time necessary. Take, for example, a program that is supposed to reverse a string.

    The design spec will say:
    * Dialog will pop up with an aspect ratio of 15x6
    * Dialog will have an input field that is 60% the width of the dialog
    * Dialog will have an "OK" button directly to the right of the input field
    * The input field on the dialog will accept all character input such as ascii, DBCS, SBCS, etc
    * User clicks OK, a new dialog pops up with the string reversed.
    * etc

    So this is handed off to the programmer. And the programmer is the one who has the expertise to design what he/she does:
    * Once the user clicks OK, go to Parse_Input()
    * In parse_input, if any of the characters are not ascii, go to Non_Ascii_Parse()
    * In parse_input, if the characters are all ascii, reverse 'em
    * etc
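
    A minimal C sketch of the flow just described (hypothetical names, ASCII branch only):

        #include <string.h>

        static int all_ascii(const char *s)
        {
            for (; *s; s++)
                if ((unsigned char)*s > 127)
                    return 0;
            return 1;
        }

        static void reverse_in_place(char *s)
        {
            size_t i, j;
            char tmp;
            if (*s == '\0')
                return;
            for (i = 0, j = strlen(s) - 1; i < j; i++, j--) {
                tmp = s[i];
                s[i] = s[j];
                s[j] = tmp;
            }
        }

        /* Parse_Input() from the design above */
        void parse_input(char *s)
        {
            if (all_ascii(s))
                reverse_in_place(s);  /* plain ASCII: swap bytes end to end */
            /* else: Non_Ascii_Parse() - DBCS strings can't be reversed
               byte-by-byte, which is just what the tester's IME cases
               below are probing for */
        }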

    While the programmer is working on implementing the spec, the tester can go through the spec and /design/ the test plan:
    * Test running the app on a japanese OS
    * Test running the app on a US OS with a japanese IME (input method editor, allows you to type japanese characters)
    * Test running the app on OS#1, OS#2, OS#3
    * Test running the app when the system is very low on memory/disk space
    * Use characters such as & and * and tab in the input field and make sure program has correct output
    * etc

    With proper peer review (say the design spec didn't account for non-ascii and the tester or programmer noticed), the major issues can be caught sooner rather than later. But even after the design spec is 'done', the programmer will run into problems that might be best solved by changing the spec. Or a customer calls in to say that they need this program to do bidirectional as well which results in more work in adding to the design spec, more work for the programmer, more work for the tester.

    I don't think there's any one "% design time, % coding time, % testing time" that will apply to all software projects, or even all software projects of the same class.

    Plus, not all software designers, programmers and testers are of the same skill level.. a lesser skilled programmer will probably need more time than a more experienced one. So maybe in that case, the designer would allocate extra time in design stage and make the spec much more detailed to help the programmer with code flow, etc.
  • "Brooks's Law" as you call it is not a definite relationship, as you seem to believe. Rather, Brooks shows that as you add people to a project, communication problems tend to overwhelm the project. In another work by Brooks, No Silver Bullet, Brooks argues that no "silver bullet" (technological advance) had yet been produced that would advance software development by an order of magnitude. I think that we are now seeing this silver bullet, the internet. It allows for communication between vast numbers of people across a wide area.

    The principles of The Mythical Man-Month also apply to open source, however. Obviously, as you add people to an open source project, the per-person productivity decreases (which is much of what Brooks says in MMM). The reason open source projects are successful is good organizational structures, which Brooks emphasized.
    --
    Gregory J. Barlow
    fight bloat. use blackbox [themes.org].

  • perhaps what many are saying is true: software isn't like most other industries. but it's really not that different from the semiconductor industry, yet for some reason we don't have wild "crashing" and "bugs" in semiconductors (short of the occasional pentium slip-up :).

    i think you're cutting software developers too much slack. sure software is difficult and complex, but have you really tried to understand the layout of cutting edge semiconductors?

    i used to work for an FPGA manufacturer (who shall remain nameless). our FPGAs were/are cutting edge, and despite my degree in Electrical Engineering (with an emphasis on semiconductor design), i couldn't even come close to truly understanding half of what goes on inside those things.

    our chips rarely had a hardware problem when things went to ship. but we were constantly having quality issues in Product Engineering (where i worked). why? because the software was, quite simply, lousy.

    now don't get me wrong -- place and route routines can get pretty hairy, but i've seen the source code to the programming software, and i can assure you it's not 1/10th as complex as the chip they're trying to program. but when i would confront the Software group about their buggy software, they gave me the typical arrogant "you don't understand computers" response, and more or less stated that software is just plain buggy by nature.

    bullshit.

    why is it that software programmers don't have the same idea of quality that hardware designers have? why do they automatically assume that software can't be (relatively) bug-free like complex hardware can be? i noticed it more than ever working at this FPGA manufacturer -- many software programmers simply can't think of software being any other way.

    and this isn't a case of capitalism-driven bugs. this company isn't making any profit off of selling software updates. in fact, the majority of time-to-market goals we missed were due to the multitudes of software problems we had to overcome.

    so to all you software developers who can't seem to comprehend the importance of design and testing: try taking a hint from your hardware-designing associates and smarten up!

    let's start dispelling the myth: software doesn't need to suck!


    - j
    --
    "The only guys who might buy [the Apple iBook] are the kind who wear those ludicrous baggy pants with the built-in rope that's used for a belt."
    - John C Dvorak - PC Magazine
  • Some companies go out of their way to force the public to demand the latest version. In most cases when I have been asked to make a software upgrade decision, I prefer to stay one release back from the latest and greatest (barring known problems, etc.).

    The thing is, what if one version back won't read a file from the latest and buggiest? What if we really need to be 2 or 3 versions back to avoid killer bugs?

    How about concentrating on bug fixes and ONE nifty new feature? Not 12 new features and 327 gratuitous changes (and no bug fixes).

  • by VAXman ( 96870 ) on Sunday October 03, 1999 @12:40PM (#1641316)
    Anybody who has read The Cathedral and the Bazaar (most people here, I'm assuming) knows that the entire PREMISE of the free software industry is "release early, release often" - which means that free software uses the attitude described in this article, only on steroids.

    I find it scary that people think this is limited to commercial software. The article mentions that there are 5 patches for NT 4.0 - there are 12 for the Linux 2.2 kernel, and that is just for the kernel (in many cases the packages distributed on top of it have patches also; there are probably thousands of patches and updates collectively to every package distributed in Red Hat 6.0, for example), AND Linux 2.2 has been around for less than 1/4 as long as Windows NT 4.0.

    Free software usually doesn't have formal testing either. Instead of a dedicated testing team like most commercial software has, the testing philosophy is to release it and let users test it. Not good preventive treatment, obviously. Nobody is going to test whether the new SCSI driver is going to wipe out your hard drive - it's left for the beta testers to find this.

    I can think of five or six showstopper bugs off the top of my head in Red Hat 6.0 that would have prevented the release from coming close to shipping out the door had it been a serious commercial system, but it was released anyway.

    I have also looked at the source code of many free projects, such as GCC and GIMP, and noticed that the code quality was quite low. For example, malloc() calls were usually unchecked (especially in GIMP). I have worked on commercial projects before, and checking malloc() is rule #1 - if you happen to run out of memory while using GIMP, it'll blow up, whereas commercial systems will simply fail to complete the current operation. If such a high-profile package is of such low code quality, I expect the lesser-profile packages are considerably more buggy.
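
    The check in question is a one-liner. A minimal sketch of the usual pattern (the wrapper name xmalloc is a common convention, not GIMP's actual code):

        #include <stdio.h>
        #include <stdlib.h>

        /* Allocate or fail loudly - the "rule #1" check described above. */
        void *xmalloc(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL) {
                fputs("out of memory\n", stderr);
                exit(EXIT_FAILURE);  /* or unwind and abort only the
                                        current operation gracefully */
            }
            return p;
        }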


  • by rlp ( 11898 ) on Sunday October 03, 1999 @12:42PM (#1641319)
    Want to produce really bad software? It's easy if you follow just a few simple rules:

    1) Produce no documents - avoid creating requirements documents, design specs, etc. Just jump right into coding.

    2) If it's a large project, divide the work into several different development groups, and make sure they don't communicate. If they can be geographically separated, so much the better.

    3) Don't hire any experienced programmers, or if you make the mistake of hiring them, don't listen to them.

    4) Make sure that managers create impossible schedules. Nothing produces bugs like highly caffeinated, over-worked, sleep-deprived programmers.

    5) Change requirements (unwritten, of course) frequently. Be sure to add plenty of new features at the last minute.

    6) Be absolutely certain that you don't learn any lessons from industry history. Don't read Brooks, Deming, Humphrey, or any other Software Engineering or Quality literature. And for God's sake, DON'T look at 'http://www.sei.cmu.edu/'!

    7) Avoid any and all code inspections.

    8) Avoid creating any sort of development process, or if you do, make it so pointless and burdensome that it is self-defeating.

    9) Do believe that you can test quality into a product. But be sure to compress the testing schedule just in case.

    10) Three words - "Ship it anyway".
  • Unfortunately, the way business has been structured for the last hundred-odd years has been to push the product out as fast as possible.

    Within limits, that's true. However, lemon laws rein this in for the auto industry. More serious bugs are halted by liability (with notable exceptions). The requirement of PE signoff controls civil engineering and the industries that depend on it.

    Meanwhile, half of the software industry seems to run on "It compiled... SHIP IT!!!"

  • by wilkinsm ( 13507 ) on Sunday October 03, 1999 @12:44PM (#1641322)
    *Flame suit on*

    Yes.

    Question: When was the last time you returned a high-tech item because it was flaky?

    For the average Joe, I'd venture that such events are few and far between. One reason is that with complex items it's harder to determine exactly what the problem is, or who is at fault. Consumers are becoming lazy in their shopping skills. If I buy a knife set with a broken blade, it's easier for me to put the blame on the manufacturer than when I buy a piece of software. I know a broken blade when I see it, but do I recognize a broken program? How can I be sure it's not the hardware's fault, or the operating system's fault?

    Another question: When was the last time you bought something new that had a user's manual with a wiring schematic in the back? Manufacturers don't even bother anymore. They know that most of us are too clueless these days to figure them out anyway.

    So the software companies can get lax behind their lawyers and their proprietary magic while the world around them falls apart.
  • Reliability starts with liability.

    Back in the mid '70s, when I was at CERL [umn.edu], Sherwin Gooch [wenet.net] came up to me on the verge of panic. He said something to the effect "We're dead. Software engineering is no longer a profession!"

    What rattled his cage was a court case in which the defendant, a software engineer, was held immune to claims of damages by his client. In the opinion, the judge in the case held that software engineering was not an engineering profession in the same class as civil engineering, and that therefore the programmer could not be held liable for damages resulting from his software.

    Sherwin was right. It has taken decades for the demand for highly skilled programmers to rebound from the lows it experienced in the late '70s, when I was doing systems programming at Control Data Corporation's side of the PLATO project [thinkofit.com] for about $20K/year.


  • Redhat can't go a week without having a security problem found. It recently IPO'd. I only expect this to get worse.

    So... how many of the security problems are in programs written by RedHat programmers?

    Believing that software gets worse when a company gets public investors is as naive as believing that software is buggy because consumers are computer-ignorant and want lots and lots of nifty features.

    --
    QDMerge [rmci.net] 0.21!
  • by Anonymous Coward on Sunday October 03, 1999 @01:05PM (#1641339)
    You need to understand how the industry really works.

    The Vice-president doesn't care if the software works or not. If SOMETHING isn't installed on the target date that the President gave him, he is out the door.

    The Director who reports to the VP doesn't care if the software works or not. Actually, he hopes it won't, since when the VP gets canned, the Director hopes to be promoted. Meanwhile, the Director is going to do everything he can to help. Like scheduling seven hours a day of mandatory "specification review" meetings for the developers and their supervisors. And "opening dialogs" with temp programmer agencies to help the managers with their resource management problems. And encouraging the Business Analysts to learn SQL so that they can provide better direction to the programmers in their functional specifications.

    The Managers who work for the Director don't have time to care if the code works because they are too busy interviewing the hordes of fresh immigrant (temporary) programmers who have professional experience in every language you ever heard of. Except practical English.

    The Supervisors who work for the Managers don't care about the code because they are too busy filling out the project status reports and the time sheets for the contract workers, attending the "specification review" meetings, and sitting on the "issue resolution" committee.

    The Business Analysts actually care about the code and are sure that it would work if the programmers would just use the EXACT SQLs that were written in the functional specs. And don't bother them with any techie nonsense about "syntax errors"; the "Learn SQL during Lunch" book said it worked exactly like that. And "We need to have a meeting to discuss YOUR refusal to follow our design. We intend to escalate this as an issue."

    The Project Leaders don't care about the code since they are on contract from the consulting arm of one of the "big 42" consulting/accounting firms. They care about three things: keeping their billable hours at maximum, forcing everybody to submit reports formatted according to their company's standard, and seeing that something is installed on target "go-live" date. Since their contract expires the day after "go-live", they are free to piss off everybody in sight. They won't be around when the bomb explodes.

    The programming team leaders would like to care about the code. After all, they used to be programmers. And after "go-live", they are going to be stuck supporting the project. But with the sudden influx of temp/contract programmers, the new team leaders are spending all of their time trying to explain how the version control software works and why code is written on the development box, not the QC box, and trying to actually get logins for the temps in the first place. If anybody had asked, the Senior TL could have knocked out half of the project with a handful of Korn shell scripts, but he is busy setting up card tables in the hallway for six new temp programmers whose names he can't pronounce or spell and one whom he is already about to kill. At least setting up card tables serves as an excuse for avoiding the mandatory specification review meetings.

    The new temp/contract programmers would also like to care about the code. And as soon as someone comes to their senses and replaces this horrible [AIX | BSD | HP-UX | Linux | NT | Sun ] box with a [decent | larger] [AIX | BSD | HP-UX | Linux | NT | Sun] system and installs a C++ compiler, the code that they have written will work fine. There's not any real difference between MQSeries and DCE. Obviously there was a mistake in the specification so we coded for the one we used last time.

    The Tech Writers, meanwhile, not only don't care about the new programs, they don't even know that there is new software coming. Nobody has talked to them about documenting it. Three days before "go-live", one of them will overhear a conversation in the lunchroom. But conversations about the "latest fiasco" are too common, and this one will be forgotten for another four days...

    The QC/QA group cares about the code. They are already receiving threats from the Operations group about "another delivery of bug-infested code". Consequently, seven of the first ten bug reports will be for misspelled screen prompts. The other three will read "Doesn't work". (It will subsequently be discovered that "it didn't work" because the sysadmins had not installed the test code on the correct box.) Testing might go a little faster if someone could answer just one simple question: "What is it supposed to do?"

    The system admins are completely unconcerned about new code. Until it is installed somewhere, they are free to ignore the upcoming need for disk space, printer queues, bandwidth. Just as well, since they are going to have to take the network down for the next week to install new routes in the bridges or bridges in the routers (they seem vague on what they hope to accomplish). But "we should have your workstation IP addresses changed out by the middle of next week, for sure, d00d".

    Oh, and the Marketing department just pointed out to the President that there is no certificate of Y2K compliance for the project.

    And all vacations and time off have been cancelled.

    And the company firewall is now blocking http requests to monster.com.



  • by Anonymous Coward on Sunday October 03, 1999 @01:06PM (#1641341)
    I do have an account, I just don't think I want it to be trivial to work out who I work for. You'll see why.

    Pieces of this are true, the pressure builds up and management applies pressure, frequently in the wrong places.

    I work for a company that makes specialized software for select clients. We have a sales team which goes out and beats the bushes looking for customers. We rarely have more than about thirty active customers at any one time -- we do a large project, deliver it, and get out.

    My part of the project was babysitting the deliverable builds (the code builds that actually got shipped to the customer) and constructing the master images of the software the customer would receive. It would get handed off to the QA team, usually with the admonition "Important -- We must ship this today!"

    Usually with this sort of admonition, you knew in advance that the disc you'd just sent to QA would go to the customer no matter what (since the inertia in fixing any problem was a minimum of three hours, plus the minimum three hours it takes to install and set up our product and run the basic basic basic smoke tests). And frequently, the contents of the disc were crap.

    At one point, we listed (in earshot of the manager in charge of the project) the criteria for QA approval to ship. The candidate master must be round, gold, and have a hole in the middle. Someone observed that a maple dip doughnut could satisfy these requirements, and be just as functional in the hands of the customers. (More so, since they'd have something to eat while calling Customer Support.)

    The root cause of our problem was that we were too customer-driven. There are direct competitors in our space for the same customers, and the sales team is under incredible pressure to make the sale and bring home the deal. If that means saying "Yeah, we can do that" when they don't have the first goddamn clue about how hard it might be to do that, so be it. The contracts team then rolls in and agrees to many things with regard to functional spec and deadlines, and they are under lots of pressure to close the deal. The technical people who try to estimate the complexity of the requirements are under a lot of pressure to make the estimates low -- still competing against the other companies in our space, you know!

    So these deals are impossible before the CEO (or whoever) finishes signing his name on the deal!

    And then QA gets the heat to sign off on this candidate which would better be used as a drinks coaster!

    These deals we write usually have performance clauses for delivery -- we agree that we will deliver the finished product no later than this date, and will affix penalties for each day (frequently in the area of $10K per day) that the product is late. There have been times we shipped blank tapes or CDs with a product label on it simply because we were contractually obligated to ship something. Now is that insane or what?

    The big problem in our space is that no one is willing to say "no" to the customer. If you don't say "yes", then our competitors will, and we will lose the deal. It means that our competitors get the customer's money instead of us while the customer figures out the awful truth!

    Our company even went so far as to regiment an ISO9001-like procedure that says every release candidate must do A, B, C, D... There's a form that each candidate has, and it goes through each section from being ordered, built, tested and approved by QA, then duplicated and shipped. In practice, we ship a fair number of 'one-timers' and dummy up the paperwork afterwards.

    I don't know what the solution is. I know in our space the problem isn't likely to go away -- one hears stories of the competitors making huge deals that fall on their faces with the customers paying the tab. Sometimes we get to go in and try our luck -- sometimes not.

    So what do we do? Listen to the technical people. You pay them big bucks for a reason! Walk away from sales that require the impossible. Avoid deals with financial penalties for late delivery. And stop trying to lay the failure or success of a product on the heads of QA! They don't make the product crap; it's already crap when it lands on their desk.

    Maybe liability for software would help. I'm sure the company would jack prices through the roof to cover the added risk, but it would sure as hell focus the attention of the managers and developers and make sure the company wrote deals that made it better to deliver late software that worked and not the other way around like it is today.

    Late and right beats on time and crap every time -- or at least, it should.

  • Take a guess what commonly worshipped software development beast is at fault for the problems you and others here have described:

    The Hacker.

    The Hacker is someone who spurns top level design, and just wants to "write code."

    The Hacker doesn't want to hear from a usability engineer, he wants to "release often" and have random people all over the 'net find his bugs.

    The software industry is permeated by hot-dog coders who, umm, basically are just coders. It's a real problem, and it's worse in the Free Software community than in the commercial software business. "Open Source," also known as "fix it yourself," while viewed by some people as the central philosophy of Free Software development, is also merely the least wobbly leg it leans on. So we end up with Byzantine software, designed by nobody and added to by everybody, that works well at the things coders have felt a need for, but has a user interface renowned throughout the rest of the world as cryptic. Emacs is probably the supreme example of this design philosophy.

    Please don't bring up "peer review" to argue against my point. The classic notion of "peer review", which has increasingly been distorted by "Open Source" advocates, has to do with a body of peers with credentials. Not the guy with the loudest mouth on Usenet.

  • I'm going to kill whoever thought it was a good idea to have the ads reload every 10-15 seconds on the site that article is on. Grr.

    take a deep breath... get in line... right-click->Open Frame in New Window... move on with life.... ;)

    I would imagine that a great deal of commercial stuff is actually good and relatively bug-free.

    Once upon a time... yeah. But the lemmings in the industry are dragging everyone else with them. Even the Blue Giant [ibm.com] has moved up its release cycles: products that used to ship a version every couple of years now ship on a three-quarter cycle... and these are things far removed from the .com hype. The products caught up in the thick of it ship quarterly.

    Encapsulation and modularization are your friends..

    OO is a fairly good paradigm, yes, but it has some glaring problems. Especially in the realm of this discussion....

    There is a piece of code I own that was written with entirely too much OO on the brain... everything is an object, and everything is encapsulated. A simple trace through the section for one invocation involves something like 25 instances of about 40 classes (this is when inheritance sucks) on three threads. It has taken me over a year to get even a marginal feel for this code... and I understand it at the high level, and have full design docs at my disposal. This code, by the way, is well under a KLOC in total... closer to half that. It doesn't need to be this complex; I've redrawn it on paper down to as few as 8 objects in the process of understanding it.

    Managers view programmers as a resource; programmers are considered plug-replaceable. I watched recently as a wet-behind-the-ears college kid was plopped into place replacing a veteran of almost 20 years in the industry, who had inherited the code several years ago from another who had been around as long as he had. This poor kid is in over her head... and we all try to help her out... but she doesn't have a snowball's chance in hell: that component is just plain fsck'd. The first three new features management pushed down her throat resulted in the component being completely broken -- not just a little flaky -- completely broken.

    Andreessen's quote -- "We have, historically, definitely prioritized features over time and time over quality" -- describes nearly all of the industry by now.

    It's all about the deadlines... psychotic, as the author called them. My management recently respun the "final" build about 8 times after giving it to the test organization, each time promising it would be the last one... never once did they push the ship date. Final testing happened in a couple of days, instead of the couple of weeks it was originally planned for, or the couple of months asked for to do it right.

    At the level of the programmer, it's all in the person. There are some who take immense pride in the quality of their work; they view bugs in their own code as an affront to their engineering ability. There are those [javasoft.com] who only seek to work on the new/popular/fun stuff; their code often contains half-completed implementations with comments like "this is uninteresting" (an actual quote from actual shipping code from that last reference!). I count myself lucky to work with the former rather than the latter.

  • At the risk of being flamed, I have to say that so far IE5 *works*, and damn well. On my 98/NT box, it's the *only* browser that never gave me any trouble. I can't say that about NS, which seems to crash after roughly 5 minutes of use.

    I'm not an M$ supporter, far from it. I hate their business (read: monopolistic) practices, but one must admit some of their software pieces aren't that bad. And until quite recently, OSS wasn't (and in many domains still isn't) really offering any valid alternative to proprietary software.

    As far as I'm concerned, I take software as tools, not as holy-war weapons. When it comes to bugs, of course I'm pissed off. I'm even more so when I have to pay for a bug fix, which should be free. With OSS, another sort of problem arises: version 1.0 comes from that "it compiles, ship it!" policy. OK, the bugfixes come quickly, but they are IMO too frequent. I'd like to see some sort of code of ethics when it comes to software versions, really. When I get a 1.0 (or 2.0, 3.0) version, I'd like to be sure it's a stable version, not something I'll have to upgrade every other day. Basically, it shouldn't be called a "final version" until it actually works at 100% of its specs. It'll make software development a bit more lengthy, but at least the user (also read: customer) won't be harassed by frequent re-installations, patches, etc.

    If car makers were following software development practices, how many customers would they keep if they had to recall the cars every month for a bugfix of some kind?

    Heck, I remember the days when new versions were only appearing once a year, or even less. We also used to have quite stable pieces of software back then, software we could actually use "as advertised". I wouldn't mind seeing that kind of things happening again.
  • So far the commercial offerings are pretty even with open source offerings. Everything crashes.

    On my planet, the open-source OS I use... Linux... doesn't crash nearly as often as closed-source Windows (95/98/NT, take your pick). Furthermore, most of the open-source apps I use exhibit a far higher level of base stability than their closed-source commercial counterparts, even though the open-source apps are generally far younger. On your planet, things may be different.
  • The article, like the rest I've seen that covered this topic, never addressed the Alice in Wonderland quality to life when you're used to Linux and forced to buy Windows for something.

    I normally run Linux exclusively and don't accept contracts for non-Unix work, but I recently needed business accounting software. So I bought the higher-priced software that I knew many of my clients used. It was Y2K ready. According to everyone I talked to, it was the best of the breed and could handle my company moving from single-programmer-in-a-garage stage to multimillion dollar company. (Yeah, right.)

    It was such a load of crap that I demanded my money back (and got it, since their packaging did make a money-back guarantee) and am doing those tasks by hand while the Linux accounting packages stabilize. I decided I simply couldn't afford to lose any more weekends to produce a "professional" invoice after 6 hours of struggle, instead of whipping out an "unprofessional" one in 5 minutes with vi and lpr.

    Some of the problems were related to the fact that I installed it on a laptop. The high-latency display turns the "friendly" animations that appeared on every single fscking page into a smear. But the software also clearly had absolutely no usability studies behind it; e.g., I could enter POs but I never found a way to list open POs and associate checks with a PO.

    Oh yeah, that was probably covered in the tutorial. The one that used lots of multimedia (always fun on a laptop) to train a clerk in advanced computer skills like using the mouse to pull down a menu. There was, as far as I could tell, no way to get a top-level summary for people who know computers but not accounting.

    On the bright side, the company was willing to offer me support. At a fee for each incident, and the fee was apparently *not* waived if the bug was because *they* screwed up the configuration. Nothing gives you a warm feeling like spending hundreds of dollars on a commercial accounting program, hitting the 'create new business' button, and watching it shit all over the floor because it's missing some value in one of its VB scripts. (That warm feeling is enhanced when you see that they want to charge you money to fix it. I think the feeling is due to acute alcohol poisoning....)

    Oh, I almost forgot. A few months after installing this program my laptop took a hard crash. I picked up a virus even though it's never attached to the net, it's always in my physical control, and I only install commercial software in the original shrinkwrap. Surely a coincidence.

    I actually enjoy it now when people tell me that Linux is hard to install. I tell them that I routinely install Debian in less than an hour, it takes me longer to burn the CD-Rs than it does to build a working system. But let me tell them about how long it takes me to reinstall Windows from my Toshiba disks (although that's not really fair since two hours are consumed removing those packages always in high demand in professional offices, like the big Disney Channel icon). Or let me tell them about the last "big, easy to use" Windows application I tried....

  • OO is a fairly good paradigm, yes, but it has some glaring problems. Especially in the realm of this discussion....
    There is a piece of code I own that was written with entirely too much OO on the brain... everything is an object, and everything is encapsulated. A simple trace through the section for one invocation involves something like 25 instances of about 40 classes (this is when inheritance sucks) on three threads. It has taken me over a year to get even a marginal feel for this code... and I understand it at the high level, and have full design docs at my disposal. This code, by the way, is well under a KLOC in total... closer to half that. It doesn't need to be this complex; I've redrawn it on paper down to as few as 8 objects in the process of understanding it.

    Well, to begin with, I meant that comment as a reference to code so extensively huge that you couldn't hold it in your head anyway. That said...

    One problem I've noticed with a lot of C++ programmers (and hopefully they are still novices when they outgrow this mentality) has to do with the following: C was a hammer, and C++ is a sledge. We have C++ because there were a few railroad spikes that C just couldn't nail down as elegantly as we would like. Unfortunately, now some people always bust out the heavy ammunition. But to be honest, not every problem is a railroad spike. ;) C++ is deep and complex compared to C, and how complexly you program something should be directly affected by how complex the program needs to be. Not only that, but even if using C++ is deemed appropriate, you don't have to overkill just for the sake of overkill. =P

    In the end, a program is only as good as the programmer. Better tools often make for better work, but even a sledgehammer won't help a stumbling buffoon nail down a railroad spike if he's too drunk to even stand still..

    Anyway, enough of that.. I'll now simply comment on the post as a whole: I agree. Excellent points, all.

  • 3) When the marketing department tries to add stuff, you say "Is it in the spec?" -- "Sure, we can add it, but it is going to take X additional weeks".

    This is probably the single largest cause of failure in the production of new systems.

    The specs keep changing, so the coding takes longer. The overall timeline is not allowed to slip, and it is always the QA phase that shrinks to accommodate this. One of two things then happens: either the QA people have enough clout to delay the project, or a huge buggy mess gets released to production.

    One project that I worked on went through 18 months with no progress. The coders kept leaping from place to place, trying to produce the latest cool function that the users wanted.

    Things didn't get sorted until a new CEO took over and said "Take these tablets of stone and engrave the system design upon them".

    And lo, after 40 days and 40 nights in the wilderness, the new system was produced. And the users saw that it was good.

    Unfortunately, most CEOs don't have enough political clout within a company. The IT department is doomed to produce late and buggy systems.
  • by tzanger ( 1575 ) on Sunday October 03, 1999 @03:11PM (#1641412) Homepage
    ... sucks.

    I am a software developper. Even if I can't spell it right. :-)

    While I haven't read a single comment on this article yet, I am willing to bet that most are "great article" comments from people thinking that it is the boss' fault. Let me ask you this: Who told the manager that they could do it in three months? Who didn't tell the boss to fuck himself when he moved the ship date up six months??

    Jesus, that article riled me up! It makes my profession out to be a gang of unprofessional 31337 dudes who like to do nothing but play Quake all day, scarf pizza and Mountain Dew, and whine that they didn't get enough time when the ship date draws near.

    If you see the development taking much longer than you originally estimated, you get off your ass and tell your boss. You tell them as soon as you see the problem, not 1 week before the ship date when nothing can be done. You don't sit there and pull all-nighters trying to get it working at the last minute. I know; I've been there. If your boss won't give you more time, you tell him that the product will NOT be done and will be full of bugs. Walk if you have to. How many articles on /. of late have talked about all the boundless opportunities for people skilled in software?!

    What else... oh yes... the "big code can't be tested" argument. Whatever happened to programming modularly, testing each piece thoroughly, and eliminating the bugs through good software design? Oh yeah, I guess that's too logical. The shuttle/MRI/microwave/car/elevator/aircraft computer programmers must have it all wrong, after all. Thank God someone in the games field has it right!
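
    To be concrete about what "testing it thoroughly" means at module granularity, here's a minimal sketch; clamp() is a made-up example function, not code from any shipping product:

        #include <assert.h>

        /* A tiny module with a well-defined contract... */
        static int clamp(int x, int lo, int hi)
        {
            return x < lo ? lo : (x > hi ? hi : x);
        }

        /* ...and tests that nail down its inputs and outputs,
           boundaries included, before anything gets built on top. */
        int main(void)
        {
            assert(clamp(5, 0, 10) == 5);     /* in range: unchanged */
            assert(clamp(-3, 0, 10) == 0);    /* below: pinned to lo */
            assert(clamp(42, 0, 10) == 10);   /* above: pinned to hi */
            assert(clamp(0, 0, 10) == 0);     /* exactly on a boundary */
            return 0;
        }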

    "PCs are all so different!!" Gimme a break! If it doesn't work on PC 'X', and you suspect hardware after you've elimiated all reasonable doubt that it's your software, you buy PC 'Y' and try it. It sounds like these guys have zero problem solving skills! Also sounds like they're programming too close to the hardware; that's what APIs are for and if the APIs are bad... well then at least you've done what you could and can try to contact the vendor for a workaround.

    "No software can be perfect" -- another fallacy. Sounds like laziness or lack of time. Or both. Either way, most people know what they're getting into when they sign on, or they find out shortly thereafter. And it's ultimately the programmer's responsibility to make the code good. Bugs will crop up after the ship date, but if you've programmed correctly and used proper coding procedures, you will have a detailed map of what goes where and can test inputs and outputs to find exactly where the bug lies, correct it, and bring yourself that much closer to a perfect program.

    GIGO -- it's as simple as that. If you hack it together, you'll be hacking it forever. Stand up for your rights and stand behind your work. Whatever happened to being proud?
  • by JordanH ( 75307 ) on Sunday October 03, 1999 @03:44PM (#1641423) Homepage Journal
    And every project that worked and was on time -- happened because we did that old boring junk that no one likes to do:

    1) Write Specs
    2) Follow the spec.
    3) When the marketing department tries to add stuff, you say "Is it in the spec?" -- "Sure, we can add it, but it is going to take X additional weeks".
    4) Test

    I don't want to take anything away from your experiences. I'm sure you've had success with this process.

    I've been in projects like this where success was declared at the end. But I knew then, or perhaps just a little bit later, that things were not so rosy.

    Building something to spec is wonderful. Especially when it's a bridge, or a tower, or a road. I've not seen as much real success with it when it's software we're building.

    There are many, many problems in the design and development process that I could go over. But one that isn't often talked about, and that dooms software to "suck", is what I like to call the two-customers problem.

    The people who write specs are generally marketing people, or managers, or maybe even, if you're lucky, analysts who think they really know the problem domain, have been around it for years, and are sure they know what goes into a good system.

    Software is not designed for or by users. It's designed by people who sit around and try to dream up solutions without ultimately taking any responsibility for the usability of the system. Even when the designers make an honest effort to study the problem, talk to real users and do usability studies, too much of the ego of the spec writers comes through. Often, the grand dreams of the spec writers are in opposition to the stability that the real users crave. By stability, I mean both reliability (it doesn't crash) and that a new piece of software should be familiar, should have similarity to the software presently used for this function.

    So, a project is initially judged a success or failure based on how you satisfy the management, the analysts and the marketing types. Testing pretty much proves that it works in the ways that the spec writers envisioned it being used. Unfortunately, the software will ultimately be judged by those who actually have to use it, and tested in the real world in ways the spec writers never dreamed. These two groups, the spec writers and the users, the two customers, have very different goals.

    There is some hope. Rather than the spec, build, test, release model, spiral development or RAD prototyping can ultimately get you a lot closer to a satisfying solution.

    Even here, I've actually seen cases where management will seem to prefer that the system be hard to use or lack important functionality. You sometimes get the impression that management feels that if a piece of software satisfies the lowly user, then the organization is spending too much on software development.

    It's a sad state of affairs. It's ironic that study after study shows that the #1 customer satisfaction factor is a pleasant experience with the bank teller, the store clerk, the phone order taker, etc. Management consistently shows an almost studied disregard for the tools these people are forced to use.

    And while it is fun to slam management/marketing, programmers have to take blame too: lots of times we say "Yeah, it *WOULD* take a year for someone else to do it, but I am a programming genius. I can have it done in a month!"

    Very true. One problem is that management often shops for a team or programmer who will tell them the estimate they want to hear. And when you actually have to "name that tune", corners are cut. The corners that typically get cut are in places that are not visible externally: bad coding practices, lack of concern for modularity and reuse, memory leaks. (As long as memory leaks stay beneath the level that testing finds, it's thought that it's OK to leak some. Enlightened testing checks for memory leaks in the stress tests, but enlightened testing is not that common.) Ultimately, this cutting of corners is what leads to the software sucking so badly. There's rarely political will to rewrite large systems, even when they are implemented in sand, so each new release suffers with the accretion of the sins of the past.

    Bringing this back to the subject of Open Source software, always on topic, never out of style on Slashdot, the problems I've outlined above don't really apply to Open Source development, at least not today. Typically, Open Source projects have not suffered the layers of management, analysts and thinkers that typical software development groans under. Most often, the user is the developer so the result is most likely to be satisfying (or if it's not, you know who to blame and usually know how to go about getting to satisfaction). Even when Open Source projects are written for someone else, the real testing is done on early releases by real users in close communication with the developers. So much organizational simplicity, so little need for endless meetings, project reviews, marketing "input", cost justifications, etc.

    I like to think that free software is really about freeing developers to serve people more directly. With GPL software, if you aren't serving the customer, then someone else can take what you've produced and serve that customer better. With proprietary software, the trick is to develop just enough functionality and value into your offering that anyone else who tried to clone your software would incur too much expense and lead time for your customers to bear.

  • New York, NY, Jan 13 -- People for the Ethical Treatment of Software (PETS) announced today that seven more software companies have been added to the group's "watch list" of companies that regularly practice software testing.

    "There is no need for software to be mistreated in this way so that companies like these can market new products," said Ken Granola, spokesperson for PETS. "Alternative methods of testing these products are available."

    According to PETS, these companies force software to undergo lengthy and arduous tests, often without rest for hours or days at a time. Employees are assigned to "break" the software by any means necessary, and inside sources report that they often joke about "torturing" the software.

    "It's no joke," said Granola. "Innocent programs, from the day they are compiled, are cooped up in tiny rooms and 'crashed' for hours on end. They spend their whole lives on dirty, ill-maintained computers, and are unceremoniously deleted when they're not needed anymore."

    Granola said the software is kept in unsanitary conditions and is infested with bugs. "We know alternatives to this horror exist," he said, citing industry giant Microsoft Corp. as a company that has become extremely successful without resorting to software testing.

    [I don't know who wrote this. I wish I had. -- ES]
  • by Morgaine ( 4316 ) on Sunday October 03, 1999 @04:20PM (#1641443)
    Question: For similar levels of complexity, why do software systems typically exhibit many more bugs than (digital) hardware systems?

    Answer: Because the component parts of hardware systems are almost entirely isolated from each other internally, i.e., almost all interaction is through the component interfaces. When one part fails, the others continue working: in computing terms this is very much as if each part had its own processor. The failure of one part can of course stop the hardware system as a whole from functioning productively, but it is far more common that only part of the overall functionality is affected.

    Solution: Use the MMU to isolate software components from each other and to make their internal structure entirely hidden, leaving only their interfaces visible (an OO approach is implied). Then multitask their methods using a dedicated event-driven component scheduler.
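
    You can get a taste of this on stock Unix today with plain mmap()/mprotect(). What follows is only a rough sketch of the flavor of the idea, not the proposed infrastructure itself -- the component and its single "method" are hypothetical:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static char  *state;     /* the component's private page */
        static size_t pagesz;

        static void component_init(void)
        {
            pagesz = (size_t)sysconf(_SC_PAGESIZE);
            /* PROT_NONE: between method calls, nothing can even read it */
            state = mmap(NULL, pagesz, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (state == MAP_FAILED) { perror("mmap"); _exit(1); }
        }

        /* The "method": the only code allowed to touch the state. */
        static void component_set(const char *msg)
        {
            mprotect(state, pagesz, PROT_READ | PROT_WRITE);  /* open gate  */
            strncpy(state, msg, pagesz - 1);
            mprotect(state, pagesz, PROT_NONE);               /* close gate */
        }

        int main(void)
        {
            component_init();
            component_set("hello");
            /* Touching state[] here would fault loudly (SIGSEGV) instead
               of silently corrupting another component's data. */
            return 0;
        }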

    Needless to say, this would require a dramatic change in almost all our software tools as well. Backward compatibility would be minimal.

    In academia I used to do research in hardware/software for parallel OO systems, and this is one of the ideas that popped up. I've had a design for a Unix Object Infrastructure based on this approach on my whiteboard for some years now, but the spare week of kernel hacking I had planned has never materialized. Perhaps this should be turned into a Linux or BSD community project instead.

    It would be rather nice for the free/open operating systems to take a quantum leap beyond the traditional Unix feature set, and possibly solve or at least reduce the woes of the software engineering industry at the same time. :-)
  • by Black Parrot ( 19622 ) on Sunday October 03, 1999 @04:23PM (#1641446)
    > This is probably the single largest cause of failure in the production of new systems.

    Imagine what would happen if your company was building a new office tower and some PHManager suggested adding another 20 stories after it was framed up, dried in, and interior finishing was underway. But didn't want the price or completion date to slip.

    That's the scale of cluelessness that we're up against.


    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
  • If anyone's interested, we could set up a mailing list and start brainstorming.
  • If that means saying "Yeah, we can do that" when they don't have the first goddamn clue about how hard it might be to do that.

    Then that same sales critter gets a new asshole and gets to call the customer and tell them that he was wrong. I'd go so far as to say he gets 0% commission, if not negative commission, for bringing in a useless sale. Far too often the sales force is clueless about what has to be done. They're just there because they said they've got the gumption to make the sales and get into people's faces. Who cares if they have no clue what's possible or not?

    If they're bringing in negative sales (sales which generate negative profit for the company due to impossibility, customer service, etc.), they should be compensated in kind. "Sure, Bob, you brought in 300% more raw sales than anyone else, but you also cost us 5000% more than anyone else because you keep selling what we don't make!"

    The big problem in our space is that no one is willing to say "no" to the customer. If you don't say "yes", then our competitors will, and we will lose the deal. It means that our competitors get the customer's money instead of us while the customer figures out the awful truth!

    And what of your company's reputation... the reputation of being the company that always says "yes" and ships shit? Customers do care about that stuff. Sometimes you have to risk a sale, but more often than not the customer will then question the other tenders when one of them stands up and says "uh..."

    Our company even went so far as to regiment an ISO9001-like procedure that says every release candidate must do A, B, C, D...

    As I said in a different post, GIGO. ISO means nothing other than that there was a documented procedure to get from A to Z. You can still ship shit with a well-documented procedure! Go for six sigma and then you'll have something to brag about!

    Basically, what I see as the biggest problem is that upper management... the guys who don't seem to do anything but are in fact responsible for the entire company... those guys need to have technical knowledge, or at least a clue as to how things get from concept to CD. If they don't, the company is doomed.

    Again, I know -- the company I currently work for just went through it -- the COO now has a clue, and the entire company (~150 people) has a new breath of life to it... it's amazing.

    I'm not flaming you personally, and I do know I'm preaching to the choir here... I guess I'm just venting.
  • The following snippet of wisdom is attributed to Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." You've probably seen it come by thanks to /bin/fortune. For all intents and purposes we are at that point today with computers. Now, of course, if you ask someone, "are computers magical?" they will respond in the negative. But consider how people treat computers and how people treat mystical or magical objects. Magical things are acausal, random, and only to be handled by the priesthood. Computers are acausal, random, and only to be handled by the (computer) priesthood. In this regard, the flakiness of computers adds to their mysteriousness. And even the priesthood is not immune to this. The first thing a computer programmer will do when his or her program crashes is to run it again! As if maybe this time it will work and the bug will just be gone somehow. The social implications of magical technology are huge -- without a proper understanding of what technology is and what it can or can't do (and how it should or should not behave), the general public will continue to accept buggy software and, when it crashes, attribute it to the anger of the gods.
  • > encapsulation and modularization are your friends..
    OO is a good paradigm yes, but it has some glaring problems.


    This strikes me as... well... a really strange response. Object orientation is certainly meant as a way of encapsulating and modularizing code -- but so is everything else under the sun! I can write modularized code in C, C++, Python, or Scheme if I feel like it; those cover about as wide a range of programming paradigms as I can think of. Modularity has a lot more to do with finding the right abstractions for your problem than with the specific way of programming you choose. Heck, I bet you could even write modular code in assembler (I'm not going to volunteer to try it, though :-) )
    The problem with any encapsulation is finding the right one: one that's as simple as possible but no simpler. As you observed, object orientation is just as susceptible to programmer error as anything else. I believe that the Linux kernel may actually be fairly modular for such a beast, and the Hurd certainly is -- both of these are written in C. Emacs is an unholy mixture of C and Lisp; I haven't closely examined the source, but given that so much functionality is available as Lisp hooks, I don't see how it can help but be well-structured internally. (I've embedded scripting languages into some of my code -- it really makes you break stuff up in a better way.) Shall I go on? Have I demonstrated my total cluelessness? :P Do you believe that modularity and the particular programming paradigm used are orthogonal?
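
    To make that concrete, here's about the smallest modular C I can think of: an opaque handle whose layout lives only in the .c file. (The Counter names are invented for illustration.)

        /* counter.h -- the interface. Callers get only an opaque handle. */
        typedef struct Counter Counter;      /* layout hidden in counter.c */
        Counter *counter_new(void);
        void     counter_bump(Counter *c);
        int      counter_value(const Counter *c);
        void     counter_free(Counter *c);

        /* counter.c -- the implementation, free to change at any time
           without touching a single caller. */
        #include <stdlib.h>
        struct Counter { int n; };
        Counter *counter_new(void)               { return calloc(1, sizeof(Counter)); }
        void     counter_bump(Counter *c)        { c->n++; }
        int      counter_value(const Counter *c) { return c->n; }
        void     counter_free(Counter *c)        { free(c); }

    No objects required -- the encapsulation lives in the header discipline, not in the paradigm.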
    Daniel
  • Taking an historical perspective on software development, it is an art still in its infancy. The Programmer's Stone makes a reference to the design of cathedrals, how analysis using modern tools finds that they are well optimized. What it doesn't mention is that we only see the ones that survived. In the early days of cathedral building, many of them failed, literally crashing to the ground. Unfortunately for the people in them at the time, but fortunately for us, the failures were big enough that they could not be overlooked. And so the state of the art of cathedral building advanced with each failure.

    Nowadays most software failures are considered annoyances rather than catastrophes. Because of this, they don't justify the time and money it would take to advance the state of the art of software development. Nothing will happen until users recognize how much software, and for that matter hardware, failures are costing them, and start refusing to pay those costs on top of what they are already paying for the software itself, and become willing to pay the upfront costs to stop them in future development. That recognition will not come until something catastrophic happens. It will only be when enough people die because of a software failure that enough attention will be paid to the state of the art to make some genuine advances. Until then we'll be stuck with meaningless exercises in paperwork such as ISO 9000.

    Maybe then we can actually get some standards for hardware as well, so that the operating system doesn't care which sound card or which video card or which printer it has installed, just like when it sends email it doesn't care whether it is going to a Macintosh, a Linux box, a Windows machine, or a TV set.

  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Sunday October 03, 1999 @05:54PM (#1641485)
    Why yes, just look at the thriving Chinese, North Korean, and Cuban software outfits. Oh and you can spare me the claptrap about how those aren't "true" communism/socialism/whatever, because central control in the name of "the people" has corrupted and failed every last time.

    Shove it.
  • Programmer's Guild. What a great idea! I looked up programmersguild.com, .net, and .org. programmersguild.org [programmersguild.org] is already a website, and looks interesting; .com is not used and .net is available. I'd also be willing to host something if anyone is interested in creating another one (my boss shouldn't mind hosting something like this).
  • Never mind -- I wouldn't recommend programmersguild.org; they apparently are only concerned about higher wages and keeping out foreign competition. A totally different goal than what I would hope we'd be trying to achieve. Anyone have any ideas for a name/URL?
  • Solution: Use the MMU to isolate software components from each other and to make their internal structure entirely hidden, leaving only their interfaces visible (an OO approach is implied).

    Using the MMU for this purpose is a poor engineering tradeoff.

    C and C++ are exceptional among programming languages in terms of the poor fault isolation they provide. Most languages do better. Some, like Java, do very well.

    However, while lack of fault isolation is one of the biggest problems in C/C++ software, even with excellent fault isolation, there are still plenty of bugs possible.

  • Using the MMU for this purpose is a poor engineering tradeoff.

    The standard, allegedly "good" engineering tradeoff (providing isolation through language alone) is the source of precisely the problem outlined in the article. Since you mention C++ and Java: all real-life, non-trivial programs in C++ are permeated with leaks in the object encapsulation, aided and abetted by the amazing actual complexity of the language, which only becomes clear after you've experienced it in the field. [And as a C++ developer until recently, I have, alas.] As for Java, its far better attempt at protection makes programs sufficiently slower that the vast majority of commercial software manufacturers say "no thanks" and stick with C/C++ in order not to lose the competitive edge.

    The theory of pure software OO is great. In practice, it hasn't worked, and we're still stuck with the problem of endemic unreliability when programming in the large.

    All the above problems are compounded by the fact that all the objects are in one and the same reliability pool. One bombs its processor and they all bomb. Object-level, hardware-assisted multithreading is the only way around that on a single CPU machine.

    Finally, yes, bugs are still possible even given the proposed Unix Object Infrastructure, that's very clear. However, each object fault would kill only one object directly, while the rest of an application would continue running with a reduced functionality determined by how the failed component interacts with others through its object interfaces. Huge complex systems typically have a plethora of independent paths running through them, so the isolation of fault lines is statistically likely to produce a marked increase of resilience in the face of internal software bugs. And that was what the article was all about.
  • Note that if a JVM were implemented on top of a hardware-assisted system as described, UOI/Java would in many cases run faster than UOI/C++ or UOI/C, because the non-objective parts of C and C++ would have no hardware protection and therefore would be slowed down by substantial defensive programming code that would be almost entirely absent in UOI/Java.
  • I agree with you that paper is easier to read and annotate. However, it's harder to shape, reconfigure, and retransmit.

    While I'll grant you all of that, the guy whose post you're responding to was discussing publishing. And on that note, many compositors still remember how to shape, reconfigure and transmit paper. It's messy and slow, but we can do it. Because our _real_ medium has been film, not paper. Except for the few lucky souls with digital presses, it still is. All hail film.

    obProgramming: film-based computers would just be like really, really slow CRT-based computers (which were used in the '40s and '50s, IIRC).

  • That's a very interesting idea.

    It could have articles and documents about the software design process that programmers can learn from and send to their managers.

    Some interesting topics:
    • Why counting lines of code is a futile measurement.
    • How to write software that is truly intuitive and doesn't actually need significant documentation (but write docs anyway, of course).
    • How to write clear, concise docs.
    • Solutions to common programming problems/flexible software frameworks.
    • How to get your development teams to communicate effectively.
    • This list can go on for quite a bit, so I'll truncate it here.


    It would be a great first step to cluefulness.

    I'd be more than willing to help out in any way possible. I've had a lot of success with bosses in my own professional endeavours.

    But then again, my bosses have all been experienced engineers who actually listen to me and respect what I say. I guess I'm very lucky.
  • by werdna ( 39029 ) on Monday October 04, 1999 @02:07AM (#1641563) Journal
    Even when there exist objective criteria defining what software is intended or supposed to accomplish, and even when there exist objective and consistent criteria concerning aesthetics of UI design, art and related subject matter, software is just plain hard to make.

    Software is much tougher to make than soap.

    Why? Because even a relatively trivial program involves express specification of changes to a massively large state space. In the analog world, an engineer infinitely narrows the scope of vastly larger state spaces by assuming that things behave continuously -- not so with code, which grows combinatorially and discontinuously more complex with each additional variable, object or control statement. (A routine with just 32 independent boolean flags already has 2^32 -- over four billion -- distinct states; one more flag doubles the count.)

    Nowhere does this become clearer than when one is asked to engage in true quality control: proving a program correct. Methodologies that work beautifully and elegantly in the small, to demonstrate the accuracy of a code segment, grow unmanageably out of control when facing a 100,000-to-1,000,000-line program. And in turn, we then have the bugginess of the proof -- viz., was the spec specified adequately? -- to consider as a new "quality control" issue.

    Even the very process of quality assurance in code is harder than Q/A for soap. A scientist may often presume safely, and can often prove, that if the soap behaves properly at two ends of a temperature range, it will behave properly between them. This is almost never the case with code, it being in the nature of digital things to exhibit discontinuous behavior.
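
    A concrete illustration of that discontinuity (a hypothetical snippet, nothing to do with soap): code that behaves at both ends of its tested range and still breaks in between.

        #include <limits.h>
        #include <stdio.h>

        int midpoint(int lo, int hi)
        {
            return (lo + hi) / 2;   /* lo + hi can overflow INT_MAX */
        }

        int main(void)
        {
            printf("%d\n", midpoint(0, 10));                /* 5: fine     */
            printf("%d\n", midpoint(0, INT_MAX));           /* fine too    */
            printf("%d\n", midpoint(INT_MAX - 2, INT_MAX)); /* overflows:  */
            return 0;                                       /* garbage out */
        }

    Interpolating from two passing endpoints, the way the soap chemist safely does, would have signed off on this function.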

    The bottom line is that excellent code is enormously expensive -- requiring only the brightest, best and most sophisticated management, quality assurance, design engineers and technicians. No one wants to pay for what they claim they are entitled to expect. However, this is like most things in life. You can get:

    Good. Fast. Cheap.

    but you only get to pick two.

    Regrettably, management is evaluated more closely for fast and cheap, so no one really worries about good, just so long as it's "passable" (and it's often easier to pass blame to the coders for "good" than it is to blame them for "fast" or "cheap").
  • AFAIAA though, Microsoft aren't heading in the direction of hardware assistance for their component architecture. They should. It might allow them to get back in control of their big unreliability problem.

    While it may not be quite so urgent for us in the free/open operating systems world, I think it would give us an interesting spur if Microsoft suddenly came out with this kind of thing. It might jolt us out of our complacency and move the Unix architecture on a notch.
  • by wocky ( 17453 ) on Monday October 04, 1999 @02:57AM (#1641572) Homepage
    Question: For similar levels of complexity, why do software systems typically exhibit many more bugs than (digital) hardware systems?

    In my opinion, they don't.

    I've heard the "hardware engineers know how to build reliable complex systems" argument numerous times, usually from hardware engineers. But the software they complain about is far more complex than the hardware by several measures.
    • It requires much more control state to describe the software.
    • The environment that the software has to contend with is much less well-defined.
    • The functionality required of hardware is much less than that required of software.

    For a typical digital chip, there is in fact some software which is exactly as complex: the hardware description language code for the chip, and perhaps a cycle-accurate simulator written in something like C. How many bugs are there in these pieces of software? As many as there are in the chip.

    A modern CPU may have hundreds of errata, but most go unnoticed. You just don't get many level 3 interrupts while a floating point instruction is stalled waiting for the translation lookaside buffer. Besides, if the problem causes the system to crash, the user will just blame Windows anyway :-P.

    The hardware engineer complains, "but those are very exceptional conditions..." Exactly -- and they're similar to the type of situations that tend to unmask software bugs.

    People are inherently bad at envisioning corner cases. The digital hardware engineers are lucky in that they at least have a well-defined finite-state abstraction to work with. You can buy commercial tools to help with test generation and state-space exploration. The corresponding tools don't exist for software.

    It's true that hardware engineers have to deal with issues such as signal integrity and noise, metal migration, power supply droop, and so on. They generally do a fairly good job on these, but these are all physical issues related to implementing the digital abstraction. As such, they are amenable to the standard engineering practice of giving yourself a little extra margin. If we knew how to easily provide extra margin for discrete behavior, software would be a lot more robust.
  • ...ah, but those particular ones tended to be fixed in patches. Kinda like...

    user feedback: "Btw, your AI players get infinite-range missiles."

    Firaxis: 'k, we'll fix that in a patch.

    { patch fixes range for planet-bustin' missiles, but *not* for conventional missiles -- which nominally should have the exact same range }

    user feedback: Huh?

    Firaxis: 'k, that'll be fixed in the next patch.

    Strange. {shrug}
  • Agreed. Frankly, a lot of software development problems can be traced to management without accountability.

    Here is what I mean. Software development is usually a troika of marketing, QA and development. The marketing department wants the world--today. Hey, who doesn't? QA wants the most bug-free code possible. Hey, who doesn't? Development wants to build the coolest code possible and thus impress their friends. Hey, who doesn't?

    Often, marketing is put in charge of the development process. Thus, they can ask for all those features without slipping the schedule. And here's the catch--if it bombs due to bugs, they can blame development and QA. In this realm, Marketing has great power without great responsibility. Basically, they never have a reason to slip the ship date.

    Now this isn't a problem with Marketing. They are doing what they were hired for. They are just given the go/no-go decision without the responsibility for failure.

    For better software, make sure that the people that make the shipping decisions have full profit/loss responsibility. This may or may not be the responsibility of marketing. This is not the responsibility of QA or development, because their skillsets are more technical than business.

    In the best of all possible worlds, there is a project manager with profit/loss responsibility, and said manager feels the pain of both late ships and buggy ships. Marketing reports to PM with what needs to be done to sell product. Development reports to PM with what they need to get the job done, and the current state of progress. QA reports to PM with the current stability of the product. Only the PM controls the schedule and makes the decision to ship.

    This deals with one cause of buggy software--marketing push. What this doesn't deal with is the "first mover effect"--the idea that the first to market wins over the second-comer with more featureful or stable software. If you believe in the first mover effect (I do), then you believe that it makes business sense to ship buggy software--that you lose money waiting to fix bugs.

    The first mover effect is a combination of two things. The first is that the consumer wants it. After all the marketing hype, people don't line up at midnight to get the first copy of a new software package unless they want the software. This sounds like blaming the customer, but think about it. If customers buy early software more than they buy stable software, is that not telling us that the customer prefers the fast, buggy software, and that we should comply with the customers' wishes?

    Un(?)fortunately, it isn't quite that simple. The other half of the first mover effect is what used to be called "connector wars". The principle of connector wars is that the first mover gets to set the proprietary standard and thus the community. Remember that in a lot of cases, the value of the software is directly related to the size of the community it lets you interact with. For example, people buy MS Office so that they can exchange documents with other MS Office users. The second mover forces the customer to choose between (possibly) better software and the community of the older software.

    Which brings us back to the obligatory Slashdot reference to Open Source software. The First Mover effect gets mightily morphed by OSS. The second mover can join the first mover's community, because the comm protocols are in visible code and thus snarfable. Better yet, the second mover can simply add their "better code" to the first mover's effort. Yet another reason that open source code tends to have fewer bugs than closed source code: "ship first" is no longer the imperative.

  • That's dumb. OSs like Windows and Unix allow considerable extension. If you don't do it right it's your fault.

    When writing a driver, yes, that's true. That's why I DON'T blame *nix when my pre-alpha driver blows the system up. My point there was that writing a driver is the ONLY time I have managed to cause a *nix system to blow up because of anything I coded. Windows (NT, 95) on the other hand is not nearly so robust.

    NT can hardly be blamed for application crashes.

    I don't blame NT for application crashes; I blame NT for allowing those crashes to spread to other processes or the OS itself. It's my job to keep my app from crashing; it's the OS's job to keep your app's crash from crashing my app.

    NT will terminate the process and report the error - or if it's a kernel level driver - do a BSOD to protect other processes.

    Interesting concept, murder/suicide to protect the innocent. Seriously, my problem with NT is the number of times it FAILS to terminate the errant process to protect itself and other processes. Instead, the process gets away with a few things and de-stabilises the whole system. The programmer didn't catch the bug because the process didn't get terminated during testing either.

    Short summary: if you defeat the safety and get hurt (write a driver and crash the system), it's your fault. If the safety fails and you get hurt (a normal application crashes the system), it's the system's fault (and ultimately that of the system's designer). NT's safety seems to fail a lot.

  • No. Linux kernel 2.2 has *12* patches. 2.2 is (supposedly) the "stable" branch. 12 patches to a stable branch in less than a year???? And this is somehow "more stable" than Windows NT, which has had five in the course of five years?? Please explain, I must be missing something.

    It makes no sense to measure reliability in numbers of patch releases, especially when comparing proprietary with open source software. How much is fixed in which patch? Remember that open source software makes orders of magnitude more releases than closed source software, by design and for those who want it. Linux has distributions so that end users don't have to deal with the pitter-patter of little releases.

    Look, I'm a full-time EE and I work umpteen hours every week. If you think I have time to fart around with buggy, unreliable software like Linux and its ilk and submit my patches, you are dead wrong.

    If you don't have the time to fart around with Linux, what do you have the time to fart around with? I'm not ready to say that Linux is the most reliable OS around, but I am ready to say that it is in the upper echelon. That is, other OSs may be more reliable than Linux, but it seems that nothing short of mainframe-style enterprise OSs is much more reliable than Linux.

    If you're talking about enterprise-level systems like big-assed financial mainframes, I agree. You don't have the time to fumble with Linux--it's not built for the Big Guys (for that matter, neither is Unix). If you're dealing with Unix-sized problems or smaller, Linux is about as reliable as you're going to get.

    At my network shop, we have three platforms: Solaris/Sparc, Linux/Intel, and NT/Intel. From our experience, Linux/Intel is about as reliable as Solaris, and much more reliable than NT. IMHO, Solaris is the Unix benchmark, so Linux is beautifully reliable for the types of jobs it takes a Unix box for.


  • RedHat 6 came out months before RedHat went public.

    Besides that, RedHat did not write Netscape, so it is unfair to blame them for Netscape's bugginess (it hardly seems fair to blame Netscape for it -- I think the definition of a web browser has to include random crashes these days).

    On this RH 6 box right here, Enlightenment is working just fine, with regard to focus. I don't use Gnome, so I won't comment on that, except to say that my X used to do the same thing, until I realized that the stupid onboard video chipset needed a tweak in my XF86Config.

    Yes, some companies care more about the bottom line than quality software. That doesn't mean that all companies do.

    RedHat may ship buggy stuff now and then, just like most everyone else, but until I see otherwise, I'm convinced they're trying to do the Right Thing.

    --
    QDMerge [rmci.net] 0.21!
  • Frankly, when I see a Windows app crash, I can't tell whether it is the fault of the app or the underlying OS. And if it's the latter, what chance do I have of getting that fixed?

    I tried Windows programming back in the mid-90s (I am a dyed-in-the-wool Unix programmer). I gave up because, unlike Unix, I couldn't tell my bugs from Bill's bugs. And if you don't have confidence that your code has your bug, how can you reasonably debug?

    In personal computing, this causes a lot of finger-pointing. I can't take responsibility for any Windows software I ship because I can't guarantee you that my code won't break Windows. I can pretty much guarantee that my code won't break Unix or Linux. If my code does break Unix, I can show the vendor what I did and show them a Unix bug--Unix is not supposed to allow mere apps to break it. If my code breaks Linux, I can hire someone to see how and fix Linux!

    It's this sort of thing that prevents customers from expecting their software to work first time, every time. Even the most clueless of newbies realize that Windows is not a rock-solid platform.

  • All these people want is to be able to do their jobs without a big ritual or having to take a week's worth of training for the latest-greatest. Sure, a new way to speed up a repetitive task is great, and they'll go for it.

    I think a lot of software companies have missed the boat with the average home user:

    If it's too complicated, they won't use it.

    I'm not saying that they're dumb; they just have better things to do with their lives than deal with something that's too hard to use.

    Amen to that, brother.

    I am a programmer, and I do a decent amount of word processing to document what I do (because I'm lazy, in the Larry Wall sense). I have Word (corporate standard, not my idea). I can keep 10 KLOC of Perl in my head. I am not just a luser.

    I can't get Word to do what I want it to do. There are simply too many possibilities, too many ways to screw up.

    Where's SpeedScript when I need it? Where's LaTeX?

  • Show me some major bugs in NT that stops software development.

    That comment is revealing in itself. Most people use 95/98, not NT. So why did you limit it so particularly? (And I see you've done it again in *another* posting on this thread.) I personally have to reboot 98 at least once every evening I use it, for some mysterious leak (that has survived the 95->98 upgrade, changes in devices, etc.)

    I still hear bug reports for the app I work on where the toolbar buttons have disappeared. This was caused not by a bug of ours, not by an OS bug, but because the user installed Internet Explorer 4.0. So I guess IE really is part of the OS...

    Things about Windows that have made development more difficult:

    1) NT vs. 95/98, where the two have similar but not bug-for-bug identical APIs

    2) DDE vs. OLE vs. DLLs vs. MCI vs. COM vs... 10,000 different ways to share data and code rather than getting it right the first time.

    3) Changing the OS with their own apps, like the IE 4.0 thing mentioned above.

    4) Once they move onto the new feature set (such as 98 vs. 95) they'll never touch the 95 code to fix bugs again.

    5)... there's more, but I have work to do.

    Part of the problem is that OS bugs, once detected, are the most difficult to "fix" (more properly, to work around.) Often they require the developer shift platforms to create the workaround. They may introduce inefficiencies for users who wouldn't have the problem to begin with. And there's no way to really fix the problem, since you don't have access to the source. And get the OS company to fix it? Hah! For one recent bug, I would need a fix for a product the OS company has dropped, and they laid off/transferred the entire development team (Apple and the QuickDraw3D team).
  • Anybody who has read The Cathedral and the Bazaar (most people here, I'm assuming) knows that the entire PREMISE of the free software industry is "release early, release often" -- which means that free software uses the attitude described in this article, only on steroids.

    Anyone who has read The Cathedral and the Bazaar will recognize that this apparent contradiction -- how does quality, useful software come from this anarchic process with no formal spec or testing? -- is at the heart of the whole essay.

    I don't see that much similarity with commercial software and free software releases. Sure, the philosophy is "release early, release often" with free software, but there's absolutely no pressure to release before you feel it's ready.

    Feee (sic. Freudian slip? - JordanH) software usually doesn't have formal testing either. Instead of a dedicated testing team like most commercial software has, the testing philosophy is to release it and let users test it. Not good preventive treatment, obviously. Nobody is going to test whether the new SCSI driver is going to wipe your hard drive - it's left for the beta testers to find this.

    Well, the fact that this works at all may well be an indication that dedicated testing teams are overrated. With Open Source software, if you have a release, you can not only report a problem, but potentially, if you are so inclined, provide a fix. This is not the case with commercial software. In fact, in most commercial software shops, the testers are not even allowed to propose fixes. By eliminating this extra communication, things can be done rather more efficiently.

    Here's the Open Source scenario: A new kernel is released with a new SCSI driver. This is a development kernel. Literally thousands of potential developers download the kernel and get it running. Some small number reports that their drive was wiped. A few actually debug it and propose fixes. The kernel developers have an open discussion about this on the Internet and determine the best of the proposed fixes and it goes into the new development kernel. A patch is made available to fix the problem before the new development kernel is made available.

    Eventually, the development community blesses some set of code as a stable release. There's very little chance that a stable release will wipe anyone's hard drive. If, by some chance, it did wipe someone's hard drive, some set of qualified kernel developers would start to work on this isolated problem and come up with a patch, usually within a day.

    Here's the commercial scenario: A new release candidate comes out of development. The testers get ahold of it and exercise it with all of the SCSI cards/drives they have in the lab. It doesn't show any problems. It's released. A whole lot of people complain that their hard drives are wiped. The help desk people who receive the complaints start by blaming the customers for something stupid and telling them to reinstall. Customers reinstall and still have the same problem. Eventually, this is realized to be a problem, and the company's PR machine jumps into overdrive. Data is collected, but at arm's length as much as possible. The company is afraid of liability, so they admit nothing. Developers and testers duplicate the problem in the lab, a fix is developed, and a complete QA cycle is performed on the patched system (liability, you know). Weeks later, the customers get a patch.

    Now, MS has been using beta testers for years in much the way Open Source does, but with the considerable difference that Open Source testers are often able to provide solutions.

    We read in the article that QA testing is increasingly slipshod with commercial software. MS is big on the HUGE beta distribution so that they can get coverage of the many many combinations of software/hardware they need to test against. MS really HAS to do this as application programs on their OS's are known to bring the whole thing down. This is relatively rare on other OS's.

    As the article points out, the vendors attempt to explicitly exclude themselves from any liability if their system wipes your hard drive, so I'm not sure that I have much confidence that their software has been well tested either.

    I have also looked at the source code for many free projects such as GCC and GIMP and noticed that the code quality was quite low. For example, malloc() calls were usually unchecked (especially in GIMP). I have worked on commercial projects before, and checking malloc() is rule #1 -- if you happen to run out of memory while using GIMP, it'll blow up, whereas commercial systems will simply fail to complete the current operation. If such a high-profile package is of such low code quality, I expect the lesser-profile packages are considerably more buggy.

    As others have pointed out, your analysis of GIMP is not entirely correct (it mostly uses a wrapper around malloc()), but you have inadvertently brought up an important point here. Software quality experts have recognized that code reviews are the single most cost-effective thing you can do to improve software quality. Many people are surprised that code reviews, on average, turn up more defects than formal testing does. Open Source is like one huge rolling code review, with the Internet discussion forums being the communication media. You have now weighed in on quality issues in GIMP and it will be reviewed. There's a good possibility that someone will fix it if you have a valid point.
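
    For readers who haven't seen the idiom, here's a minimal sketch of such a wrapper (xmalloc is a common name for it; GIMP's actual wrapper differs): the check lives in one place instead of at every call site.

        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch of the checked-allocation idiom; a real application
           might longjmp back to its event loop and abort only the
           current operation rather than exit outright. */
        void *xmalloc(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL) {
                fprintf(stderr, "out of memory (%lu bytes)\n",
                        (unsigned long)size);
                exit(EXIT_FAILURE);
            }
            return p;
        }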

    Contrast this process with commercial software where, in my experience, you almost never see review of code that has been formally tested and released. Usually, there's a lack of political will to make changes to code that's been "working" in previous releases to future releases. Check the CVS histories of some of the bigger Open Source projects and you'll see a lot of ongoing continual refinement. I like to believe that Open Source product quality is getting increasingly better, while most commercial software I've been involved with has gotten worse over time.

    Also, I think you have worked in rather good commercial shops with real standards (always check malloc()'s return, for example). I was just reading today that the MS Office products don't "like" to work with documents on floppy drives, as they tend to create temporary files in the same place the document comes from and don't handle out-of-disk-space problems gracefully. As the article points out, standards are falling rapidly in the industry as a whole.

    I guess the proof of the pudding is in the eating. Linux and FreeBSD now have a reputation for stability and quality. Maybe that reputation is undeserved, I don't know. If the software is of such low quality that it doesn't meet your stringent standards, then don't use it.

  • I think the thing that people don't understand is that programming is the only place (well, okay, genetics is the same too) where your work must be perfect. You can inject typos into an essay and we still get the message. But inject a misspelled command into a program => crash. I don't think everyday people understand this concept. You can't just do an all-nighter and get a "B". It must always be perfect.
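
    A one-character slip in C makes the point (a contrived example of mine):

        #include <stdio.h>

        int main(void)
        {
            int x = 0;
            if (x = 1)          /* the "typo": assigns 1, so always true */
                printf("oops\n");
            if (x == 1)         /* what was meant */
                printf("intended\n");
            return 0;
        }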
  • "Capitalism ruins everything."

    Suuuuuure, it does. Right. "Tell me again how sheep's bladders may be used to prevent earthquakes ..."

    Responding to the entire post would overlap with the comments already posted (and, as Mark Twain would say, would annoy the pig), but a few things stand out as too silly to let fly (and make me suspect that the whole thing was flamebait anyhow ... ) So, OK:

    - " ... pesticide-laden, hormone-laced food"? Ah, strictly a capitalist invention there, you betcha! If you want some good wholesome pollution, or were wondering what the practical upshot might be between mostly-capitalism and sort-of-socialism, visit the former East and West Germanies, or the countryside outside of Prague, or many many places in the former Soviet Union where toxic (including radioactive) waste was brazenly poured into lakes and rivers. TMI and the Love Canal have nothing on consequence-free socialist management policies.

    - Time wasted in traffic jams? Surely you jest. Would you rather be waiting 10 years for a car, or queuing for *bread*? How about signing up 6 or 10 years in advance to get a telephone, and paying your brethren electrician a few bribes along the way? And besides, you don't have to take that job if you don't want it. Sorry. ("Aw daddy, I like all these diamonds, but they're so heavy! Why did you make me take so many?!") There's no pleasing some people, I guess.

    Get this much straight: Capitalism is actually fairly agnostic about what you *do* within it; it defines certain things as moral (free exchange of goods, including services, including philosophy, including whiny, illogical paeans to governmental oversight and meddling in everything, etc.) and after that, you're on your own to find the path you think is best. Capitalism defines possibilities; socialism draws up job lists.

    Cheers,

    timothy

  • Good point, but alas I doubt if the world is ready. We've lived so long with the current standard style of programming that the lessons learned on the specialist machines that emerged from the AI scene have been entirely forgotten, it seems to me. They were just too far ahead of their time.

    It's an interesting analogy though. Perhaps a hard object-oriented system architecture could fire the imagination in standard computing circles where a list-based architecture previously failed.
  • Sorry to disappoint you, but I'm a software professional, not hardware, although academia did teach me how the hardware fraternity did things because the EE and CSc sides were very sensibly integrated.

    That was a previous life though. The lesson that the real world then taught me is that software is more complex than hardware only because it is architected on an appallingly bad infrastructure that makes it almost impossible to increase the size of a software system without making complexity go through the roof.

    The reason for that is simple: lack of system-guaranteed isolation between components, which means that thirty years of development of structured programming languages *still* results in an unstructured nightmare in the presence of faults. That doesn't happen in hardware, except in the single instance of the power rail(s) being compromised by a major failure.

    And it's precisely at that problem that the above solution is targeted: to provide a little hard structure to control the programming complexity in the same way that the O/S controls the complexity among interacting processes -- a well-tried and tested strategy, it seems to me.
  • Those are all good points, but you've missed something. Yes, software systems are often much more complex than hardware ones, but why? After all, hardware systems sprawl out across the entire globe in an interconnected mesh, yet when the hardware of a router in Paris goes down, the hardware of the rest of the Internet continues quite happily doing its thing. How come? Let's examine this a little more closely.

    Add software to the equation: now the software in the router in Paris goes down and, excellent, the software in the rest of the Internet continues happily working away. So far so good.

    Now consider that router again, except this time look inside it: its SNMP module (just one component of hundreds within the router's O/S) decides to express a coding bug, and what happens? Oops, IOS has just trashed everything in memory and the router reboots or hangs indefinitely.

    What's the difference between these scenarios, and especially between the two last ones in which it is software that has failed? Simple, in the first two cases, there is faultline isolation between the components of the system, whereas within the router's software there is no isolation between software components in the presence of a fault at all. So many years of structured programming, all for nought.

    *That* is a key reason why software is more complex than hardware in so many cases. It's not just a matter of size and of the number of internal states. Complexity can be controlled by a simple strategy of divide and conquer, as long as black boxes can be made truly black. In standard computing, this is impossible in the presence of bugs, and all large systems have bugs. The approach is utterly flawed for computing in the large.

    What I'm proposing is a little extra structure to control the chaos, because chaos is precisely what the software world is fighting, although it's rarely expressed in those terms.
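
    As a crude illustration of the principle -- my sketch, not the poster's proposed architecture -- even today's infrastructure gives you one hard boundary, the process wall. Put a component behind it and a fault stops there instead of corrupting the rest:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static void faulty_component(void)   /* the SNMP module's moment */
        {
            char *p = NULL;
            *p = 'x';                        /* wild write: crashes the child */
        }

        int main(void)
        {
            pid_t pid = fork();
            if (pid == 0) {                  /* child: the isolated component */
                faulty_component();
                _exit(0);
            }
            int status;
            waitpid(pid, &status, 0);        /* parent: sees the fault, survives */
            if (WIFSIGNALED(status))
                printf("component died (signal %d); the rest keeps running\n",
                       WTERMSIG(status));
            return 0;
        }

    A process per component is of course exactly the heavyweight mechanism one wants something cheaper than; the point is only that a real boundary turns a crash into an observable event rather than silent corruption of everything in memory.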
  • Yes, those are precisely the main areas of concern. You can definitely get good performance for a small number of large objects, but PC MMUs aren't really intended to cater for huge numbers of tiny objects directly.

    However, creative design may be able to overcome the problem to some extent if some feature can be exploited to make a good tradeoff, rather like caches do in another problem area. Whether this is possible here remains to be seen.
  • Almost all serious Windows developers use Windows NT. This is a development issue, is it not?

    Of course developers develop mostly on NT, but the applications they develop run mostly on 95/98. As an example of where this screws things up in the app I work on, we had a crash that appeared only on 98, and only on certain machines. I attempted to set up remote debugging on that machine, and it refused to work. I put MSVC on that machine, and it crashed while compiling. I was never able to fix that crash. Apparently it "went away" thanks to other changes in the code, but as far as I know it might reappear on any new compile.

    Unix/Linux machines often give you core dumps -- memory images of the program when it crashes -- which you can then use a debugger on to find the point in the code where it crashed. While this doesn't matter to the developer who is generally running the program under live debugging (MSVC isn't perfect, but it's not bad either), it does mean that it's harder to track down problems discovered by alpha testers (people in the same building.)
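
    (For the curious, the Unix post-mortem workflow is roughly this -- myapp is a stand-in name:)

        $ ulimit -c unlimited      # let the shell's children write core files
        $ ./myapp                  # tester reproduces the crash
        Segmentation fault (core dumped)
        $ gdb ./myapp core         # post-mortem, no live session needed
        (gdb) bt                   # backtrace: where it died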

    I mean, what Win9x application have you run on NT that goes all funny?

    First off, there are a whole bunch that don't run at all. Second off, we've had to write workarounds for things that work in NT, not 98. Mercifully, I personally haven't had to deal with them much (other than the above crash), but I hear plenty of kvetching from my coworkers...

    Sorry, I just don't have the time to respond more, but DDE and OLE are totally different (DDE is text-based, OLE is like published C++ classes), and you still need DDE to do things like change the Start bar; installing an app shouldn't change the OS (and break other programs) at all, no matter what the app; and the Linux guys have issued 2.0.x revisions even after 2.2/2.3 started.
