
A Framework For Quality Assurance?

Midh Mulpuri asks: "These days licenses like GPL, MPL and QPL make it easy for developers to release software as open source without the hassle of having to write up "TERMS and CONDITIONS" for each project. While the software is released as open source, it is mostly released with no warranty whatsoever. That is reasonable considering the software is often 'free'. Major projects often find service providers offering fee-based support, but for the most part the burden of providing service falls on the community and the developers. Further, there is no quality assurance other than the reputation of the project (and the developers). While for many this is enough assurance, the lack of warranty and quality assurance might still leave some people uncomfortable. To make 'free' software more appealing, we would need a better way to assure quality. The greatest quality assurance for software is the developer. A framework should be developed to make it easier for the developer to assure users of quality." Does Open Source really need a framework for Quality Assurance?

"This framework might include:

  1. A "General Practice Agreement" or "Ethics Agreement" that states ground rules for open source projects and programmers in general. (I am using the word Agreement lacking a better word). This would be like the contracts that bind lawyers, doctors, architects and other professionals. Developers could sign the "Agreement" by placing a graphic on their site.
  2. Self-test guidelines. The guidelines would include a checklist that needs to be completed before any production release.
  3. Documenting results of the self-test on the project site and/or distributing the test suite with the software.

Most projects already have some form of testing. Documenting and publishing these tests would add to the credibility of the project. An ethics standard might be a more difficult issue. How would you come up with an ethics standard that is acceptable to all? A statement like "I shall document my code well" or "I shall make my software extensible and customizable" would be easy to adhere to. "I shall base my work on industry standards (when possible)" might not appeal to all programmers.

Such a framework would need the backing of prominent open source programmers to be developed and implemented. I believe that the assurance of quality would bring more respect to open source and open source programmers. If such a framework already exists (and escaped my well-traveled mouse), please let me know. Otherwise, I request the Slashdot community to kick-start the development of such a framework. What should a "General Practice Agreement" contain? What would be good self-test guidelines?"
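For illustration of the submitter's points 2 and 3, one minimal shape a distributable self-test could take is a small script shipped with the package. This is purely a sketch: the package name mypkg and the checklist items are hypothetical, and a real checklist would be project-specific.

    # selftest.py -- shipped with the package; run it before any production release.
    import importlib
    import os
    import sys

    def check_imports():
        # The package must at least import cleanly on a stock interpreter.
        importlib.import_module("mypkg")

    def check_docs():
        # Every release should carry a README and a changelog.
        assert os.path.exists("README"), "README is missing"
        assert os.path.exists("ChangeLog"), "ChangeLog is missing"

    CHECKLIST = [check_imports, check_docs]

    if __name__ == "__main__":
        failures = 0
        for item in CHECKLIST:
            try:
                item()
                print("PASS", item.__name__)
            except Exception as exc:
                failures += 1
                print("FAIL", item.__name__, "-", exc)
        sys.exit(1 if failures else 0)

Publishing the output of a script like this alongside each release would document the self-test without asking developers to adopt anything heavier.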

  • by Anonymous Coward
    "I shall document my code well" or " I shall make my software extensible and customizable" would be easy to adhere to. "I shall base my work on industry standards (when possible)".

    You're an American, aren't you? You can always tell ...

  • by Anonymous Coward
    A large, very successful contingent disagrees [xprogramming.com] with that sentiment. The idea being that, if you can't test it yourself, you don't understand it well enough. It is your responsibility, as a developer, to ensure your code is as bug-free as possible before checking it back into [your VCS here]. Extreme Programming says that relentless testing ensures trust in the code and enables rapid refactoring and progress.
    XP has test frameworks [xprogramming.com] for many popular languages.
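    To give a flavor of that style, here is a minimal sketch using Python's bundled unittest module; the parse_version() helper is hypothetical, and the point is simply that the tests are written first and the code isn't considered done until they pass:

        import unittest

        def parse_version(s):
            # Just enough implementation to satisfy the tests below.
            return tuple(int(part) for part in s.split("."))

        class ParseVersionTest(unittest.TestCase):
            def test_simple_version(self):
                self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

            def test_rejects_garbage(self):
                self.assertRaises(ValueError, parse_version, "not-a-version")

        if __name__ == "__main__":
            unittest.main()

    The xUnit-family frameworks linked above follow the same pattern in other languages.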
  • I think your argument takes too narrow a view of support. True, until now most support has been in the area of configuration and installation support as well as troubleshooting basic usability issues.

    That doesn't mean things have to be that way. Let's assume for a minute a company deploying a 100% Linux solution. It includes three prepackaged boxen [cobalt.com] types configured to serve as

    • a Human Resources Server Appliance
    • a basic Database Appliance
    • a Client Appliance

    Let's then assume these boxen plug into an ethernet cable and power and come up under DHCP with no user intervention.

    At this point the customer makes a call to support to schedule some time going through the configuration of the HR rules engine.

    We're talking about doing what used to be called Professional Services as a simple call to customer support. I know there are holes in this idea, but as you come up with arguments, come up with your own examples and solutions. I think you'll find quickly that improving quality never has to lead to a loss of profits, just more opportunities for premium services.

  • From the front page of our web site:

    All work is guaranteed to be free of coding errors for one year after delivery. In the event a coding error is discovered within 12 months of delivery, we will fix the error at no charge. Be sure to ask our competitors about their bug-fix warranties.

    Guess what? No customer has ever asked about our software quality! We have found that customers are much more interested in

    • How much will it cost?
    • When can you deliver?
    • Can you make it do...?

    There seems to be a fundamental assumption on the part of customers that software will fail. Their expectations are down to "Will it mostly work?".

    There is a serious lack of "professionalism" (to use a buzz word) in the software development business which is probably to be expected given the relative lack of maturity of the field when compared to other engineering disciplines.

    It should be in the best interests of good developers to change this. If you raise the expectations the customer has for the deliverables by educating the customer, you will raise the perceived value of those deliverables, which will raise the amount the customer is willing to pay, which will improve the bottom line for whoever is writing the code.

    Of course, then you have to actually deliver what is promised which is where sloppy developers will get in trouble.


    -- OpenSourcerers [opensourcerers.com]
  • First off let me clarify something:
    "The greatest quality assurance for software is the developer."

    BZZT! Wrong, try again.

    Development and QA should work together in order to create quality software, not one or the other, so let's stop this useless bickering. Rock-stable software (whether free or commercial) comes from proper documentation and requirements from the beginning. It also includes a structured and controlled test cycle, and this, my friends, is usually a combination of QA and Development.

    Other than that misunderstanding, I can see how OSS would benefit from QA. As a former UNIX QA engineer/manager, now an NT QA engineer (bleh), it's great to see the OSS community contemplating embracing Quality Assurance.

    Too many times in my experience, QA has been brought into projects when the project is nearly done. That drives me nuts. QA should be in the project from the very beginning, discussing requirements, planning, and scheduling. Throughout the project, QA would be involved in CONTROLLING changes proposed to the project. As most of us know, most projects fail because of too much change from the initial plan. Finally QA would be involved in making sure the project is completed and implemented. This is a basic project life cycle my friends, and note QA is involved (along with development) nearly everywhere.

    To the people who believe that QA analysts/engineers are those who couldn't make it in development, I'd like to shed some light on your opinion. Sorry, but a majority of the QA people I've worked with have had greater than or equal skillsets to their development counterparts. Most times it is the QA person who finds the error in the code and lets the developer know. So let's squash that myth right now. Good QA analysts/engineers are in QA because they want to be, not because they are forced to be.

    While we are on the subject, OSS needs to come up with automated testing software! (WinRunner/LoadRunner, Visual Test, uhh *cha-ching*) It would be simply beautiful to see OSS embrace QA software. (I'm getting excited typing about it)

    I would gladly offer my services to the OSS community for QA. I have a fairly strong technical programming background, but my heart lies in QA. If anyone wants any assistance with any OSS project, feel free to contact me.

    "Electric Relaxation" - ATCQ
    - Bwana

  • As stated, 1. Most people, even hackers, don't have the incentive or time to track down a bug in huge software products.
    It doesn't take most. In fact, if just 1/1000 do this, you probably have more QA going than any commercial product with the same number of users.

    2. Fixes made by people who aren't familiar with a code base are unreliable. Changing code is scary if you don't understand the code.
    In commercial software houses, changes are made all the time by people who don't understand the code. Oftentimes, the person who wrote the code left several years ago, and no one is really even sure it works completely with the latest library versions. However, with free software, projects tend to be much smaller, and so figuring out what is going on is fairly trivial (I found a problem in inetd in under an hour - it ended up not being a bug, but a feature I didn't know about). Big projects (like Oracle Applications) are riddled with bugs, and without the source, your company is dead in the water waiting for the benevolent company :) to make a fix.

    3. Most people who touch OSS code don't do regression testing. Heck, most OSS doesn't have test suites available.
    This is your only good argument. We do need a generic regression-testing interface. Number of hours of QA time (mentioned by others responding to the article) is a really bad metric. Having well-defined test-cases to run all the time is a good thing. However, currently most free software is pretty good about being backward compatible. This is probably because there are enough users willing to put time into keeping them that way.

    4. Many "bugs" end up being differences of opinion or simply a user not understanding things properly. Jumping in and fixing those is bad.
    You are correct. A better fix is a configuration option (which is mostly how it's done).

    Anyway, as I said, we need a good test suite. Now, some programs have excellent regression testing available, but no one has done a decent generic library for it. Examples of good regression-tested programs are:

    • gcc
    • Perl
    • some Perl modules
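    Absent such a generic library, a bare-bones golden-file harness takes only a few lines. This is just a sketch; the program name ./myprog and the tests/*.in layout are assumptions, not anyone's actual interface:

        # regress.py -- run each input through the program under test and diff
        # against a stored "golden" output; any difference is a regression.
        import glob
        import subprocess
        import sys

        PROGRAM = ["./myprog"]   # hypothetical program under test

        def run_case(input_path):
            expected_path = input_path.replace(".in", ".expected")
            with open(input_path, "rb") as f:
                result = subprocess.run(PROGRAM, stdin=f, capture_output=True)
            with open(expected_path, "rb") as f:
                expected = f.read()
            return result.stdout == expected

        if __name__ == "__main__":
            failed = [p for p in sorted(glob.glob("tests/*.in")) if not run_case(p)]
            for p in failed:
                print("REGRESSION:", p)
            sys.exit(1 if failed else 0)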
    Anyway, that's just my two cents.
  • Actually, QA will make the total amount of effort on the developer less. Finding and fixing that bug now will be a whole lot easier than finding and fixing it in the future. If he spends 4 hours doing QA on it, it could save him 20 or 30 hours later on trying to figure out why an obscure bug keeps cropping up.
  • You're implying that closed-source software comes with a warranty and GPL'd software does not. This isn't the case. There is no liability assumed with closed-source software either.
  • It's a troll: just because it bucks the local average does not make it insightful.
  • What I meant by that is, if a piece of software isn't of good quality, then people simply won't use it and will use other software instead, whereas if a piece of software is of good quality - no bugs, actually works, etc. - then more and more people will use it and the software will gain a reputation for being good quality.

    Of course, this doesn't apply to M$ Windows, because for the joe average user who doesn't even know how to hold a mouse properly, there isn't any other software to use apart from Windows. But those of us who don't like the quality of Windows have decided to use other software (Linux etc.).

    So in short, I don't think any QA statement is needed; people will decide for themselves what is good and bad.
  • Let's see about Solaris's "quality", considering their "recommended OS patches":

    Solaris 8: 14.5M

    Solaris 7: 31.7M

    Solaris 2.6: 42M

    Solaris 2.5.1: 52.2M

    Solaris 2.5: 45.7M (gee, their point release is buggier than the release before it, some QA)

  • A developer can be the best person to test his code.

    For reasons already repeated elsewhere, I agree that testing your own code (white box or unit testing) is essential, and that 3rd party testing (black box) is also essential. These leave out a middle ground, though, that works surprisingly well: having other developers test your code (and vice versa.)

    Typically, if they're working on the same project, they'll have a better understanding of the code that allows them to do deeper tests than a black-box tester could, while avoiding the "can't see the forest for the trees" problems that you face when you try to test your own code.

  • You mostly list server/admin/console utilities in your 'good' list, and on your 'bad' list two 'luser' GUI apps - Windows and Office - which have to do 1000 things for 1000 different (mostly uneducated) end users. Due to their complexity and GUI orientation, these apps are considerably more complicated than a webserver or mailserver utility, which is one reason why they are 'buggier'.

    I'm not sure that universal OSS QA standards are enforceable or practical across the large range of OSS products, given the different purposes and motivations for coding, but having worked as a tester and test manager I believe that intelligent independent testing is invaluable for any development effort.

    It should be obvious that end-user Office-type apps need a lot more prerelease formal QA than a utility like samba or squid would need. This is because experienced users (hackers/sysadmins) will verify functionality and ask for fixes/improvements of a network utility or such, while the real users of a consumer-type GUI will see the product too late and be too inarticulate or uncommunicative to give good feedback. QA for GUI apps often serves as usability testing and user advocacy as much as functional testing. (I play an SQA on TV!)

    Where Linux/OSS seems to fall short sometimes is in the (insert windowing system of choice) GUI frontend to CVS, Squid, Sendmail or whatever. By the time that a GUI-based end-user product gets released to its end users it is usually too late to protect the users from errors - they must be fixed prior to release. These products need a 1.x release that has been blessed by QA as meeting the functional spec.

    These people may not be happy with Windows, but that doesn't mean that they want to wade through forked or half-finished code, or camp out on freshmeat waiting for a new patch because the app doesn't cut and paste for some reason. As other posters have noted, writers of GUI apps need to adhere to GNOME/KDE standards, and realise that a lot of GUI design is a lot more subjective than, say, writing a TCP/IP stack, and that sometimes the end users/QA understand some of these issues better than the developers do.
  • For the most part, there's no formal QA testing that goes on for OSS projects. However, they seem remarkably solid in a lot of cases. Here's my theory.

    The 'popular' (ie, widely used) OSS products are used by mass amounts of people on a day to day basis. Some people will spot problems. Some percentage of those who spot the problems will either fix the problem, or report it to the package maintainer - either way, it'll get fixed.

    If instead you look at obscure very-rarely used open source software, you'll find heinous bugs - often the software has obviously not been run through its paces. With such a small audience of people, it's less likely that bugs will get fixed and released back into the mainstream.

    If the software ever gains a significant following, more people will be using it and finding bugs, and fixing them or reporting them so that they can be fixed.

    It's not formal QA, it's natural selection at work, and it does work.

    Azerov

  • I have to say I'm a civil engineer who is moving toward a masters in Comp-sci. I've worked at some places that have done extensive testing and some that haven't.

    What surprises me is the responsibility of, and lack of oversight over, programmers in general. In civil engineering you need licensing before anyone can build what you design (PE), and usually a design is looked over and signed by 2 PEs minimum before it goes out.

    Granted, a civil engineering failure can cost a lot more in human terms than a computer one, but that is changing rapidly as computers start to control things like aeroplanes and cars. Also, software is very, very complex in ways that some civil engineering work is not, but buildings tend to fail much less than software.

    Open source helps this by allowing many to review the code before using it. Reading others' code is hard, so good documentation would be a great start. There need to be set guidelines. This editorial is a good start.
  • by sohp ( 22984 )
    The questioner is under the delusion that there is an actual warranty or guarantee on commercial software. This is clearly not the case, and asking that one be imposed on open source or other free software is simply compounding the error. There was another slashdot discussion on UCITA and warranties [slashdot.org]. The gist is that no software is warranted for any reason and the shrink wrap agreements on all of them say pretty much the same thing.

    However, what the questioner is looking for is not a warranty but someone on which to put the burden of responsibility for problems. This of course is exactly what free-as-in-speech software is NOT about. Trying to go back to the author or the company that sold you the software and saying "Your software doesn't work, fix it or we'll sue you/use harsh language in a public forum" is exactly the problem that can't be solved with legal shackles.

    Using free software means that you as the user are taking responsibility for its use and the consequences of your actions. The old dodge of pointing fingers at the other vendor when your customers come with complaints is what the questioner is looking for. To be able to say, "We have this GPA on the software we use, it can't be our fault you are having problems" is just a dodge.

    Free-as-in-speech software is about taking responsibility for your product, not about finding a way to cover your ass when you ship a buggy product.
  • All the "quality assurance" systems I've seen
    implemented in companies for software development
    have been useless at best, and a total lie at
    worst. Ultimately the quality comes down to
    the developers and their dedication to writing
    automated test cases and documentation. You can't
    measure that, but natural selection assures that
    good software will prevail.
  • This could almost be considered a good idea, but it'll never happen. Most people writing OSS do so as a sideline to their main gig. Personally, I'd rather let the public review the software than expect the programmer to take up more of their time trying to make the code conform to some set of standards that they are not likely to agree with.

    If your company wants a free, secure, reliable, open source, "just push this button to install" piece of software then you're barking up the wrong tree with OSS. We are not going to develop your company's entire IS platform for you, and then accept a nice pat on the back. OSS is for people who can _understand_ the code and tweak it for their specific application. Do your own damn QA...
  • The problem here is not that the source is open, and that people can make changes to it. The problem is that there are problems in the software which allow it to be exploited.

    If the FTPd in question had been rigorously inspected and reviewed, it wouldn't be able to be hacked like that. If the protocol which is being spoofed had authentication, encryption, etc., it wouldn't be spoofed as easily.

    If anything, using open source leads to higher quality releases, one step beyond. While the software in question might not be perfect in 1.0 or 1.1, by 1.2 or 1.3 they've probably fixed the issues which allow it be easily cracked/hacked. And if there are serious protocol issues, 2.0 will probably solve those.

    Sorry to respond to blatant trolling, but as long as Back Orifice exists, your claim is completely without merit.

  • After all, there are people out there who love working with open source software and want to contribute yet have no development skills. They report bugs back to the developer. And the developer, if s/he is actively working on it, will probably fix them, or at least update the TODO list in the distro to say that these bugs are present and maybe someone else will fix them.

    I really doubt that most open source developers really want their name on the 1.0 release of their product when it's full of bugs -- ever notice how almost nothing out there is past the 0.xx stage in development?

    Trying to bind someone to a process flow just isn't the answer. Bugs get fixed if the developer has time. If they don't follow standards, people may or may not use their product. It all just works out on its own - the fittest products survive and the rest sit at 0.01 forever.
  • Under UCITA, a software company can write buggy software and not be responsible. In many ways, this is what the GPL is already. Since no warranty is implied, there is no reason to expect that the developers are liable for anything. This has its place, but it also means that GPL software won't ever be used under fail-safe conditions. So what I would recommend is perhaps a new GPL which would add QA as a requirement. In other words, a GPL with warranties. Obviously it wouldn't have to apply to everything, but I think having and supporting such a license would be a good idea. Software released under the new GPL would meet certain requirements. I guess the main problem would be determining what requirements made the program safe. This would be a way to gain respect even if UCITA was everywhere, because there would still be a license that would guarantee quality.
    P.S. Please correct me if it looks like I don't have a clue.
  • Open Source really doesn't need a set QA system, it already has one. It's called peer review. Say Hacker X creates a program with obvious bugs. He/she releases it, and Hacker Y downloads it. Because Hacker Y has access to the source code, he/she can find the bug and fix it. That fix is then submitted back to Hacker X who then releases a new version. Traditional QA systems would take forever to find all the bugs in a piece of software. The whole point of OSS is that it lets more eyeballs view the code and submit fixes, freeing the developer from some of the testing duties. Now, I will grant you that the system isn't perfect - it requires at least one Hacker Y to find and fix the bug, or at least notice it. However, if OSS is supported by a community of members who are committed to making better software, then the system is more efficient than a traditional QA system. (Think of it as distributed QA.) Open Source works like a capitalist economy. It works well without need for regulation. If something happens you don't like, then you have other choices. I think the simpler we make the whole process, the better.
  • QA is a good thing, and should be SOP for anything that someone takes pride in...
    However, reading a standard shrinkwrap license such as Microsoft's, the *end user* is ultimately left swinging in the wind as regards the quality of the software. If you buy a copy of WindowsblowME and it doesn't work with your equipment, you're screwed out of however much you paid for it because the EULA nullifies any implied warranties of fitness for use or merchantability.

    Usually, handing money over for a product implies some sort of merchantability and fitness for use for simple things like garden trowels. Buy a garden trowel at your local garden center, and if it breaks under normal use, you can (usually) either get a new one gratis or your money back from the store where you got it.

    Put simply, most commercial software doesn't even meet the standards of a garden trowel in merchantability and fitness for use.

    Now enter OSS. It's free as in freedom, but it's also free as in beer. No money changes hands for it. There is absolutely no implied merchantability or fitness for use. It's not buyer-beware because nobody is _buying_ anything. We are now supposed to hold _this_ to a higher quality standard than Microsoft software? And back it up? With what? How is the author responsible for the quality of the software? I see no way that an author can be bound by a QA program if he/she is not charging money for the product. There's no implied contract between the user and the author.

    IOW, if we want software authors who write OSS to be bound to some sort of QA program, it will have a chilling effect on someone who wants to scratch an itch and release some code, however bug-ridden it may or may not be. Most decent OSS projects start out really ragged around the edges. Putting an arbitrary QA program on software like this is just _nuts_.

    Someone up there said there IS a quality assurance program already in existence. It's called Peer Review. I agree with this. OSS has more in common with scientific research than it does with manufacturing. For example: the funk-soulbrother-programmer sez "check out this proggy I wrote" and many people do, sending him feedback on bugs and whatnot. If he takes pride in his work, and pays attention to the feedback, the next release is better, and so on. But this is _entirely_ up to the author! If he decides he wants to just cast the software adrift, he should be free to do so, and not be held to an arbitrary QA standard.

    QA is good when there's an actual implied agreement between the end-users and the author by the exchange of something of worth (usually money, but sometimes goats or sheep or pecan pies...or my favorite, beer.). But I fail to see where it applies when the "product" is distributed freely (as in beer and in freedom).
  • Just as a reminder, unless you pay for software, you won't get any kind of warranty. (Even if you do pay for software, you probably won't get one either. Checked your shrinkwrap agreement lately?)

    There is absolutely no way that any OSS developer is going to risk being sued because they "signed" some "testing standards" statement and a jury might rule that they didn't meet all of the requirements. Lawyers, doctors, and other professionals get paid, and buy insurance to protect them in the event they get sued. An unpaid OSS developer will never be able to afford that insurance.

  • Testing is done for two reasons: to find bugs so they can be fixed and to demonstrate how good the product is. E.g., the FDA requires formal QA to do the latter. Users, coders, and testers are all equally adept at the former. Any large company has some body tasked with determining the quality (and suitability) of potential products and services that will be used company wide. They don't delegate or trust someone else to do that. This is in addition to any testing the original team did.

    If there is a make check target, I feel a little better about the quality. They did some testing, they automated it so it probably gets done with every release and most builds, etc. I know that some attention has been paid to testing. For example, the GNOME XML library has dozens of tests and takes about as long to run the tests as to build the library. I have some assurance that there is some concern for quality.

    Excuse the cynicism. My experience with QA people has not been good. There may be even fewer really good QA people than programmers. I have encountered a number of great programmers in 25 years in this business. I have met one great QA person (thanks Roger Mason wherever you are). The rest reinforced the notion that those who can, do; those who can't, test (and criticise).

  • I am an OSS proponent as much as the next Good Guy, but this is an area that is a bit misunderstood IMHO.

    The idea of many eyeballs making for few surviving bugs is statistically sound, but nothing is ever said about the amount of time that has to pass before the principle kicks in.

    Eyeballing code is one thing, but that is not quite the same as testing a product. In other words, considered, tailored and targeted trial-and-error is much more effective than manual code review.

    (see my other post for a different take on how to deal with this)

    Schmolle
    (QA Manager for FT.com)
  • You've definitely made a good point - I'd like to expand upon it if I could.

    It's not just a matter of documenting the code itself, it should also extend to the program requirements. There should be a clear list and description of everything the program should do. This doesn't necessarily include nit-picking details like, "the program shall have a pull-down list enumerating options for blah located 50 pixels to the right of... ," but it should definitely include things like, "the program shall adhere to the html x.x standards as published by ...."

    Once the requirements are listed, they have to be stuck to, with no willy-nilly adding of features. If there is something you want added or changed, it'll have to be brought up as a requirements issue, and the maintainer will approve or disapprove it before any changes are made.

    I really think a process like this would help one of the major problems I've seen with open-source projects, and that's feature drift.

    You can't test a program before you have specifically said what it should do.

    "If I removed everything here that I thought was pointless, there would be like two messages here."

  • One thing I would add (and I won't address what I would remove from the above terms) is an "Agreement" that the code will compile as long as the instructions included with it are followed. This would include the precondition that certain libraries are installed. This is one thing that bothers me... when developers use non-standard (i.e. stuff that isn't included with major distributions---or any distributions) libraries and don't make it clear in the documentation.

    As a programmer, etc., I can usually figure out what I need but many people can't do this. I can understand how a developer could forget to mention this kind of thing, they work with the package every day and the extra libs are things they always have installed etc, but not everyone is aware of what the developer is using on their personal development machine...

    Anyway, a standard practice of trying to build your package on a machine that is not normally used for development (or getting a few friends to do so) can go a long way to making installation instructions clearer (and more usable) which, in the long run, will get your package used and tested by a larger group of users. Seems to be a good thing for everyone.
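    A tiny pre-build check that spells those assumptions out explicitly can catch most of this before a user ever hits a cryptic compile error. A rough sketch; the command and header lists are purely illustrative and would mirror whatever the INSTALL file claims:

        # checkdeps.py -- hypothetical pre-build sanity check for declared prerequisites.
        import os
        import shutil
        import sys

        REQUIRED_COMMANDS = ["gcc", "make"]           # illustrative only
        REQUIRED_HEADERS = ["/usr/include/zlib.h"]    # illustrative only

        missing = [c for c in REQUIRED_COMMANDS if shutil.which(c) is None]
        missing += [h for h in REQUIRED_HEADERS if not os.path.exists(h)]

        if missing:
            print("Missing build prerequisites:", ", ".join(missing))
            print("See INSTALL for where to get them.")
            sys.exit(1)
        print("All declared prerequisites found.")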

    --8<--
  • Maybe this is just a cheap quip, but it seems like the "Documentation" should serve as the 'Framework' for Quality Assurance.

    Commenting the source code just explains how s/w *should* work -- the documentation should explain how it does(n't:-) work.

    Currently, marketing seems to write the specs and documentation, then programmers write the code, then QA verifies that the program works according to the documentation. Put off the documentation until QA -- have them write it.

    Alpha testing would be the coders, Beta testing would be the QA and documentation effort. Then release it for Gamma testing to get the bugs out of the documentation!!!

  • We, the users and those who edit the code of OS software, act as 'testers' as well as users.
    ----
  • User Acceptance testing is certainly something that a lot of OSS projects could do with. Given that most OSS is written by a hacker, who gives more thought to "function rather than form", some OSS is absolutely horrible software to use from a user's perspective.
  • Only if you work in a shit software house. In a proper development environment, the product doesn't ship until QA says it can, and if that means the deadline is missed, so be it.
  • OSS doesn't have test suites available

    This brings up an interesting point. Are there Open Source automated testing suites? At my company we use Rational Robot [rational.com] to do automated testing which makes regression tests trivial because we just run a script overnight and it tests everything. We also have manual test plans for areas that can't be tested automatically or scripts just haven't been written for.

    Do Open Source projects even have test plans? From what I can tell they don't. The developers just submit their code and they post it. I'm sure they might run through some simple tests, but sometimes I doubt they even do that. They seem to be depending on peer review too much and, as you mentioned, sometimes the problems in code are more involved than just a few lines needing to be changed, in which case making larger changes can sometimes cause undesired effects in other areas of the product.

    I do think that with a dedicated test team running something resembling test plans, the overall quality of an Open Source project will improve.
  • You want something that you can develop and change for yourself, yet you want a warranty or guaranty that it will work just like you expect it to... COME ON! At very least, where's your pioneering spirit? Do you also want to climb Mount Everest but expect someone to guaranty that you will not fall or hurt yourself in any way?

    I have to disagree here. Maybe Open Source and the community is in a pioneering mode Right at This Moment(tm), but it won't always be.

    Sure, there will always be some people on the bleeding edge, but for the most part, OSS developers write things that somebody needs somewhere.

    For example, I just started up an OSS ticket/call tracking system (like Clarify, Remedy, and a few others), using Interbase 6, C, and Perl. This is hardly pioneering or bleeding edge, and a perfect case where some third-party quality control and evaluation would be useful.

    Yes, it can still be argued that OSS in general is pioneering because of the way it's being distributed. That'll change soon enough, when more and more companies and individuals pursue the concept.

    Just my two cents...

  • The only person who can assure quality is a third party (Non developer), who does a full QA test on it. Trust me on this one.

    QA testers are really useful (read: necessary) as they use systematic methods for the tests and have a fresh look at the project, but quality doesn't only happen when the software goes through tests or walkthroughs.

    Quality is good docs, "tested" analysis (the software is up to specs, but are the specs up to the needs of the customer?), and many other really important things... See any software engineering 101 course...

    "The only person who can assure quality is a third party (Non developer)" is false... since for many (not all, and not the majority) quality assurance steps the developper is the right person to do his part of the job.

    I don't know why I'm answering this post, because actually, the biggest mistake was in the slashdot article: "The best person to assure quality is the developer"... This is oh soooo false.

    phobos% cat .sig
  • As a full-time QA Engineer, I will tell you that there is a very different mindset between a developer and a QA engineer, and this difference is what makes QA essential to any good process. There are things that someone trained in the QA process will automatically look for that a software designer/programmer will never think to test. I break things for a living; as a consequence, I take the team of developers out on social events regularly so they know I am not attacking them personally. Not one developer I know uses the QA process on his/her software, nor do they use this process on other software, closed or open source, it matters not. I submit many bugs/defects to the OSS projects whose programs I use; almost all have been very happy to receive good, well-written bug reports from someone who knows what they are doing.
  • There is actually a pretty decent framework that my company uses for product development. It is a framework, and as such is only designed to be a starting point to be adapted to your project's needs. I think review of it could provide open source developers with a good basis for thinking about their own product development roles. Coding is only the beginning.

    Keep in mind, this framework is designed to be used by shrink-wrap firms, so lots of things may not apply. Nevertheless... the roles are as follows:

    • Program Management - This person or group's role is to provide the liaison with the customer. The customer might be your intended internet audience, another business unit within your organization, or a paying shrink wrap customer. The idea is that this role acts as the advocate for the customer to the team, and vice-versa, communicating information about requirements, development difficulties, etc.
    • Project Management - This person or group owns the schedule and the budget (if there is one). They provide information on projected slippages, etc., and generally provide everyone involved with a view of the project's progress.
    • Development - This person or group owns the codebase and is very often the only group in many open source projects. Their role is to make sure that the software fulfills the requirements as set out in the requirements docs.
    • Testing - The reason this group is called testing and not QA is because QA often embodies procedures and standards for development. This group's only role is to make sure all issues are known and addressed at release time. This doesn't mean they have to be fixed. It just means that the team knows about them and understands their impact.
    • User Education - This role is in charge of developing materials like help systems, paper docs, etc. They may also conduct usability testing, etc. Basically, they act as the advocate for the end user to the team, commenting on UI, etc. where appropriate. Very often, the customer and the end user are not the same person, especially in business systems. Consumer software, windowing systems, or any other package designed to be used by the mainstream user can benefit from this role.
    • Logistics - This group handles all the administrative tasks involved with building and deploying software. It might include your webmaster, the guy who administers your sourceforge site, your build lab people, among others. All the people who provide the infrastructure for development fall into this category.
    On a small team many of these roles may be shared by one person. For example, it is no problem to have the coder be the same guy who sets up the servers. Some roles don't mix well, however. As most of you know, the last person who should be testing software is the guy (or gal) who wrote it. Every team is different, and so this should be used as a starting point for thinking about the composition of your team only. Since my company has adopted its own version of this framework, however, we've found that we've had great success in delivering the right software at the right time. I assume that is what many open source developers wish to do also, as open source becomes more and more mainstream (and provides a living for more and more developers.)

    By now, many of you may have realized I'm talking about the Microsoft Solutions Framework. Flames to /dev/null please. Take it for what it's worth (a random comment you read on Slashdot). If you want more information, however, and don't mind surfing over to the evil empire, you can find whitepapers at http://www.microsoft.com/msf [microsoft.com]. I am not compensated for my opinions, nor do they indicate those of anyone else at all. Just my two cents worth!


    --- Brent Rockwood, Senior Software Developer

  • I think one of the reasons OSS is regarded as "not-so-safe" when it comes to safety-critical systems is the lack of open standards. Very few OSS contributors buy IEC 61508 just to make sure their product complies with the standard. QA should be the first step on a long journey towards making rigid open source safety standards.

    My point is - if (or when) OSS is _better_ than other software - how do we confirm that ?

    I hope some day in the near future I will be able to say to my Airline company: "I hope your plane is controlled by 100% Open Source. I would like to review the source code before takeoff."

    I would _really_ not like an answer like: "Uh - no . We're still using Win95...."

    -rune

  • But I agree with the previously-made point about interface testing. You need real (non-CS) people for that.

    Definitely. There's not much reason for programmers to be good at this, as it demands its own skills and experience and is mostly orthogonal to correctness testing.

  • The testing suite that I have been most impressed with is the one in use for GCC. IIRC, when a bug is reported, they write a testcase for the bug, *then* fix it. The goal for a release then is 0 regressions.

    Because they produce snapshots on a regular basis, everyone who downloads them is encouraged to submit the test results to this database so that regular information can be collected: http://gcc.gnu.org/testresults/ [gnu.org]

    I was reading "Extreme Programming Explained" which seemed to also have a good philosophy (although I haven't gotten very good at it yet), which is to *first* write a test case for the objects that you're designing, and *then* write the code. When the tests pass, the code is complete. This way the automated test suite can be run many times a day, and you can track regressions that way.
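    The pattern is easy to imitate in any xUnit-style framework. A hypothetical sketch in Python, where the test is keyed to a made-up bug number and written before the fix goes in:

        import unittest

        def wordcount(text):
            # Hypothetical fixed code; the pre-fix version blew up on empty input.
            return len(text.split())

        class Bug1234Regression(unittest.TestCase):
            # Written when (made-up) bug #1234 was reported, before the fix. It fails
            # against the broken code, passes once fixed, and stays in the suite
            # forever so the bug can never silently come back.
            def test_empty_input(self):
                self.assertEqual(wordcount(""), 0)

        if __name__ == "__main__":
            unittest.main()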

    I'm rambling, so I'll stop now! =)

  • Quality Assurance??

    This seems to be a major area Microsoft hasn't touched, even with a TFP (ten-foot pole).

    I haven't had any problems with open sourced software that related directly to the quality of the product. This 'issue' seems to be focused on pushing open source into a commercial space. If that be the case, commercialize it dad gummit! Or at least have some organization or corporation that is at the head of such a project. Like the posting stated, if a development group has a reputation there is less to worry about, so why don't software projects that could fit nicely into the commercial space go there? Most likely gumption - it takes skill for that.

    Maybe this is the way things should stay. There could be *major* liabilities with doing something like this!

  • If there can only be one tester, it should be the developer, if he is conscientious.
    I agree with this entirely. I speak from the experience of being downright masochistic about my own coding, and knowing that I don't work particularly well with others when it comes to programming. I've always felt the only way to get your program right, however impractical, is to do it yourself (while writing everything modularly and basically pretending, every day that you work, that you're a different person on your "development team"). To an extent this goes for testing too.
    But I agree with the previously-made point about interface testing. You need real (non-CS) people for that. But, as was recently pointed out over at Alertbox, you don't need more than about 5 people for that.
  • I find it strange that everybody seems to be talking without considering SourceForge. This site is doing more than offering development means to developers. It is an incentive to have a serious development process:
    • configuration management/bug tracking
    • control of releases
    • auditing (through statistics and votes)
    • planning (not mandatory but pretty)
    • compile farms to test portability and run tests
    Currently they do not offer a way to show off the regression testing developers do. But when they find a way to show it in their pretty interface, developers who did not want to make formal regression tests on their code will have a big incentive to do it. That's what I find great with SourceForge: it works through incentive and brings a uniform way to describe software projects.
  • "Quality Assurance" is nothing more than a buzzword that overpaid, technologically clueless executives and marketroids like to throw around.

    There is exactly one and only one quality assurance method. And that is to fully map out all possible states that a program can be in, how it can reach each of those states, and that none of the states reachable from the start state is an error state. This is only possible for the tiniest programs. Stuff running inside a PIC chip, maybe a simple device driver, etc. We're talking a few single digit K of binary code tops, and more often than not, well under 1K. Huge apps like an OS or a word processor? Forget it. There can be no "Quality Assurance" for such things, only rigorous testing of "simulated average use". But like in mathematics, proof by exhaustive example does not a proof make.
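    For those tiny cases the exhaustive approach really is mechanical. A toy sketch; the transition relation here is made up, and the whole point is that the reachable state space is small enough to walk completely:

        # Exhaustively explore every reachable state of a toy machine and confirm
        # the designated error state can never be reached from the start state.
        from collections import deque

        START = 0
        ERROR = 7   # the state that must never be reachable

        def successors(state):
            # Hypothetical transition relation: step by two (mod 8), or reset.
            yield (state + 2) % 8
            yield 0

        seen, queue = {START}, deque([START])
        while queue:
            s = queue.popleft()
            for nxt in successors(s):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)

        assert ERROR not in seen, "error state is reachable!"
        print("explored", len(seen), "states; error state unreachable")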

    Advertising how many hours of live (not automated or unattended) beta testing a product has undergone and by how many different and unique people would be a far better gauge of a product's reliability than any "Quality Assurance" methods could produce.

    And these two numbers, hours tested and how many doing the testing are orders of magnitude greater for Linux than for any Microsoft OS. So while Microsoft spends orders of magnitude more dollars on "Quality Assurance" programs, Linux comes out more stable. You tell me. Which is the better indicator of product reliability?

  • I've been hosting my personal site on linux/freebsd for 1.5 years.

    I have never had (at least noticed) QA problems with:
    - Apache/PHP/Perl
    - Sendmail
    - Pine
    - vi

    I *have* had problems with the following:
    - Windows
    - MS Office
    - Almost every 3D card driver
    - Creative Labs drivers
    - HP *
    - Bob's linux driver for my TV tuner card

    How can I argue that more QA isn't better? But I really couldn't ask for any better QA than what I've seen on my "production" machines. It seems the number of QA problems I have with a product is proportional to the cost of the product. It directly follows that OSS doesn't need better QA.
  • Last piece of commercial software I bought was Diablo II. Came with absolutely no warranty. Incredibly buggy. I've actually fixed bugs in the data files myself (My mod is here [slashdot.org], if anyone cares). Diablo II would unquestionably be higher quality if it were open source. Of course, that seems to be because a QA department can't overcome pathetic programmers, not because QA isn't helpful.

    Perhaps at the very high end, commercial software will be higher quality, due to QA. But as far as every piece of software I have ever bought, I think Open Source can match it.

    So while Open Source is great for desktops, I guess it just isn't ready for servers yet. :)

  • Not one single major commercial software product has a warranty. Not Quicken, not Windows, not Excel - if they don't have warranties, why should an open source product of lesser importance have one? My Gnome themes don't work right! I want compensation!

    The system works great as it is. It is easy to find out who puts out a quality product: Debian, FreeBSD, Redhat, Caldera, Apache, etc etc etc.... Yeah, the suits need all kinds of reassurances. A better approach, instead of selling ourselves into legal bondage, is to learn some social and business skills and demonstrate the superiority of these products. Yelling won't work.

    It aint broke. Don't mess with it.

  • Don't just dismiss QA because nothing you've ever done was important enough to need it...

    Um, just 'cause you pushed me there, I am a Field Engineer for a Semiconductor Equipment Company and responsible for millions of dollars of product and equipment. QA is very crucial for what I do, so I believe I am right in my thoughts.

    Next time, do your homework before you play with flames.

    'WOW, that felt good!'


    --

    Vote Homer Simpson for President!

  • hah! good qa/test is hard enough to get when you pay for it. for free? guess again. qa always sucks, especially in commercial firms; most qa people are total morons who couldn't get a real job.

    the best qa people are sadistic technology freaks who JUST KNOW that the developer cheaped out on that corner case, AND who actually care that they ship a good product.

    developers suck at QA. developers have other things to worry about besides corner cases, unless that developer had lots of free time or happens to be a qa zealot.

    open source QA? Peer Review? More like mutual masturbation. yEr c0d3z great d00d!
  • As the team lead for QA in a well known internet company, with prior experience in manufacturing process design, "Big 6" consulting, and a Masters in Management Information Systems, I absolutely support most of the views presented thus far. I have worked in organizations where the QA actually produces a worse result than no QA, and others where QA really did its job, ensuring the customer a solid product.

    The essence of the problem is not solely in development or QA; it's in the process by which both take place. Any product needs to go through a series of phases which are part of what is known as the "Software Development Life Cycle". There are many well known books on the subject which explain the concept quite well. I am particularly fond of the books by Ed Yourdon. I would also recommend reading about the CMM - Capability Maturity Model.

    As for developers being the best for QAing and testing their own work, I'll ask this: if you were an oncologist who had cancer, would you treat yourself? If so, you have a quack of a doctor. If you were a lawyer on trial, would you represent yourself? If so, then you have a fool for a client.

    By the same token, QA is by no means the answer to all problems. QA usually only has the perspective of functionality, not usability. Because most QA is done in comparison to the product requirements, they probably don't have the depth of understanding that programmers do. On my team, I would prefer to cross-train all personnel in several areas so that we can have the appreciation of those areas and be better testers.

    OK, I've droned on long enough. Thanks for reading. michael.
  • So if you can't find all the bugs don't look for any? Put down the bong and come down from your ivory tower.
  • Question: What does a company do when it has more customers than it can serve?

    Answer: It expands. It moves into a larger building, hires new staff, or outsources.

    How do _we_, the open source community, solve this problem right now?

    -> Outsourcing: SuSE, RedHat and other companies focus on training, support and consulting.

    -> Prevention: There are a few dozen HOWTOs and FAQs, and the applications' documentation keeps improving and growing.

    -> Own support: Developers, project maintainers and many many mailing lists and IRC channels provide support to newbies, but these get crowded more and more (e.g.: IRCNet channel #linux)

    So far, so good. But in my opinion, we could do better:
    Why do I see on any project's homepage a call for new developers, but none for an editor?

    The concept "let's sit down and write an OS" worked fine so far, what about the concept "let's sit down and create documentation, Q&As and Howtos for this OS"?

    Open Source developers often gain reputation from their work. It looks fine in a letter of application when you can show reference code from the Linux kernel or any other large project.

    And I guess that would be the same in the media branch. So why don't GNU, OSDN, etc. face the facts and help improve the service? Let's call for editors and lecturers. Let's make the LDP the no. 1 source for questions on Linux.

    When starting this, Richard Stallman argued that software should be free, because he had the idea that being able to choose one's software leaves control where it belongs: with the user.
    But what if the user's unable to take control, because he doesn't understand?

    In my opinion, giving *good* manuals with the source and providing support and training is the last step towards Free Software. And - to get another bit more illusionary - let's see what comes next.

    Armin.
  • One way of course would be a body distributing certificates of quality. We can see this today with organisations like . These only provide very loose approval though, and for the final product only. The thing I love about open-source software is, as the name suggests, the source code itself. [gamelan.com]

    The drawback of certification, though, is that uncertified software tends to get rubbished, and loses out.
  • Any attempt to create some assurance of quality would undermine the GPL's explicit clause relating to warranties (that none is implied). You can't simultaneously imply there's no warranty and imply that certain environments are perfect. Any court of law in the US would see through this and declare an implied warranty to exist, and cause endless legal hassles for RMS and the FSF. (I can't speak for other countries; maybe Canada is more sensible, but I doubt it.)
  • That's odd. As an American, I can think of so many better reasons we should be the laughing stock of the world.
  • Exactly. As much as I hate cliches in general, if you want something fixed, fix it your own damn self, lazy.
  • This is only partially a myth... 1. Most hackers wouldn't use large software packages in the first place. It's called mix 'n' match; whatever works, works. 2. If you're not familiar with the code base, why are you trying to fix it in the first place? If you don't understand the code, then don't fuck with it. 3. Why should we? Testing takes too long. If it works, ... If it doesn't, don't use it. 4. Re: 2. -- Sometimes I think I'm the only one who understands common sense. Then I find another hacker.
  • A. No one looks forward to playing a game where you stand in a circle and lose if you do something. B. Structure, smucture, I hate 'em both. If it works, don't fix it, and if it is broke, don't brake it. C. And no, the internet did not start as a beautiful thing, nor is it still. The internet started as a network for the U.S. DoD; I hardly call that beauty. And with complete and total freedom, I hardly call the modern internet beauty either; it's chaotic, anarchic. Don't ya love it? -- Sometimes I think I'm the only one who understands common sense. Then I find another hacker.
  • Organizations require structure, and moreso as they grow. I can't say, though, that I look forward to the time when the OSS community becomes bureaucratic. It will happen and QA systems are sign posts of its happening. Pride in your work as a method of quality assurance will not disappear, but this method resembles the system used for goods produced by craftsmen (women). Remember that the internet began as a beautiful thing; the beauty remains, but reduced by many warts and blemishes.
  • I would encourage you to continue with this attitude, the more people like you there are out there who have no idea what the contents of a software engineering textbook are, the more highly I get paid to fix your messes...
  • For anyone wanting to begin using a quality methodology in their programming, I make the following suggestions:

    1) Read [Steve McConnell's] [construx.com] three books. It's a good start. What he writes is based on solid research, which he shares with you. His books aren't the complete answer to all problems, but reading them and using a bunch of the tools/methodologies he describes is a great way to begin doing things better.

    2) Do a Google search on "Software Capability Maturity Model" and start researching. Eventually you'll come across the Software Engineering Institute, and their [summary of CMM] [nasa.gov], which is well worth reading.

    3) Do a Google search on "Bell Canada Trillium" and start researching. The [Trillium model] [nasa.gov] is well-respected, and is based on ISO, CMM and other best practices. Where it differs is that it actually tells you what to do; the others tell you what you need.

    4) Do as much as you can with the structures that are described, plan on how to do them all, and adapt them to your needs. Identify what works and what doesn't, and fix those things that don't.



    --
  • Eh, you haven't had problems with Sendmail?

    And here I thought Sendmail was on top of the list of buggy programs which lead to security problems. Guess I was wrong.
  • Yes, some software does need to be subjected to a formal QA process. Do you really think any OSS tool is going to be incorporated into the Space Shuttle's systems without this?

    Of course, the latest version of nethack probably doesn't need a *formal* QA check - the players ensure swift resolution of any problems.

    Many commercial organizations use RedHat because of the support. They assume that some QA has been done to minimize the need for that support. This work is what helps build the RedHat brand.

    John Doe buys Colgate toothpaste because he trusts the brand - and what it implicitly stands for. If OSS is to take over the world, it needs to build those brands.

  • First, in response to a piece suggesting a framework for QA in open source software, you say:

    "This is the kind of bullshit thinking that drives companies out of America and to the waiting hands of Countries that welcome them for being just a company,"

    and then you say: "QA is very crucial for what I do, so I believe I am right in my thoughts."

    It seems to me your thoughts could do with a bit of clarification.

  • This is the kind of bullshit thinking that drives companies out of America and to the waiting hands of Countries that welcome them for being just a company.

    No, this is the kind of thinking that turns American corporations into global behemoths capable of crushing all competition. If open source wants to compete, ultimately it'll need to use some of the same techniques. Don't just dismiss QA because nothing you've ever done was important enough to need it...

  • As I read this article I tried to think of any piece of software where there was any sort of warranty. Most software comes with a warranty on the actual media should it be defective, and some provide limited technical support. But most of the "warranty" attached to software says something to the effect of, "if this breaks and you lose a bunch of money because of it, it isn't our problem."

    I mean look at how big Microsoft is when they release software that is notoriously buggy. People seem to be quite content to have unguaranteed, occasionally buggy software. OSS doesn't need a guarantee; its guarantee is that if you have the skill and knowledge you can fix your own damn bugs yourself. Its guarantee is that if you e-mail the makers of the software and ask them nicely, they'll probably fix your problem (if it's significant). Ultimately it isn't really a "guarantee," but those OSS products that grow and thrive generally have this feature.

    As a developer I have worked with many products that were defective. That is to say, they made claims about what they were capable of doing that were blatantly wrong. With one product I even ran into a situation where I told them that it didn't work with another product that they said it did. Rather than fixing it, they simply modified their website to no longer claim that it worked that way. Generally these products are black-box commercial software, and if they break I can't do a damn thing to fix them. If I'm lucky the fix pack for the software will come out a couple months down the line and fix it, but I may have to wait until the next version.

    On the other hand, with products like Tomcat, Apache, Linux, etc., I can dig in and look at what's wrong. I don't have to wait for a bloody fix pack or new version (if I know what I'm doing). And I have to say that despite having no guarantees of anything, I've found that the most responsive and helpful technical support I've ever gotten has been through an OSS mailing list, and I didn't have to pay a dime for the software or the support. After working with all OSS stuff in college (like I could afford commercial software), I found it tedious and frequently ineffective to try to get phone support from a commercial vendor.

    ---

  • Also, bear in mind that most commercial software comes with no warranty either. Just check the "shrink wrap" or "click wrap" agreements. "THIS PROGRAM COMES WITH NO WARRANTY, EXPRESS OR IMPLIED."

    Sure, there may be formal QA in commercial software, but what does it really mean if the user can't get his money back?

  • As another QA engineer, I second the motion. All too often QA gets relegated to the bottom of the food chain, but they belong at the top, right next to the developer.

    Some of the major quality problems I see with most non-commercial OSS are:

    1) Lack of specifications and requirements. A project that does not have a detailed plan right from the beginning is courting crud (and this is a major reason why a lot of commercial code is crap as well). I offered my unpaid services to a project a while ago saying "I will write a comprehensive test plan for you if you give me access to your specs and reqs". Unfortunately, the only documentation they had was the source code tree and a README that said "somebody write this...".

    2) Not adhering to the specs and reqs. This is almost as bad. All you users out there, get and read the GNOME and KDE user interface guidelines. If an individual GNOME or KDE program doesn't follow their self-stated rules, log a bug immediately.

    3) Passing the blame on to someone else. It's all too easy to blame another one of the myriad projects in a Linux or BSD system as being the culprit, but that's a copout.

    4) Passing off beta (or even alpha) software as "stable". Do you think just because Microsoft can get away with it that you can too? Don't call your project complete until every last one of your serious bugs is fixed, and issue a list of all remaining minor bugs and annoyances with work-arounds. And this brings up another point: get someone independent of the developer to rate bug severity.
  • Open software enjoys the benefit of massive code review which provides dramatic improvements in overall quality.

    So what? Code review is not quality. It's only the first tiny step on the long road to quality.

    The problem is I can't say how many people are testing it and how.

    Absolutely! QA is one of those non-glamorous jobs that OSS often ignores. But it's a needed role.
  • As a QA engineer, I have to approve the software before it can ship. Of course the CEO can ship it anyway, but it takes his explicit veto to do so, and he would find an empty engineering department if he ever did so. I suspect that there are quite a few companies like this, especially those that have been around for more than a few years.

    But some companies do put deadlines ahead of QA. If you work for such a company GET OUT NOW!

    One question I like to ask QA candidates is whether they would sign off on the software if their manager told them to. So here I am, fifteen minutes into the interview with another 45 minutes left, and the candidate says "yes" to the question. I reply, "I'm sorry, we don't hire rubber stamps for $25 an hour when they're only two bucks at Office Depot."
  • Sounds like Mozilla :)
    We have a growing community of QA and testing volunteers who have been working and learning QA process on one of the largest open source projects out there.

    mozilla.org provides daily Mozilla builds for 3 to 4 platforms here [mozilla.org]. We provide an open (and kickass) bug reporting and tracking tool, Bugzilla [mozilla.org]. Our QA and testing docs are getting better all the time (Mozilla QA [mozilla.org]), with published daily smoketests as well as detailed functional test suites for all areas of the product.

    If you're interested in getting involved with a one-of-a-kind open source QA and testing project, please take a look at our Getting Involved [mozilla.org] pages or stop in to #mozillazine or #qa on irc.mozilla.org. We have a weekly help session (BugDay) every Tuesday for new folks interested in getting involved. So if all this talk about open source quality has sparked your interest, let us know.

    -Asa
    Mozilla QA and Stuff
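
    The comment above mentions published daily smoketests. As a purely hypothetical sketch of what a scripted smoketest along those lines might look like (this is not Mozilla's actual suite; the binary name, directory layout, and checks are invented for illustration, and Python is used only for brevity):

      # Hypothetical smoketest: quick sanity checks on a fresh daily build.
      # BUILD_DIR and the binary name are assumptions, not any real project's layout.
      import os
      import subprocess
      import sys

      BUILD_DIR = "dist/bin"
      BINARY = os.path.join(BUILD_DIR, "myapp")

      def check(description, ok):
          # Print a pass/fail line so the result can be published as-is.
          print("%-40s %s" % (description, "ok" if ok else "FAILED"))
          return ok

      def main():
          passed = True
          # 1. The build produced a binary at all.
          passed &= check("binary exists", os.path.isfile(BINARY))
          # 2. It starts, prints a version string, and exits cleanly.
          try:
              result = subprocess.run([BINARY, "--version"], timeout=30,
                                      capture_output=True, text=True)
              passed &= check("--version exits 0", result.returncode == 0)
              passed &= check("--version prints output", bool(result.stdout.strip()))
          except (OSError, subprocess.TimeoutExpired):
              passed &= check("--version runs at all", False)
          sys.exit(0 if passed else 1)

      if __name__ == "__main__":
          main()

    The point is only that a smoketest is cheap, runs against every build, and publishes a pass/fail result; the detailed functional test suites come afterwards.
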
  • I trust you, and I agree. I also make my living as a developer.

    FINAL test (at any level) really does need to be done by a third party.

    The QA person should be in on code walkthroughs - but at that level, his role is more to throw questions at the developer, and the developers will be able to tell the QA person if it will work.

    AFTER the developer has run his OWN unit test, you feed the unit to QA, and trust me, unless you're a VERY strange developer, a GOOD QA person will find things that you didn't.

    Once you are into final test, QA becomes invaluable. Developers just don't have the time to dedicate to things like full regression testing, and that alone can save your butt.

    OK, let's just say you're developing Windows software (Boo, Hiss, I know - it's just easier to give a screwed-up example).
    Have you:
    Installed on Win95 (Base, NO SPs - unless you don't support that)
    Win95B?
    Win98?
    Win98se?
    NT4.0?
    NT2k?
    (and then run a FULL test of everything?)
    Then have you installed Microsoft Office over your app, and run all the tests again? (All OSes, please.)

    Then have you done it in the opposite order (base OS + Office first), then installed your app?

    If you want a really stable system, you'd better try all of these (plus different versions of IE, etc.)

    Of course, we have the same problems in Linux - have you tried it on as many different flavors as you support? With other software installed (the worse-behaved the better)?

    This is where a GOOD QA department makes their living. Gad, I wish my present company had one. (A rough sketch of the developer-side unit testing mentioned above follows this comment.)
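
    A minimal sketch of the kind of unit test a developer might run before handing the unit over to QA, assuming Python's unittest module (the "wordwrap" module, its wrap() function, and the test cases are invented for illustration only):

      # Hypothetical developer-run unit test using Python's unittest module.
      # "wordwrap" and its wrap() function are invented for illustration only.
      import unittest
      from wordwrap import wrap

      class WrapTests(unittest.TestCase):
          def test_short_line_is_untouched(self):
              # A line shorter than the limit should come back unchanged.
              self.assertEqual(wrap("hello world", width=40), ["hello world"])

          def test_long_line_is_split_within_limit(self):
              # No wrapped line should exceed the requested width.
              for line in wrap("the quick brown fox jumps", width=10):
                  self.assertTrue(len(line) <= 10)

          def test_empty_input(self):
              # An edge case the author knows about and can check in seconds;
              # QA would eventually find it, but far more expensively.
              self.assertEqual(wrap("", width=10), [])

      if __name__ == "__main__":
          unittest.main()

    Only after a suite like this passes would the unit go on to QA for the black-box, install-matrix, and regression passes described above.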

  • Open Source really doesn't need a set QA system, it already has one. It's called peer review. Say Hacker X creates a program with obvious bugs. He/she releases it, and Hacker Y downloads it. Because Hacker Y has access to the source code, he/she can find the bug and fix it.

    While peer review cuts back on bugs, it's no substitute for QA. What you fail to mention is that a hundred other people download the same version as Hacker Y, and it's just as buggy as ever, in fact more so than a project that had QA. Also, there is less pressure on the developers to test properly, since "we can always just release a new version" is in the back of their minds.

    I think the simpler we make the whole process, the better.

    I agree. But that doesn't mean ignoring even the most rudimentary QA, putting out a release that probably has a lot of bugs, and hoping that other people will find them.

  • Open Source really doesn't need a set QA system, it already has one. It's called peer review. Say Hacker X creates a program with obvious bugs. He/she releases it, and Hacker Y downloads it. Because Hacker Y has access to the source code, he/she can find the bug and fix it.

    This is a myth. Problems:

    1. Most people, even hackers, don't have the incentive or time to track down a bug in huge software products.

    2. Fixes made by people who aren't familiar with a code base are unreliable. Changing code is scary if you don't understand the code around it.

    3. Most people who touch OSS code don't do regression testing. Heck, most OSS doesn't have test suites available.

    4. Many "bugs" end up being differences of opinion or simply a user not understanding things properly. Jumping in and fixing those is bad.
  • I have a qualified disagreement with this statement also.

    While I agree that THE developer is not the best person to check their own code, ANOTHER developer (esp. if the other developer hasn't been working too closely w/the 1st one, but has a firm understanding of the internals of the existing code) is an excellent way of improving one's code.

    It's a question of the granularity of testing - a QA test guy can only test the overall product like a black box, exploring errors that can only be activated directly through the external interface, and if they discover a problem, they can only describe the symptoms.

    A _developer_, on the other hand, can test & review the code from the inside out, resulting in robust components & code maintainability, in which an external tester has only an indirect interest (and a very limited set of ways that they can reach some of the internal code). The developer can also point directly to the source of problems, saving incredible amounts of redundant debugging effort (e.g., to reconstruct that really annoying bug which only shows up under a certain race condition...)

    A good QA test suite is still essential to make sure that final overall product looks & behaves like it was supposed to, but you don't make robust bottom up, incremental improvements by blackbox testing.

    Unfortunately, most of the companies I have seen/worked at aren't willing to expend the resources necessary to have another developer THOROUGHLY review & test each other's code. (As far as I can tell, the managers seem to think that this kind of activity is "not productive", and/or it will cost too much to hire enough developers to do such testing in a reasonable amount of time.) Usually, I end up in a few code reviews where a bunch of people scan the code quickly, looking for typos or uninitialized pointers or other such simple problems.
  • I didn't intend to say that the QA teams are composed of grunts, but that unless they are MUCH more talented developers than the actual developers (which is definitely not the case if they are usually composed of "entry-level" programmers), they aren't going to be able to come up with ways of testing the internals of the code & pinpointing errors as well as people who were on the original development team.

    For the most part, the QA team will test the interfaces to the various components of the code (not just the UI interface), as well as correct any superficial problems.

    For finding & correcting deep-down, fundamental design-type problems, you need experienced developers with a gut-level instinctive understanding of the code that they're working with (unless you're talking about a trivial amount of code).

    Furthermore, the best place to find & correct bugs is early in the development process - BEFORE a lot of structure has been built around that code. By the time a serious bug is discovered by a QA team, the developers are going to have to do a lot of work to patch the problem - and in the face of a deadline, this usually results in sloppy work. Whereas, if the bug is discovered earlier in the development process, it takes much less effort to correct the problem.

    You can still say that the QA team is responsible for finding THOSE kinds of bugs earlier & earlier in the development cycle, but then the QA team pretty much gets merged into the development team - which was my point to begin with, that the QA process should really be an intimate part of the development process, from bottom-up and top-down.
  • A much simpler solution is to just publish the test plan for the package. Then the user of the software can decide if it has been adequately tested. Just because the author (or a third party) claims that it has been tested to a certain level doesn't mean anything, especially if the certification is accompanied by a disclaimer of liability.

    I work for a software consulting company and one of the deliverables on every project is the test plan. But not every customer is willing to pay for exhaustive testing (and in some cases they really don't have any need for that level of QA). On some projects, the test plan might be "we ran through the changes and it appears to work". In other cases, it would be thousands of test cases, with both automated regression tests and manually executed cases. For a program that prints labels for a CD, you don't really need extensive QA. For a compiler, you do.

    It certainly wouldn't HURT for some person or group to publish different levels of certification, along with suggested test plans, tools for managing test cases and plans, and automated test tools (perhaps based on DejaGNU, etc.) Sounds like an excellent Open Source project in fact. Providing tools to actually help authors and development teams would be much more useful (and much more work) than creating a committee to write up some bogus standards that aren't really appropriate for any specific project.

    If the author of this question is really serious about the quality of OSS, then I'd suggest that they just go test something. If you're concerned about the quality standards for a specific piece of software, contact the author and volunteer to set up an automated regression test system and generate sufficient test cases to prove it works. I'm sure that the developer would welcome the assistance. Open Source works much better when led by example, rather than by creating some arbitrary standards. (A rough sketch of such a regression setup follows below.)
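
    As a purely illustrative sketch of that "set up an automated regression test system" suggestion, here is one minimal way a volunteer might script it. The program name and the tests/ directory layout are assumptions, Python is used only for brevity, and a real effort would more likely build on an existing harness such as DejaGNU, as mentioned above:

      # Hypothetical regression runner: for each tests/NAME.in file, run the
      # program under test on that input and compare stdout to tests/NAME.out.
      import glob
      import os
      import subprocess
      import sys

      PROGRAM = ["./mytool"]   # invented name for the program being tested

      def run_case(in_file):
          expected_file = in_file[:-3] + ".out"
          with open(in_file) as f:
              stdin_data = f.read()
          with open(expected_file) as f:
              expected = f.read()
          result = subprocess.run(PROGRAM, input=stdin_data,
                                  capture_output=True, text=True)
          return result.stdout == expected

      def main():
          failures = 0
          for in_file in sorted(glob.glob(os.path.join("tests", "*.in"))):
              ok = run_case(in_file)
              print("%-40s %s" % (in_file, "PASS" if ok else "FAIL"))
              if not ok:
                  failures += 1
          # A non-zero exit status lets this run from cron or a build script.
          sys.exit(1 if failures else 0)

      if __name__ == "__main__":
          main()

    Publishing the runner and the tests/ directory alongside the release is what turns "we tested it" into something a user can inspect, rerun, and extend.
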
  • This kind of thing (QA or testing, whatever you want to call it) goes into the category "Things that you could do to aid an open source project even though you aren't necessarily a programmer"

    Anybody with experience in larger projects will be able to verify that when projects of any form or shape get beyond a certain number of people involved, there is overhead involved in maintaining the project.

    One could think of administrative duties (such as maintaining a web site and communications beyond a message board and a mailing list) but also the things done in commercial software development that are not actually coding.

    In other words: why not have QA people (good ones, not bureaucratic, satanic, ignorant organisational outcasts) as a part of an OSS project. That would free the developers from (often tedious) testing work and it would speed up the elimination of bugs.

    A possible scenario is that QA people who (like programmers) want to contribute their time would do nothing other than keep a box with a known (and published) configuration (VMware!), install new revisions of a product on it, run extensive test suites, and feed the bugs back into the project.

    btw: this is quite common for Big OSS Projects, but the smaller ones could benefit from it just as much.

    Food for thought, I hope.

    Schmolle

    (QA Manager -- FT.com)
  • The only person who can assure quality is a third party (Non developer), who does a full QA test on it.

    I agree with this... I feel that a developer can, with discipline, build a comprehensive test plan that scours every requirement, etc. but you just can't beat a 3rd party for this. It's far too easy to gloss over areas you "think" are working, or to not find a bug because you just didn't think "anyone would ever DO that".

    Further, 3rd parties (or any non-developer) can really help point out interface issues. I know what a boolean operator is, and how to build a query string using them, but I wouldn't dream of asking a search engine user to enter a query in disjunctive normal form, etc. Yet I recently found myself using a bunch of "tree" terminology when trying to write prompts and messages for an application I was working on that organizes user contributions in a tree structure. It took a non-CS person to try out our interface and laughingly ask what it meant for a form field to be "non-empty", etc... And so on...

    --8<--

  • For IBM, this is no problem; IBM can say: "we'll make it work or give you back your money." That's credible. For Joe's Garage Software, this statement may not have the same level of credibility. If Joe has some past history in business, he may well be able to get a performance bond to stand behind his work, and ensure that funds will be there to fix problems.

    It seems to me that this problem really bites the folks who have a really new package, and no past history. I'd guess that for sendmail or apache, there aren't any difficulties here; lots of places will provide support for them. You don't even need the developers anymore for that. Third-party support has been the answer to this question so far in free software. But linuxcare isn't going to offer support for something they've never even heard of.

    Maybe the beginning of an answer lies in that last paragraph: software with a good reputation WILL be supported by third parties, and you (or your client) will be able to hire some outfit like linuxcare to ensure that your application runs for a client, IF you have earned a good reputation. So, to provide quality assurance in free software:
    First, build it and make it good.
    Second, keep developing it, make it better, and get users to adopt it.
    By now, you should have a good reputation and lots of firms standing ready to stand behind your work.
    I'm not sure how we fit a quick IPO and early retirement in here, but maybe you shouldn't expect that out of the free software model.

    Nels

  • I would agree with some form of Quality Assurance. The easiest thing would of course be clear code documentation. I know that many a programmer wouldn't be too fond of it, but it would help for implementation. Companies are far more interested in software that has some kind of built-in reliability level. We're looking to get some software coded for us, and documentation of the code is really vital to us, so that we can make changes or easily fix any bugs found.
  • Strong, but qualified, disagreement.

    A developer can be the best person to test his code. But only if he truly respects the value of testing. This is a respect that must be cultivated in most people. Unfortunately, most projects (volunteer or commercial) fail to do this. Commercial projects generally forsake the effort and dedicate a different team to testing. Most (not all) volunteer projects simply forsake it (but ESR's bazaar principles can make up for this lapse).

    I don't disagree that even a conscientious programmer can be blind to his own mistakes. However, if he cares about quality, he will also be the most acutely aware of the weaknesses and possible problem areas in his code. This should let him target his testing to maximum effect. A dedicated tester, on the other hand, will basically shoot at random parts of the code.

    Don't get me wrong--I'm glad my company has testers. They are more cost-effective at catching many bugs, especially those that only show up when all the pieces are put together. But they should only start testing after developers do their own testing and debugging. Otherwise, their time is wasted reproducing, confirming, and isolating simple bugs that the developer could diagnose and fix in minutes.

    If there can only be one tester, it should be the developer, if he is conscientious.

  • If your testers only test random parts of the code, then your testing department is seriously lacking. Part of the duties of a good QA department is to create test plans that cover the entire code base.

    Maybe in very mature development organizations. But let's face it: the space of inputs is enormous and testing resources are limited. Since tests are basically discrete (to use a rough analogy, a single test covers a measure-zero subset of input space), coverage in any rigorous sense is a mirage. Inevitably, entire categories of possible inputs will be missed.

    Saying that testers fire "at random" was obviously an exaggeration. They certainly have some notion of what's important to test. But they don't have the sniper's accuracy of a careful developer.

    Also, the QA department should be involved right from the very beginning, even before a single line of code is written.

    Bah, this has way too much overhead for most projects. Give me a realistic example of an early design mistake that would likely be found by a tester.

  • I know I'm by no means the most qualified to bring this up. I've seen this addressed by far wiser folks than myself on both sides of the issue, but for this topic I believe it is appropriate.

    At what point in time does ease of use and quality start to eat into the profits of a company that bases its entire revenue stream on support? Just so I have someone to pick on in this discussion, let's pull a RedHat out of a hat. They're a fair example, being that they essentially don't sell software, but a service supporting a software package.

    Mind you, I don't mean to suggest that RedHat makes any attempts to make Linux difficult to use, and there's certainly ample evidence to suggest that they're invested in projects to ease usability. Where the concern comes from is in the long term, when ease of use starts to collide with the only product they truly sell, support.

    In the traditional software market, support is not a profit center. Instead, it's considered a liability, and folks are charged for the service as a way to cover costs. By its very nature this model has to strive to make ease of use a top priority to be profitable.

    If software configuring becomes too simple, this is going to have an impact on support oriented companies. Perhaps the very notion isn't even possible. Still, one method must strive (not saying it's ever gotten there) for support free solutions, while the other must involve enough difficulty as to require a support agreement to make money.
  • There is no quality assurance with any free software. That's why fee-based software is so much more popular (like Windows).
  • You beat me to it. There is NO WAY the developer is the best person to assure quality. In fact, that person is very often the absolute worst.

    Nice of you to put in that comment about OSS projects, but I have to disagree. I've found most OSS projects to be far inferior in quality to commercial projects (and to the people who can't read, that does not mean that all OSS has inferior quality to commercial software).


    --

  • I have been thinking about this since /. posted a story on the ethics of free software [sdmagazine.com], where the author makes a big point that it may be more ethical if a company provides software for $50 that never crashes and comes with a money-back guarantee than free software with no warranty.

    It occurred to me that I can't see anything stopping anybody from selling GPL software with a warranty, a warranty provided by the company that sells the software, not the developers. The no-warranty clause is there to make sure the developers will not get sued for failings, but businesses selling free software should be able to provide a warranty, in connection with e.g. a support program.

    I don't know if this already exists, but I think it would benefit the OSS community, as such a company is likely to do extensive and formalized tests of OSS software, and come back with patches if they find bugs.

    Also, it may impress the suits that a software company offers a warranty for a product others develop, out of their strict control.

  • This is the kind of bullshit thinking that drives companies out of America and to the waiting hands of Countries that welcome them for being just a company. In a twisted way, you can say your thinking is a distant cousin to all the extremely ignorant political correctness that is weakening America's position as a major country and, quite frankly, making it the laughing stock of the world.

    You want something that you can develop and change for yourself, yet you want a warranty or guarantee that it will work just like you expect it to... COME ON! At the very least, where's your pioneering spirit? Do you also want to climb Mount Everest but expect someone to guarantee that you will not fall or hurt yourself in any way?

    Stop whining and release your creative spirit! If it doesn't work like you want it to, then add the code that enables it to sing or dance for you!


    --

    Vote Homer Simpson for President!

  • "The GPL is like making adultery illegal: a net loss of individual freedom for a net gain in morality."

    This reminds me of George Carlin talking about prostitution: "Selling is legal, Fscking is legal. Why isn't selling fscking legal?!"

  • "As a QA tester, i can assure you that a developer is not the best person to assure quality. They just don't see their own bugs, or code that isn't to spec. (This may not always apply to OSS projects)."

    Damn right on that one. Tim Sweeney coded the Unreal Engine, complete with mouse lag, sound system lag, and network code that outright sucked (even he admits to this). And he STILL won't address the lag problems in Unreal Tournament; he seems too arrogant to even admit defeat.

  • Remember the printf() bug scattered about ALL the libraries in Linux, Unix, and BSD! That was Linux's back orifice, and it could've been solved by the engineers actually checking their syntax! Gee! Ever notice how the bible-thumping software engineers never seem to check their syntax?
  • Right now, I don't trust anything GPL. Not solely because it's free, but because it's free to be hacked, cracked, and whatnot. Sure, it's illegal to do that, but who says that it's impossible? Haven't you heard stories about someone using the source code for an ftpd and using it to DDoS FTP sites? That's a very possible scenario, and that's why I'm partially opposed to open source: the L337 H4X0R factor. Just look at what happened to Gnutella; flatplanet.net used the source code to force advertising on all the clients.

    Releasing the source code to your software means taking the risk of having someone hack it for malicious purposes. If it wasn't for Pure Server, Q3 would have cheaters all over the world due to one unethical programmer fiddling around with the source code (wait, that's already happened!). Q2 is a good example too; remember the Ratbot? Counter-Strike has the aimbot. Let's face it, there's plenty of immature programmers out there who don't want to play fair (in more ways than one).

  • by Howard Roark ( 13208 ) on Saturday September 23, 2000 @04:54AM (#759746)
    I think I can shed some light on this. In my company we track technical support calls and attempt to characterize them. Over time, as our efforts to improve product quality have paid off, we notice that the character of our technical support calls has changed. Instead of "Help! I can't get your product to work!" calls we get more and more "Help! How can I use your product to solve my particular problem?" calls.

    Just as Red Hat makes it easier to install their product, their support offerings will change too. You actually don't make as much money on support helping a single user install a buggy version as you do helping a large customer with a large implementation solution.
    --
    Howard Roark, Architect
  • by Howard Roark ( 13208 ) on Saturday September 23, 2000 @04:27AM (#759747)
    While it is well known that the Free/Open method of software development produces high-quality software, some users still need formal QA. I actually run a QA department in a software company and see a real need for this. For example, pharmaceutical companies in the US are required by the FDA to validate software used for production purposes. I think this is very reasonable.

    Open software enjoys the benefit of massive code review which provides dramatic improvements in overall quality. That is, if it got a massive review. With most projects, we just don't know, and that is the QA problem. Quality systems require that projects be developed according to documented procedures and that "quality records" be produced that show that the documented procedures were followed. You often hear the phrase "Say what you do, do what you say."

    I happen to be working with the final beta of KDE 2 (which is very impressive, by the way) as we speak and I know that a lot of people are looking at it and are finding bugs and reporting them to the developers. The problem is I can't say how many people are testing it and how. I don't know if a particular feature or environment is being ignored by the testers. I don't know if all the documentation has been reviewed, etc.

    I think it would be useful for the community to have a set of agreed upon standards for development. We already have some, like the GNU project's coding standards, but we need more and they have to include a method of producing a trail of quality records that backs up any claim that a project followed its standards.
    --
    Howard Roark, Architect
  • by Junks Jerzey ( 54586 ) on Saturday September 23, 2000 @12:46PM (#759748)
    In commercial software houses, changes are made all the time by people who don't understand the code. Oftentimes, the person who wrote the code left several years ago, and no one is really even sure it works completely with the latest library versions. However, with free software, projects tend to be much smaller, and so figuring out what is going on is fairly trivial.

    Sorry, that's wishful thinking. In a commercial development house, there are people who work with the codebase day in and day out. Even if the original author has left (very common), there's always someone who can scribble on a white board for 10 minutes to set you straight. Often what looks like an obvious fix turns out to be incorrect, making you glad you had that 10 minutes of scribbling. Maybe the biggest problem with fixes to OSS is that you don't know the reputation of the person making the fix. Maybe it's from a careless programmer. Maybe it's from someone who can program but doesn't understand the importance of thorough testing. Who knows?
  • And these two numbers, hours tested and how many doing the testing are orders of magnitude greater for Linux than for any Microsoft OS. So while Microsoft spends orders of magnitude more dollars on "Quality Assurance" programs, Linux comes out more stable. You tell me. Which is the better indicator of product reliability?

    You don't know what you're talking about. For some bizarre reason, Linux advocates simultaneously claim that Microsoft is the worst run software company in the world, and then go on to use Microsoft to prove the "superior" OSS development model.

    There are other software companies, you know. Ones that make Linux quality look like the crap that it is. Yes, I said crap. Only in comparison to, say, Win/95 is Linux stable and reliable. It is riddled with security holes and has many bugs. You personally just don't see them because you don't stress your box in any sort of way.

    Linux is not even close to the best version of Unix. Take a look at AIX sometime, or even Solaris. Got news for you: commercial Unix blows Linux out of the water. Or heck, look at the AS/400 if you want to see reliability.

    The bottom line is that the OSS development model is far from proven, and in fact, has very few examples of producing top quality software. Apache is one. And I can't think of another. Almost all the best software is closed source and commercial. And one of the reasons is having a QA and testing staff.


    --

  • by Vanders ( 110092 ) on Saturday September 23, 2000 @03:12AM (#759750) Homepage
    The best person to assure quality is the developer? No chance! I assume you've never worked in a commercial development environment?

    As a QA tester, I can assure you that a developer is not the best person to assure quality. They just don't see their own bugs, or code that isn't to spec. (This may not always apply to OSS projects). The only person who can assure quality is a third party (Non developer), who does a full QA test on it. Trust me on this one.
