


Testing Products for Web Applications? 250
"I've seen a lot of automated test suites advertised, and I've always assumed that they were no substitute for careful testing by a human. However, as the number of web pages we need to maintain grows, I've begun to wish for something we could kick off at night that would follow all the links on our system and fill in values for the various forms it encountered; then, when we arrived the next morning, there'd be some sort of report available detailing its findings. It could flag any pages that returned something obviously incorrect, such as a SQL error, a blank page, or just the word 'error'.
Does such a thing exist or am I just engaging in wishful thinking to imagine that there might be something flexible enough to do the job? What do other people do to test their software?"
mercury (Score:4, Informative)
Re:mercury (Score:1, Informative)
Downside: Expensive (but it is all relative to the savings to be made.)
Re:mercury (Score:2)
So, if you're looking to test 5,000 simultaneous users, open the pocketbook.
If everything can be exercised via HTTP calls, give OpenSTA a try. http://www.opensta.org
That won't cover interface changes, necessarily, but it will provide load testing.
Re:mercury (Score:4, Informative)
Re:mercury (Score:2, Interesting)
This depends entirely on how important QA is to you. I see QA and development as two sides of the same coin. QA people should be accustomed to scripting. Loops, variables, arguments, procs and functions: this is coding. Everything else is just perspective. Simple black-box stuff is fine for training, but QA people need to learn more to effectively describe the deeper issues.
Inevitably, development pushes the due date for their code, but the final date does not change. Automation is the best way to do regression tests. The human eye can then focus on new functionality.
Re:mercury (Score:3, Interesting)
The scripting tools are nice: recording with a browser. But the best part of that software is being able to script (read: program) using a "pseudo" C language.
The libraries at your disposal are awesome. You can post random data, or data from an include file, and then compare every value received from your post.
You can flag transaction failures and log them.
Doing so will even enable you to stress test it. Let's say you build one script checking every function of your web app. Then add some randomization for values, logins, passwords, etc.
Then put 100 clients doing those things at the same time.
The report generator is neat and easy to read.
There are many ways to test your application from DCOM, SOCKET and HTTP requests.
Check out LoadRunner from Mercury Interactive.
This software will probably give you all you need, and sometimes more than you need. The learning curve is steep, but it's worth every bit of it.
Re:mercury.. how about ACT? (Score:2)
If you have a copy of VS.NET, you can use the included ACT to test applications. You basically click on the links you want to test as it records and then you can run the scripts, simulating number of users and for how long. Since it outputs to a
I wouldn't have mentioned this on
Re:mercury (Score:2)
Roll your own? (Score:2)
With the right Perl modules, or even perhaps in shell script with things like curl, you could roll your own HTTP/HTML regression tester. It won't handle JavaScript, of course, and won't notice browser-dependent problems. You might be able to find a generic JavaScript/DOM library for Unix to do the JS thing from your regression tester, though.
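The roll-your-own idea above needs only two pieces: pulling links off a page and deciding whether a response "looks broken". A minimal sketch in Python's stdlib (the poster suggests Perl or curl; the pattern is the same). The error-marker list and the same-host/crawl policy are assumptions, not a standard:

```python
# Sketch of a roll-your-own link checker: collect links from each page
# and flag responses that look broken (SQL errors, blank pages, or just
# the word "error", as the original question suggests).
from html.parser import HTMLParser
from urllib.parse import urljoin

# Assumed heuristic markers -- tune these for your own application.
ERROR_MARKERS = ("sql error", "ora-", "stack trace", "error")

class LinkCollector(HTMLParser):
    """Gather every <a href> seen while parsing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html, base):
    """Return absolute URLs for every link on the page."""
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(base, href) for href in parser.links]

def looks_broken(status, body):
    """Heuristic: non-200 status, blank body, or a known error marker."""
    if status != 200:
        return True
    text = body.strip().lower()
    if not text:
        return True
    return any(marker in text for marker in ERROR_MARKERS)
```

Hooked up to urllib.request in a breadth-first loop (restricted to your own host, and kicked off at night from cron), this is most of the nightly report the original question asks for.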
Re:Roll your own? (Score:3, Insightful)
Don't worry. (Score:2)
Slashdotting. Don't worry. It was probably just a little Slashdotting. Works fine now.
Another topic -- The U.S. government, Microsoft: Before you support the U.S. government in invading Iraq, you should know that the U.S. government has been (mostly secretly) causing violence in numerous countries. See What should be the response to violence? [hevanet.com]. (The article takes a long time to load, and is badly in need of updating.)
My research indicates that the U.S. government support for violence and Microsoft's inability to treat its customers well are related. They are both are part of a social breakdown caused by a kind of low-level mental disturbance in which people become progressively insensitive to themselves and others. See Windows XP Shows the Direction Microsoft is Going. [hevanet.com]
For testing the HTML itself:
Amazingly great software finds HTML errors, and edits HTML:
HTML Tidy (Win 32 version) [rcn.com] finds HTML errors and corrects them automatically if possible. See the configuration options for HTML Tidy at HTML Tidy Quick Reference [sourceforge.net]
HTML Tidy works best as a plug-in to HTML Kit [chami.com]. (The command-line software is used as the plugin.) HTML kit positions the editor at each line with an HTML error when you click on the error.
Truly awesome free software!
Re:Roll your own? (Score:2)
You're obviously out of your league if you think writing a regression suite for an HTML user interface is really difficult or money-sucking, and you're a clueless consumer of bullshit if you actually go out and buy such products from other software companies.
The only rolling you should do is another joint to take your mind off of the deadbeat life you've had since your overpaid, useless position was eliminated in the dot-bomb blowout.
If you didn't already know about it... (Score:5, Informative)
http://webtool.rte.microsoft.com/
The tool is too limited for complex apps & tes (Score:2)
I post the same thing and get modded down. (Score:2)
It's the same information (I missed this post earlier), plus an additional link to WCAT, which is not easily found.
Re:If you didn't already know about it... (Score:2)
We also could have used PureLoad, which I recommended: Java-based and not too expensive. (Wind River's LoadRunner (sorry if I am off on names here) was extremely, extremely expensive.)
The uselessness of WAST came about because I was testing a Tomcat-based web proxy, and (for one example) error message pages would simply contribute to the average bytes transferred without telling me there was an error, so it is very hard to tell what is going on unless everything is working perfectly. I used sar (Linux) to get data from the (Linux) server. I would have been much better off making scientific tests than trying to outguess such a squishy app as WAST. Get PureLoad!
Cactus (Score:3, Informative)
try a latka (Score:2, Informative)
It's a Java/XML solution for writing automated suites of functional tests. And it's free.
you forgot the link (Score:3, Funny)
Cactus (Score:4, Informative)
Try using people and kids! (Score:3, Insightful)
Seriously, I don't know of any software that does that, but if you find one, I'M INTERESTED!
I don't know if you're looking for advice or not, but try putting in negative numbers or things like #(-3+1000)
Hopefully that helps a little
Re:Try using people and kids! (Score:2)
I don't know if you're looking for advice or not, but try putting in negative numbers or things like #(-3+1000)
Never ever, ever trust user-supplied data. <input type="hidden"> fields are user-supplied, cookies are user-supplied, etc. It shouldn't matter if they modify a GET param when you expect a POST. They can forge the POST nearly as easily as the GET.
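As a concrete illustration of "never trust user-supplied data": run the same validation no matter which channel delivered the value (query string, form body, cookie, or hidden field). The field name and range below are invented for the example:

```python
# Sketch: one validator for a user-supplied value, applied identically
# whether the value arrived via GET, POST, cookie, or a hidden field.
# The [1, 999] range is an illustrative assumption.
def parse_quantity(raw, lo=1, hi=999):
    """Accept only an integer in [lo, hi]; reject everything else."""
    try:
        value = int(str(raw).strip())
    except (TypeError, ValueError):
        raise ValueError("quantity must be an integer")
    if not lo <= value <= hi:
        raise ValueError("quantity out of range")
    return value
```

This is exactly where the parent's "negative numbers or things like #(-3+1000)" probes should bounce off: both fail the integer parse or the range check instead of reaching your order-processing code.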
LoadRunner / TestRunner (Score:1, Insightful)
Cactus or HTTPUnit (Score:5, Informative)
Both Cactus [apache.org] and HttpUnit [sourceforge.net] allow you to do unit tests on web components. Both are extensions of JUnit. Cactus allows you to do unit tests of servlets and JSPs, while HttpUnit allows for unit tests of the resulting HTML code. (Cactus also integrates HttpUnit to a certain degree.)
Obviously, these tools are targeted at Java development. I have less experience with HttpUnit than with Cactus, but I imagine it could be used as a general test suite.
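HttpUnit itself is Java, but the fetch-and-inspect pattern it embodies can be sketched with nothing but the Python standard library: stand up a throwaway local server, request a page, and assert on what comes back. The page content here is invented for the demo:

```python
# The HttpUnit-style fetch-and-inspect pattern, sketched with Python's
# stdlib against a throwaway local server (HttpUnit itself is Java).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    """Serves a fixed page standing in for the app under test."""
    def do_GET(self):
        body = b"<html><title>Login</title><form action='/login'></form></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.status, resp.read().decode()

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, html = fetch(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()

assert status == 200
assert "<form" in html  # the page exposes the form we expect
```

The real HttpUnit adds an object model on top of this (forms, links, tables you can set and click), but the shape of a test is the same: request, parse, assert.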
Re:Cactus or HTTPUnit (Score:2, Informative)
protocol level. There is an object model for parsing elements of web pages
to check values, set form elements, etc. While the tests themselves are
written in Java (not to mention the tool), I believe the tool is capable of
testing sites created using other tools and frameworks. I've not used it
myself, but it does seem pretty capable from the docs and things I've heard
from folks who have.
Re:Cactus or HTTPUnit (Score:2)
HttpUnit is good for probing web pages, parsing content, and verifying that the links off them work. Cactus is more for testing the classes behind the web page, if that makes sense: a kind of RPC back door into the beans.
HttpUnit works against any web site, not just Java; it just gets, posts, and analyses the results.
The best book on HttpUnit is 'Java Tools for Extreme Programming'; 'Java Development with Ant' also looks at it from the context of automated build and test processes. You can see that book applied in Apache Axis, under test/httpunit.
-steve (who co-authored java dev with ant)
Re:HTTPUnit (Score:2)
-a good trick with httpunit is to run it under Ant's <junit> task, then format the results into a summary page -this never fails to impress.
Web Site Test Tools (Score:3, Informative)
Re:Web Site Test Tools (Score:2)
I suspect it would be much easier to use a real web browser to run tests than to emulate one with Javascript.pm.
More important ... Interface testing (Score:3, Insightful)
So I guess my point is, make sure you don't simply rely on automated testing. A bot won't get sick of clicking unnecessary buttons, and won't develop RSI. Humans will, and you'll get great feedback because of it. At my old company, the programmers were very nice about fixing these flaws once I brought them to their attention, and grateful for our input.
Cheers,
Anaphilius
Automated testing. (Score:3, Informative)
The only automated testing tools you can find is for regression tests. Basically, you make "build 1". You use the tool to 'record' the tests you currently run, and have it check for successes and failures. You make "build 2", and run the tests, to ensure everything that once worked, still works. Now you test the new stuff, record these tests with the tool, make "build 3", etc...
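The record/playback idea above boils down to: save a fingerprint of every page on build 1, re-fetch and diff on build 2. A minimal sketch in Python, with fetching stubbed out (a real version would issue HTTP requests; the commercial tools also record the click path, not just URLs):

```python
# Minimal record/playback regression idea: record a checksum per page on
# build 1, replay on build 2, and report what changed.
import hashlib

def checksum(body):
    return hashlib.sha256(body.encode()).hexdigest()

def record(urls, fetch_page):
    """Build 1: save a golden checksum for each URL."""
    return {url: checksum(fetch_page(url)) for url in urls}

def playback(golden, fetch_page):
    """Build 2: return the URLs whose output changed."""
    return [url for url, digest in golden.items()
            if checksum(fetch_page(url)) != digest]
```

Checksums are deliberately blunt: any change (including a legitimate one) shows up, and a human decides whether it's a regression or a new baseline to re-record.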
There are three major companies with good automated regression tools. Mercury Interactive's WinRunner, Rational's Robot, and Compuware's QA Center. All of them are great tools (and you can get them packaged with load testing tools if you'd like).
Re:Automated testing. (Score:2)
Excluding, of course, things like HttpUnit [sourceforge.net], where you write code that drives a simulated browser and then check the results.
I've used it for automated testing of a couple of sites, and I like it plenty. Between HttpUnit, JUnit, and Test-Driven Development [objectmentor.com], we launched a complicated web site and have had it in production for six months with a total of one user-reported bug. And that bug was when the graphic designer broke a link.
There are services that can do this for you (Score:2, Informative)
I'm most familiar with LoadPro and Test Perspective..and of course scripting it.
With Test Perspective, you can record the way the web app works, then have them play it back for you with lots of variations and however many users you want.
LoadPro (http://www.keynote.com/solutions/html/keyreadine
Scripting it yourself is pretty easy too, but you want to make sure you use a library that does HTTP 1.1 (Perl's LWP doesn't), and you want to model your users accurately.
As for purchasing a tool, there are SilkPerformer and SilkTest from Segue, both traditional functionality testing tools
Donald E. Foss
Re:There are services that can do this for you (Score:1)
LoadPro [keynote.com] can be found at http://www.keynote.com/solutions/html/keyreadines
DFossmeister
You have it right here (Score:3, Funny)
jUnit / HttpUnit (Score:1)
I've never used it, but it seems like it would probably be pretty helpful.
-NiS
OMG (Score:1, Troll)
Man, this is basic stuff anybody with 2 years' experience should be able to handle.
This is supposed to be part of the design, coding, and implementation. Maybe I should do an Ask Slashdot:
"Dear Slashdot readers, I've been programming for a long time, and now I have to write an accounting package. Can anybody show me how?"
Sheesh.
One of many... (Score:2, Informative)
SilkTest [segue.com] from Segue is good at both scripted testing & stress testing.
--#voxlator
Re:One of many... (Score:2)
Segue vs. Mercury vs. Rational vs. Radview debates!
As if emacs vs. vi and kde vs. gnome wasn't enough.
Personally most of our testers at the lab prefer the Rational stuff but we use whatever the customer has purchased licenses for.
Re:One of many... (Score:2)
many open source test frameworks available (Score:5, Informative)
I recommend httperf and http_load for banging on lists of URLs really hard. At one place I worked, one of our developers rigged up some shell scripts that would play back log files through httperf and that worked pretty well.
If you want to record browser sessions for testing specific paths through the site, look at http-recorder [sourceforge.net] or roboweb [sourceforge.net]. There's also webchatpp [cpan.org], HTTP::WebTest [cpan.org], and HTTP::MonkeyWrench [cpan.org] on CPAN. More info on this can be found on the mod_perl mailing list [apache.org] or on PerlMonks [perlmonks.org].
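The log-playback trick mentioned above starts with pulling request paths out of an access log. A small Python sketch for Common Log Format lines; the regex and the GET-only default are assumptions (replaying POSTs blindly against production is usually unsafe):

```python
# Sketch: extract request paths from a Common Log Format access log,
# ready to feed to httperf/http_load or your own replay loop.
import re

# Matches the request portion of a CLF line: "GET /path HTTP/1.0"
LOG_LINE = re.compile(r'"(GET|POST) (\S+) HTTP/[\d.]+"')

def urls_from_log(lines, methods=("GET",)):
    """Return request paths; defaults to GETs only, since replaying
    recorded POSTs can mutate production data."""
    out = []
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group(1) in methods:
            out.append(m.group(2))
    return out
```

Feeding real traffic back at the server this way gives you a load profile that matches what users actually do, which is the appeal of the shell-script rig described above.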
Re:many open source test frameworks available (Score:2)
What if it relies on Windows-specific IE features? (Such as for intranets in an MS shop.)
Any non-MS product would have to mirror IE bug-for-bug to test right.
The quick answer is don't rely on proprietary stuff, but often the tester/developer does not make that call.
Get a good QA person (Score:4, Informative)
OK, maybe I am a little biased, as I have been in QA for 8 years. :-) But my comments still stand.
That said, we are currently using Rational's products to test our application, which includes a web piece. Hint: don't use JavaScript if you plan on using Rational. They have SiteLoad, which I believe is free, but rest assured the rest of their products are NOT. Their licensing scheme is nothing short of trying to balance the budget of a small country. If you want to implement their products in a big project, to handle requirements (RequisitePro), bugs (ClearQuest) and test plans (Test Manager), then prepare yourself for headaches. If you just want to get Rational Robot to record/playback user actions for testing, it is pretty solid. Rational purchased the different components of their system, so they aren't the smoothest to integrate. I have spent many hours with their phone support people.
I have also worked with Mercury and SilkTest, but to a lesser degree.
Oh, and if you are constantly changing critical code, you need to worry more about your development practices and not your testing.
Re:Get a good QA person (Score:3, Insightful)
Not true in many development shops. With short iterations, refactoring, rigorous unit testing, collective code ownership, and continuous integration, code can be constantly changing but stable. Take for example the Mozilla Tinderbox [mozilla.org]. Development proceeds on many components, and the builds and tests are run continuously. There are daily build smoketests (download a daily build and you'll see the smoketest menu item), and sometimes things are broken for an hour or a day, but overall things just get better.
Embrace Change.
Re:Get a good QA person (Score:2)
I have to really wonder how efficient this is in the long run. Sure, I understand that this *can* work in some instances, but it won't in all. The prototype/spin cycle approach isn't the right one for every project. In this case, tests are reactionary. How on earth are you advancing your testing if the code is constantly changing (especially if the UI changes)? If that is the case, forget system test automation, it won't work. You have to have a reasonably stable, unchanging base in order to automate testing or you will spend all your time re-automating it. The entire purpose of automating your testing is to *save* time in the long run. In this model, there is no long run, everything is done in the short term.
Embrace Change.
I do embrace change, but not simply for the sake of changing. I have to have a good reason to change.
Re:Get a good QA person (Score:2)
If all your test automation is top-level, yes. For unit testing, however, the UI is irrelevant to tests targeted at lower-level systems. Even better, if you use XP methodologies, then developers are obligated to update the tests before updating the code, so you don't have the test code forever racing to keep up.
Yes, you need to be able to do some QA on the top-level interface, but if the lower levels are stable, the UI itself is much less of a problem.
Re:You miss the bigger point (Score:2)
Read, for example, Wicked Problems, Righteous Solutions [amazon.com] , written in 1998. Not exactly radical bleeding edge stuff.
Re:You miss the bigger point (Score:2)
Re:Get a good QA person (Score:2)
Yes, that is precisely what I mean. That makes QA's job much easier. There are many levels of testing that need to be done, so if we get code that is unit tested, that takes away some of the easy stuff. But unit testing does not mean that a product is tested.
very basic web regression test tools: (Score:2)
Better than testing... (Score:1)
This in fact enables you to dismiss all tests.
Remember: the key to successful programming is not to find all the errors, but not to make them in the first place.
Java Tools (Score:4, Informative)
I'd highly recommend picking the book:
Java Tools for eXtreme Programming [slashdot.org]
This is a great reference for all of the tools being mentioned, and it shows you how to integrate them into the development cycle if you're using Java. You should still be able to write the functional tests even if your app is not written in Java.
As an aside, if you're not developing these apps in Java, you really should look at using Tomcat, XDoclet and Struts for simple DB frontends, and then move to EJBs with JBoss, Jetty or Tomcat, Struts and XDoclet. If you're lazy and don't want to write a lot of code, you'll love these tools. Reuse is high in Java, and code generation tools like XDoclet take away most of the pain of using frameworks like EJB and Struts. Besides, JSP taglibs allow me to have good-looking pages made pretty by people who care about the differences between browsers for CSS, DHTML and whatnot.
Good Luck.
Come on guys (Score:1)
New Web Application Testing Program (Score:2, Funny)
To use this miracle of modern computing, simply submit a story link to your Web Application and the webmaster's e-mail at the bottom of the page! Not only will you be able to test your server bandwidth, but every know-it-all Slashdot Web Guru(tm) will e-mail you with exactly why your Application is not worth the electrons it's stored on!
For an added bonus, have your site flame one of the following groups for extremely extensive testing: any government, Adobe, Microsoft, Intel, Creative Labs, CowboyNeal.
Call Now!
Operators are standing by!
Also consider how you maintain code. (Score:1)
By reviewing how you work with the actual code, you can avoid making a lot of the bugs in the first place. When building solutions where more than one module/frontend depends on various backend functions, I find that I usually avoid most problems with the API changing if I carefully map out what's needed of the API and decide how I want to access it, once and for all. Once that's decided, you can change the code as much as you want, as long as you leave the actual API alone.
This is one of the things made easier by OOP, for example.
I know I might be pointing out the obvious here, but experience has shown me that thinking about how you design your actual development cycle is a topic which is too often overlooked, with painful results.
Silk (Score:2, Informative)
of commercial tools that support extensive script-driven testing of web applications. SilkTest is the testing tool.
At my previous startup, we bought and used these tools and developed extensive test libraries for our product.
There are also companies that will test your product for usability on many different platforms. Look at http://www.otivo.com/ for one such.
Automation is over rated (Score:1)
*Commercial endorsement is not intended.
Re:Automation is over rated (Score:2)
Re:Automation is over rated (Score:2)
I think that's true generally, but I've been thinking we can make some progress in that direction. There's an automated software design analysis tool called Small Worlds [thesmallworlds.com] that offers opinions on OO design. It's pretty good.
I suspect that similar metrics could be calculated for web UIs so that we could help UI designers focus on dangerous areas, and so that they could be alerted if areas get worse. Even basic things like "links per page", "fields per form", and "percentage of page below the fold" would give you reminders to find pages that were unusually complicated for your site.
I've been doing this for years. Take some advice. (Score:2)
Some words of advice if you care to follow them.
First off, ignore anything with the words "stress" or "performance" in the titles or descriptions. They are not the tools you want; they are focused primarily on simulating multiple clients rather than simulating users.
Second, separate the kinds of testing you want to do. Simple form validation requirements will most likely mean you can get away with a tool that bypasses the browser interface (typically a unit testing tool). More complicated user simulation should be done by a tool that actually drives the browser, such as SilkTest or Rational.
Finally - Hire a dedicated resource just for this purpose. A QA Engineer with experience in automated testing, REAL experience, not just playback and record experience. (My resume is available on demand).
Real humans *are* best... (Score:2, Interesting)
WWW::Automate (Score:1)
Its A Question of Cost (Score:1, Interesting)
Try WebTestSuite 1.02 (Score:1)
involve the people who want the app (Score:2)
It's CRITICAL, IMHO, that the people requesting the application get directly involved with how the front ends should work. If they don't, you're just asking for UI rework pain.
maxq (Score:1)
It has two modes. One is a proxy server which you'd point your browser at. It records the POST/GET arguments you used for each page into a Jython file. The backend then uses HttpUnit to fire off the pages.
It isn't a complete solution. I had to create a Perl filter to work with MIME multipart data, or indeed any form data that contains carriage returns.
But since it's a simple mixture of Python and Java, it was relatively easy to apply statistics to the processes and search for all sorts of possible error types.
The problem with simple non-human web crawlers is twofold. First, there are pages that require valid form data. Second, a "nightly sanity test" is going to be operating on production data, so you'll need to carefully manage that data.
Mercury Interactive (Score:1)
PPPHHhhhtttt!!! (Score:1)
One advantage of web applications is that changes can be implemented QUICKLY and CHEAPLY. If I use an include to build drop-down boxes and I change the core include, then EVERYWHERE I included that code I have to test to make sure it won't screw anything up. In the real world, you're not able to test everywhere when they need the change NOW.
As well, the errors might not be 'true errors' in programming terms, but simply changes that make workflow harder or impossible.
Say you have a set of screens designed to do A. Individuals start using that set of screens to do B through F, and then the screens are modified to do A+1, which stops B from being done through them while C through F are severely hampered. THAT happens with not just web applications but all applications, and THAT is still considered an error by the end users.
There is never a true replacement for humans in the testing of an application. Data and workflow issues take humans to diagnose.
This misses the point of testing and QA altogether (Score:3, Insightful)
Frankly, what you need are probably consistent programming methods (because your front-ends are probably being written by liberal arts majors who taught themselves --insert language here--), thorough error handling, documentation, a consistent testing methodology, and much more upfront requirements analysis.
This stuff ain't cheap and you need to factor it into your pricing. I'd say that 10% to 20% of your budget should be QA and testing and you should insist that the budget be used for that. Too often QA time is used for actual development, leaving no QA.
Re:This misses the point of testing and QA altoget (Score:2)
You can't test simply one subset of the API.
Almost agree.... (Score:2)
There is absolutely nothing that'll find a bug as well as a good QA person who thinks "how can I break this?" However, that QA person should have recorded the sequence of events that breaks the code for two reasons...
Transactional Based (Score:2)
The best thing to do is to ensure your testers are familiar enough with the back end and the transaction processes to be able to run cross checks on the database, to ensure everything is working as it should: common things like missing WHERE clauses on deletes, or IN statements like 'a,b,c' rather than 'a','b','c'. Just simple things that automated tools could never catch. The bad part is that things like this take time and bodies. At least where I am sitting, there are not near as many of those around these days.
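The IN-clause bug described above ('a,b,c' as one string instead of three separate values) is easy to demonstrate, and exactly the kind of thing a tester's database cross-check catches. A sketch using Python's built-in sqlite3, with invented table data:

```python
# Demonstrates the 'a,b,c' vs 'a','b','c' IN-clause bug: the buggy form
# compares against one literal string and silently matches nothing.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (code TEXT)")
db.executemany("INSERT INTO orders VALUES (?)", [("a",), ("b",), ("x",)])

# Buggy: one string 'a,b,c' -- no row has that exact code, so count is 0.
buggy = db.execute(
    "SELECT count(*) FROM orders WHERE code IN ('a,b,c')").fetchone()[0]

# Correct: one placeholder per value, bound separately.
codes = ("a", "b", "c")
marks = ",".join("?" for _ in codes)
good = db.execute(
    f"SELECT count(*) FROM orders WHERE code IN ({marks})", codes).fetchone()[0]
```

The insidious part is that the buggy query is syntactically valid and raises no error; only someone counting rows on the back end notices that it returned nothing.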
design your code to be tested (Score:2)
I might get murdered, but Microsoft WAST does well (Score:2, Informative)
WAST [microsoft.com]
and
WCAT [microsoft.com]
They both seem to work really well and are freely available if you agree to the license. It's been a while since I've used them but I think they'll work fine with testing an apache or any other web server.
Can someone explain to me how this is offtopic? (Score:2)
There's a better solution than Mercury!!!!! (Score:3, Informative)
XMSGuardian's feature list includes:
Re:There's a better solution than Mercury!!!!! (Score:3, Funny)
"The XMSGuardian(TM) Console requires Microsoft Internet Explorer 5.0 or higher running on Windows 95/98/NT, 2000 or XP.....
Pricing and Availability:
XMSGuardian(TM) is now available as a monthly subscription. Pricing begins at $1,995 per month for a single URL...."
And not a downloadable demo in sight. Buh-bye.
Sigh.. (Score:3, Insightful)
When you write a function for your program, you need to write a test unit that lives in the debug project. The way it works is that you write some tests in which you take an input, perform the operation, and compare the output against a constant known-good answer. Have one of these for each case that the unit handles. That way, you can always compile the test unit and examine its output versus the constant known-good value. That's good software engineering practice.
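A minimal Python sketch of that pattern: known input, the operation, comparison against a constant known-good answer. The sales-tax function is an invented stand-in for whatever unit you're actually testing:

```python
# The test-unit pattern: run the function on fixed inputs and compare
# against constant known-good answers, one assertion per handled case.
def sales_tax(amount_cents, rate=0.05):
    """Round half-up to whole cents, as an accounting routine might.
    (Both function and rounding rule are illustrative assumptions.)"""
    return int(amount_cents * rate + 0.5)

def test_sales_tax():
    assert sales_tax(0) == 0            # zero in, zero out
    assert sales_tax(100) == 5          # exact case
    assert sales_tax(110) == 6          # 5.5 cents rounds up
    assert sales_tax(1, rate=0.0) == 0  # zero rate handled
    return "ok"
```

Compile-and-run of the test unit (here, just calling test_sales_tax) gives you the comparison against known-good values that the comment describes.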
What you're asking, well, is a joke. Nothing's going to save your project if you've been just adding functionality without QAing at each step to verify correctness.
hellbunnia [hellbunnie.org] asks "I work with a team of developers who spend most of their time adding functionality to code. While we enjoy just cramming more code onto a source tree, we really never test anything. But even if we tested it, I think we'd miss a lot of bugs because we have no design policy. It's a lot to be tested, and it's all interrelated! So my question is, does anyone have a quick and easy solution that will save us from rewriting things with a proper design?"
"I've read a lot of freshmeat listings for testing, but I've always assumed that they were merely 'Hello, World' programs because nothing beats real testing by real humans. However, as the amount of code grows, I've begun to wish that we wrote a carefully designed set of unit tests as we added functionality, rather than trying to magically make it all work 2 weeks before our shipping deadline. I'm hoping we have some magic QA program which will do everything for us, except actually fix our squirrely code.
Does such a thing exist, or should I start updating my resume? How fucked am I?"
Re:Sigh.. (Score:2)
Amen!
If you find a bug at the end of a development cycle, you have months of changes to rummage through and try to find the problem. This sucks; you'll never get them all out; you just get the biggest ones and then you ship.
The right way is to write the tests first, before you write the code. Going back and retrofitting good tests will take time and careful thought, two commodities in short supply in the pre-ship rush.
One Example (Score:2)
QA Wizard (Score:4, Informative)
Another case of /. Deja Vu (Score:2)
Since that article was posted, I was asked by my company to do some load and scalability testing, and I've had great success with OpenSTA [opensta.org]. Give it a chance. It's awkward at first, but once you get a feel for the HTTP/S (HTTP scripting) language, you can do some very complicated scripting with it.
For example, I wrote a script which interacts with one of our web products and navigates through several pages, submitting queries, retrieving 'wait' pages, and continuing on when the results are ready. Can't do that with wget... heh. And it gives excellent feedback on timing and can remotely monitor CPU and memory usage.
As far as I know it is only available on windows, though it is open source.
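The 'wait page' handling described above (keep re-fetching until the interstitial goes away) is worth isolating as a helper whatever tool you use. A small Python sketch; the wait marker and retry cap are assumptions, and a real version would sleep between tries:

```python
# Sketch of wait-page polling: re-fetch a URL until the response stops
# being an interstitial "please wait" page, with a cap on retries.
def poll_until_ready(fetch, url, marker="please wait", max_tries=10):
    """Return the first body without the wait marker, or None on timeout.
    fetch(url) -> str is supplied by the caller (real HTTP in practice)."""
    for _ in range(max_tries):
        body = fetch(url)
        if marker not in body.lower():
            return body
    return None
```

In OpenSTA you'd express the same loop in its scripting language; the important parts are the explicit marker for "not ready yet" and the bounded retry count, so a hung report can't stall the whole nightly run.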
PushToTest and TestMaker (Score:2, Informative)
I would advise you to not take a decision to implement an automated test system lightly. Your decision commits your business to maintain the system and that can be expensive and complicated. All of the commercial test tools require an engineer to instrument all of the Web pages to be tested. They give you GUI tools to click through a Web site and the tool writes a test script that the test system can run. Eventually you wind up with a library of test scripts that need to be kept up-to-date as the Web site changes.
Additionally, these tools are reading Web pages to build scripts. One of HTML's shortcomings is that it mixes presentation data (font sizes, paragraph locations, etc.) with the actual content. HTML is very loosely formatted so test tools often fail to automate the script-writing process.
I've been building and testing complex interoperable systems for the past 15 years. In my experience the best way to build an automated test system is to give your software developers a test tool that lets them build tests while they are coding. The same tests may then be brought out of the developer's lab and used to check the service in production for scalability, performance and functionality.
One other thing to point out: there is little difference in functionality between the commercial test tools (which cost $20,000 to $50,000) and the free open-source test tools. I recommend you look at my open-source TestMaker project (http://www.pushtotest.com/ptt) and JMeter (http://www.apache.org.)
TestMaker comes with a graphic environment, script language, library of test objects (TOOL), sample test agents and a LOT of documentation. Plus my company PushToTest is the "go to" company for enterprises that need to test systems in Web environments. We're here to add functions needed by our customers, to run tests and to train your team in how to use the tool for their own needs.
Hope this helps. Feel free to drop me a line (fcohen@pushtotest.com) if you need additional help.
-Frank
wow, someone gives you time to test? (Score:2)
WWW::Browser Perl module (Score:2)
Whenever I write a web app I begin by creating a script just for that app that uses the browser object. As I add features, I add routines to the script that check that the features work. When I change anything, all I have to do is run the test script.
I don't have the Browser object on CPAN yet, but if you email me at miko at idocs dot com, I'll be happy to send you the package. Put WWW::Browser in the subject line.
my advice: stay away from automated testing (Score:2, Insightful)
**Writing and maintaining automated test scripts takes a lot of time.** Someone else posted a metric of 10-1, which I believe is quite fair. You really need to treat those scripts as their own mini-development project. You need to map out scenarios for each script and what goal each should accomplish. Then there's coding (yes, even for those record/playback tools... you need to spend quite a bit of time tweaking the output). And testing. Testing test scripts? Absolutely. If your test scripts are wrong, you could end up masking real bugs and creating false confidence.
Now the questions you need to ask yourself along these lines are: What is the life expectancy of my application? How often do we release new code to production? The relevance of these two questions is the cost/benefit ratio. If I'm going to spend x amount of man-weeks (yes, weeks) to create an automated test suite, am I going to get the cost savings back when I know v2.0 is 8 months away? Maybe. What if I only do two releases in those 8 months? Most likely not. (If you're releasing code to a production system on a per-fix basis... well, that's another Slashdot topic.)
In lieu of automated testing, I do have a few suggestions for improving testing.
1) incorporate "impact analysis" as part of your design/code reviews. If someone is planning on touching function y in module x, your architect / tech lead / rest of developers should be able to identify what other areas are going to be affected. When it comes time to test, you know exactly what areas you need to really focus on and which areas can do with a spot check.
2) come up with a sensible schedule for bundling multiple code fixes into incremental releases. Every time you touch production, there's an inherent testing overhead. Bundle multiple fixes together and that overhead is better distributed.
3) hire dedicated testers. Having someone full time on QA (or part time, split across multiple projects) does wonders. The good ones bring both a great deal of experience for finding "common errors" as well as a fresh perspective to the table to see things that the developers overlook because they're too deep in the trenches. Now of course, dedicated testers may not fit into the budget. Even if you can afford them, developers should always be on the hook for testing. Which brings me to my next point...
4) tell your developers that they'd better learn to test, or fire them. Sounds harsh, but testing's part of the game. I don't want anyone who doesn't understand the value of testing -- and isn't willing to put in the effort to test -- on my team.
my 2 cents and then some...
Re:my advice: stay away from automated testing (Score:2)
About 30% of my time and 50% of my code goes into automated testing. I write unit tests, integration tests, and end-to-end functional tests.
So you're right that testing takes time. But the payback is immense. If I write the tests as I go, I spend almost no time debugging. I have almost no bug reports. And when v2.0 comes, I don't throw out the old source code; I get to use it all again, as I know it's solid, and thanks to the test suite I can change it radically without fear of breaking anything.
I don't want anyone who doesn't understand the value of testing -- and isn't willing to put in the effort to test -- on my team.
I'd agree with that, but manual testing takes a lot of time. Much better to spend that time automating the tests. People are bad at doing boring, repetitive things; computers are good at it. Teach the computer how, and let the developers focus on developing!
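The "write the tests as I go" habit looks something like this in practice. The discount() function here is a made-up application function; the point is that its tests are written in the same sitting as the code, so regressions show up the moment you press the button.

```python
# Minimal illustration of writing tests alongside the code they cover.
import unittest

def discount(price, percent):
    """Apply a percentage discount, rejecting nonsense inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(discount(80.0, 25), 60.0)

    def test_zero_discount_changes_nothing(self):
        self.assertEqual(discount(19.99, 0), 19.99)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```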
Use the Puffin Web Testing Framework (Score:2, Informative)
As featured on IBM's developerWorks site ...Puffin Automation Framework [puffinhome.org]
What is it? see for yourself. :) [puffinhome.org]
I'm surprised nobody has mentioned the best (Score:2)
OpenSTA
OpenSTA is primarily designed to be a pluggable test rig with a lot of plugins aimed at stress testing. It has served us very well, and with a bit of scripting it can be adapted to do functional regression tests too.
I urge everyone to give OpenSTA a try, especially if you're after a load testing solution. It's a really powerful tool that's well respected in the industry. And the best part is that it's Free as in Open Source :).
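OpenSTA does far more than this, but the core idea of any load test can be sketched in a few lines: many concurrent virtual users timing the same operation. Everything here is a stand-in -- hit() fakes an HTTP round trip with a sleep, and the user count is arbitrary.

```python
# Sketch of what a load test does under the hood (not OpenSTA itself):
# N concurrent virtual users, each timing one simulated request.
import time
from concurrent.futures import ThreadPoolExecutor

def hit(user_id):
    start = time.perf_counter()
    time.sleep(0.01)          # pretend this is an HTTP round trip
    return time.perf_counter() - start

USERS = 50
with ThreadPoolExecutor(max_workers=USERS) as pool:
    latencies = list(pool.map(hit, range(USERS)))

latencies.sort()
print(f"requests: {len(latencies)}")
print(f"median latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
```

A real tool replaces the sleep with actual requests, ramps the user count up over time, and reports percentiles rather than a single median.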
My experience (Score:3, Informative)
I've used a few. I strongly recommend you invest in one. However, you need to be aware of the limitations of these tools: they only test what you tell them to test, to make sure it works the same as last time. You will have trouble with dynamic data (even dates). The tool can be told to ignore things, but then it is ignoring data, so make sure it is ignoring the right thing.
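The "make sure it is ignoring the right thing" step usually means masking dynamic data before comparing a page against its saved golden copy. A rough Python sketch of the idea (the patterns here are examples; every site needs its own list):

```python
# Sketch: masking dynamic data (dates, session IDs) before comparison,
# so only genuine changes trip the diff. Patterns are illustrative only.
import re

MASKS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}"), "<DATE>"),
    (re.compile(r"session=[0-9a-f]+"), "session=<ID>"),
]

def normalize(page):
    for pattern, placeholder in MASKS:
        page = pattern.sub(placeholder, page)
    return page

golden = normalize("Report for 2002-11-04 (session=deadbeef)")
today  = normalize("Report for 2002-11-05 (session=cafe1234)")
print(golden == today)  # -> True: only the masked parts differed
```

Mask too little and every run fails on a timestamp; mask too much and the tool happily ignores a real bug -- exactly the trade-off described above.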
These tools do NOT substitute for first-time testing. You will still need a QA person to examine all known changes and verify that they work right, and then tell the tool how to test for the new change.
It is a daily job (often full time) to keep the tool up to date. In fact, you should not let the tool guy go on vacation until he has one (or several) replacements who will do the job while he is away. Changes pile up quickly; fall behind, and by the time you catch up you are often better off starting over from scratch. Do not let your updates slide, no matter what, or you will regret it.
The tool is not a substitute for first-time testing. In fact, if you want something that will only test your pages the first time you write them, you are better off doing it by hand; part of teaching the tool how to test a page is testing it while the tool watches. However, once you have tested the page once, the tool has no problem testing it every day to make sure nobody accidentally changed something on it. Fortunately, this latter testing is the boring part nobody wants to do. Just make sure that everyone takes the time to write the test for each change (or at least has the tools guy write it, depending on your process).
We found that it was as much effort to write the test automation as to run the tests by hand for each version change (this was software, not web pages), but once the test for each version was written, you could press the button each time a patch was released and everything would be tested. Once in a while bugs were found, but not very often. Many of the "bugs" found were not bugs but changes in the way the product worked, and we needed to change the script.
Finally, the payoff, if there is one, will take more than a year. Warn your management right now about that. Somehow you need to keep metrics (and I'm not convinced any reasonable metrics exist to take) to compare the before and after cases. Not everyone who has done test automation is convinced it was worth it. If you expect it to take away a lot of the work you are doing now, it will not. If you want it to find, much earlier, a lot of the bugs you are currently finding late, it will.
Overall, test automation is MORE work than you are doing now (just a guess, but likely), but it will catch more bugs, faster. Try it, but remember that a fair trial is a lot of work and it will take some time to pay off.
Re:My experience (Score:2)
Very well said. Automation is not for everyone and if you are in a RAD environment it certainly isn't for you. Even worse if you have any intention of shipping in the next month or less.
If you are? You're Doomed (tm)
Sorry, but it's a proven fact.
Signed, someone who has been there (is there still), and strongly advises companies to avoid automation unless they know what they are getting into.
Re:My god (Score:3, Funny)
>Iraq, and your talking about TESTING PRODUCTS
>FOR WEB APPLICATIONS? MY GOD PEOPLE GET SOME
>PRIORTIES!!!!
If the web is full of buggy applications, the terrorists have already won.
(my talking about testing products? what?)
Re:You are a programmer right? (Score:2, Funny)
I know what he is talking about, though. I like programming when something is a challenge and I'm not sure how I'm going to accomplish it. But as soon as it gets down to small details and testing, I find it very tedious. Oh well... those are the breaks!
Re:You are a programmer right? (Score:1)
Try them
Re:Speak for yourself (Score:2, Insightful)
Re:Speak for yourself (Score:2)
The way we handle this at work (in the eCommerce dept) is that when we use a function in a page, we document it in a database. We can cross-search which functions are used in a page or which pages use a function. This way, when we make changes to a function whose scope is larger than the page we're working on, we can test it in all of those scenarios.
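A minimal sketch of that cross-reference idea, using an in-memory SQLite table (the table, columns, function names, and pages are all invented for illustration):

```python
# Sketch: function-to-page usage database for impact analysis.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage (function TEXT, page TEXT)")
db.executemany(
    "INSERT INTO usage VALUES (?, ?)",
    [
        ("calc_tax", "checkout.php"),
        ("calc_tax", "invoice.php"),
        ("format_date", "invoice.php"),
    ],
)

# Before changing calc_tax(), list every page that must be retested:
pages = [row[0] for row in db.execute(
    "SELECT page FROM usage WHERE function = ? ORDER BY page",
    ("calc_tax",))]
print(pages)  # -> ['checkout.php', 'invoice.php']
```

The same table answers the reverse question ("which functions does this page use?") with a query on the page column instead.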
Re:Speak for yourself (Score:2, Insightful)
Clueless newbies and kids will find the problems first. The problem is that they don't report very well. What I want is testing software that tests like a ten year old, but reports like a senior programmer!
Re:Speak for yourself (Score:1, Funny)
Maybe they could work for "military intelligence" too, since we're going for oxymorons...
Re:Lucky bastards! (Score:2, Funny)
Something tells me the photoshoots aren't happening in the server room or on developers' desks in cubicleville.
Re:WinRunner and LoadRunner (Score:2, Informative)
I guess it really depends on what you're testing - for finer-grained control, I would choose LoadRunner; the moderately constrained C-variant scripting language allows for some neat tricks (of course, you can shoot yourself in the foot... always nice to have a memory leak in a test script, but it is nice to be able to easily call your own custom DLLs and existing C code).
Silk Performer has some very nice playback and verification features, and their tool is much better for scripting at a higher level (i.e., if your pages have a lot of JavaScript that dynamically builds links, handles form inputs...). The BDL is a Pascal-esque bastard language, and the script editor is awful.
So: LoadRunner can generate tons of load as long as it's not doing complex requests or workflows; SilkPerformer can generate a lot of load and does a good job with complex workflows and funky scripting.