Comment Re:Pet peeve - the purpose of testing (Score 1) 98
Uh no, it's to demonstrate that the code "works". The problem is pinning down what "to work" means. Part of the usefulness of TDD is that you might not fully understand yet what it means "to work", and the tests help you flesh that out.
Let me clarify, so you don't think I'm 100% ditching what you're saying rather than just stating it a different way. A test suite will tend to have BOTH tests for what the correct behavior *is* and tests for what the correct behavior *is not*. In other words, what you're doing is defining the BOUNDARIES between correct and incorrect behavior. You're right in the sense that if your *strategy* is to write only *optimistic* tests (i.e. "proving that it works"), you'll miss subtle areas where the behavior isn't fully clarified (i.e. corner cases).
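To make that concrete, here's a minimal sketch (hypothetical function and values, Python) of a suite that pins down a boundary from both sides: optimistic tests for what the behavior *is*, and corner-case tests for what it *is not*:

```python
# Hypothetical example: a tiny parser plus tests that define the
# BOUNDARY between correct and incorrect behavior.

def parse_port(s):
    """Parse a TCP port from a string; the valid range is 1-65535."""
    n = int(s)  # raises ValueError on non-numeric input
    if not 1 <= n <= 65535:
        raise ValueError(f"port out of range: {n}")
    return n

# Tests for what the correct behavior *is* (the optimistic cases):
assert parse_port("80") == 80
assert parse_port("65535") == 65535

# Tests for what the correct behavior *is not* (the corner cases
# right at and beyond the boundary):
for bad in ("0", "65536", "-1", "http"):
    try:
        parse_port(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass  # expected: the boundary held
```

Note that the rejection tests probe the values *adjacent* to the boundary ("0" and "65536"), which is where the two kinds of test together actually clarify the spec.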
But here's the problem: for absolutely anything in the universe, there is an INFINITE number of things something *is not*, but only a finite number of things something *is*. I've seen people go too crazy using tests as a way of type-checking everything where smarter data types would have been a better choice, or writing a hundred "this isn't what I want" tests that could have been handled with a single "this IS what I want" test. My point is that you're supposed to program for the correct case, not design as if you always expect everything to go wrong. Write for the correct case, test for the correct cases FIRST, test for the EXCEPTIONAL cases, and write handling code for the things that are exceptional. Don't write an infinite test suite of what something is not.
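Here's a hedged sketch of the "smarter data types" point (hypothetical names, Python): instead of bare strings plus a pile of "reject this misspelling, reject that casing" tests, a proper type makes the invalid states unrepresentable, so those tests never need to exist:

```python
# Hypothetical sketch: let the data type do the type-checking that
# dozens of negative tests would otherwise have to do.

from enum import Enum

class OrderState(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

def advance(state: OrderState) -> OrderState:
    """Move an order to its next state; DELIVERED is terminal."""
    transitions = {
        OrderState.PENDING: OrderState.SHIPPED,
        OrderState.SHIPPED: OrderState.DELIVERED,
        OrderState.DELIVERED: OrderState.DELIVERED,
    }
    return transitions[state]

# One "this IS what I want" test covers the ground that a pile of
# 'reject "shiped"', 'reject "PENDING "', 'reject ""' tests tried to:
assert advance(OrderState.PENDING) is OrderState.SHIPPED
assert advance(OrderState.DELIVERED) is OrderState.DELIVERED

# advance("shiped") is now a type-checker error (and a KeyError at
# runtime), and OrderState("shiped") raises ValueError by itself --
# behavior the Enum guarantees, not something the suite must re-prove.
```

The design choice is the point: the infinite space of wrong strings is fenced off by the type, and the finite space of right behavior is what the tests actually cover.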
CONCLUSION: Write the most EFFECTIVE tests you can that cover the most ground. Don't write *pointless* tests you'll have to maintain later when a better test exists. If a test covers a lot of logical ground by defining the boundaries of what something *is not*, then write the test for that. If it covers a lot of ground by defining what something *is*, write the test for that.