I can't claim that there is a scientific germ theory equivalent for the practices I listed...
I could skip replying to the rest of your comment, because you admitted my point, but you seem to have misunderstood my finer points, so I may as well...
First off, I understand TDD and I know the difference between TDD and Unit Testing. I love refactoring, and when IntelliJ IDEA was first released, I thought it was like the light of God shining down from heaven upon me. I've done pair programming. I get it. What I'm saying is that none of those can or should be applied without knowing when to apply them, which you can't do. I can't do it. Nobody can.
No amount of explanation of how a buzzword works will make my point invalid. It's still not scientific, and doesn't always apply. Until it is a science, it is snake oil, as far as I am concerned.
Go back in time a little bit. Remember when Object Oriented was the buzzword of the day? The purported advantages of OO sounded an awful lot like the advantages of popular project methodologies: OO would help prevent code breaking, because new code would not change existing working code. OO would help big teams work together by defining interfaces. OO would help encapsulate code to prevent bugs creeping in due to excessive cross-dependencies. Etc...
Now, go and tell Linus Torvalds that he's an idiot for using a non-OO procedural language on one of the biggest and most successful programming projects of all time. I'd love to see you have that conversation.
This is a lot like you saying that every programmer should be using Agile, even though there are enormous and wildly successful projects out there that were produced without it.
I've heard more than a few anecdotes of Agile not working and causing major problems. Sure, you'll say, maybe they just didn't apply Agile the right way? Maybe they didn't get it? That's a lot like saying the priest just didn't pray hard enough, and that's why his church was struck by lightning. He should pray harder! He should pray the right way! No? How about penance? Maybe even self-flagellate, see if that works?
This means that full TDD results in 100% passing unit tests with full code coverage at all times.
Code coverage != testing for everything that needs testing. That's one of my points.
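To make that point concrete, here's a hypothetical Java sketch of my own (not from any real project): a one-line method whose "test" achieves 100% line coverage, yet completely misses an integer-overflow bug.

```java
// Hypothetical sketch: full line coverage that still misses a real bug.
public class CoverageGap {
    // Average of two ints -- every line of this method is "covered" below.
    static int average(int a, int b) {
        return (a + b) / 2; // a + b can silently overflow int
    }

    public static void main(String[] args) {
        // This one check executes 100% of the lines in average()...
        System.out.println("happy path: " + average(2, 4));
        // ...but never probes large inputs, where overflow gives a nonsense answer:
        System.out.println("overflow: " + average(Integer.MAX_VALUE, Integer.MAX_VALUE));
    }
}
```

The coverage tool reports 100%, every tick is green, and the method is still wrong for half its input space. Coverage measures which lines ran, not which behaviors were checked.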
Yes, or even tripling the amount of code. But if you still measure productivity in any relation (positive or negative) to lines of code written then I'm not sure we have much else to talk about.
There's an awful lot to talk about, because time is money. Tripling the LOC could send many projects over budget, and hence into failure. Just because the code passes tests, doesn't mean the project is a success. Which do you think businesses care about most?
Actually, refactoring (including test refactoring) is a significant part of the effort...
You misread what I said. I assumed refactoring is a given. Given that, a TDD project will also require changes to the tests; a non-TDD project will not, saving time. Some refactoring operations are 100% safe, so there TDD adds pure overhead. I'm thinking of the kind of automated refactoring done by IntelliJ IDEA or Visual Studio on a statically typed language.
It isn't, but it's a superset. I was talking about test-based development methodologies in general, not just TDD specifically.
I've used TDD with C++, Java, Objective-C, C and Pascal.
You've just named four unsafe languages, and one that has type erasure and is typically littered with casts from "Object", which is functionally equivalent to "void*". Mmm... safe.
Try using C#, F#, Haskell, or a similarly modern language with proper type safety, extensive use of generics, and higher-order programming. I've heard of people releasing Haskell libraries without ever actually running them.
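A small Java illustration of the "casts from Object" point, again hypothetical code of my own: raw (pre-generics) collections defer type errors to runtime, and even the generic version erases its type parameter.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of Object casts and type erasure in Java.
public class ErasureDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Raw list: everything is Object, so the compiler happily accepts
        // the bad add() -- the cast only fails at runtime.
        List raw = new ArrayList();
        raw.add("hello");
        raw.add(42);
        try {
            for (Object o : raw) {
                String s = (String) o; // blows up on the Integer
                System.out.println("got: " + s);
            }
        } catch (ClassCastException e) {
            System.out.println("runtime failure: ClassCastException");
        }

        // With List<String>, adding 42 would be a compile error instead --
        // but erasure means the type parameter is gone at runtime:
        List<String> typed = new ArrayList<>();
        System.out.println("same erased class: " + (typed.getClass() == raw.getClass()));
    }
}
```

A language that catches the bad insert at compile time eliminates a whole class of tests you'd otherwise have to write by hand.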
I haven't done a google search for this, but I'm wondering if you did?
Good try shifting the burden of proof. Your career is the one based on the theory of project management, not mine. So you're telling me that you don't even know if the research backing up your teachings exists? Aren't you concerned by that?
That said, I admit it might have happened somewhere.
So then... as a professional, you would spend the time to find the research that shows the percentage of time wasted on tests failing against correct code, right? And you would research which types of projects this is likely to become a major problem on? And you'd spend the time figuring out how to detect such cases ahead of time instead of after the fact? I bet you would, but there's no such research, because TDD is not a science!
I feel a _lot_ better about my TDD code since I _know_ it works and has no cruft.
You've just hit two of my points in one sentence: you may be getting a false sense of security about your code, because you could be fooled by seeing lots of nice "green ticks" despite major problems with the code. On top of that, you claim "no cruft", but you've doubled or even tripled the lines of code. Hmm...
I work with my clients to help them quadruple productivity as measured by business results
Which might have nothing at all to do with the specific methodology you've applied. Read up on the Hawthorne Effect, which shows that merely getting some attention from an external party may alter behavior significantly. The benefits could be just a side effect, much like a placebo. The company has spent a bunch of money for you to come out there and train the staff, so it must be a success, right? Sure, when you're looking over the shoulders of the engineers and they can't goof off. Similarly, enforcing a rigid structured methodology -- any methodology -- on an unstructured team is likely to lead to improved results.
You don't know if Agile is the best approach each and every time. What if it's not? What if there's a more appropriate methodology that should be applied, but you just don't know how to determine when it is appropriate or isn't?
Talk to me when project management reaches the level of modern medicine.
Until then, all I'll be hearing is "anecdote, anecdote, citation needed, anecdote, snake oil, anecdote". 8)