Yeah, Joel sometimes has useful things to say; this is not one of them. It's a very knee-jerk "as a manager I like to save money" response, and it isn't helpful.
The Duct-Tape Programmer is the programmer whose work I get paid to come in and fix.
There's a real irony for programmers who do a good job up-front and leave code that's easy to maintain: they leave the door open to Duct-Tape Programmers. There's not much incentive to write maintainable code at most organizations, especially with one of these Duct-Tape types already on the team or looming ahead, because to a manager who doesn't ask any questions the results will always read the same way: "The 3 months programmer 1 spent got me 30%; the 3 months Duct Tape spent got me 70%. Duct Tape is my guy." Except Duct Tape got there by throwing maintainability to the wind and leveraging the foundation that good programmer laid.
What would actually be insightful, unlike this article, is legitimate information about how maintainable a code base is at a given point in time. There's no industry-standard way to assess that. Perhaps we need one.
The most obvious method that comes to mind is to have 10 programmers (probably Duct-Tape types) attempt to write 10 very small features in a few days. Their relative success or failure would indicate the flexibility of the code as it stood when the experiment began. This would be a fairly expensive way to assess code, but with 10 programmers actually digging in and having to deliver, you avoid the high-level "barely read the code but have an opinion anyway" problem, and you average out a lot of the obviously subjective nature of the results.
You would have to select the 10 small features carefully so that they capture either a broad picture of the code, or a narrow picture of the kind of flexibility your organization cares about and expects to leverage soon. The strength of the results would rest largely on how well you selected the features being tested.
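To make the idea concrete, here's a minimal sketch of how the results might be scored. Everything in it is hypothetical - the Attempt record, the 24-hour budget, the pass/fail criterion - it's just one way to turn the experiment's output into a number, not an established metric:

    # Hypothetical scoring for the 10-programmers / 10-features experiment.
    # Assumes each programmer reports pass/fail and hours spent per feature.
    from dataclasses import dataclass

    @dataclass
    class Attempt:
        feature: str
        succeeded: bool
        hours: float

    def maintainability_score(attempts: list[Attempt], budget_hours: float = 24.0) -> float:
        """Fraction of attempts that succeeded within the time budget (0.0 - 1.0)."""
        if not attempts:
            raise ValueError("no attempts recorded")
        ok = sum(1 for a in attempts if a.succeeded and a.hours <= budget_hours)
        return ok / len(attempts)

    def worst_features(attempts: list[Attempt], n: int = 3) -> list[str]:
        """Features with the lowest success rate - the likely maintenance hot spots."""
        by_feature: dict[str, list[bool]] = {}
        for a in attempts:
            by_feature.setdefault(a.feature, []).append(a.succeeded)
        rate = {f: sum(r) / len(r) for f, r in by_feature.items()}
        return sorted(rate, key=rate.get)[:n]

A score near 1.0 would mean outsiders can extend the code easily; the code bases the Duct-Tape Programmers leave behind would land near 0.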
In any case, I assure you, the Duct-Tape Programmers of the world would score very low on any such assessment. It's something the industry could really use; without it, it's only logical that you'll keep seeing counter-productive opinions from decision-makers like the one in this article.
There are probably cheaper ways to do this than paying 10 programmers for 10 days. In open-source projects you could hold contests with prize money for implementing the 10 features, and see what people gravitate towards and what they avoid: Feature A gets 1000 entries while Feature B gets just 1, and it failed; you've found a code maintenance problem. But maybe people here have even better ideas that work for both closed and open projects.
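The contest version could be scored the same way. Here's a rough sketch with made-up numbers, flagging features that attract few entries or mostly failed ones as the parts of the code people avoid touching (the thresholds are arbitrary, purely for illustration):

    # Hypothetical contest results: entry counts and successes per feature.
    entries = {
        "Feature A": {"entries": 1000, "successes": 940},
        "Feature B": {"entries": 1, "successes": 0},
    }

    for feature, stats in sorted(entries.items()):
        rate = stats["successes"] / stats["entries"]
        # Few entries, or a low success rate, suggests code people can't
        # or won't work with - a likely maintenance problem.
        if stats["entries"] < 10 or rate < 0.5:
            print(f"{feature}: {stats['entries']} entries, "
                  f"{rate:.0%} success - likely maintenance problem")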