I wish it were as simple as this thread implies. The truth of the matter is that most commercial developers who are paid to worry about maintainability don't understand how to do it much better than their academic counterparts. Managers notice this and put all kinds of process in place to enforce good practice--requirements and design docs that are practically books, compile-time coding standard tests, smoke tests, regression test suites, automated tests and so on and on and on. These do not, however, turn developers into good programmers. They only turn them into safe ones.
Another thing the thread ignores is that 90% of all robust mission-critical code lives in error paths. Academics rarely write those paths at all, and great developers rely on sound code structure to spare themselves most of that work. Let a few mediocre-but-safe programmers loose on that well-structured code, though, and the error paths multiply and have to be handled (usually by more mediocre-but-safe programmers). So for large systems, the starting point doesn't matter very much.
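To make "error paths" concrete, here's a minimal sketch in Go (the language, function names, and config format are my own illustration, not anything from the thread): the textbook version of a routine is a few lines of happy path, while the hardened version of the same routine is mostly failure handling.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Textbook version: pure happy path, failures silently ignored.
func loadPortNaive(path string) int {
	data, _ := os.ReadFile(path)
	var cfg map[string]int
	json.Unmarshal(data, &cfg)
	return cfg["port"]
}

// Hardened version: same logic, now dominated by error paths.
func loadPort(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, fmt.Errorf("reading %s: %w", path, err)
	}
	var cfg map[string]int
	if err := json.Unmarshal(data, &cfg); err != nil {
		return 0, fmt.Errorf("parsing %s: %w", path, err)
	}
	port, ok := cfg["port"]
	if !ok {
		return 0, fmt.Errorf("%s: missing \"port\" key", path)
	}
	if port < 1 || port > 65535 {
		return 0, fmt.Errorf("%s: port %d out of range", path, port)
	}
	return port, nil
}

func main() {
	port, err := loadPort("config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("listening on port", port)
}
```

The structural point is visible even in this toy: when every routine wraps and returns its failures through one channel, callers handle errors in one place; once that discipline erodes, each call site grows its own ad-hoc checks and the error paths multiply from there.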
See, 2,000 safe programmers can write the systems that carry a mission-critical-software company to billions in sales, regardless of whether the initial code base was academic or extraordinarily good. Twenty genius programmers cannot do that, as a rule, even if each is 100 times as productive as the worker bees. Managers and executives understand this and go with it. At some point the weight of poor-but-safe code overwhelms the system's ability to grow and evolve, and then it's time to start over.
------------
A hundred buggy lines in the code, a hundred buggy lines.
Fix a line and recompile, a hundred and one buggy lines in the code.