You are redefining "premature" to mean "early".
We are all talking about "premature"; you are not.
There never was a defect/bug report in my career blaming the defect on me, except the one I mentioned (that is, in _production code_).
And as I said before: for most of the last 30 years I worked in teams that had ZERO bugs in production. The issue trackers etc. prove that.
The severity "showstopper" or "minor" has absolutely nothing to do with that.
The total change might take a century. As in: the average temperature is now X and in 100 years it will be Y. Or the sea level is now L1 and in 100 years L2.
But there are small, localized changes, as in Syria/Iraq, that happen over the course of 3 to 5 years.
Then again, if for some reason push comes to shove, as with the ice on Greenland (a volcano erupting under it, e.g.), and the whole ice sheet drops into the ocean over the course of a couple of years, then mankind has a problem, a serious one. One thing is for sure: sea level rise won't be a constant X mm per year, but will change rapidly due to "weather" or other reasons we aren't thinking about now.
Same for agricultural areas that suddenly, over the course of a few years, get wiped out. Even if they can be "reused" for other fruits/crops: you cannot switch from grapes to olives in the course of 10 years.
Or we get a runaway effect because melting permafrost releases CH4.
There are thousands of thinkable scenarios that can turn extremely bad in a surprisingly short time period.
But: likely by "mankind" you mean the few people rich enough to relocate at any time.
Unless you know it is only called once after the software is deployed.
Or you know you have to ship in a few days, and writing it "perfectly" takes longer than the days left.
Should I continue? I could probably find 200 reasons why "premature optimization" is as unpleasant as other "premature" things.
I worked in plenty of organizations that only shipped bug free software.
I personally had only one single bug (created by myself) delivered into production in the last 30 years.
However, in recent years I have often worked in organizations that unfortunately accepted bugs going into production.
Falsification is proving.
It is just a silly word with a meaning counterintuitive to its spelling.
Blame the guy who "invented" it.
Sayeth the noob who didn't think about how long testing the change would take...
Agreed that replacing tested/working code with new "more efficient" code does incur a re-validation cost.
On the other hand, that's also an argument for writing the more-efficient implementation the first time, rather than waiting until some later release. Since you know it's all going to have to go through the testing cycle at least once, why waste your QA group's time testing slow/throwaway code, when you could have them spend that time testing the code you actually want your program to contain? (Assuming all other things are equal, which they often aren't, of course)
The shortest distance from A to B is a straight line.
It is freshman-level science course material that we can never prove a negative. Only conservative Christian pseudoscience uses that premise, since we can't prove, for example, that God doesn't exist. We also can't prove there are no dinosaurs, either.
But in a real universe the laws of physics are different from those in a virtual, programmed one like ours.
I came here to say this, mostly.
I *know* that there are plenty of places in our software where I could spend an hour or two and rewrite an algorithm to run in 1/5th the time. And I don't care at all, because the cost is too low to measure and, usually, the performance bottlenecks are elsewhere.
Who really cares if I can get a loop to run in 800ns instead of 1500ns, when the real bottleneck is a complex SQL query 11 lines up that joins 11 tables together and takes 3 full seconds to run?
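To make that arithmetic concrete, here is a minimal Python sketch of the same comparison. The 3-second SQL join is simulated as a constant (no real database involved), and the loop/closed-form functions are hypothetical stand-ins for the kind of micro-optimization being discussed: even a thousand calls to the "slow" version cost a rounding error next to one query.

```python
import time

def slow_loop(n):
    # naive version: sum 0..n-1 with an explicit loop (the "1500ns" style code)
    total = 0
    for i in range(n):
        total += i
    return total

def fast_loop(n):
    # micro-optimized version: closed-form sum (the "800ns" style code)
    return n * (n - 1) // 2

# both versions compute the same result
assert slow_loop(1000) == fast_loop(1000)

# time a thousand calls of the slow version
start = time.perf_counter()
for _ in range(1000):
    slow_loop(1000)
loop_cost = time.perf_counter() - start

# stand-in for the 3-second, 11-table SQL join from the comment above
query_cost = 3.0

print(f"1000 slow-loop calls: {loop_cost:.4f}s vs. one query: {query_cost:.1f}s")
```

Profiling first (e.g. with `cProfile`) would point at the query long before it pointed at the loop, which is the whole argument: optimize the measured bottleneck, not the code that merely looks improvable.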
You are always doing something marginal when the boss drops by your desk.