I like your "Fight Club"-style math breakdown there, but much like the main character's insurance company deciding whether or not to issue a recall based on the average out-of-court settlement for the catastrophic failure of a car's safety feature, the OEMs are just doing what saves them the most hassle and returns the most money. The fact of the matter is that the vast majority of "average" computer users without specific computing needs (programming, design, etc.) will be just fine with Windows, and will never know what they are missing or that there is a life outside of Microsoft's corporate clutches. A concerted effort to spread a pro-Linux/free-OS message could help counteract this.
It doesn't surprise me that a problem like this has surfaced. As several posters have already pointed out, it's almost impossible to tell what kind of problems a prototype is going to have in the field under live conditions. None of us knows exactly what Microsoft's (or Apple's, or Google's, or whoever's) testing conditions are before they release a product. To be sure, a wide, varied testing protocol would ensure the best outcomes; however, these are giant corporations with lots of money, who also have to ensure a significant return on investment. It's likely that their testing methods sit at some balance point (possibly arbitrary) between cost and sample size. The flip side is that Microsoft's huge market share among home PC users (I still call them IBM-compatibles, but that has started drawing weird looks in public) may make them a bit blind when it comes to quality assurance. Maybe they'll learn their lesson over time, but I wouldn't hold my breath. They are a giant steel behemoth that will do whatever they are going to do. Vote with your wallets and pick whichever evil you think is least.