In the previous era of console games, before they started supporting patches, games were treated much more like hardware in this sense: once you made the gold master and started printing copies, you couldn't change it. When you compare PC games of that era with console games, the rate of crashes and bugs was much higher in PC games. This was partly, of course, because they had to run on a zillion configurations and depend on buggy device drivers, but also because the console makers had fairly rigorous submission testing requirements and could hold up a game from shipping if it didn't meet them. (In the case of first-party games there is an inherent conflict of interest, but typically the approval process was fairly independent of the console makers' publishing arms.) By contrast, PC games (like other PC software) have absolutely no oversight, so developers and publishers do however much or little quality control they feel like.
The PC approach is to let the market reward or punish software companies for how buggy their products are. Unfortunately, so much software uses various means of lock-in to prevent users from switching that the focus ends up being on getting users to pay for upgrades, sometimes merely to fix things that shouldn't have been broken in the first place. Even that incentive is being eroded as more software packages try to force users to upgrade constantly (e.g., by making different versions non-interoperable while no longer selling older versions, so that adding new seats at a company may force you to upgrade the whole office).
For software used in high-risk situations (e.g., medical software, avionics, space flight), the penalty for failure is high enough that people are willing to pay (and wait) for extensive quality control. For most other software, this is not the case.