This isn't a very good analogy, unless you're going to constrain it to Free/open-source software.
In proprietary software, new versions come out every now and then that both remove useful features and add new features of questionable value. This happens not because people found flaws or bugs, or because people really needed new features, but because the company behind the software wanted to make more money by selling customers something they already had, and because the people writing the software needed to justify their jobs. So we get crap like Windows 8/Metro. We get newer software that has new bugs which weren't present in the older versions, that runs slower, does less, is uglier, and has a worse user interface.
It's not confined to proprietary software either. Just look at Gnome3. People were perfectly happy with Gnome2, but the developers tossed it out and created something totally new and different (and incompatible) just because they wanted to; maybe they had nothing better to do with their time, maybe they wanted to justify their existence.
Your statements work for lower-level open-source projects like the Linux kernel and the Linux init systems (some people didn't think sysvinit had the features they needed, so they created upstart; others thought that was buggy and badly architected, so they created systemd; and so on). But for user-facing software, there are frequently completely different, and not so utilitarian, dynamics at work.