I was talking about things like LibreOffice.
So I did a little digging. It looks like the first WYSIWYG functionality on a personal computer appeared with the Apple Lisa in 1983, and WYSIWYG quickly made its way into the GUI releases of Word and WordPerfect around 1985. Still, this is 1985 we're talking about. You're lamenting the lack of consumer-oriented open-source desktop applications at a time when any commercial enterprise trying to build and sell such a product was taking a huge risk. The reason Microsoft's mission statement was "a computer on every desktop" wasn't because the hardware was already there! Even Scott Adams mocked Microsoft for the motto in his 1995 book "The Dilbert Principle"; the concept was still seen as far-fetched even then.
KPlayer, KMPlayer, Kaffeine, Dragon Player.
Incidentally, VLC was just one example - there are plenty of others, like the ones I listed above, plus Totem and MPlayer.
Reference as many different projects as you like; they all sit on the shoulders of the same group of people who've gone to monumental efforts collecting samples, reverse engineering, documenting and reimplementing a massive variety of codecs. It doesn't matter what skin you put on the thing when the hardest part is the compression and decompression of undocumented multimedia formats.
GnuCash is admittedly there, but I suspect they're less interested in working w/ institutions like banks or brokerages than in just being a personal finance manager, which I guess is fine. However, I was thinking not so much about tax software but rather something like QuickBooks, which has no equivalent in Linux or BSD environments.
This is exactly what I thought you meant.
I agree that those monumental efforts would not have been easy. However, my argument in my previous post was that instead of spinning up a gazillion different Linux distros (and somewhat fewer BSD ones), people would have done better to pool their efforts toward making liberated application software.
Open source and free software isn't powered by some top-down, collaborative idea saying "hey, we need to all work together to do $x", it's powered by a bunch of people doing what interests them, for the reasons they're interested. There's no one, two or even three people who could say "everybody, pull in this direction" and get more than a few hundred people rallying behind them, and certainly not more than for a very limited period of time.
Also, look at the reasons the various distros exist. They're iterative technological improvements on previous packaging methodologies (Slackware in response to linux-from-scratch, RPMs in response to Slackware's tarballs, Apt in response to pre-yum RPMs, Portage in response to unoptimized and inflexible binary distributions), philosophical differences (Debian's DFSG as opposed to RH's more lenient policies, Ubuntu's pragmatism vs Debian's strict policies), role/niche-specific distros (FreeNAS, pfSense, netbook-targeted) or political splits (such as how OpenBSD split off from NetBSD because some of the core devs didn't get along with Theo, and he didn't want to jump through hoops to get work done, if I read the historical conversations right).
The core distros seem to continue on inertia and contributions from derivatives. Derivatives come and go as they experiment with basing themselves on different software...such as Mint's using Debian to build a rolling-release system.
This is how competition, invention and improvement work; you have to allow for things to break into many overlapping pieces if you want to see which ones work, which ones don't, and which ones beat the pants off of everything that came before.
I expect we're going to have to agree to disagree on this, because all of the solid intellectual reference on this is grounded in command economies vs free markets, and Hayek vs Keynes. All I can suggest is that you read a book or two by F. A. Hayek.
I'm going to summarize this here, and it's the last time I'll go to the trouble of specifically spelling it out: you cannot focus the open source community, because everyone in it works on what interests them, not on what some central entity suggests.
On the bloat you're talking about in Windows as a result of having to maintain compatibility, I'd argue that's a good thing. Admittedly, even MS broke compatibility going from XP to Vista, but if it had to happen, the right place to break it was the transition from 32-bit Windows to 64-bit Windows; as it is, apps had to be re-written for Windows 7 anyway. But back to the question on the unix side of things: having sophisticated applications that use libraries like Qt, GTK, glibc and so on break whenever the OS ships w/ changes to the userland is extremely disruptive.
Let me tell you a bit about myself. My day job is writing C++ code to run on Windows. Some of that is maintenance work for legacy applications whose earliest versions had them running on Windows 95. Some of that is ground-up writing of new programs. I've been doing this for five years, so, yeah, I've been around the block. Now, in my hobby time, I run Gentoo systems at home, administer a Debian system for my website, and do software dev on projects that strike my fancy.
Now let me tell you a little about the history of software dev on Windows, starting with Windows 3.1, since that's the code Vista really broke. Windows 3.1 supported an API called Win32s, direct (and mostly compatible) descendants of which are still the way you do native-code GUI application dev on Windows, up through Windows 7. If you wrote a program that did not do direct hardware access and ran under Windows 3.1, chances are it still worked fine up through Windows XP.
Windows Vista was not the first version of Windows to operate in 64-bit mode. There was a 64-bit version of Windows XP, but driver support was spotty. (Nobody was shipping XP systems with more than 3G of RAM, so nobody was shipping systems with XP 64-bit. Since nobody was shipping XP 64-bit, driver developers didn't need to build 64-bit versions of their drivers.) As far as I know, Windows 3.1 apps worked on this version of Windows.
If you wrote a 32-bit application to run on Windows 95, and you didn't do any fancy hardware access, chances are your application still works fine on Windows 7. It'll probably even work on x86-based (though not ARM-based) Windows 8 systems.
After all, even in the Linux world, distros don't usually maintain older versions except perhaps under LTS, and since the newer versions are 'free', they have a good reason not to. As a result, someone who's using, say, Mageia 1 today might decide, for the sake of security updates, to go to Mageia 2 (I just picked this distro @ random from distrowatch - use any that you feel like). In that transition, quite a few things change. GTK goes from 2.24.4 to 3.4.1. GLIBC goes from 2.12.1 to 2.14.1. GCC changes from 4.5.3 to 4.6.3. Qt goes from 4.7.3 to 4.8.1. The kernel goes from a 2.6.38-series release to 3.3.6. I see what you said above about most programs linking or compiling just fine, but to borrow a phrase you cited from our previous encounter, there are too many variables - too many moving parts, if you will. As a result, when they don't work, debugging them can be a bitch.
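One way to see just how many moving parts a single upgrade touches is to list the shared libraries a binary actually loads at runtime. A minimal sketch, assuming a glibc-based Linux system; /bin/ls is just a stand-in for whatever app you care about:

```shell
# List every shared library this binary pulls in at runtime. Each line is a
# dependency that a distro upgrade can silently bump out from under the app.
ldd /bin/ls
```

On a typical system that prints libc.so.6 and a handful of others; running `objdump -T` on the same binary even shows which versioned glibc symbols (GLIBC_2.x) it requires.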
This is a big driver of the popularity of rolling-release distributions like Gentoo, Arch and Linux Mint/Debian. Periodic atomic releases do indeed make porting software to subsequent versions of a distribution difficult, and a lot of work. That's why it's the responsibility of the package's maintainer within that distribution to ensure the package continues to work.
This is also why there are best practices in software engineering. You don't change your API unless there's either an immediate critical need, or unless you've given advance warning. That's why API developers have the word "deprecate", and that's why consumers of APIs should not use deprecated components.
Rolling-release distributions have the same issue, but it tends to happen in far less-overwhelming chunks.
And, yes, when there are many moving parts, things inevitably break. The trick is to only have moving parts where necessary and useful. Unfortunately, this means you can't have a "one size fits all" system, as what's necessary and useful in a desktop environment isn't necessarily the same in a desktop shipped over a terminal server, and certainly not the same as in a headless server. Yet, with Linux, all these disparate environments sit on more or less the same core.
If you don't like that, you can try to build a one-size-fits-all distribution. Many have [tried].
In this case, there are at least 4 such variables (I'm assuming that few apps will ever use GTK and Qt at the same time), which makes mere porting from one version to another a nightmare if each of them chooses to break compatibility in one aspect or another - all this despite all the source code being available.
Open-source isn't a panacea; it doesn't guarantee that things will continue to work. It makes understanding why they don't work easier. And, thus, it makes fixing things easier. In the proprietary software world, it's really not any easier. I don't even get debugging symbols for some of the crap components I have to bring into my address space.
For hardware, SSE continued to support the same instructions that MMX did, even if differently, so it's not like apps developed to take advantage of MMX no longer ran. It's one thing to argue that an app needs to be recompiled or re-linked to run optimally on a new platform. However, it's another thing to argue that an app needs to be recompiled to run at all on the new platform, if it happens to be an upgrade from the older one.
You missed the key point in my example: Let's say I have a library which needs to meet performance guarantees to users of my library. If the CPU changes to make things more expensive (while not raising an exception), then my performance guarantee is broken. This could be critical in real-time environments such as multimedia, machinery control or medicine. Yet the CPU manufacturer is free to make such changes, even if it blindsides the user of their product (as that particular case involving MMX did). (And machinery control and medical equipment manufacturers need to test the hell out of their platform, too.)
As far as linking goes, you can still run very, very old apps (at least, as long as they use ELF). Working around this kind of issue is what LD_PRELOAD is for; I've got a friend who managed to get a libc5 app running on a libc6 system this way. (Boy, was I impressed!)
Slotting is another mechanism distros use to get around version incompatibilities. Gentoo uses a slotting system so that apps which are known to be compatible with only a range of versions of a given library are linked against just those versions...and it will compile and install different versions of libraries side by side in order to do this. Works pretty well, actually. (Though sometimes you wonder why ye olde version of blah is installed, because it's forcing a dependency on stuff you really don't want on your system.)
Understand that the current "to the cloud" movement is the immediate-term solution to what you're describing. Everything is being made cross-platform by shoving it into one platform...that has implementations on every consumer-facing system out there. Now, I get the impression that you're a multimedia consumer and creator, and that that's one of your primary interests. I understand that shoving a crapton of raw video up to some cluster a thousand miles away isn't something that's going to work for you.
I also understand that using a binary blob from a proprietary vendor can be a royal pain in the ass. What you need to do in that case is use exactly the distribution and software version the vendor certifies the product with. Thankfully, this is a lot easier these days with things like libvirt, VMware and VirtualBox.
If you take the time to learn Gentoo, you may also be able to get it to work there; that slotting system I was talking about works wonders with things like Skype, Flash and other binary blobs.