I would like to disagree with the moderation of your comment: it is *not* funny, it is $&#*ing tragic. There was a problem ("every printer needs its own #!*& driver"), and there were at least two solutions, PostScript and PCL, both dating back to at least the 1980s. But unless you've got something fancy enough to be considered a network printer, odds are that the printer still needs its own #!*& driver. PostScript printers were not so common in the 1980s because interpreting PostScript was computationally expensive, and microprocessors and RAM were not cheap back then. But they *are* cheap now. So, let's recap:
- 1) We had a problem.
- 2) We found a technical solution 30 years ago.
- 3) We still have the same problem, and I have no idea why.
Those GConf XML files remind me an awful lot of the Windows registry...
I'd love to be able to transfer files that fast; I can't be the only one who misread the title.
So Microsoft claims 11% fewer users use the Start menu in Windows 7 vs. Vista. I'll believe that; if I assume everybody using Vista uses the Start menu, then they're only deprecating a feature used by 89% of all users.
The problem with, say, Ubuntu packaging a lot of open source software without contributing upstream is that Ubuntu doesn't just package the software unmodified; they make changes to add desired features, fix bugs, and get different pieces of software to integrate better. But since they don't contribute many of their changes upstream, the upstream developers will change the software without any regard for the downstream Ubuntu patches, so when the Ubuntu people want to pull in a new version of the upstream code, they also have to update their patches. As Ubuntu introduces more and more patches, they do more and more work maintaining their local branch of the software, until it gets to the point where they are expending as much effort as if they'd just written things themselves. For Ubuntu, this is most pronounced with GNOME: Canonical wasn't involved enough in GNOME development, so the GNOME guys went off and did their own thing while Canonical developed Unity. Now you have two frontends to GNOME, but the Ubuntu guys can't get certain upstream changes made, because upstream only cares about GNOME Shell, not Unity, which means more work for Ubuntu developers. It's true that if you don't need to patch anything, then there's really no incentive to contribute, but when you *are* going to make modifications, you're in the situation Zemlin's talking about: it's almost always in your self-interest to get the changes merged upstream. It's a little extra work in the short term that saves you effort in the long run.
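That rebase treadmill is easy to demonstrate with git. Here's a minimal sketch (repo names, paths, and identities are all hypothetical) of what pulling in a new upstream release looks like when you carry a local patch:

```shell
# Hypothetical demo: a downstream repo carrying a local patch on top of
# an upstream project. All names and paths are made up.
set -e
rm -rf /tmp/patchstack-demo && mkdir /tmp/patchstack-demo && cd /tmp/patchstack-demo

# "Upstream" project with one file.
git init -q -b main upstream
cd upstream
git config user.email demo@example.com && git config user.name Demo
echo "hello" > app.c
git add app.c && git commit -qm "upstream: initial release"
cd ..

# Downstream clone adds a local patch that is never sent upstream.
git clone -q upstream downstream
cd downstream
git config user.email demo@example.com && git config user.name Demo
echo "distro tweak" >> app.c
git commit -qam "downstream: integrate with our desktop"

# Upstream moves on without any regard for the downstream patch...
cd ../upstream
echo "new feature" >> app.c
git commit -qam "upstream: new release"

# ...so every new upstream release means re-applying (and often
# re-fixing) the whole local patch stack by hand.
cd ../downstream
git fetch -q origin
git rebase origin/main || { echo "conflict: downstream patch must be reworked"; git rebase --abort; }
```

Every new upstream release repeats that last step, and each conflict is downstream-only work that merging the patch upstream would have avoided.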
13.0.782.107 (Developer Build 0 Linux) doesn't do that, although this is "chromium" not the branded "chrome".
Despite massive code churn, Chrome's UI has been pretty much static, at least for as long as it's had a Linux port. I think they're on to something: once people get used to a browser (or any program, for that matter), they don't want to relearn the interface after every update; they just want the damn thing to work.
That's linked in TFA, so apparently it didn't work in this case.
I sincerely doubt XFCE will ever get a major UI overhaul like GNOME and KDE periodically do; for one thing, I don't think they have enough developers to even want to try.
In my experience, that rate of hardware failure on Apple hardware is pretty close to the rate of failure on non-Apple PC hardware.
I imagine you could probably get someone to give you a Pentium-based computer for free, and it may indeed perform well enough for some basic, low-load server tasks, but I think most people would prefer to put processors designed sometime in the last decade in their servers. The Mac mini is a great little machine, and it's not even over-priced if you *want* something that small, but it's kind of limited in its I/O capabilities. You can put two hard drives in the latest revision, as long as you're OK with losing the optical drive, and then you can spend the better part of a day trying to take one out and replace it if it fails. I suppose other people may have different opinions, but if I ran a server for my business, I'd want the hardware maintenance to be dead simple. In fact, I'd want hot-swappable drives, so I don't even have to take the machine down to do a drive replacement. And I'd want at least the *option* to install four or more SATA/eSATA drives. The "mini server" runs $999 currently; for that price, I'm sure you could build or buy a much bigger but much more convenient-to-work-on tower, with as powerful a processor and significantly expanded I/O options, which probably also consumes a bit more power while performing slightly better. Then install CentOS and you're good to go.
(B) is untrue for an OEM license of any operating system, AFAIK.
Indeed, if you choose not to use Dropbox, Unison is almost certainly the proper choice for the job.
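For what it's worth, a minimal Unison profile sketch (paths and hostname are made up) that two-way syncs a local directory with a remote copy over SSH; save it as ~/.unison/docs.prf and run `unison docs`:

```
# ~/.unison/docs.prf -- hypothetical example profile
root = /home/alice/Documents
root = ssh://example.com//home/alice/Documents

# Take the default action for non-conflicting changes automatically;
# Unison still prompts when both sides changed the same file.
auto = true
```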
Aren't iptables and ipchains both Linux-only? Doesn't NetBSD use pf, like OpenBSD?
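For a sense of how different the two syntaxes are, here's roughly the same "allow inbound SSH" rule in each (the interface name em0 is hypothetical):

```
# Linux iptables: accept inbound TCP to port 22
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# pf (pf.conf): the equivalent pass rule
pass in on em0 proto tcp to port 22
```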