jd's Journal: Are distros worth the headaches?

One of my (oft-repeated) complaints about standard distributions such as Gentoo, Debian, or Fedora Core is that I slaughter their package managers very quickly. I don't know if it's the combination of packages, the number of packages, the phase of the moon, or what, but I have yet to get even three months without having to do some serious manual remodelling of the package database to keep things going. By "keep things going", I literally mean just that. I have routinely pushed Gentoo (by doing nothing more than enabling some standard options and adding a few USE flags) to the point where it is completely incapable of building so much as a "Hello World" program, and have reduced Fedora Core to tears. That this is even possible on a modern distribution is shocking. Half the reason for moving away from the SLS and Slackware models was to eliminate conflicts and interdependency issues. Otherwise, there is zero advantage in an RPM over a binary tarfile. If anything, the tarfile has less overhead.
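For reference, the "serious manual remodelling" usually boils down to some variant of the following. This is a hedged sketch for an RPM-based system; /var/lib/rpm is the conventional database location on Fedora, and you would run it as root:

```shell
# Sketch: the usual triage when an RPM database wedges itself.
# Assumes an RPM-based distro with the database in /var/lib/rpm.

# 1. See how bad it is: verify installed packages against the database
#    (this often hangs or spews garbage when the database is corrupt).
rpm -Va > /tmp/rpm-verify.log 2>&1 || true

# 2. Clear stale Berkeley DB lock/region files left behind by an
#    interrupted transaction; these are safe to delete when rpm is not running.
rm -f /var/lib/rpm/__db.*

# 3. Rebuild the index files from the Packages database itself.
rpm --rebuilddb
```

Gentoo has its own version of the same ritual (fixpackages and a revdep-rebuild pass, roughly), but the point stands either way: it is the metadata, not the software, that broke.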

Next on my list of things to savagely maul is the content of distributions. People complain about distributions being too big, but that's because they're not organized. In the SLS days, if you didn't want a certain set of packages, you didn't download that set. It was really that simple. Slackware is still that way today, and it's a good system. If Fedora Core were the baseline system and nothing more, it would take one CD, not one DVD. If every trove category then took one or two more CDs, you could very easily pick and choose the sets that applied to your personal needs, rather than some totally generic set.

My mood is not helped by the fact that my Freshmeat account shows me to have bookmarked close to three hundred fairly common programs that (glancing at their records) appear to be extremely popular, yet do not exist on any of the three distributions I typically use. This is not good. Three hundred obscure programs I could understand. Three hundred extremely recent programs I could also understand - nobody would have had time to add them to the package collection. But some of these are almost as old as Freshmeat itself. In my book, that is more than enough time.

And what of the packages I have bookmarked that are in the distros? The distros can sometimes be many years out-of-date. When dependencies are as tightly coupled to particular versions as they generally are, a few weeks can be a long time. Four to five years is just not acceptable. In this line of work, four to five years is two entire generations of machine, an almost total re-write of the OS, and possibly an entire iteration of the programming language. Nobody can seriously believe that letting a package stagnate that long is remotely sensible, can they?

I'll finish up with my favorite gripe - tuning - but this time I'm going to attack kernel tuning. There almost isn't any. Linux supports all kinds of mechanisms for auto-tuning, either built-in or as third-party patches. And if you look at Fedora Core's SRPM for the kernel, it becomes obvious almost immediately that those guys are not afraid of patches or of playing with the configuration file. So why do I invariably end up adding patches to the set for network and process tuning, and re-crafting the config file to eliminate impossible options, strip out debug/trace code that should never be enabled on a production system (and should be reserved solely for the debug kernels they also provide), and clean up stuff that they could just as easily have probed for? (lspci isn't there as art deco. If a roll-your-own-kernel script isn't going to make use of the system information the kernel provides, what the hell is?)
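For what it's worth, later kernel trees did eventually grow exactly this kind of probing: `make localmodconfig` (which landed around 2.6.32, well after the Fedora Core era) reads the list of loaded modules and strips the .config down to the hardware actually present. A hedged sketch, assuming a recent kernel source tree unpacked in /usr/src/linux:

```shell
# Sketch: trim a kernel .config to the hardware this machine actually has.
# Assumes a 2.6.32+ source tree in /usr/src/linux, and that the drivers you
# care about are currently loaded (lsmod only sees what is in use right now).
lsmod > /tmp/mymodules            # snapshot the probed/loaded modules
cd /usr/src/linux
make LSMOD=/tmp/mymodules localmodconfig
# Every option not backing a loaded module is switched off, yielding a far
# smaller config than the distribution's catch-all one.
```

The caveat in the comments is real: plug-in hardware whose driver isn't loaded at snapshot time gets configured out, so take the snapshot with everything you own attached.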


Comments Filter:
  • Reminds me of this: http://www.dragonflybsd.org/goals/packages.shtml [dragonflybsd.org]

    I haven't followed DragonFly BSD lately, but last I checked they were looking to implement some interesting solutions to package hell. Maybe you've outgrown Linux?
     
    • by jd ( 1658 )
      If so, I'd miss some of the cool tech that Linux has that has never really made it to the *BSDs. But, then, I felt exactly the same when I moved from the Jolitz' 386BSD to Linux, when 386BSD simply didn't cut it for me. My loyalty isn't to a specific OS. My loyalty is to Total and Absolute Power And Dominion Over My PC. cue evil cackle and howls of manic, satanic laughter
      • I suppose it depends on whether you're looking to use this in a server environment or on the desktop. In the desktop arena, Linux no doubt has the lead. For servers, it's probably more of a religious preference, though I tend to go BSD as it seems to fit my way of thinking. But if it's total domination you want, I think it's time to break out the assembler. :)

  • If you need rock-solid stable package management, I think you might be better served by an enterprise Linux OS. I know you don't always get the latest packages, but you do get more stability, and less chance that the latest update will blow up your system or hose your package database.

    I use Fedora Core 6 on my desktop and I understand completely that I could install an update tomorrow that will completely hose my system. It comes with the territory. My servers run Red Hat Enterprise Linux 4 and I never ha
  • I am sometimes tempted to install the development tools and build everything else by hand. But what happens when I upgrade dynamic libraries? Would I have to rebuild the entire userland?
    • by jd ( 1658 )
      Provided the library versioning follows conventions, the ABI should not break for either minor or trivial version number changes. It might get added to, but backwards-compatibility should be preserved. Not everyone follows that, but it's so "normal" that if you do an ldd on a file you can see that it's the default assumption. The practical upshot is that if a specific ABI is needed, ldd will usually pluck the right shared library out of the filesystem, even if a newer version exists.

      For a roll-your-own sy

