Should You Pre-Compile Binaries or Roll Your Own?

Jane Walker writes "The convenience of pre-compiled packages and the lure of maximum machine performance are two powerful, competing incentives for Windows admins moving to Linux: grab a binary package or compile an OSS package yourself." TechTarget has an article taking a look at some of the "why" behind rolling your own. What preferences have other Slashdot users developed, and why?
  • Gentoo? (Score:5, Informative)

    by corychristison ( 951993 ) on Tuesday March 14, 2006 @05:14PM (#14919156)
    I feel that Gentoo Linux [gentoo.org] offers the best of both worlds, with their ebuilds. :-)
    • Re:Gentoo? (Score:5, Informative)

      by MightyMartian ( 840721 ) on Tuesday March 14, 2006 @05:21PM (#14919205) Journal
      Actually, I'm becoming a very big fan of FreeBSD. Their ports and package systems give me the best of both worlds. Where I just need a relatively generic version of a piece of software, I just install the binary, but where I need a few more options (common with stuff like Samba and FreeRADIUS) I can compile.
      • Re:Gentoo? (Score:2, Insightful)

        Obviously, not everyone [gentoo.org] agrees with you...
      • When I started using ports, I was simply amazed. I'm not a real huge fan of *BSD yet (still beating the unholy crap out of chillispot as installed by ports), but I certainly have a huge amount of respect for the ease with which ports handles it all - and compiles it to taste.

        /P

      • by Noksagt ( 69097 )
        You can install source dpkgs and binary ebuilds too. (I also use FreeBSD, but think that portage has smoothed out installation from source & that dpkg is fine for installation from binaries.)
    • Re:Gentoo? (Score:5, Interesting)

      by stevey ( 64018 ) on Tuesday March 14, 2006 @05:26PM (#14919248) Homepage

      The story, and this comment, are almost certain to generate a flamefest, so I'll get in early.

      I'm a Debian user, and there are three things I know about gentoo:

      • The distro is based around compiling from source, which many suggest gives a huge speedup.
      • They have some neat tools for working with file merging conflicts in /etc, which sometimes happen when upgrading.
      • They make use of "USE" flags which can disable parts of programs you don't want/need.

      As for the first, I think that compiling from source may well give you a speedup. But when my computer is sitting with me at the desktop/ssh session, very few processes are running, and the network latency / my thinking time are most likely to be the biggest sources of delay.

      True, for heavily loaded servers the compilation might give you a boost, but I'd be surprised if it was significant.

      Next we have USE flags. These do strike me as an insanely useful thing. But I have one niggling little doubt: I suspect they only work for code that supports it. e.g. project foo has optional support for libbar. If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out.

      So the ability to remove things from the source must be neutered, right?

      Finally the merging of configuration files in /etc seems useful. But I wonder if this is the correct approach. My distribution of choice, Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better.

      Ultimately I guess there are pros and cons to source based distributions depending on your needs. But one thing is true: if you're building from source and making use of modified USE flags and compiler flags then chances are you're the only person on the planet with a particular setup - and that means bug reports are hard to manage.

      There's a great deal to be said for having a thousand machines running identical binaries when it comes to tracking down bugs. (Sure, diversity is good, especially for security, but there comes a point where maybe people take it a little bit too far.)

      ObDisclaimer: I'm happy to be educated about Gentoo, but be gentle with me, k?

      • Re:Gentoo? (Score:5, Informative)

        by EnigmaticSource ( 649695 ) on Tuesday March 14, 2006 @06:35PM (#14919893)

        Well, as a Gentoo user, I'll tell you my personal reasons for using portage (speed isn't one of them).


        1.) Maintainability: I don't have to fiddle with 30+ binary dependencies when I upgrade a package, nor do I worry about having multiple library versions within the same major release


        2.) Simplicity: Well, it's not particularly simple (in fact, until 2006.1 it was a nightmare) to set up, but once everything is in line I simply don't have to worry about the various `gotchas` of any given package; it's all been abstracted away


        3.) USE Flags: An extension of the above, USE is like a homogeneous ./configure, no more silly --without-some-foo flags, or include paths that I forgot about 30 seconds after I installed a library. It's not so much about making things optional (at least in the real world) but more about keeping things simple (I specify all of my USE flags at install time, and simply add them to my list when new ones are created)


        4.) Lack of Binary Packages: As an old slackware user, I got used to not finding package `foo` as a .tbz and having to deal with RPMs that are/were broken and took more time to install properly than to compile. By using a source based distribution, if I have a one-off or patched library I don't have to worry about whether feature X will work or why Sodipodi crashes, because whatever version I have is (within reason) now the native version to the application


        Hope that helps

      • Not to mention what happens when your nice shiny new system breaks and you can't run ls on your old system, because you compiled everything with -march=nocona. It's perfectly reasonable to recompile an app like mplayer for your CPU. But for most apps i586 is as fast or faster.
      • Re:Gentoo? (Score:4, Interesting)

        by autocracy ( 192714 ) <slashdot2007@sto ... .com minus berry> on Tuesday March 14, 2006 @06:37PM (#14919901) Homepage
        Having lived through the Linux From Scratch days, I can tell you that just about everything has USE flags and parts that can be disabled.
      • Re:Gentoo? (Score:4, Interesting)

        by tota ( 139982 ) on Tuesday March 14, 2006 @06:45PM (#14919965) Homepage
        I'll try to be gentle;)

        "The distro is based around compiling from source, which many suggest gives a huge speedup."
        It probably does, especially when building for specific architectures
        (like C3 or C3-2, etc.).
        "... but I'd be surprised if it was significant."
        Well, since you compile the compiler as well as everything else,
        it does accumulate...
        But point taken: in most cases it is not a reason in itself.

        USE flags: "I suspect they only work for code that supports it."
        "If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out."
        Actually, that's not true: the Gentoo devs do apply some very useful patches, including some that make it possible to *remove* unused features like you described. Better yet, these patches do make it upstream eventually, albeit at a slower pace (so the whole community benefits).

        Re: configuration files: "Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better"
        It is not that different, except maybe that Debian does not change as quickly as Gentoo.

        "you're the only person in the planet with a particular setup - that means bug reports are hard to manage."
        You would be surprised... Check out the Gentoo MLs, they are full of people ready to help, even if you try to use that tweaked package XYZ and get into difficulty.

        "thousand machines running identical binaries when it comes to tracking down bugs"
        Well, if that's what you are looking for, you still can with Gentoo:
        (as the parent poster noted) build binary packages on the build machine and deploy to all the others in binary form.
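        A minimal sketch of that build-host/target-host split, assuming a stock portage setup (the --buildpkg/--usepkg options and the default PKGDIR are standard; the host names are made up):

          # on the build host: compile as usual, but also write binary packages to $PKGDIR
          emerge --buildpkg openssl
          # ship the resulting .tbz2 files (default PKGDIR is /usr/portage/packages) to the targets
          rsync -a /usr/portage/packages/ target-host:/usr/portage/packages/
          # on each target host: install from the prebuilt package instead of compiling
          emerge --usepkg openssl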

        If you want to try it out, why not use UML to boot into it:
        http://uml.nagafix.co.uk/ [nagafix.co.uk]
        (images and kernels ready to use)
        • > It is not that different, except maybe that Debian does not change as quickly as Gentoo.

          It depends what kind of Debian you follow. Stable never changes, because it's stable. Testing is the best for a desktop system: not as "flaky" as unstable, but it gets updated very fast - most of the time, except Firefox 1.5, which is still not in testing. But with apt-pinning you can easily install one or two small packages from unstable in testing. It's similar to the Gentoo feature where you can tag which packages can come
        • Re:Gentoo? (Score:2, Insightful)

          by dickko ( 610386 )
          Well, since you compile the compiler as well as everything else.
          It does accumulate...


          Does a custom-compiled compiler create different binaries from a pre-packaged compiler? I was under the impression that it might compile the application faster, but that the resulting linked-up, ready-to-run binary is no different. So "it does accumulate" doesn't add up to me...

          Just nit-picking, sorry...
      • Re:Gentoo? (Score:5, Informative)

        by RedWizzard ( 192002 ) on Tuesday March 14, 2006 @06:47PM (#14919979)
        As for the first I think that compiling from source may well give you a speedup.
        Forget performance, it's a red herring. If you're considering looking at Gentoo for performance reasons you'll probably be disappointed. Increased performance is a minor side effect at best and more likely to be completely undetectable. Gentoo is about configurability and control, not performance.
        Next we have USE flags. These do strike me as an insanely useful thing. But I have one niggling little doubt: I suspect they only work for code that supports it. e.g. project foo has optional support for libbar. If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out.

        So the ability to remove things from the source must be neutered, right?

        You are right - generally USE flags rely on the upstream source having optional support for various features. So in theory it might be that very little can be removed from a given package. But in practice most OSS software is highly configurable at the source level, particularly if it is portable rather than Linux specific. The number of USE flags recognised by each package is highly variable. For example, mplayer recognises over 70 USE flags, while Firefox recognises 11.
        Finally the merging of configuration files in /etc seems useful. But I wonder if this is the correct approach. My distribution of choice, Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better.
        It puts control back into the hands of the user (which is basically the fundamental point of Gentoo). Consider a package that is adding a new config option. The Debian way will be to ignore that option, set a default for the option, and/or at best log a message that is easily missed (I don't know Debian so any inaccuracies are unintentional). Gentoo provides a complete default config that can be compared to the existing config. It allows the user to decide what changes should be made to the config. This is particularly important for packages which have extremely complex configurations (e.g. apache). There is often no way to automatically translate configuration between versions. The user really needs to look at it.
        • Re:Gentoo? (Score:2, Troll)

          by Tony Hoyle ( 11698 )
          I gave up on gentoo fundamentally because of its *lack* of control.

          The problem is the USE flags are global.. you can override them for an individual package but that doesn't get recorded anywhere - on the next emerge world it'll happily forget all your carefully crafted options and reinstall with its global defaults.

          The killer for me was lynx. Most distros have a minimal lynx that works in text mode. By default the Gentoo one is dependent on X, about a million fonts, etc. You can override that on the co
          • Re:Gentoo? (Score:5, Informative)

            by gorre ( 519164 ) on Tuesday March 14, 2006 @07:21PM (#14920269) Homepage
            The problem is the USE flags are global.
            Not true, you can put USE flags for specific packages in the /etc/portage/package.use file. You might want to check out the relevant section [gentoo.org] of the Gentoo handbook for more information.
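            For anyone who hasn't seen it, the file is just one package per line followed by its flags; the package and flag names below are only illustrative, not taken from a real tree:

              # /etc/portage/package.use -- per-package USE flags
              www-client/lynx   -X      # keep lynx text-only even if X is in the global USE
              mail-mta/ssmtp    ssl     # enable a flag for one package without making it global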
          • Apparently you never learned about /etc/portage/package.use, where you can specify USE flags on a per-package basis. I think this is 'relatively' new though, so it may not have been around if you tried Gentoo 'long ago.'
          • The problem is the USE flags are global.. you can override them for an individual package but that doesn't get recorded anywhere

            That's not true anymore. Gentoo maintains a set of package-specific files in /etc/portage to track individual flags and stable/unstable settings. package.use keeps package-specific flags while make.conf keeps global flags (and package flags if you like).

            You need to re-examine the portage system; it's grown a lot in the past while.
          • Re:Gentoo? (Score:3, Informative)

            by greginnj ( 891863 )
            Lots of people have mentioned /etc/portage/package.use already ... if you've had enough of config files, you should check out the 'porthole' app, which is a GUI frontend to emerge. It makes it very easy for you to see all and only those USE flags which are appropriate to each app, and you can set them for each emerge distinctly. I'm also trying 'kuroo', which is a similar KDE-based app, but I'm liking porthole better.
        • Re:Gentoo? (Score:5, Informative)

          by V. Mole ( 9567 ) on Tuesday March 14, 2006 @07:27PM (#14920322) Homepage

          The Debian way will be to ignore that option, set a default for the option, and/or at best log a message that is easily missed (I don't know Debian so any inaccuracies are unintentional). Gentoo provides a complete default config that can be compared to the existing config.

          Nope, that's not the "Debian way" (at least as I, long-time Debian user/developer, see it). Debian provides a default config file (or files). When the package is upgraded, if the distributed config file is changed (new option, or new value for old option), then one of two things happens:

          1. If the user has NOT changed their local version, just upgrade to the new distributed default. The assumption is if they were happy with the old default, they'll be happy with the new ones. This covers the vast majority of cases.
          2. If the user has changed their local version, offer them the chance to look at the diff, and then either overwrite, don't overwrite, or drop to a shell and deal with it by hand. If they choose not to overwrite, then the distributed default is left alongside the real config file for later perusal/integration.
          While there are a few obscure corners in the implementation, and individual developers can make mistakes, it mostly works pretty well. The net effect sounds pretty much like your description of the Gentoo way: don't overwrite local changes, and give them something to diff against.
      • I haven't noticed any particular speed advantage to things compiled locally; I suspect that anything where it actually makes a big difference (e.g., codecs) does runtime detection of the correct version anyway.

        The big advantage to compiling things locally is that the rules for which packages work together are based on source compatibility, not binary compatibility. This, in turn, means that you have a lot more flexibility in updating things, and this flexibility eliminates a lot of the "flag day, new stable
      • Re:Gentoo? (Score:4, Informative)

        by Theatetus ( 521747 ) on Tuesday March 14, 2006 @07:06PM (#14920145) Journal
        Next we have USE flags. These do strike me as an insanely useful thing. But I have one niggling little doubt: I suspect they only work for code that supports it. e.g. project foo has optional support for libbar. If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out.

        Yes and no. It's really more dependent on the Gentoo maintainer than on the upstream. Most "big" projects also include a patchset (generally small stuff like where config files go; sometimes big changes to the codebase). These will generally have fairly rich USE flags. And it's not simply disabling things; in some cases it's adding whole subsystems (like SASL for sendmail, or the postgres backend for named). But anyways, some maintainers will add a lot of USE flags to their ebuild and others won't.

        Finally the merging of configuration files in /etc seems useful. But I wonder if this is the correct approach. My distribution of choice, Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better.

        Part of it comes from the fact that /etc/foo.conf might be altered by both libfoo and gtk-foo. The utility dispatch-conf diffs the two packages' foo.confs to let you merge their conf files. I think the best thing Gentoo does in /etc is the utility rc-update. It's the most sane init/runlevel interface I know.
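        To give a feel for those two tools, here is roughly what day-to-day use looks like (the service names are just examples):

          rc-update add sshd default       # start sshd in the default runlevel
          rc-update del netmount default   # stop starting a service in that runlevel
          rc-update show                   # list which services belong to which runlevel
          dispatch-conf                    # after an upgrade: review, diff and merge pending /etc changes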

      • Gentoo vs. Debian (Score:4, Informative)

        by Noksagt ( 69097 ) on Tuesday March 14, 2006 @07:21PM (#14920273) Homepage
        I usually find the most interesting criticisms of Gentoo (and the most insightful questions about it) come from Debian users. Yours are certainly no less relevant than others I've read.

        I've used both Debian and Gentoo. I am now (mostly) using Gentoo and not Debian. I hope you might find my perspective helpful. (But it should also be stated I also use ports on FreeBSD, and I have come to the conclusion that source-based distros are easier for me to use.)

        As for the first, I think that compiling from source may well give you a speedup. But when my computer is sitting with me at the desktop/ssh session, very few processes are running, and the network latency / my thinking time are most likely to be the biggest sources of delay.

        How I'd love to have so much dead CPU time! If your computer's not doing anything for you, why bother having one? Truth be told, one can reap performance gains in more definitive ways than trying to have your compiler make different binaries. As you indicated, running few processes helps. As can swapping in a custom kernel and/or using a faster filesystem (both of which you can do on Deb fairly easily).

        True, for heavily loaded servers the compilation might give you a boost, but I'd be surprised if it was significant.

        I don't usually see a huge advantage, but it does depend on the app. For desktop users, app launch time is often significant. I do think using '-Os' to make smaller binaries (which get into memory faster) usually creates a noticeable benefit. And for workstation/server apps, every few percent for "faster" apps could be helpful to some people (but I agree that it is typically only a few percent). But just leaving apps open is often "good enough" for load time & perhaps there aren't many who really need the extra few percent.

        These do strike me as an insanely useful thing.

        Yes. Absolutely. Particularly when you have relatively common needs across all apps. Perhaps you want to run an X-less server? Or perhaps you want to have apps with only KDE/QT or only Gnome/GTK+ or what not. USE comes to the rescue.

        But I have one niggling little doubt: I suspect they only work for code that supports it. e.g. project foo has optional support for libbar. If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out.

        You are typically correct. But the thing is that foo more often than not will have optional support for some feature. But some gentoo ebuilds do, indeed, have USE flags that aren't just ./configure flags for some applications. For example, you can install xpdf with the 'nodrm' use flag, which applies a patch to cause xpdf to ignore drm restrictions. Indeed, for making custom ebuilds, USE flags prove to be quite useful: you can use them to test multiple patches without the need to apply a given patch to all installations & can easily check which features a certain app has (by checking which flags it was emerged with).
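        For the curious, a USE-conditional patch in an ebuild looks roughly like this; the fragment below is a hypothetical sketch in the portage style of the time, not the actual xpdf ebuild:

          # hypothetical xpdf ebuild fragment
          inherit eutils            # provides epatch
          IUSE="nodrm"

          src_unpack() {
              unpack ${A}
              cd "${S}"
              # apply the DRM-ignoring patch only when the user set USE="nodrm"
              use nodrm && epatch "${FILESDIR}/${P}-nodrm.patch"
          }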

        Finally the merging of configuration files in /etc seems useful. But I wonder if this is the correct approach. My distribution of choice, Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better.

        Gentoo's approach gives the user more choice. It preserves your old files by default. You can choose to replace the old config with a newer config or, more useful, merge (typically using sdiff) in changes between the old and new config. It doesn't choose what is best for you.

        Debian's defaults are normally sane. But not always.

        But one thing is true: If you're building from source and making use of modified USE flags and compiler flags then changes are you're the only person in the planet with a particular setup - that means

      • Re:Gentoo? (Score:4, Interesting)

        by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday March 14, 2006 @07:32PM (#14920369) Homepage Journal
        They make use of "USE" flags which can disable parts of programs you don't want/need.

        More importantly, they enable parts of programs you do want/need, even if not many other people do.

        For example, my desktop is one of the few *ix machines in my office, and our network is primarily based around Win2k3 and Active Directory. I really, really need Kerberos support in every package that supports it, and configuring 'USE="kerberos"' solves that problem.
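        Concretely, that is one line in /etc/make.conf plus a world update to pick it up (the extra flags besides kerberos are just illustrative):

          # /etc/make.conf
          USE="kerberos ldap -gnome"

          # rebuild anything whose USE flags changed
          emerge --update --newuse --deep world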

        This exact issue drove me away from Debian way back when. It made me choose between an old Kerberized OpenSSH, or a newer un-Kerberized version [debian.org] (as of today: ssh-krb5 3.8.1p1-10 from OpenBSD 3.5, released 2004-05-01, or ssh 1:4.2p1-7). Gentoo didn't make me choose, so that's what I went with.

        Gentoo isn't for everybody, but it has some features that I'd never give up. The ability to pick and choose obscure features that most other people won't need is high on that list.

      • Re:Gentoo? (Score:5, Interesting)

        by kbielefe ( 606566 ) * <karl.bielefeldt@gma[ ]com ['il.' in gap]> on Tuesday March 14, 2006 @07:33PM (#14920371)
        I use gentoo for a few different reasons, none of which have anything to do with eking out every last cycle from my machine:
        • Nearly universal support for stack smash protection which must be enabled at compile time.
        • Incremental updates. I update my system a little bit at a time instead of doing a major upgrade when the distro makes a new major release. I've used gentoo for a long time (5+ years), so I don't know how much of a problem this still is with other distros or not.
        • Better dependency handling. No problems with different packages being compiled by different compilers against different development library versions.
        • Not strongly married to either KDE or Gnome.
        • Multimedia is easier to get working than any other distro I've tried. decss, win32 codecs, etc.
        • Can stay bleeding edge where I want to, and extremely stable in other areas.
        • Easy to make small changes to source. I occasionally add a minor feature, change a default, fix a bug, or apply a security patch from a mailing list instead of waiting for the next release.
        • Easy to distribute to multiple machines. It's a snap to compile and test on one machine, then quickly install my custom binary package on many machines.
        • USE flags. Almost everywhere you could use --disable-feature on a manual configure, there is a USE flag for that feature. This is very useful both for enabling features that most distros wouldn't include and disabling features that most distros include by default. For example, when ALSA was still pretty new and usually not enabled by default, the alsa USE flag made migration much easier.
      • /etc updates: both need improvement. Debian automagically screwed up my exim4 config, and Gentoo keeps putting me into vi to edit binary files, and files I've made no modifications to.

        Sometimes I think they should consider using git (repository manager) to manage /etc; it would be nice to see some logical management with a full history of changes made by updates.

        Building things from source: some things are probably worth it. I always build a kernel optimised for the platform I'm using, and I'm tempted to build thi
      • I've used gentoo a lot at home and debian a lot at work.

        Debian really does demonstrate the problems inherent in letting someone else make decisions about what options and dependencies should exist for a piece of software.

        To see what I mean, say you have a freshly installed Debian box you want to monitor with Nagios.

        So, you want to install the nagios nrpe server on this machine.

        The Debian package for this is in two parts:

        1. nagios-nrpe-plugin

        These are the plugins that are actually used by an NRPE server. If you in
    • Re:Gentoo? (Score:2, Funny)

      by ameoba ( 173803 )
      If the site gets /.ed, check here [funroll-loops.org] for more information on the performance benefits of running Gentoo.
  • Eh? (Score:5, Insightful)

    by kassemi ( 872456 ) on Tuesday March 14, 2006 @05:21PM (#14919200) Homepage

    Was I the only one who found that this article didn't really shed too much light on whether or not you should compile your software from source?

    By the way, I know the benefits of compiling from source, but how this made slashdot, I don't know.

  • Other benefits (Score:5, Informative)

    by Rich0 ( 548339 ) on Tuesday March 14, 2006 @05:21PM (#14919202) Homepage
    I have a feeling that this will turn into a general debian vs gentoo style debate, and we all have our preferences.

    The big benefits of precompiling are that you don't need to support 1500 different sets of libraries in your development environments, and that the package will generally work right with minimum fuss.

    The big benefits of source-based distros are the ability to tailor packages to each install (ie the ability to compile certain features in or out), and to choose optimizations for each package (do you want -Os, -O2, -O3, or are you really daring -> -ffast-math?).

    There are some things that cut both ways - often a given package can be compiled using one or more different dependencies, and if you want this flexibility then source-based might work better. On the other hand, it also means that if you have 500 different users of your distro you have 495 different configurations and bugs that are hard to reproduce.

    As for me - I like source-based. However, if I had to build a linux email/browser box for a relative I'd probably use Debian stale...er...stable. The right tool for the right job.
    • From a maintenance standpoint, vanilla is better.

      From an optimization/customization standpoint, rolling your own is better.

      So, for the items that should be vanilla from company to company (DHCP / DNS / File serving / etc), I recommend using as vanilla a system as possible.

      In theory, there should be something that differentiates your company from all of the others. If this is an app or database or other computer related item, then it should be fully customized and fully documented. You want to squeeze every
    • Re:Other benefits (Score:3, Interesting)

      by Sentry21 ( 8183 )
      The big benefits of source-based distros are the ability to tailor packages to each install (ie the ability to compile certain features in or out), and to choose optimizations for each package (do you want -Os, -O2, -O3, or are you really daring -> -ffast-math?).

      In some circles (e.g. #mysql on Freenode) this is considered a Bad Thing. Users come in on Gentoo systems complaining about how 'Unstable' MySQL is. Did they compile from source? Yes. Did they compile from official source? Yes. What EXACTLY did they d
      • The result is that the user's CFLAGS, Gentoo's patches/defaults, and so on, end up with a binary that is quite a bit different from the stock MySQL install, and it's not terribly surprising to me that the only 'unstable' MySQL situations I've seen are on Gentoo (which is not to say others don't occur).

        How is this any different from SuSE, Fedora, Debian, etc? Hardly anyone has a "stock" MySQL with zero patches and just exactly the same libraries linked in as the developers do. One distro has a bug fix inclu
        • How is this any different from SuSE, Fedora, Debian, etc? Hardly anyone has a "stock" MySQL with zero patches and just exactly the same libraries linked in as the developers do. One distro has a bug fix included and another doesn't while a third installs to a non-standard location. Even worse, a fourth distro has the temerity not to use this week's libssl! What is a tyrannical developer to do?

          Well, first of all, with the distros, you're talking about 15-20 different binaries, not thousands. Second, the dis
  • Advantages to both (Score:5, Insightful)

    by EggyToast ( 858951 ) on Tuesday March 14, 2006 @05:21PM (#14919209) Homepage
    Sure, compiling your own sometimes results in a more efficient binary, but it's also a great way to make sure you have all the dependencies for whatever you're installing.

    Conversely, if programmers sufficiently document their binaries, that's not as much of a problem. URLs for other binaries, or instructions for checking/compiling dependencies, can speed up that process.

    Of course, binaries are a huge advantage to non-experts and beginners, who just want a program to be there. They don't care about maximizing efficiency, they care about getting their work/fun done.

    So really, it entirely depends on the application. For programs that are typically for the hardcore programmer/user crowd, source-only makes sense -- those people who use the program are going to know how to compile and check everything already. But for programs like, say, a CD burning program? I definitely think both should be provided, and installation for both should be well documented. Given how easy it is to provide a binary, even if it's "fat," there's no reason why a more popular program can't provide both versions.

  • by jrockway ( 229604 ) * <jon-nospam@jrock.us> on Tuesday March 14, 2006 @05:22PM (#14919214) Homepage Journal
    Recompiling software gets you almost nothing. Maybe 10% more performance, at the very maximum.

    There are special cases like when you want to use dynamic libraries instead of static (to save memory), or when there's a major architecture change (PPC -> x86 for Apple). In those cases you'll gain something.

    Another case is rewriting your program to use CPU-specific instructions, like Altivec or SSE3. That, in certain circumstances, will speed up your program.

    But if you're compiling OO.org or Mozilla because you think your 686 version will be 100% faster than the 386 version, you're wrong.
    • TEN PERCENT! (Score:5, Insightful)

      by LeonGeeste ( 917243 ) on Tuesday March 14, 2006 @05:25PM (#14919238) Journal
      In my field, ten percent is everything. If I could increase performance by ten percent, I'd get a 100% bonus for the year. My servers need to handle so much data and are always backlogged (and adding more on is expensive!). Don't trivialize ten percent.
      • Unless you use a very basic set of software, you would get more than that by micro-optimizing any given piece of software; you seem to imply that you need performance.

        For most of the users performance change under 10% doesn't matter.
      • Seeing these replies, I get the feeling that a lot of people don't understand how in the world 10% can really be a problem. Don't think of desktop apps; think of systems, think of grid computing, think of high performance apps. I can tell you that in finance, getting an answer 10% faster is extremely valuable. We're talking about getting complex computations done in milliseconds so you can see if that stock or option is worth trading and, if so, get the order out before your competitors.

        Or on the other end, think of a
      • In my field, ten percent is everything. If I could increase performance by ten percent, I'd get a 100% bonus for the year. My servers need to handle so much data and are always backlogged (and adding more on is expensive!). Don't trivialize ten percent.

        I hope that the point is that you cannot increase performance of your servers by 10% by recompiling your software, just decrease YOUR performance by 200% or more by attempting to do so.

        Recompiling someone elses software is usually foolish. You lose support, you
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday March 14, 2006 @06:47PM (#14919981)
      Comment removed based on user account deletion
      • Most packages I have seen on Linux distros are compiled with -O2 or -O3. It is highly unlikely that the various other switches provided by GCC will provide anything significant.

        Mostly because said switches are included when the main optimisation switches are provided.
      • by Spoke ( 6112 ) on Tuesday March 14, 2006 @09:44PM (#14921124)
        Most packages I have seen on Linux distros are compiled with -O2 or -O3. It is highly unlikely that the various other switches provided by GCC will provide anything significant.
        Besides the obvious -O parameters to gcc, specifying the arch of the platform (-march=i386 for example) can sometimes have a decent effect on performance. A lot of distributions compile for the most common platform, which usually means specifying -march=i386 -mtune=i686. That gets you binaries that run on any i386 or better, while tuning the code for i686 machines. If you're running an older processor or something like a VIA or AMD CPU, compiling with -march=c3-2 or -march=athlon64 or whatever specific CPU you're running can often provide a noticeable benefit, especially on newer versions of gcc.
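        In Gentoo that boils down to a couple of lines in /etc/make.conf; the -march value below is only an example and has to match the actual CPU:

          # /etc/make.conf
          CFLAGS="-O2 -march=athlon64 -pipe"
          CXXFLAGS="${CFLAGS}"
          # a generic distro binary, by contrast, is typically built with
          # something closer to: -O2 -march=i386 -mtune=i686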
    • Most programs may get an average speedup of 10%. But if the speedup happens in something critical you get a massive speed increase. I was using the resynth plugin in GIMP to remove some text and fill it in with the background texture (interpolated). The difference between running it on Gentoo Linux and the pre-compiled Windows version was over 4x. Depending on the image size it can easily run for hours. Now that's not typical, but on the other hand I didn't do anything special on linux to get this
  • by Sensible Clod ( 771142 ) on Tuesday March 14, 2006 @05:22PM (#14919215) Homepage
    It depends.

    If the time required to compile (plus trouble) is less than the time/trouble saved through performance gains (assuming you actually compile it correctly to get said gains), then compile. Else download and install.

    But then again, if you just have a lot of time on your hands, it doesn't really matter what you do.

  • I used to do that when I first got into Linux so many years ago, when a lot of hardware wasn't supported or you needed to add some special parameters. These days, with the latest distributions, I don't have to do anything special since everything is recognized and supported. In fact, I don't remember how to compile from source anymore. At 36, I must be getting old.
  • On my FreeBSD box I compile almost everything from ports. That way you have maximum control over the options and CFLAGS with which your app is compiled.

    It's as easy as 'cd /usr/ports/category/portdir; make install clean'.

    If you have a number of identical machines, you could compile and build packages on one machine, and then install them on all the others.
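    Something like the following, with the port name and package file as placeholders (make package and pkg_add are the standard ports/pkg tools; the exact paths depend on how PACKAGES is set):

      # on the build box: build the port and also create a binary package (.tbz)
      cd /usr/ports/net/samba3 && make package
      # copy the resulting package (the filename carries the version) to another box and install it
      scp /usr/ports/packages/All/samba-3.*.tbz otherhost:/tmp/
      ssh otherhost 'pkg_add /tmp/samba-3.*.tbz'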

    On Linux I think that Gentoo's portage comes closest.

    • No, first you do cd /usr/ports/sysutils/portupgrade; make install clean

      Then, all you need to do is run portinstall portname for each port you want to install.
    • On my FreeBSD box I compile almost everything from ports. That way you have maximum control over the options and CFLAGS with which your app is compiled.

      Whoopee. You've managed to take the worst aspect of FreeBSD (that fewer people use it) and combine it with the fact that nobody is QAing your CFLAGS but you.

      Thanks for putting untested configurations on the Internet!

      On Linux I think that Gentoo's portage comes closest.

      Portage is much more stupid than FreeBSD's ports. With FreeBSD's ports- just about everything is QA'
  • by vlad_petric ( 94134 ) on Tuesday March 14, 2006 @05:27PM (#14919261) Homepage
    It's true that gcc can generate P4-specific code, for instance, but in most cases and with generic compilation flags this will barely make a difference. I am personally not aware of a single mainstream linux app that does 5% better when optimized for P4 as opposed to generic 386 (I'm not including here apps that have hand-crafted assembly for a couple of instruction-set architectures, like, say, mplayer). I guess things might change slightly with gcc 4.* which has automatic vectorization, but in my experience automatically-vectorizable C code is very rare, unless written specifically that way.
    • I am personally not aware of a single mainstream linux app that does 5% better when optimized for P4 as opposed to generic 386

      Running "openssl speed" compiled with "-O3 -march=pentium4" gave about 3 times the performance of "-O" on my server. Being able to handle 3 times the number of SSL connections was certainly worth the 10 seconds required to put correct values in Gentoo's /etc/make.conf.

    • When generating code for a generic x86 CPU, gcc will replace constant multiplications by shift/add sequences. If you have an Athlon, then you have two multipliers and a single shifter. This means that a constant multiplication will be much quicker as a multiply instruction than a sequence of shifts and adds. This is just one example of when GCC can generate highly sub-optimal code, I'm sure there are many others. If you really need the performance, then carefully tweaking your compiler options can give
  • by eno2001 ( 527078 ) on Tuesday March 14, 2006 @05:32PM (#14919302) Homepage Journal
    Up until this past year I've been a big fan of using a very stripped down Redhat (then later Fedora) distro and doing my own custom compilations of things like OpenSSL, Apache, OpenSSH, DHCPD, BIND, Courier, MySQL, PHP, XFree86 and X.org, and of course the Kernel itself. The main reason? It "just works". When I originally started using Linux and used RPMs I was very annoyed with them. I think RPM sucks. I also think BSD ports suck too. I don't like using stuff on my machine that I didn't massage first. That's just the way I am.

    A co-worker introduced me to Gentoo late last year and I have to say I am very impressed. It's much faster than the optimizations I was using. Of course I didn't compile everything in RedHat or Fedora by hand. That's why Gentoo really rocks. You CAN hand compile everything from the ground up! I also used to use Linux From Scratch. And YES, I do use this stuff on production machines. You just can't get any better performance or security by doing it all yourself. The only reason to use package managers is if you are new to Linux or just don't want to learn much. But if you don't dig in, then you're at risk of thinking that something in your installation is broken, when it's not. I've seen many people throw up their hands saying, "I have to re-install my Linux box dammit!" when all they really needed to do was fix a symlink, edit a config file or downgrade something for compatibility reasons.

    For example, on a laptop at home I decided I wanted to use the Xdmx server from X.org, so I hand compiled X.org. After that, I kept having this problem where the acpid (tracks the laptop's battery life among other things) daemon wasn't starting and would produce an error message whenever I logged into Gnome. I dug around on the net for quite a while and finally found out that the version of X.org (a devel version, not a stable version) grabs ACPI if you are using the RedHat graphical boot screen. The fix? Stop the RHGB from running by removing the RHGB kernel option. I think a lot of people would have assumed they hosed their installation and reinstalled if that problem really bothered them. It's not hard to find solutions to most problems in Linux no matter how obscure. That's why only knowing how to use pre-compiled binaries is a detriment if you're serious about using Linux.
    • First you learn to use Linux. Then you learn how to set up a source based distribution. Then you learn not to. ;) I'm joking of course, but dismissing binary distributions as "not for serious users" is a bit extreme. After you've used Linux long enough, you stop caring about little details, and just want it to use it with a minimum of fuss.
      • by TheCarp ( 96830 ) * <sjc@NospAM.carpanet.net> on Tuesday March 14, 2006 @05:57PM (#14919560) Homepage
        Don't apologize, you hit the nail on the head.

        You learn not to.

        Yes, there are times when a source based distro is good. There are times when you NEED the level of control that it gives you.

        Most jobs, in most environments don't. In fact, most sysadmins that I have seen just don't have the resources to exert that much control, or put that much love and care into every system.... nor should they.

        I have said before that in specific cases (research computing clusters where you essentially have 3 machines, one of which is copied n times as "nodes" that can be reimaged from a new image pretty much at will - the one case where I have really seen it used to good effect) source based distros are great. Or for your hobby machine, or your laptop.

        As soon as you start talking less about "running linux" and more about "deploying services", the focus shifts. Source based distros are a management nightmare in any manner of large or heterogeneous environment.

        Frankly, the majority of systems that I have had responsibility for haven't even had a compiler on them, never mind a full blown development environment; usually not even library headers.

        Why? Because we don't want admins compiling code on just any box and tossing it in place. So, why make it easy for them to do so? Nothing should EVER EVER EVER be compiled on the production mail server, or the web server.... it should be compiled on the compiler machine, the dev box.

        When you start making distinctions between the roles of boxes like that, as you should do in any larger environment, then you start to see the benefits of a source based distro melt away, and the real power of package management come into full effect.

        Most linux users, the home user, will never see this. I know I didn't understand it until I had been doing it for a living for a few years.

        -Steve
    • by Raphael ( 18701 ) on Tuesday March 14, 2006 @06:13PM (#14919689) Homepage Journal
      You just can't get any better performance or security by doing it all yourself.

      The performance can be debated, but you have got the security argument backwards. If you use pre-packaged binaries, you can get security updates quickly and automatically because any responsible Linux distributor will provide updated packages in a timely manner. This is especially useful if you maintain a large number of machines and want to make sure that the latest security patches are quickly applied to all of them.

      On the other hand, compiling your own software requires you to pay attention to all security announcements and apply the security patches yourself. It may be possible to automate that to some extent (e.g., Gentoo provides some help for security patches), but not as easily as with the pre-packaged binaries.
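      On the binary side that automation is essentially free; the lines below show one common Debian setup (Gentoo's rough equivalent is glsa-check from gentoolkit, which matches installed packages against published GLSAs):

        # /etc/apt/sources.list -- pull fixes straight from the security archive
        deb http://security.debian.org/ stable/updates main

        # run nightly from cron (or use a helper such as cron-apt)
        apt-get update && apt-get -y upgrade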

      From a security point of view, I am glad that some others are doing most of the work for me. The issue of performance can be debated because there are many pros and cons, but security is usually better for the pre-packaged binaries. Note that I am talking about security from the point of view of applying the latest security patches and staying ahead of the script kiddies, not in terms of initial configuration of the system. And by the way, I am not against compiling software on my own because I compile several packages from sources (directly from cvs or svn, especially for the stuff that I contribute to). But then I am aware that the security may actually be weaker for those packages than for the ones that I can update automatically.

    • The only reason to use package managers is if you are new to Linux or just don't want to learn much.

      Or perhaps time? When my boss says "give me a moodle server" and I can build a box, install debian and moodle, and still have it for him that morning, he's a lot happier than me saying "oh wait, bash etc. is still compiling".
      • by digidave ( 259925 ) on Tuesday March 14, 2006 @06:38PM (#14919919)
        Exactly. I started using Debian because not only are the packages the best in the world, but it's easy to get things working. Now I'm beta testing VMWare Server because that makes it even easier. I created a few virtual machines (one LAMP, one Ruby on Rails/Lighty, one database-only, etc) and can have them running in less than ten minutes + the time it takes to do any specific configuration for whatever app goes on there, which is usually only a few minutes. The VMs are configured to auto-update themselves from Debian's repositories every night, so out of the box I just run apt-get to update from when I made the VM and it's all set to go.

        I used to compile every major package, back when I didn't know as much about Linux or being a sysadmin. Now that I know what I'm doing I have the confidence needed to use a binary package manager to its fullest.
    • I nearly hosed my system when I hand compiled a development version of an exotic X server, but thanks to my hands on approach to system administration, I was able to get out of that doozey after mere hours of searching. God knows what I would have done about it if I wasn't so happy to dig in and screw with random shit!

      What can I say that won't come off as a flame? Ah! At least nobody will accuse you of lying about being a Gentoo user!

      By the way, the hands on approach is possible in other ways. I use Slack

    • Comment removed based on user account deletion
  • deployment? (Score:5, Insightful)

    by TheCarp ( 96830 ) * <sjc@NospAM.carpanet.net> on Tuesday March 14, 2006 @05:39PM (#14919359) Homepage
    Do these guys even know what deployment means?

    They are not only wrong in their conclusion, but the article barely scratches the surface of the question.

    Put simply, compiling software on your own is fine for a one-off, your desktop, or your hobby machine... or if you either a) need the latest whizbang features (and maybe can put up with the latest gobang bugs), b) need really specific version control, or c) can't find a precompiled package with the right compile-time options set.

    Other than that, you should pretty much always use pre-built.

    Sure, you can build your entire system from scratch if you like, libc on up. Why? The performance increase will be so minor that you will have to run benchmarks if you even want to be able to tell there was a change. You will then have to recompile everything you ever decide to update.

    This strategy is a nightmare as the systems get more diverse and complex.

    It also has nothing to do with deployment. Deployment is what you do after you have decided what you want and made it work once. Deployment is when you go and put it on some number of machines to do real work.

    I would love to see them talk more about the differences in deployment. With precompiled packages from the OS vendor, a la Debian or Red Hat, it's easy. You use apt-get or rpm and off you go. Maybe you even have a Red Hat Network Satellite or a local apt repository to give yourself more control. Then you can easily inject local packages or control the stream (no, sorry, I am NOT ready to upgrade to the new release).

    But should you compile "most of the time"? Hell no!

    It is, in fact, the vast minority of software where you really need the latest features and/or special compile options. It's the vast minority of the time when the performance increase will even be perceptible.

    Why waste the time and CPU cycles? It takes over a day to get a full Gentoo desktop going, and for what? I can have Ubuntu installed and ready to go with a full desktop in maybe 2 hours.

    Let's take the common scenario: a new OpenSSL remote root exploit comes out. The email you read just basically said, in no uncertain terms, that half your network is now just waiting to be rooted by the next script kiddie who notices. That's, let's say, 50 machines.

    Now your job is to deploy a new openssl to all these machines.

    You could notice that the vulnerability came out in such a time frame that OS vendors like Debian were able to release fixes (this often happens; if not, they are usually out within a very reasonable time frame)... so you hit apt-get update && apt-get upgrade.

    Or maybe you just promote the package into your repository and let the automated updates deploy it for you. You go home, have a coffee, and stay ready to log in remotely if anything goes tits up.
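    Even without a local repository, pushing that one update to 50 boxes is a few lines of shell; the hosts file and the package name are assumptions about your setup:

      # hosts.txt: one hostname per line
      for h in $(cat hosts.txt); do
          ssh root@$h 'apt-get update && apt-get -y install openssl'
          ssh root@$h 'dpkg -l openssl | tail -n 1'   # confirm the fixed version landed
      done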

    Now if you are hand compiling what do you do? Compile, test. And then um.... ahh... scp the dir to 49 machines and ssh in and do a make install on each?

    How is this better than using a package manager again? Now you have dirs sitting around, and you have to hope that your compile box is sufficiently similar to the other boxes that you didn't just add a library requirement (say, because configure found that your dev box has libfuckmyshit.so installed and this neat new bit of openssl can make use of it).

    How about when a box crashes and burns and you now need to take your lovingly handcrafted box, rebuild it, and put all that lovingly compiled software back on it?

    Fuck all that... give me a good binary distro any day of the week. I will hand compile when I HAVE to... and not before.

    -Steve
    • I really don't think that anyone who builds things from source does it the way that you suggest (or if they do they deserve the headaches they get). Without going into why someone would choose to build from source (there are reasons peppered throughout this thread) let's look at a much more sane (and I would hope more common) way of going about it:

      Sysadmin wants to install/upgrade program.
      Sysadmin decides for whatever reason that this should be built from source.
      Sysadmin downloads source to testing mac
  • by Todd Knarr ( 15451 ) on Tuesday March 14, 2006 @05:43PM (#14919399) Homepage

    If you're compiling your own for performance reasons, don't bother. There are a few packages that can benefit from being compiled for specific hardware - the kernel and libc, for example, and things like math libraries that can use special instructions when compiled for specific hardware. For the most part, though, your apps aren't going to be limited by the instruction set; they'll be limited by things like graphics and disk I/O performance and available memory. IMHO if you're trying to squeeze the last 1% of performance out of a system, you should probably look at whether you need to just throw more hardware horsepower at the problem.

    The big benefits and drawbacks of custom-compiled vs. pre-built packages is in the dependencies. Pre-built packages don't require you to install development packages, compilers and all the cruft that goes along with a development environment. You can slap those packages in and go with a lot less installation, and you can be sure they're built with a reasonable selection of features. On the other hand, those pre-built packages come with dependencies. When you install a pre-built package you pretty much have to have the versions of all the libraries and other software it depends on. By contrast, when you compile your own packages they'll build against the versions of other software you already have, using all the same compiler options, and they'll probably auto-configure to not use stuff you don't have installed. This leads to a more consistent set of binaries, fewer situations where you need multiple versions of other packages installed (eg. having to install Python 2.3 for package X alongside Python 2.4 for package Y) and overall a cleaner package tree.

    Where the cut-off is depends on your situation. If there's only a few instances of dependency cruft, it may not be an issue. If you have a lot of dueling dependencies, it may be worth while to recompile to reduce the number of different versions of stuff you need. If you've got externally-dictated constraints (eg. only one version of OpenSSL is approved for your systems and everything must use it or not use SSL at all) then you may have no choice but to compile your own if you can't find a pre-built package that was built against the appropriate versions of libraries.

  • The article missed an aspect of the hardware issue. It got the part about rolling your own to optimize for your dual-core Pentium 4 rather than using a binary compiled for a 486, but not the part about getting it to work on totally different hardware, such as an Alpha or PowerPC.

    Another missed point is it's usually easier for the developers and hardware vendors. Easier to distribute source code than maintain dozens of binaries. Just another advantage of open source. Many projects have a few binaries for the most popular platforms, and source code for those who want to use something less common.

    Latest version and leading edge. I've been trying Gentoo. In most cases the extra performance isn't much, and isn't worth the hassle. And Gentoo defeats one of the attractions of roll-your-own: the leading edge. There is always a lag between when a project releases a new version and when a distributor can get around to turning the new version into a package. The article didn't mention that aspect either. If your distro is slow, you can bypass it and go straight to the source, or, if available, the source's binaries. Ubuntu Linux 5.10, for instance, is still stuck on Firefox 1.0.7. I'm not talking about checking out the very latest CVS source tree, just the latest release.

    • I've been trying Gentoo.

      Good. Though you didn't try hard enough.

      In most cases the extra performance isn't much, and isn't worth the hassle.

      Nonsense. Perhaps you didn't stress your CFLAGS enough. A 30% gain in X11 performance on a Pentium 4 is typical against generic 686. Don't forget to enable SSEn.

      And Gentoo defeats one of the attractions of roll your own: the leading edge.

      Nonsense. Did you hear about portage overlays? You can have zero-day release versions of anything of your own, plus the comfort of
  • Unless they don't work, are incompatible, are unavailable, or are for some other reason unsuitable. Then you compile your own.
  • by jimicus ( 737525 ) on Tuesday March 14, 2006 @06:19PM (#14919736)
    I use Gentoo - and I even use it on production servers.

    However, I don't consider performance to be a particularly big benefit. Once you've got gcc doing its basic -O3 or whatever, anything else is marginal at best.

    There are, however, two things I get out of it which are useful to me. These should not be ignored:

    • Choosing compilation options. Most packages have a number of options available at compile time. Generally with pre-compiled packages, you either get every single option compiled in OR you get "whatever options ${VENDOR} thinks are most appropriate". Gentoo provides USE flags as an intrinsic part of portage, which let you specify which options are compiled in (a sketch follows this list).
    • A vast number of regularly updated packages. Put simply, I can emerge almost anything, whereas with every other distribution I've used, sooner or later I come across a package I need that doesn't have an RPM or what have you, and I have to build it myself, complete with the dependency hell that can entail.
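    A sketch of the first point, with a couple of common USE flags (the flags and package chosen here are only examples):

      # /etc/make.conf -- switch optional features on or off globally
      USE="ssl ldap -kde -gnome"

      # Preview what would be built, and with which USE flags:
      emerge --pretend --verbose net-fs/samba

      # Then build and install it:
      emerge net-fs/samba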


    Of course, there are drawbacks:

    • Many commercial non-F/OSS packages aren't available as ebuilds, and their installers get confused by Gentoo's creative filesystem layout, which is not entirely LSB-compliant in places (particularly /etc/rcX.d).
    • Gentoo packages are declared "stable" more-or-less by fiat rather than by extensive regression testing, and it's not unknown for an upgrade to completely break things without having the good grace to announce it first. This isn't necessarily a problem if you test everything before you go live and you have contingencies in place - which you should be doing anyway. But it can be annoying.


    I guess it's just a case of "know the tool you're using".
  • One difference between compiling from source and using a precompiled binary that is often overlooked is that the precompiled binary has possibly been tested. In comparison, a binary you build yourself has never been tested by anyone, anywhere. There are so many possible variances between build environments that it's unlikely any two people will ever compile to the exact same binary.

    An unforgivable mistake in the sphere of commercial software is to test a DEBUG build and release an OPTIMIZED build. The b

  • Look, when I download a piece of software, I want to click "install", and start using it. I don't want to have to finish writing it before it will be useable on my computer.

    I don't know what the state of the art is for compiling code these days, but I know that when I download a program to use on my computer, I don't want to make a Computer Science project out of actually getting it to RUN on my computer. So if it /must/ be compiled to run, I shouldn't know anything about it happening - the compiler and t
  • Well, for the desktop I really do not care; I just apt-get all the crap, and it is a slow machine running X, mozilla and terminals. No reason to compile.

    Now on the servers, where there is an advantage to be had, I would compile apps, but not everything. Things like mysql and apache (which account for 90% of the processor time) I would compile if there is heavy load, but for the others I really do not care.

    Now BSD is a different devil; there you must compile when using ports, so I guess that is the way to go.
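    For reference, the ports routine is short (the port path below is just an example); FreeBSD also ships pre-built packages via pkg_add for those who would rather not compile:

      # Build and install from the ports tree:
      cd /usr/ports/databases/mysql50-server
      make install clean

      # Or grab the pre-built package instead:
      pkg_add -r mysql50-server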
  • by ThinkFr33ly ( 902481 ) on Tuesday March 14, 2006 @06:33PM (#14919873)
    ...is that you get the benefit of the optimization that can only occur when compiling for a specific machine without actually needing to do the work yourself.

    In platforms like .NET and Java the IL/byte code is JIT'ed before execution and can have these machine-specific optimizations inserted.

    No need to break out gcc or try and resolve dependencies. Ahh... I loved the managed world. :)
  • Stay true to your roots and just hope your intelligence gets above a 10. Home-rolled binaries give that extra kick. Please make it a little harder!
  • In my long time using Linux I have used several distros. One of them was Gentoo, and at the time I thought compiling everything from source was the answer to all the RPM dependency-hell questions.

    Well, sorry, no, never. Compiling everything makes absolutely NO sense at all, except for some special packages where you have to do a special setup.

    You waste so much time compiling (e.g. in Gentoo) and in the end you have more trouble with libraries, etc. I run Debian/testing on my workstation and on most of my servers and I only
  • It depends on what you're doing, what you need, why you're installing and what you're installing on. THAT is the only real answer - at least, in general.

    Specifically, if the default configuration is all you need, then go with the default. It'll make maintenance a lot easier and won't take up time doing customization that will never be utilized.

    If you need something fairly standard - or at least uniformly weird - and an installer is available, then Gentoo is an excellent halfway house. Because package intera

  • Machines are so fast today that for personal use I don't see any difference between custom-compiled code optimized for my CPU and precompiled code. Bear in mind that most precompiled binaries are already available for the major CPUs (P4, AMD, AMD64, etc.).

  • When I started using Linux I started with an old Slackware; some time later I started to compile everything myself and test different gcc options, and even patched some programs to work on a devfs-only system. About two years ago I decided that I have neither the time nor the will to keep doing it, so I switched to debian unstable, as I was used to always being on the edge. Now I run debian stable on every machine: the server, the desktop and the notebook. I have compiled, adjusting every option by hand, some package
  • On whether I have the 45 spare minutes required to compile my OS and applications [bell-labs.com] from scratch?
  • I like the middle ground of having a range of pre-built binaries optimised for common machine types, allowing the user to install the most appropriate one. This gives the performance benefits while reducing the number of permutations of the software that need supporting.

    The Moox builds (http://www.moox.ws/ [www.moox.ws]) of Firefox did exactly this, and gave a noticeable performance increase over the 'regular' mozilla.com pre-built binaries for Win32. Before the site went down, there were also benchmarks of his buil
  • by lawaetf1 ( 613291 ) on Tuesday March 14, 2006 @08:24PM (#14920711)
    Fine, the subject doesn't make complete sense... BUT doesn't compiling code with Intel's icc result in significantly better binaries than any flag you can throw at gcc? From http://www.intel.com/cd/ids/developer/asmo-na/eng/219902.htm [intel.com], MySQL claims a 20 percent performance improvement over gcc.

    I'm not saying we all have access to icc, but if someone wants to make a binary available, I'm more likely to use that than compile from source. Call me crazy. And I know someone will.
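    For what it's worth, the comparison usually boils down to something like this (the flag sets are illustrative for a Pentium 4-era machine, not a tuned recipe):

      # GNU compiler, tuned for the local CPU:
      gcc -O2 -march=pentium4 -msse2 -o bench bench.c

      # Intel compiler, with interprocedural optimization enabled:
      icc -O3 -ipo -o bench bench.c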
  • by martinde ( 137088 ) on Tuesday March 14, 2006 @08:59PM (#14920922) Homepage
    (Full disclosure, I've been using Debian since 1996. I'm backporting packages to stable as I write this, so I'm definitely not coming from a "one size fits all" perspective.)

    If there is one thing that appeals to me about Gentoo (as I understand it), it's the concept of meta-configuration at build time. Unfortunately, lots of options in packages get configured at BUILD TIME, so either the binary packages have those options on, or they don't. When this is the case, if the distro doesn't provide multiple binary packages with all of the useful configurations, then you end up having to build from source. (IMO, building from source means compiling from source packages whenever possible.)

    So I like the concept of saying in one place, "build for KDE", "build with ssl support", "build with ldap support", etc. Maybe someday everything will be runtime configurable, but until that day, I'll be wishing for that meta-level of configuration...

    Having said all of that, check out apt-source, apt-build, and their ilk if you're interested in "Debian from source". Debian binary packages solve about 97% of my problems so I'm not usually looking for "make world" kind of functionality.
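    For anyone who wants to try it, the stock apt tools already cover most of the "Debian from source" routine (the package name is just an example, and you need deb-src lines in sources.list):

      # Pull in everything needed to build the package:
      apt-get build-dep apache2

      # Fetch the source package, apply the Debian patches and build .debs:
      apt-get source --compile apache2

      # Install the result:
      dpkg -i apache2*.deb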

    Enough rambling for now.
  • by Killer Eye ( 3711 ) on Tuesday March 14, 2006 @11:10PM (#14921483)
    In my experience, it is often necessary to recompile from source simply to have more than one version of the same package available at once! Too many pre-built binaries assume they are the only version in the universe you could want in /usr/local/bin.

    For some packages a recompile is merely annoying: download, reconfigure with a new prefix, rebuild. For others, though, it can mean a horrible web of configuration options to point the build at numerous dependencies in special locations. This complexity can be really frustrating if all you want to do is relocate the tool so that two different versions can be installed.

    Pre-built binaries should assume by default that they'll go into a version-specific directory (say /opt/pkgname/1.0), and at the same time they should assume their dependencies can also be found there. The /usr/local hierarchy would remain, but as a hierarchy of references to specific versions of things. The /usr/local hierarchy would contain selected default versions, it would be used for efficient runtime linking (have "ld" search one "lib", not 20 different packages), and it would be targeted for dependencies that truly don't care about the version that is used.

    There are other details, of course...for example, it may matter what compiler you use, you may want 32-bit and 64-bit, etc. But the basic principle is still simple: have a standard package version tree on all Unix-like systems so you can "just download" binaries without conflicts, once and for all.
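    A minimal sketch of the versioned-prefix idea (the package name, version and tool name are invented for illustration):

      # Install each version under its own prefix:
      ./configure --prefix=/opt/pkgname/1.0
      make
      make install

      # Then point the default at whichever version should be active:
      ln -sf /opt/pkgname/1.0/bin/pkgtool /usr/local/bin/pkgtool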
  • FreeBSD (Score:5, Interesting)

    by vga_init ( 589198 ) on Wednesday March 15, 2006 @04:14AM (#14922563) Journal
    For years I lived in the world of FreeBSD.

    Not only had I built every package from source (using ports), I also took the trouble to rebuild the base system and kernel with a custom configuration and options.

    The benefits to some of this were obvious; the FreeBSD GENERIC kernel at the time seemed (to my eyes) to suffer a massive performance loss from its configuration. Anyone running FreeBSD *must* build at least a custom kernel, even if they use the binary distribution of everything else.
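    (For anyone who has not done it: a custom kernel on a FreeBSD 5.x/6.x-era system goes roughly like this, where MYKERNEL is whatever name you pick and the config is GENERIC with the unneeded drivers and options stripped out.)

      # Copy GENERIC and trim it down to your hardware:
      cd /usr/src/sys/i386/conf
      cp GENERIC MYKERNEL
      vi MYKERNEL

      # Build and install the trimmed kernel:
      cd /usr/src
      make buildkernel KERNCONF=MYKERNEL
      make installkernel KERNCONF=MYKERNEL
      reboot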

    It was a lot of effort. What did I get out of it? By the end it was one of the speediest systems I had used since the days of DOS. Most programs loaded faster than their pre-built equivalents (on older machines the differences were more glaringly obvious, such as the time it took to initialize X).

    One time I clocked my old machine, running a custom built FreeBSD installation, against the other computers in the house from power-on to a full desktop (after login).

    On my machine, the entire affair (BIOS, bootloader, bootstrapping, system loading, X, login, desktop environment (WindowMaker in this case)) cost a mere 45 seconds. My father's machine, which was in all respects a faster computer, loaded Windows 2000 in the course of perhaps two minutes. Also, I stopped timing after the desktop came up, but Windows does continue to load and fidget about for a good while after that. The extra time taken for it to settle down would have cost it another minute, but only because of all the crap my dad had set to load, which I don't blame Windows for.

    The kitchen computer also ran Windows 2000, but had a slimmer configuration, so it loaded in a little over a minute. FreeBSD, however, still beat them both badly.

    In light of my own experience, compiling from source can get you some rather wonderful results. However, I noticed that not all systems are created equal. While FreeBSD's GENERIC kernel was as slow as molasses, I find on Linux that the binary kernels that come with my distributions seem to load and operate just as fast as, if not faster than, my custom build of FreeBSD. On Linux I have used only binary packages, and the system overall "feels" just as fast, though some operations are a little slower (like loading emacs ;)).

    I appreciate the arguments presented by both camps, but I feel the need to point out that some are too quick to downplay the possible performance gains offered by custom builds, because they certainly exist. Sometimes they can be noticeably significant.

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...