Should You Pre-Compile Binaries or Roll Your Own?
Jane Walker writes "The completion of pre-compiled packages and maximizing machine performance are two powerful incentives for Windows admins to use Linux and compile an OSS package." TechTarget has an article taking a look at some of the "why" behind rolling your own. What preferences have other Slashdot users developed, and why?
Gentoo? (Score:5, Informative)
Re:Gentoo? (Score:5, Informative)
Re:Gentoo? (Score:2, Insightful)
Agreed on FreeBSD (Score:2)
But (Score:2)
Re:Gentoo? (Score:5, Interesting)
This story, and its comments, are almost certain to generate a flamefest. So I'll get in early.
I'm a Debian user, and there are three things I know about Gentoo:
As for the first, I think that compiling from source may well give you a speedup. But when I'm sitting at the desktop or in an ssh session, very few processes are running, and network latency / my thinking time are most likely the biggest sources of delay.
True, for heavily loaded servers the compilation might give you a boost, but I'd be surprised if it was significant.
Next we have USE flags. These do strike me as an insanely useful thing, but I have one niggling little doubt: I suspect they only work for code that supports them. E.g. project foo has optional support for libbar. If the upstream/original code doesn't have a feature marked as optional, I don't imagine the Gentoo people would rework it to strip it out.
So the ability to remove things from the source must be neutered, right?
Finally the merging of configuration files in /etc seems useful. But I wonder if this is the correct approach. My distribution of choice, Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better.
Ultimately I guess there are pros and cons to source-based distributions depending on your needs. But one thing is true: if you're building from source and making use of modified USE flags and compiler flags, then chances are you're the only person on the planet with a particular setup - that means bug reports are hard to manage.
There's a great deal to be said for having a thousand machines running identical binaries when it comes to tracking down bugs. (Sure, diversity is good, especially for security, but there comes a point where maybe people take it a little bit too far.)
ObDisclaimer: I'm happy to be educated about Gentoo, but be gentle with me, k?
Re:Gentoo? (Score:5, Informative)
Well, as a Gentoo user, I'll tell you my personal reasons for using portage (speed isn't one of them).
1.) Maintainability: I don't have to fiddle with 30+ binary dependencies when I upgrade a package, nor do I worry about having multiple library versions within the same major release
2.) Simplicity: Well, it's not particularly simple (in fact, until 2006.1 it was a nightmare) to set up, but once everything is in line I simply don't have to worry about the various `gotchas` of any given package; it's all been abstracted away
3.) USE Flags: An extension of the above, USE is like a homogeneous ./configure, no more silly --without-some-foo flags, or include paths that I forgot about 30 seconds after I installed a library. It's not so much about making things optional (at least in the real world) but more about keeping things simple (I specify all of my USE flags at install time, and simply add them to my list when new ones are created)
4.) Lack of Binary Packages: As an old Slackware user, I got used to not finding package `foo` as a .tbz and having to deal with RPMs that were broken and took more time to install properly than to compile. By using a source-based distribution, if I have a one-off or patched library I don't have to worry about whether feature X will work or why Sodipodi crashes, because whatever version I have is (within reason) now the native version for the application
Hope that helps
Re:Gentoo? (Score:5, Interesting)
Only if it breaks api compatibility with the previous version. Otherwise, that's what dynamic linking is for, isn't it?
Right on
Personally, I think the big benefits of running gentoo over debian are things like
On the other hand, I'd say on a P4 3GHz desktop system with a very large software set, I'm probably averaging 2-3 hours a week of compiling for various updates; my Debian and FC4 boxes spend more like 5-10 minutes a week downloading and unpacking them. But, if you're halfway decent at scheduling and don't have constant insanely-high demand everywhere, I'd say that update time isn't even a particularly big deal (after all, it's mostly non-interactive
Re:Gentoo? (Score:2)
Re:Gentoo? (Score:4, Interesting)
Re:Gentoo? (Score:4, Interesting)
"The distro is based around compiling from source, which many suggest gives a huge speedup."
It probably does, especially when building for specific architectures (like C3 or C3-2, etc.).
"... but I'd be suprised if it was significant."
Well, since you compile the compiler as well as everything else.
It does accumulate...
But point taken, in most cases it is not a reason in itself.
USE flags: "I suspect they only work for code that support"
"If the upstream/original code doesn't have a feature marked as optional I don't imagine the Gentoo people would rework it to strip it out."
Actually, that's not true: the Gentoo devs do apply some very useful patches, including some that make it possible to *remove* unused features like you described. Better yet, these patches do make it upstream eventually, albeit at a slower pace (so the whole community benefits)
Re: configuration files: "Debian, already does its utmost to preserve all configuration file changes automagically. I find it hard to understand what Gentoo does differently which makes it better"
It is not that different, except maybe that Debian does not change as quickly as Gentoo.
"you're the only person in the planet with a particular setup - that means bug reports are hard to manage."
You would be surprised.... Check out the Gentoo ML, they are full of people ready to help, even you try to use that tweaked package XYZ and get into difficulty.
"thousand machines running identical binaries when it comes to tracking down bugs"
Well, if that's what you are looking for, you still can with Gentoo:
(as the parent poster noted) build binary packages on the build machine and deploy to all the others in binary form.
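For anyone curious, the rough workflow looks something like this (the package atom and the binhost URL are just placeholders):

    # On the build box: build and also keep binary packages (.tbz2)
    emerge --buildpkg net-misc/openssh
    # packages land in $PKGDIR (normally /usr/portage/packages)

    # On each client: point PORTAGE_BINHOST in /etc/make.conf at the build box,
    # then install from binaries only - no compiler needed on the client
    emerge --getbinpkg --usepkgonly net-misc/openssh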
If you want to try it out, why not use UML to boot into it:
http://uml.nagafix.co.uk/ [nagafix.co.uk]
(images and kernels ready to use)
Re:Gentoo? (Score:2)
It depends what kind of Debian you follow. Stable never changes, because it's stable. Testing is the best for a desktop system: not as "flaky" as unstable, but it gets updated very fast - most of the time, except Firefox 1.5, which is still not in testing. But with apt-pinning you can easily install one or two small packages from unstable on a testing system. It's similar to the Gentoo feature where you can tag which packages can come
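If it helps, a minimal apt-pinning sketch (firefox here is just an example package): add an unstable line to sources.list, then put something like this in /etc/apt/preferences:

    Explanation: keep testing as the default
    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Explanation: allow unstable, but only when explicitly asked for
    Package: *
    Pin: release a=unstable
    Pin-Priority: 300

After that, 'apt-get -t unstable install firefox' pulls just that one package (plus whatever it strictly needs) from unstable while everything else stays on testing.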
Re:Gentoo? (Score:2, Insightful)
Does a custom-compiled compiler create different binaries from a pre-packaged compiler? I was under the impression that it might compile the application faster, but that the resulting linked-up, ready-to-run binary is no different. So "it does accumulate" doesn't add up to me...
Just nit-picking, sorry...
Re:Gentoo? (Score:5, Informative)
Re:Gentoo? (Score:2, Troll)
The problem is the USE flags are global. You can override them for an individual package, but that doesn't get recorded anywhere - on the next emerge world it'll happily forget all your carefully crafted options and reinstall with its global defaults.
The killer for me was lynx. Most distros have a minimal lynx that works in text mode. By default the Gentoo one is dependent on X, about a million fonts, etc. You can override that on the co
Re:Gentoo? (Score:5, Informative)
Re:Gentoo? (Score:2)
Re:Gentoo? (Score:2)
That's not true anymore. Gentoo maintains a set of package-specific files to track individual flags and stable/unstable settings in
You need to re-examine the portage system; it's grown a lot in the past while.
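For reference, those files live under /etc/portage/ - a rough example (the atoms and flags shown are illustrative):

    # /etc/portage/package.use - per-package USE overrides that survive emerge world
    app-editors/vim    -X
    net-misc/openssh   kerberos

    # /etc/portage/package.keywords - per-package stable/unstable (~arch) choices
    www-client/mozilla-firefox ~x86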
Re:Gentoo? (Score:3, Informative)
Re:Gentoo? (Score:5, Informative)
The Debian way will be to ignore that option, set a default for the option, and/or at best log a message that is easily missed (I don't know Debian so any inaccuracies are unintentional). Gentoo provides a complete default config that can be compared to the existing config.
Nope, that's not the "Debian way" (at least as I, long-time Debian user/developer, see it). Debian provides a default config file (or files). When the package is upgraded, if the distributed config file is changed (new option, or new value for old option), then one of two things happens:
Re:Gentoo? (Score:2)
The big advantage to compiling things locally is that the rules for which packages work together are based on source compatibility, not binary compatibility. This, in turn, means that you have a lot more flexibility in updating things, and this flexibility eliminates a lot of the "flag day, new stable
Re:Gentoo? (Score:4, Informative)
Yes and no. It's really more dependent on the Gentoo maintainer than on the upstream. Most "big" projects also include a patchset (generally small stuff like where config files go; sometimes big changes to the codebase). These will generally have fairly rich USE flags. And it's not simply disabling things; in some cases it's adding whole subsystems (like SASL for sendmail, or the postgres backend for named). But anyways, some maintainers will add a lot of USE flags to their ebuild and others won't.
Part of it comes from the fact that /etc/foo.conf might be altered by both libfoo and gtk-foo. The utility dispatch-conf diffs the two packages' foo.confs to let you merge their conf files. I think the best thing Gentoo does in /etc is the utility rc-update. It's the most sane init/runlevel interface I know.
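For those who haven't seen it, rc-update usage is roughly this (the service names are just examples):

    rc-update show                 # list services and the runlevels they belong to
    rc-update add sshd default     # start sshd in the default runlevel
    rc-update del apache2 default  # stop managing apache2 in that runlevel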
Gentoo vs. Debian (Score:4, Informative)
I've used both Debian and Gentoo. I am now (mostly) using Gentoo and not Debian. I hope you might find my perspective helpful. (But it should also be stated I also use ports on FreeBSD, and I have come to the conclusion that source-based distros are easier for me to use.)
How I'd love to have so much dead CPU time! If your computer's not doing anything for you, why bother having one? Truth be told, one can reap performance gains in more definitive ways than trying to have your compiler make different binaries. As you indicated, running few processes helps. As can swapping in a custom kernel and/or using a faster filesystem (both of which you can do on Deb fairly easily).
I don't usually see a huge advantage, but it does depend on the app. For desktop users, app launch time is often significant. I do think using '-Os' to make smaller binaries (which get into memory faster) usually creates a noticeable benefit. And for workstation/server apps, every few percent of "faster" could be helpful to some people (but I agree that it is typically only a few percent). But just leaving apps open is often "good enough" for load time, and perhaps there aren't many who really need the extra few percent.
Yes. Absolutely. Particularly when you have relatively common needs across all apps. Perhaps you want to run an X-less server? Or perhaps you want to have apps with only KDE/QT or only Gnome/GTK+ or what not. USE comes to the rescue.
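As a rough sketch of what that looks like in /etc/make.conf (the exact flag list is illustrative):

    # headless server: strip GUI support globally
    USE="-X -gtk -gnome -qt -kde -arts ssl"

    # after changing USE, rebuild whatever is affected by the new flags
    emerge --update --deep --newuse world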
You are typically correct. But the thing is that foo will more often than not have optional support for some feature. And some Gentoo ebuilds do, indeed, have USE flags that aren't just ./configure flags. For example, you can install xpdf with the 'nodrm' USE flag, which applies a patch to make xpdf ignore DRM restrictions. Indeed, for making custom ebuilds, USE flags prove to be quite useful: you can use them to test multiple patches without having to apply a given patch to all installations, and you can easily check which features a certain app has (by checking which flags it was emerged with).
Gentoo's approach gives the user more choice. It preserves your old files by default. You can choose to replace the old config with the newer config or, more usefully, merge (typically using sdiff) the changes between the old and new config. It doesn't choose what is best for you.
Debian's defaults are normally sane. But not always.
Re:Gentoo? (Score:4, Interesting)
More importantly, they enable parts of programs you do want/need, even if not many other people do.
For example, my desktop is one of the few *ix machines in my office, and our network is primarily based around Win2k3 and Active Directory. I really, really need Kerberos support in every package that supports it, and configuring 'USE="kerberos"' solves that problem.
This exact issue drove me away from Debian way back when. It made me choose between an old Kerberized OpenSSH, or a newer un-Kerberized version [debian.org] (as of today: ssh-krb5 3.8.1p1-10 from OpenBSD 3.5, released 2004-05-01, or ssh 1:4.2p1-7). Gentoo didn't make me choose, so that's what I went with.
Gentoo isn't for everybody, but it has some features that I'd never give up. The ability to pick and choose obscure features that most other people won't need is high on that list.
Re:Gentoo? (Score:5, Interesting)
Re:Gentoo? (Score:2)
Sometimes I think they should consider using git (repository manager) to manage
Building things from source, some things are probably worth it. I always build a kernel optimised for the platform I'm using, and I'm tempted to build thi
Debian! (Score:2)
Debian really does demonstrate the problems inherent in letting someone else make decisions about what options and dependencies should exist for a piece of software.
To see what I mean, say you have a freshly installed Debian box you want to monitor with Nagios.
So, you want to install the nagios nrpe server on this machine.
The Debian package for this is in two parts:
1. nagios-nrpe-plugin
These are the plugins that are actually used by an nrpe server. If you in
Re:Gentoo? (Score:2)
Re:Gentoo? (Score:2, Funny)
Eh? (Score:5, Insightful)
Was I the only one who found that this article didn't really shed too much light on whether or not you should compile your software from source?
By the way, I know the benefits of compiling from source, but how this made slashdot, I don't know.
Other benefits (Score:5, Informative)
The big benefits of precompiling are that you don't need to support 1500 different sets of libraries in your development environments, and that the package will generally work right with minimum fuss.
The big benefits of source-based distros are the ability to tailor packages to each install (i.e. the ability to compile certain features in or out) and to choose optimizations for each package (do you want -Os, -O2, -O3, or, if you're really daring, -ffast-math?).
There are some things that cut both ways - often a given package can be compiled against one or more different dependencies, and if you want this flexibility then source-based might work better. On the other hand, it also means that if you have 500 different users of your distro you have 495 different configurations and bugs that are hard to reproduce.
As for me - I like source-based. However, if I had to build a linux email/browser box for a relative I'd probably use Debian stale...er...stable. The right tool for the right job.
Custom for critical, vanilla for infrastructure. (Score:2)
From an optimization/customization standpoint, rolling your own is better.
So, for the items that should be vanilla from company to company (DHCP / DNS / File serving / etc), I recommend using as vanilla a system as possible.
In theory, there should be something that differentiates your company from all of the others. If this is an app or database or other computer related item, then it should be fully customized and fully documented. You want to squeeze every
Re:Other benefits (Score:3, Interesting)
In some circles (e.g. #mysql on Freenode) this is considered a Bad Thing. Users come in on Gentoo systems complaining about how 'Unstable' MySQL is. Did they compile from source? Yes. Did they compile from official source? Yes. What EXACTLY did they d
Re:Other benefits (Score:2)
How is this any different from SuSE, Fedora, Debian, etc.? Hardly anyone has a "stock" MySQL with zero patches and exactly the same libraries linked in as the developers have. One distro has a bug fix inclu
Re:Other benefits (Score:2)
Well, first of all, with the distros, you're talking about 15-20 different binaries, not thousands. Second, the dis
Advantages to both (Score:5, Insightful)
Conversely, if programmers sufficiently document their binaries, that's not as much of a problem. URLs for other binaries, or instructions for checking/compiling dependencies, can speed up that process.
Of course, binaries are a huge advantage to non-experts and beginners, who just want a program to be there. They don't care about maximizing efficiency, they care about getting their work/fun done.
So really, it entirely depends on the application. For programs that are typically for the hardcore programmer/user crowd, source-only makes sense -- those people who use the program are going to know how to compile and check everything already. But for programs like, say, a CD burning program? I definitely think both should be provided, and installation for both should be well documented. Given how easy it is to provide a binary, even if it's "fat," there's no reason why a more popular program can't provide both versions.
Re:Advantages to both (Score:2)
(no joke, upgrade 'nano', and keep an eye on the dependencies)
Re:Advantages to both (Score:2)
tmh@sisko:~$ apt-cache depends nano
nano
Depends: libc6
Depends: libncurses5
Suggests: spell
Conflicts: nano-tiny
Conflicts:
Replaces:
Re:Advantages to both (Score:2)
Don't waste your time. (Score:5, Informative)
There are special cases like when you want to use dynamic libraries instead of static (to save memory), or when there's a major architecture change (PPC -> x86 for Apple). In those cases you'll gain something.
Another case is rewriting your program to use CPU-specific instructions, like Altivec or SSE3. That, in certain circumstances, will speed up your program.
But if you're compiling OO.org or Mozilla because you think your 686 version will be 100% faster than the 386 version, you're wrong.
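One practical note on the CPU-specific instruction point: before building with something like SSE3, it's worth checking that the CPU actually advertises it. Roughly (the source file name here is hypothetical):

    # SSE3 shows up as "pni" in the flags line of /proc/cpuinfo
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E 'sse2|sse3|pni'

    # then let gcc use it for the one hot spot that actually matters
    gcc -O2 -march=pentium4 -msse3 -o crunch crunch.c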
TEN PERCENT! (Score:5, Insightful)
Re:TEN PERCENT! (Score:2)
For most users, a performance change under 10% doesn't matter.
Re:TEN PERCENT! (Score:2)
Or on the other end, think of a
Re:TEN PERCENT! (Score:2)
I hope the point is that you cannot increase the performance of your servers by 10% by recompiling your software; you just decrease YOUR performance by 200% or more by attempting to do so.
Recompiling someone else's software is usually foolish. You lose support, you
Re:TEN PERCENT! (Score:2)
Re:TEN PERCENT! (Score:2)
Imagine a server room with $1,000,000 worth of hardware. LeonGeeste's 10% speedup would give his boss an extra $100,000 worth of computing ability. You don't think that'd be worth a bonus?
To those who argue that such a simple tweak isn't worth compensation: he'd be getting paid for the amount of learning it took to get to point that he can implement huge money saving ideas. As the old story [halfthedeck.com] says, it's not the price of the fix itself that they're paying for.
Re:TEN PERCENT! (Score:2)
You do realise that, since you claim to be backlogged, in the short term the backlog is going to drastically increase since you'll be wasting time and resources compiling stuff, right? That's one hell of a gamble.
Besides, if you're
Sometimes optimization is trivial (Score:2)
In desperation, because the developer was an idiot, I took a look at the code. By adding one word: making a variable representing the UID static, the app wen
Re:TEN PERCENT! (Score:3, Funny)
"Look", said one, "somebody dropped a $20 bill!"
"That's not possible", responded the other, "If there was a $20 bill there, someone would have picked it up already."
Comment removed (Score:5, Insightful)
Re:Don't waste your time. (Score:2)
Mostly because said switches are included when the main optimisation switches are provided.
Re:Don't waste your time. (Score:5, Interesting)
Re: (Score:2)
Re:Don't waste your time. (Score:3, Interesting)
As usual, the answer is... (Score:4, Insightful)
If the time required to compile (plus trouble) is less than the time/trouble saved through performance gains (assuming you actually compile it correctly to get said gains), then compile. Else download and install.
But then again, if you just have a lot of time on your hands, it doesn't really matter what you do.
In the old days... (Score:2)
Re:In the old days... (Score:2)
build your own! (Score:2)
On my FreeBSD box I compile almost everything from ports. That way you have maximum control over the options and CFLAGS with which your app is compiled.
It's as easy as 'cd /usr/ports/category/portdir; make install clean'.
If you have a number of identical machines, you could compile and build packages on one machine, and then install them on all the others.
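A rough idea of that build-once, install-everywhere route (the port name and host are illustrative):

    # on the build machine: build the port and roll it into a binary package
    cd /usr/ports/www/apache22 && make package clean
    # the package ends up under /usr/ports/packages/All/

    # on the other machines: install the pre-built package, no compiling
    scp /usr/ports/packages/All/apache-2.2.*.tbz otherhost:/tmp/
    ssh otherhost pkg_add /tmp/apache-2.2.*.tbz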
On Linux I think that Gentoo's portage comes closest.
Re:build your own! (Score:2)
Then, all you need to do is run portinstall portname for each port you want to install.
Re:build your own! (Score:2)
Whoopy. You've managed to take the worst aspects of FreeBSD (that fewer people use it) and combine it with the fact nobody is QAing your CFLAGS but you.
Thanks for putting untested configurations on the Internet!
On Linux I think that Gentoo's portage comes closest.
Portage is much more stupid than FreeBSD's ports. With FreeBSD's ports, just about everything is QA'
"Absolute best performance" fallacy (Score:5, Insightful)
Re:"Absolute best performance" fallacy (Score:5, Informative)
Running "openssl speed" compiled with "-O3 -march=pentium4" gave about 3 times the performance of "-O" on my server. Being able to handle 3 times the number of SSL connections was certainly worth the 10 seconds required to put correct values in Gentoo's /etc/make.conf.
Re:Wow, lay off the drugs dude. (Score:2)
Ah, so a 100% speedup is perfectly believable, but 200% is completely preposterous? Whatever. But even if it was only 50%, it's still 10 times better than the "no more than 5% improvement" being bandied around.
At any rate, I guess you missed the words "on my server". I don't care about your server, or anyone else's. Mine ran measurably faster. Mod me down and whine about it all you want, but it doesn't change the fact that I ran the tests on
Re:"Absolute best performance" fallacy (Score:2)
I am Between Self Compiling and Gentoo (Score:4, Insightful)
A co-worker introduced me to Gentoo late last year and I have to say I am very impressed. It's much faster than the optimizations I was using. Of course I didn't compile everything in RedHat or Fedora by hand. That's why Gentoo really rocks. You CAN hand compile everything from the ground up! I also used to use Linux From Scratch. And YES, I do use this stuff on production machines. You just can't get any better performance or security by doing it all yourself. The only reason to use package managers is if you are new to Linux or just don't want to learn much. But if you don't dig in, then you're at risk of thinking that something in your installation is broken, when it's not. I've seen many people throw up their hands saying, "I have to re-install my Linux box dammit!" when all they really needed to do was fix a symlink, edit a config file or downgrade something for compatibility reasons.
For example, on a laptop at home I decided I wanted to use the Xdmx server from X.org, so I hand compiled X.org. After that, I kept having this problem where the acpid (tracks the laptop's battery life among other things) daemon wasn't starting and would produce an error message whenever I logged into Gnome. I dug around on the net for quite a while and finally found out that the version of X.org (a devel version, not a stable version) grabs ACPI if you are using the RedHat graphical boot screen. The fix? Stop the RHGB from running by removing the RHGB kernel option. I think a lot of people would have assumed they hosed their installation and reinstalled if that problem really bothered them. It's not hard to find solutions to most problems in Linux no matter how obscure. That's why only knowing how to use pre-compiled binaries is a detriment if you're serious about using Linux.
Re:I am Between Self Compiling and Gentoo (Score:2)
Re:I am Between Self Compiling and Gentoo (Score:4, Insightful)
You learn not to.
Yes, there are times when a source based distro is good. There are times when you NEED the level of control that it gives you.
Most jobs, in most environments don't. In fact, most sysadmins that I have seen just don't have the resources to exert that much control, or put that much love and care into every system.... nor should they.
I have said before, in specific cases (research computing clusters where you essentially have 3 machines, one of which is copied n times as "nodes" that can be reimaged from a new image pretty much at will - that is the one case where I have really seen it used to good effect) source-based distros are great. Or for your hobby machine, or your laptop.
As soon as you start talking less about "running linux" and more about "deploying services", the focus shifts. Source-based distros are a management nightmare in any manner of large or heterogeneous environment.
Frankly, the majority of systems that I have had responsibility for haven't even had a compiler on them, never mind a full-blown development environment, usually not even library headers.
Why? Because we don't want admins compiling code on just any box and tossing it in place. So, why make it easy for them to do so? Nothing should EVER EVER EVER be compiled on the production mail server, or the web server.... it should be compiled on the compiler machine, the dev box.
When you start making distinctions between the roles of boxes like that, as you should do in any larger environment, then you start to see the benefits of a source based distro melt away, and the real power of package management come into full effect.
Most linux users, the home user, will never see this. I know I didn't understand it until I had been doing it for a living for a few years.
-Steve
Re:I am Between Self Compiling and Gentoo (Score:4, Insightful)
The performance can be debated, but you have got the security argument backwards. If you use pre-packaged binaries, you can get security updates quickly and automatically because any responsible Linux distributor will provide updated packages in a timely manner. This is especially useful if you maintain a large number of machines and want to make sure that the latest security patches are quickly applied to all of them.
On the other hand, compiling your own software requires you to pay attention to all security announcements and apply the security patches yourself. It may be possible to automate that to some extent (e.g., Gentoo provides some help for security patches), but not as easily as with the pre-packaged binaries.
From a security point of view, I am glad that some others are doing most of the work for me. The issue of performance can be debated because there are many pros and cons, but security is usually better for the pre-packaged binaries. Note that I am talking about security from the point of view of applying the latest security patches and staying ahead of the script kiddies, not in terms of initial configuration of the system. And by the way, I am not against compiling software on my own because I compile several packages from sources (directly from cvs or svn, especially for the stuff that I contribute to). But then I am aware that the security may actually be weaker for those packages than for the ones that I can update automatically.
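The Gentoo help being referred to is, as far as I know, glsa-check from gentoolkit; if I remember the tool right, it goes roughly:

    # list security advisories that affect installed packages
    glsa-check --list affected

    # apply the fixes (normally just upgrades of the affected packages)
    glsa-check --fix affected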
Re:I am Between Self Compiling and Gentoo (Score:2)
Or perhaps time? When my boss says "give me a Moodle server" and I can build a box, install Debian and Moodle, and still have it for him that morning, he's a lot happier than if I say "oh wait, bash etc. is still compiling".
Re:I am Between Self Compiling and Gentoo (Score:4, Interesting)
I used to compile every major package, back when I didn't know as much about Linux or being a sysadmin. Now that I know what I'm doing I have the confidence needed to use a binary package manager to its fullest.
One niggle (Score:2)
What can I say that won't come off as a flame? Ah! At least nobody will accuse you of lying about being a Gentoo user!
By the way, the hands on approach is possible in other ways. I use Slack
Re: (Score:2)
deployment? (Score:5, Insightful)
They are not only wrong in their conclusion, but the article barely scratches the surface of the question.
Put simply, compiling software on your own is fine for a one-off, or your desktop, or your hobby machine... or if you either a) need the latest wizbang features (and maybe can put up with the latest gobang bugs), or b) need really specific version control, or c) can't find a precompiled package with the right compile-time options set.
Other than that, you should pretty much always use pre-built.
Sure, you can build your entire system from scratch if you like, libc on up. Why? The performance increase will be so minor that you will have to run benchmarks if you even want to be able to tell there was a change. You will then have to recompile everything you ever decide to update.
This strategy is a nightmare as the systems get more diverse and complex.
It also has nothing to do with deployment. Deployment is what you do after you have decided what you want and made it work once. Deployment is when you go and put it on some number of machines to do real work.
I would love to see them talk more about the differences in deployment. With precompiled packages from the OS vendor, a la Debian or Red Hat, it's easy. You use apt-get or rpm and off you go. Maybe you even have a Red Hat Network Satellite or a local apt repository to give yourself more control. Then you can easily inject local packages or control the stream (no, sorry, I am NOT ready to upgrade to the new release).
but should you compile "most of the time"? hell no!
It is, in fact, the vast minority of software where you really need the latest features and/or special compile options. It's the vast minority of the time where the performance increase will even be perceptible.
Why waste the time and cpu cycles? Takes over a day to get a full gentoo desktop going, and for what? I can have ubuntu installed and ready to go with a full desktop in maybe 2 hours.
Let's take the common scenario... a new openssl remote root exploit comes out. The email you read just basically said, in no uncertain terms, that half your network is now just waiting to be rooted by the next script kiddie who notices. That's, let's say... 50 machines.
Now your job is to deploy a new openssl to all these machines.
You could notice that the vulnerability came out in such a time frame that the OS vendors like Debian were able to release fixes (this often happens; if not, they are usually out within a very reasonable time frame)... so you hit apt-get update && apt-get upgrade.
Or maybe you just promote the package into your repository, and let the automated updates deploy it for you. You go home, have a coffee, and be ready to log in remotely if anything goes tits up.
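Even without your own repository, that deployment step is basically a loop (the host list is obviously illustrative):

    # push the vendor's fixed openssl to every affected box
    for h in $(cat vulnerable-hosts.txt); do
        ssh root@$h 'apt-get update && apt-get -y upgrade'
    done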
Now if you are hand compiling what do you do? Compile, test. And then um.... ahh... scp the dir to 49 machines and ssh in and do a make install on each?
How is this better than using a package manager again? Now you have dirs sitting around, and you have to hope that your compile box is sufficiently similar to the other boxes that you didn't just add a library requirement (say, because configure found that your dev box has libfuckmyshit.so installed and these neat new bits of openssl can make use of it).
How about when a box crashes and burns and you now need to take your lovingly handcrafted box, rebuild it, and put all that lovingly compiled software back on it?
Fuck all that... give me a good binary distro any day of the week. I will hand compile when I HAVE to... and not before.
-Steve
Re:deployment? (Score:2)
Sysadmin wants to install/upgrade program.
Sysadmin decides for whatever reason that this should be built from source.
Sysadmin downloads source to testing mac
The real issue is dependencies (Score:5, Insightful)
If you're compiling your own for performance reasons, don't bother. There's a few packages that can benefit from being compiled for specific hardware, the kernel and libc, for example, and things like math libraries that can use special instructions when compiled for specific hardware. For the most part, though, your apps aren't going to be limited by the instruction set, they'll be limited by things like graphics and disk I/O performance and available memory. IMHO if you're trying to squeeze the last 1% of performance out of a system, you probably should look at whether you need to just throw more hardware horsepower at the problem.
The big benefits and drawbacks of custom-compiled vs. pre-built packages is in the dependencies. Pre-built packages don't require you to install development packages, compilers and all the cruft that goes along with a development environment. You can slap those packages in and go with a lot less installation, and you can be sure they're built with a reasonable selection of features. On the other hand, those pre-built packages come with dependencies. When you install a pre-built package you pretty much have to have the versions of all the libraries and other software it depends on. By contrast, when you compile your own packages they'll build against the versions of other software you already have, using all the same compiler options, and they'll probably auto-configure to not use stuff you don't have installed. This leads to a more consistent set of binaries, fewer situations where you need multiple versions of other packages installed (eg. having to install Python 2.3 for package X alongside Python 2.4 for package Y) and overall a cleaner package tree.
Where the cut-off is depends on your situation. If there are only a few instances of dependency cruft, it may not be an issue. If you have a lot of dueling dependencies, it may be worthwhile to recompile to reduce the number of different versions of stuff you need. If you've got externally-dictated constraints (e.g. only one version of OpenSSL is approved for your systems and everything must use it or not use SSL at all) then you may have no choice but to compile your own if you can't find a pre-built package that was built against the appropriate versions of libraries.
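The dependency difference is easy to see on any box; for example (the binary path is illustrative):

    # what a pre-built binary expects to find at runtime
    ldd /usr/bin/wget

    # what a from-source build actually detected; the checks are logged here
    ./configure && grep -i 'checking for' config.log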
hardware, ease of distribution, latest version (Score:4, Insightful)
Another missed point is it's usually easier for the developers and hardware vendors. Easier to distribute source code than maintain dozens of binaries. Just another advantage of open source. Many projects have a few binaries for the most popular platforms, and source code for those who want to use something less common.
Latest version and leading edge. I've been trying Gentoo. In most cases the extra performance isn't much, and isn't worth the hassle. And Gentoo defeats one of the attractions of roll your own: the leading edge. There is always a lag between when a project releases a new version, and a distributor can get around to turning the new version into a new package. Article didn't mention that aspect either. If your distro is slow, you can bypass them and go straight to the source, or, if available, the source's binaries. Ubuntu Linux 5.10, for instance, is still stuck on Firefox 1.0.7. I'm not talking about checking out the very latest CVS source tree, just the latest release.
Re:hardware, ease of distribution, latest version (Score:2)
Good. Though you didn't try hard enough.
In most cases the extra performance isn't much, and isn't worth the hassle.
Nonsense. Perhaps you didn't stress your CFLAGS enough. A 30% speed gain in X11 on a Pentium 4 is a typical gain over generic 686. Don't forget to enable SSEn.
And Gentoo defeats one of the attractions of roll your own: the leading edge.
Nonsense. Did you hear about portage overlays? You can have zero-day release versions of anything of your own, plus the comfort of
Precompiled binaries are fine (Score:2)
Yes, pre-compile - but not for the obvious reason (Score:5, Insightful)
However, I don't consider performance to be a particularly big benefit. Once you've got gcc doing its basic -O3 or whatever, anything else is marginal at best.
There are, however, two things I get out of it which are useful to me. These should not be ignored:
Of course, there are drawbacks:
I guess it's just a case of "know the tool you're using".
Precompiled binaries are tested (Score:2)
An unforgivable mistake in the sphere of commercial software is to test a DEBUG build and release an OPTIMIZED build. The b
Yeah, that will make people REALLY want to adopt.. (Score:2)
I don't know what the state of the art is for compiling code these days, but I know that when I download a program to use on my computer, I don't want to make a Computer Science project out of actually getting it to RUN on my computer. So if it
Re:Yeah, that will make people REALLY want to adop (Score:3, Insightful)
desktop: apt-get, server : make -params (Score:2)
Now on the servers, when there is an advantage to be taken, I would compile apps, but not everything. Apps like MySQL and Apache (which account for 90% of processor time) I would compile if there is heavy load, but for the others I really do not care.
Now BSD is a different devil; there you must compile when using ports, so I guess that is the way to go.
Another great thing about managed code... (Score:3, Insightful)
In platforms like
No need to break out gcc or try and resolve dependencies. Ahh... I loved the managed world.
Re:Another great thing about managed code... (Score:2)
And there is the potential for future optimizations. For example: the CLR could conceivably do profile-guided optimization [microsoft.com] automatically.
Stay true! (Score:2)
Most binaries, a little compile (Score:2)
Well, sorry, no, never. Compiling everything makes absolutely NO sense at all, except for some special packages where you have to do a special setup.
You waste so much time with compiling (eg in Gentoo) and at the end you have more troubles with libraries, etc. I run Debian/testing on my workstation and on most of my servers and I only
The One True Answer (Score:2)
Specifically, if the default configuration is all you need, then go with the default. It'll make maintenance a lot easier and won't take up time doing customization that will never be utilized.
If you need something fairly standard - or at least uniformly weird - and an installer is available, then Gentoo is an excellent halfway house. Because package intera
No difference 90% of the time (Score:2)
How to get best of both worlds (Score:2, Insightful)
Depends really (Score:2)
Moox builds of Firefox (Score:2)
The Moox builds (http://www.moox.ws/ [www.moox.ws]) of Firefox did exactly this, and gave a noticeable performance increase over the 'regular' mozilla.com pre-built binaries for Win32. Before the site went down, there were also benchmarks of his buil
Compiler more important than compiling? (Score:5, Interesting)
I'm not saying we all have access to icc, but if someone wants to make a binary available, I'm more liable to use that than compiling from source. Call me crazy. And I know someone will.
Re:Compiler more important than compiling? (Score:3, Informative)
Not if you're running on an AMD64 it doesn't.
http://yro.slashdot.org/article.pl?sid=05/07/12/1
It's not really about performance (Score:5, Insightful)
If there is one thing that appeals to me about Gentoo (as I understand it), it's the concept of meta-configuration at build time. Unfortunately, lots of options in packages get configured at BUILD TIME, so either the binary packages have those options on, or they don't. When this is the case, if the distro doesn't provide multiple binary packages with all of the useful configurations, then you end up having to build from source. (IMO, building from source means compiling from source packages whenever possible.)
So I like the concept of saying in one place, "build for KDE", "build with ssl support", "build with ldap support", etc. Maybe someday everything will be runtime configurable, but until that day, I'll be wishing for that meta-level of configuration...
Having said all of that, check out apt-source, apt-build, and their ilk if you're interested in "Debian from source". Debian binary packages solve about 97% of my problems so I'm not usually looking for "make world" kind of functionality.
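For the curious, the "Debian from source" route looks roughly like this (nano is just an example package):

    # rebuild a single package from the Debian source package
    apt-get build-dep nano           # pull in its build dependencies
    apt-get source --compile nano    # fetch, unpack and build .debs locally
    dpkg -i nano_*.deb

    # or let apt-build manage the CFLAGS and rebuilds for you
    apt-build install nano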
Enough rambling for now.
Multiple Versions Required (Score:3, Interesting)
For some packages a recompile is merely annoying, having to download and reconfigure with a new prefix and rebuild; but for others, it can be a horrible web of configuration options to find numerous dependencies in special locations. This complexity can be really frustrating if all you want to do is relocate the tool so two different versions can be installed.
Pre-built binaries should assume by default that they'll go into a version-specific directory (say
There are other details, of course...for example, it may matter what compiler you use, you may want 32-bit and 64-bit, etc. But the basic principle is still simple: have a standard package version tree on all Unix-like systems so you can "just download" binaries without conflicts, once and for all.
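The usual from-source workaround for side-by-side versions, for comparison (the package name and paths are hypothetical):

    # install each version under its own prefix...
    ./configure --prefix=/opt/foo-1.2.3 && make && make install
    ./configure --prefix=/opt/foo-2.0.1 && make && make install

    # ...and point the "current" name at whichever one you want
    ln -sfn /opt/foo-2.0.1 /opt/foo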
FreeBSD (Score:5, Interesting)
Not only had I built every package from source (using ports), I also took the trouble to rebuild the base system and kernel with a custom configuration and options.
The benefits to some of this were obvious; the FreeBSD GENERIC kernel at the time seemed (to my eyes) to suffer a massive performance loss from its configuration. Anyone running FreeBSD *must* build at least a custom kernel, even if they use the binary distribution of everything else.
It was a lot of effort. What did I get out of it? It was by the end one of the speediest systems I had ever used since the days of DOS. Most programs loaded faster than their binary equivalents (on older machines the differences were more glaringly obvious, such as the time it took to initialize X).
One time I clocked my old machine, running a custom built FreeBSD installation, against the other computers in the house from power-on to a full desktop (after login).
On my machine, the entire affair (BIOS, bootloader, bootstrapping, system loading, X, login, desktop environment (WindowMaker in this case)) cost a mere 45 seconds. My father's machine, which was in all respects a faster computer, loaded Windows 2000 in the course of perhaps two minutes. Also, I stopped timing after the desktop came up, but Windows does continue to load and fidget about for a good while after that. The extra time taken for it to settle down would have cost it another minute, but only because of all the crap my dad had set to load, which I don't blame Windows for.
The kitchen computer also ran Windows 2000, but had a slimmer configuration, so it loaded shortly over a minute. FreeBSD, however, still beat them both badly.
In light of my own experience, compiling from source can get you some rather wonderful results. However, I noticed that not all systems were created equal. While FreeBSD GENERIC was as slow as molasses, I find in Linux that the binary kernels that come with my distributions seem to load and operate just as fast, if not faster than, my custom build of FreeBSD. In Linux I have used only binary packages, and the system overall "feels" just as fast, though some operations are a little slower (like loading emacs ;)).
I appreciate the arguments presented by both camps, but I feel the need to point out that some are too quick to downplay the possible performance gains offered by custom builds, because they certainly exist. Sometimes they can be noticeably significant.
Re:Oh lordy... (Score:2)
snapshot2.png 2nd link (Score:2)
Re:Using -Os (optimize for size) faster than O2!!! (Score:2)
-Os and -O2 are closely related. It could be that some of the -O2 optimizations are actually counterproductive for your particular application. In addition, a smaller binary size means more code can fit in cache, which leads to fewer cache misses. -Os is the same as -O2 but with code-size-increasing optimizations turned off (such as loop u