Why Are Binaries And Screenshots Good Things?

QuantumG asks: "I recently got into an argument with an open source project leader over whether releasing precompiled binaries is a good thing or not. He was adamant that if potential programmers had to download the pre-alpha source code, they would be more likely to take an active part in programming than if they could just grab a binary. I thought it was important to make it as easy as possible to show the current state of the project to new recruits, so they could see what has been done, what needs to be done, and what could use work. I feel the same way about screenshots. What does Slashdot think?" Binaries are definitely important. Remember, programmers aren't the only ones who would like to look at your code and see what you are doing, and it's not right to expect them to compile code that may not be easy to compile. Of course, there is a (debatable) point in the software lifecycle where the software is deemed "mature" enough for binaries. What do you all think about this issue?
  • by Anonymous Coward
    I see both ends of it, good and bad. Some users are getting into Linux the way they would Windows and don't know a thing about coding or compiling. Some people have said that compiling source code is easy, and it is, but you have to look at it in terms of most people. Most people are like an uncle of mine who knows just enough to turn the machine on and get into M$ Office. Some know everything about Linux, others don't, and we have to look at the whole picture.
    Source code is great to have when you are :
    1. know how to compile!
    2. know how to compile
    3. know how to compile!
    Basically, we compile source code not to look at the code but to optimize it for our individual systems as much as possible, whether that means tweaking the compile options or changing some lines of code.

    Look at it another way: if I want a program like StarOffice, do I really want to compile it? Would someone like my uncle have the knowledge to do so? No! Linux has come a long way, but this issue really scares me about where the future is going. People have to understand that it's not just administrators and coders who are using Linux or getting into it anymore; average folks are too. A binary is important in this case: the average user wants to download it and install it, not think about compiling it.

    Binaries and source code should both be put out on every release of the application, or at least both should be present at every major version change. Going from 1.0.8 to 1.0.9 it would be nice but isn't needed; going from 1.0.8 to 1.1.0, both binary and source should be present!
  • by Anonymous Coward
    How do you expect them to ban the link when the image can simply be mirrored at several different URLs as is done now?

    Are you suggesting that all links to *.jpg and *.gif files be banned by slashdot?
  • by Anonymous Coward
    I find it a bit arrogant to assume that users are downloading your software merely to look at the source, and not for the functionality. I download programs to use them, not to compile them or spend an hour working out dependencies.

    I say binaries, and standalone binaries specifically, are at least as important as the source code. You need a method for non-programmers to use your software... after all, it was written to be used, not read.
  • Non-profit development makes good sense to me. Non-profit supporting of idiots doesn't seem nearly as fun or as appealing. In the commercial world, programmers don't support the users of user-level applications. I don't know why anyone would expect open source developers to waste their time on the annoyance. For profit? The users don't want to buy a support contract.
  • Cry me a fucking river.

    Supporting morons is so fun, I can't see why developers wouldn't want to do LOTS of it.
  • Or he built his compiler on a different machine

    Well, I've been using Linux ever since the good old days when one had to attack a freshly compiled kernel with a binary editor to tell it how to get at its root file system (2 bytes somewhere in the vicinity of 508 IIRC) and other "heroic" things like that. It goes without saying that I've moved things over to another box since then (although the old one still works just fine under DOS for my father, even after 9 years).

    --

  • Well, I've built my entire box from source. Oh wait, I tell a lie. Actually, it's everything except XFree86, because I simply don't have the disk space needed to compile that "monster".

    So yes, binaries can have use even for non-average users who theoretically are perfectly capable and willing to work with the source.

    --

  • Posted by polar_bear:

    Generally I don't even bother to pull binaries, because they're often in RPM format and I don't use Red Hat -- the RPMs are often not quite right for my SuSE system, and my Slack and Debian boxen don't do RPM. (Okay, Slack 7.1 does include RPM, but it's not really integrated nor do I want it to be...)

    If something is in alpha or early beta, leave it in source form. Users who can't compile their own software probably shouldn't be mucking about with alpha software. Not to sound elitist, but alpha apps aren't supposed to be 100% usable, and users who can't compile code aren't going to be able to do much to help out... or reap much benefit from the software until it's further along.

    Screenshots, on the other hand, should be totally mandatory... and decent install instructions would be nice too. Sticking source code on SourceForge and saying it's the greatest thing since sliced bread, without demonstrating the software or explaining how to install it, is pretty much useless. One of the complaints I have with SourceForge is that, since it provides a default Web presence, most projects don't put much time into their presentation of the software -- which doesn't entice users into actually trying it. Just one or two measly screenshots is all we ask...
  • How did you build it? Apparently your compiler was binary too.

    My first Linux distro (based on the 1.2.10 kernel) was compiled entirely from source using a cross compiler on a Solaris SPARC5 system. There are several ways to build a distribution completely from scratch. Another option would be to use a DOS/Windows-to-Linux cross compiler.

  • "Remember, we want linux as a desktop for the masses, right? "

    I suppose it depends on who "we" are. Personally, I see no reason for developers to feel obligated to support clueless users. It's not the developers, I'm sure, who want clueless users. Why put up with constant whining, bitching, and moaning when in all likelihood you're not making any money on the venture? I've talked to people who seem to feel that, once you get down to it, "this Linux thing" they've read so much about should be better than what they currently have--an OS and other tools put together mainly by Microsoft, largest company in the world and holder of a significant concentration of the world's wealth.

    Why should free-software developers live up to the up-on-the-pedestal image that mainstream press expects it to live up to? If mainstream press wants Linux to be better than the Windows and MacOS world from a clueless-newbie POV, perhaps mainstream media should fund/develop on their own.
  • Not good enough.
  • Depending on how fast the project is moving, a regular binary release would be good - weekly, monthly, whatever - or whenever some major feature has been added.

    This way, people who are interested in the project but are not autoconf/make/gcc whizzes don't have to worry about problems, and can take a look at how things are going.

    The question that just came to mind is - what generates more useless questions:
    - no precompiled binaries and lots of "I can't get this to compile on my system" questions
    - precompiled binaries and lots of "I can't get this to run on my system" questions

  • Completely different scenario - they are talking about works-in-progress, not something released.

    Besides, that is the way it used to be and it was a pain in the ass.

  • "Precompiled binaries"? As if there were some other kind of binary?

    -- Brian
  • And if they are a "code-throwing god", they may not be one in the particular arena the software is written in. For instance, I think of myself as a pretty good Perl coder and a strong web developer, and I could dabble in C or C++, but I wouldn't even want to look at Java source (I have seen enough of it already), let alone tackle my own driver for the kernel (even though I recompile that anyway). For me it has less to do with compiling or building than with the fact that I wouldn't help on an Open Source project unless I was interested in it and could contribute something more than inefficient, buggy code.
  • Well, one reason is that binaries can be trojaned and/or virused. Source can also be trojaned, but cannot at present be trojaned accidentally. Binaries can be modified by outside agents, like a (presently non-existent) virus. Some people attribute the almost total lack of virii on Linux to the relative scarcity of transmitted-and-executed binaries.

    However, one strong argument in favor of binaries is KDE. I downloaded and compiled that monster -- it took something like SIX HOURS and I have a fast machine! I'm assuming that it is C++ that is causing the problem. KDE's source packages are pretty big, but I don't think any of them is even as big as the Linux kernel, which I can compile in around ten minutes.

    I don't know if it's a problem with gcc or just in general with C++. From my experience so far, that language might be the best argument yet for distributed precompiled binaries.

    As a rule of thumb, if it's going to take longer than an hour to build, I'd strongly suggest distributing binaries with or in addition to a source package. Anything over an hour turns 'an experiment' into 'a project'.

    As an aside, I sure hope the KDE developers are getting a benefit worth that kind of compile-time hit. God that installation sucked. :-(

  • I think it depends on the size of the program. If it's a small program like, say, mtx [sourceforge.net], where compiling it takes moments, well, no big deal. I would go nuts compiling PostgreSQL all the time, though -- it's just too big and bulky to compile swiftly, and I have better things to do with my time.

    But for alpha software, sure, make them compile it. If it doesn't compile, it shouldn't even have been released as alpha...

    -E

  • (there's a phrase you thought you'd never hear)

    I haven't browsed at -1 for a couple of months, but there used to be someone who would regularly post quite informative and genuinely relevant URLs, except that they were the body of a goatse.cx target link. Brilliant!
    Sometimes I was tempted to moderate him up. (OK, maybe her, but somehow, I doubt it.)

    I just couldn't decide between +1 Informative and +1 Funny.
  • I'm still left wondering exactly what the actual size limit is

    It's potentially dependent on what OS you're running; it's not limited on some OSes, and may be limited on others. Try a limit command in the C shell or compatible shells, or ulimit -a in the Bourne shell or compatible shells (the latter may not work on some OSes).

    how to work around it

    In a program, put in a setrlimit() call that sets the RLIMIT_CORE limit to RLIM_INFINITY (if the OS for which the program is being built supports setrlimit() and RLIMIT_CORE - if it doesn't support them, there's probably either no limit or a limit that can't be changed; if it doesn't define RLIM_INFINITY, try a maximum-sized integral value).

    From the command line (i.e., if the debug version of the program doesn't work around it), use the limit (C shell and compatibles) or ulimit (Bourne shell and compatibles) command. (If they don't let you get or set the limit, there's probably either no limit or a limit that can't be changed.)
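
    For instance, a minimal sketch of the in-program approach (assuming a Unix that provides setrlimit(), RLIMIT_CORE, and RLIM_INFINITY) might look like:

        /* Raise this process's core-dump size limit as far as the OS allows. */
        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;

            if (getrlimit(RLIMIT_CORE, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            rl.rlim_cur = rl.rlim_max;   /* raise the soft limit to the hard limit */
            /* or, if the OS permits it, drop the cap entirely:
               rl.rlim_cur = rl.rlim_max = RLIM_INFINITY; */
            if (setrlimit(RLIMIT_CORE, &rl) != 0) {
                perror("setrlimit");
                return 1;
            }
            return 0;
        }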

  • In gcc, -g is the same as -g2, while -g1 is lighter.

    I try to write readable code and use rather long identifiers in general. I find that my comments explain why I did something rather than what I did.

    Thanks

    Bruce

  • It would be user-friendly if a system could be worked out where you download a package as a single file, double-click it (or run some command-line program), and it then untars, compiles, installs (after asking for the root password), and deletes the tarred stuff. This could go a long way toward distributing software that works on many different machines or with different libraries. It could also pop up a panel that lets the user choose the features to enable in the compile; you could run it again to compile with different features.

    I know it would take time, but time does not seem to be the problem. On Windoze people are willing to double-click something and wait forever for it to do things like download from web sites, so compilation time is not a problem.

    Obviously this "installer" program is a complex pain to write, and there would have to be a standard. (A rough sketch of the idea follows at the end of this comment.)

    As for screenshots, I definitely want to see them; they help a lot in figuring out just what a program does.
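
    Back to the installer idea - something like this hypothetical wrapper, where the package name and steps are made up for illustration, and a real installer would need real error handling, a feature-selection panel, and an agreed standard:

        /* One-shot source "installer": unpack, configure, build, install, clean up. */
        #include <stdio.h>
        #include <stdlib.h>

        static int run(const char *cmd)
        {
            printf(">>> %s\n", cmd);
            return system(cmd);
        }

        int main(void)
        {
            const char *steps[] = {
                "tar xzf package-1.0.tar.gz",     /* unpack the single downloaded file */
                "cd package-1.0 && ./configure",  /* each system() call gets a fresh shell */
                "cd package-1.0 && make",
                "cd package-1.0 && make install", /* assumes we are already running as root */
                "rm -rf package-1.0",             /* delete the untarred stuff */
                NULL
            };
            int i;

            for (i = 0; steps[i] != NULL; i++) {
                if (run(steps[i]) != 0) {
                    fprintf(stderr, "step failed: %s\n", steps[i]);
                    return 1;
                }
            }
            return 0;
        }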

  • I have *never* had a big problem compiling software for Linux. Yes, you have to read the damn docs, yes, you have to learn how to do certain things, tough...

    Have you tried to compile ghostscript? It used to be a real pain in the ass. And ImageMagick? A PITA, too. Now precompiled binaries for both of those work very well. And yeah, I've compiled them both at least once (actually much more than once), and I'm not going to do it again. I'll just grab an RPM.


    --

  • Well, that might have changed as of late. When I had to do it (3 years ago?) it *was* a PITA. You had to get all the sources for the libs separately, compiling with (for example) Red Hat-supplied libs was full of incompatibilities, you had to tweak very non-standard makefiles, etc... As for ImageMagick, the Perl module still gives me headaches.

    --

  • I don't have the time to get involved with everything. However, I do have the time to try out some binaries and see if I like the project. If there aren't binaries, I'm not likely to try it out (unless it's something I really, really need).

    Anything that heightens interest is a GoodThing(tm). Binaries and screenshots do so.

    There are so many projects out there doing much the same thing, that the binaries and screenshots become like marketing. Necessary evil, but useful nonetheless.


    Regards,
    -scott
  • I have *never* had a big problem compiling software for Linux. Yes, you have to read the damn docs, yes, you have to learn how to do certain things, tough...

    Binaries are released quite often, especially for large projects (not just Joe Blow's software v.0012). Personally, I still prefer to use source tarballs b/c I can EASILY control where the programs will be placed. rpm --install xxx.rpm doesn't cut it...

    The other big problem is package management. There is absolutely no reason that we should have binary files in 100 different formats... If we are going to have binaries, release them in one format. I don't care what it is, but come up with something.

    Just my worthless .02
  • For instance, XFree86 servers, or Mozilla... Lots of people wouldn't be able to, or want to, deal with source-only distributions of these. It's a HUGE job to build these things, and it requires some outrageous massaging to do it.

    Things that will compile in 30 seconds and depend only on libc - that's very different.

  • As a Linux user of 4 years, I vote for compiled binaries. I cannot program worth a whit in C; I can hack my way through Perl and Python, but C... no way. I love apt. Please release binaries if you want a large user base, and source only if you want just 'leet people to use your software. It's simple, really!

  • Um, and I am one of those people; that is sort of my point. However, a lot of people like to have only one machine, or want to get the most flexibility out of a machine. And there is some competition between OSes: yes, some are better at some things than others, but they are competing in a lot of fields, and to think of them simply as "separate tools" that don't share a lot of functions is a bit naive. If it were simply a matter of what Linux was "good for," there probably wouldn't even be a GNOME or KDE - the very products imply an expectation to compete with Windows and MacOS in a lot of environments where they currently are not rivalled by the Unices.
  • I build almost everything from source on FreeBSD using the ports collection, because that's how I install new software. I've installed dozens of packages, and haven't typed 'gcc' or even './configure' once to do it, just the final step: make install. Trivially doable with a CGI script if I wanted to add a pointy-clicky front end to it. I think I have needed access to the source maybe a half dozen times, and actually changed something in there twice. I think apt can do the same, but it's not configured to by default. Even source RPM's are nicer than binaries for me, because although they still don't fix the manual dependency resolution problems (meaning some kind of semi-manual process to resolve them), they typically aren't plagued with glibc versioning problems (aka dll hell). With a decent front end, a user doesn't even have to know the package is compiling (e.g. said CGI that builds a port), just that the package tool seems kind of slow ... and their system is more stable.

    --
  • Good new word - grabber. Someone that grabs programs...

    Well, there is one thing people may not be remarking on. Many programs become popular not because a developer gets an eye on them, but because users say so: "Oh, there is this new XXXXX prog out there. I grabbed it and it seems cool. Yeah, it is quite alpha, but I think it's a good idea for this, this and this." And then the developers, system integrators, and sysadmins go after that program to see if it's worth a look.

    Note I'm not talking about users/developers. I'm talking about people with nearly zero knowledge of programming. There is a growing horde of them on Linux. And there is a new class, small but ambitious, that tries to look around more than most users do. Grabbers are a known class in the Windows world. There is even a black elite of Grabbers that uses ready-made exploits for their less ethical work. But Grabbers don't end there. They are a huge class, as important as hackers. And they are hugely varied. There is a special group of visual Grabbers: people that collect programs for 2D/3D graphics processing. And they are great collectors - some of their archives run over the gigabytes. These people may not be the front line of development, but they surely are one of the most important supply lines, as they show where we should go.
  • Binaries are nice to see what stage some sort of alpha software is at. However, the biggest obstacle for me to download and look at source code is all of the additional libraries you need.

    It's happened at least 4 times to me that I've downloaded CVS source to play around and spent hours trying to get the thing to work, because it requires the CVS versions of libraries, and often this isn't documented. What's even more frustrating is that most of these programs don't really need the new features that badly.

    I think that most of the people who would be able to make a useful contribution to a project will have very little trouble downloading and installing source. However, a lot of these people won't have the latest CVS version of gnome-foobar.
  • I'm not sure I see what is so difficult about:

    ./configure
    make install

    Which is about all it takes to install 95% of the stuff out there from source...
  • This is just really absurd. A binary cannot be trojaned unless there is already something on the system, as long as you download from the actual site or a trusted mirror.

    Don't blame a binary; blame a stupid user for downloading something from an untrusted site. The same logic applies when talking about why there are no virii for Linux. It all comes down to "only run what you trust"; if you don't do that, then you deserve what you get.

    And as far as KDE goes, I really have no idea what you are talking about -- I just downloaded it and compiled from scratch, and it was a) significantly bigger than linux-2.2.17 and b) took about 1.5 hours on a 500MHz machine to build everything.

    This goes back to my original point, though: if you want to see what something's about, use a binary so you don't have to muck with compilation. If you want to use it, compile it, because you'll get the best product out of it.

  • Definitely agreed. I should set up a nice little canned email response: "This is pre-alpha; that is what is causing your problem. Bugger off."

    I like screenshots - good eye candy. Another developer and I, who are working on different methodologies for gradients, have traded about 5 different screenshots back and forth to see which has the best result.

  • Not too long ago, there was some common system package where the original distribution files were replaced with hacked binaries.

    Use better terminology: you do not "hack" a binary, you corrupt it. And it still has absolutely nothing to do with the argument of Binary vs. Source -- the same thing can (and did) happen with source distributions, which is why your argument was absurd. It really is absurd to say binaries are bad because of the possibility of infection; both are vulnerable. That is my point.

    And yes, it was KDE1 that I was referring to, as plain "KDE" still means KDE1, with KDE2 commonly referred to as KDE2. Yes, those who don't want to dedicate an overnight compile to KDE2 should use binaries. As for myself, it's quite easy to write a shell script to recursively compile directories, check for errors, and even start "xmms No_Satisfaction.mp3" on an error to wake me from my slumber.

  • Binaries and screenshots are pretty essential if you want to sell someone on an idea. Even if you don't have a GUI, you can still give some examples of input and output, especially if you're doing some complex shit that doesn't exactly stand out in your code. I hate looking at a project and seeing some vague descriptions and just some code in tar.gz packages.

    My friend and I had been working on similar hobby projects that do similar things, so we decided to combine our codebases. Before sending him my code, I commented the hell out of it and wrote up my handwritten notes and flow charts to the point where my mom could have picked up my code and finished the project. Unfortunately, the code I got from him was just a mess with some commented-out lines - stuff that was broken or unneeded. This is how I see a lot of open source projects distributed: spaghetti alpha-quality code with little or no documentation by the dude who originally started the project. I think more people would get interested in OS projects (as asserted in the original story topic) if more people would better document the development of their programs.

    Binaries are a must because sometimes your code won't compile correctly on other machines. Screenshots (or at least some form of output) ought to be included not only in the package but on the website. Give outside observers some good indication of how well the project is coming along. You're also giving people a preview of what your program will look like. The people downloading your program are also going to be the ones using it; let them see the interface and get their opinions on it. The same goes for CLI programs: let people see how data will go in and how it will come out, and take some comments about how you could maybe do a better job of organizing your arguments or keeping them easy to remember.
  • is worth 1000 words. Or so some say. I always like screenshots. In fact, I am less likely to download a program if I don't know what it looks like. Let's face it: if you are using Linux, you probably have your desktop all customized (or maybe you're a minimalist), and if this new app adds some look you don't like, then you probably won't keep it. With screenshots you can see what it looks like. However, do not put screenshots on your front door; put them up as links, so those who want to see them can and those who do not don't have to. The other thing I like about screenshots is that they give me an idea of what features the program has.

    Example: someone writes an HTML editor. If there are screenshots, I can see whether it is just a syntax highlighter or an editor like FrontPage or Composer. If there are lots of them, then I can see what features it has and how they define their UI.

    This can even work with console apps that output data, like vmstat, where you can see what data they output.

    Source code: to me that is less important. If it is in alpha, I'd rather see a screenshot so I can see what is coming. Even after it is released, I don't always compile from source. Rarely do I do that now. Compiling from source usually offers few benefits and more headaches. If I find a bug, I'd likely email the author. This also depends on the size of the program and my time. When I used to have more time, I'd do more looking into the code, but now I do less of that. I also have fewer problems.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Does it really cause huge havoc to release binaries? Let's see: it's not that much more work - it's easy and requires hardly any additional effort beyond releasing a properly set up tarball.
    I suspect that most projects that don't offer binaries don't set their tarballs up properly, either.

    There isn't a large quantity of work involved with either, but it isn't interesting work. Once a developer has learned a quirky build procedure (perhaps through trial and error combined with frequent email and IRC queries), it's easier to just put up with the procedure than to make it more user-friendly.
    --

  • This is the most insightful comment on the page; I'm glad SOMEBODY gets what open source is about. Everyone else is hereby commanded to go read 'The Cathedral and the Bazaar'.
  • I would say lack of binaries is not a problem, except that I can't even remember how many times I've gotten frustrated when I tried to run configure and it died requiring the development version of some widget set one minor release ahead of the one I have, and I go and get that, and it requires headers from OpenGL even though it does _nothing_ 3D, etc... and by the time I've downloaded all 50 development library sets, I find out that now I can't build my own projects because I've broken some system header somewhere.
    Now, that being said, most things compile fine right out of the tarball. Things that use graphics (OpenGL, X, widget sets) or sound, as well as things that use somebody's wacky portability layer or whatever, tend to be _really_ touchy, and the documentation on what's required to build them is iffy at best.
    That being said, there are some things I would have expected to be a royal pain that aren't, like MAME - I build that from tarballs with no problems. On the other hand, I tried to build a copy of gtk+ and it was like pulling teeth.
  • The way I see it, make the source and a binary available, and let the developer decide what they want. Screenshots (if the project is far enough along that they can be made available) are always a good thing to have up on the project site. Though I know I miss a good app sometimes, as a user I sometimes make a call on which app I want to use based on how it looks.

    A compromise of having both up wouldn't hurt a thing. In all actuality, a developer may not have all the binaries and libs required for the project at the time he looks at it; he may base his decision to help on whether he can use the app, find a bug, then look in the source to see if it's something he can fix.

    Just my thoughts as a user.
  • Screenshots can provide a quick measure of a project's interface and status. For example, I was looking for a file manager today, something like Midnight Commander, but improved. The first thing I did was check screenshots, and immediately I discovered many of the programs were obviously not what I wanted. Others had screenshots that were close, so I checked them out. After reading a bit, I tried getting a few to work.

    Precompiled binaries come in handy here, as a lot of "under development" software doesn't compile readily under systems where, for example, header files are in a different location. Also, for large projects like Mozilla or XFree86, if it doesn't have binaries, life is a pain. Just untarring Mozilla takes many minutes, let alone trying to compile it...

    Once you get something running, or at least can take a look at it from a screen shot, you can form your opinion as to whether or not to help the project. Unless you are a real die-hard coder, if it doesn't appear that the project has promise (based on it actually running, looking OK, etc.), chances are you're not going to help.

    Long story short, screenshots and binaries make it a lot easier to find the software you want, and thus be interested in helping out the project.
  • Dipshit, I'm right here... I meant screenshots. Think!
  • Doesn't "strawman" also include just changing the topic to any old thing to divert attention away from the real argument? For example, if I'm losing an argument and I say, "Well, what about those free trips you took to Bali on company money?" - which you obviously have to defend or you will lose credibility. And what's the right term for "attacking the person instead of the issue"?
  • I can point to a few projects that use obscure languages and yet expect you to download and install the compilers and runtime libraries. In many cases - languages that don't compile to native code - you have no choice; Python, for example.
  • This is very true. Programmers are also known as "lazy bastards." We don't like to spend much time dealing with compilation. However, programmers will get really pissed off if the source isn't there.

    The only programs I submit patches to are the programs I use every day. So I'd say that software quality is the number one priority; availability of binaries will get people started using it, and availability/readability of source code will eventually build the support base.
  • Binaries create more enthusiasm for a project, since not every Linux user is a code-throwing god(TM). Remember, we want Linux as a desktop for the masses, right?

  • I would have to agree with you. I don't always have the time to compile the stuff myself. To top it off, I am still new to Linux, so having to compile everything all the time might become information overload. Once I know Linux better, then maybe I will compile everything all the time, but I doubt it. It takes time - especially the first time you get the code.

    I prefer screenshots. I like to know if there is a GUI or not, and if there is, whether the design of the GUI is intuitive or just plain off the wall.
  • I use Emacs for nearly everything I can. At home, I compile the latest myself, sometimes patched with something I'm working on at that moment. Of course, at home I have a real OS, Linux. At work, I use it under Windows. The system didn't come with a compiler and I don't want to sort out building it with Cygwin, not because I don't like Cygwin, but because I'm not working on porting it. The pre-compiled binaries allow me to use it in multiple places and only build on the machine where I'm actually working on something.
  • It's often hard to get alpha code to compile reliably; sometimes beta code is even harder because it's got more opportunities to acquire system-dependence. Providing compiled binaries not only makes it easier for someone to try your still-too-early-not-to-explode-catastrophically alpha version, it gives them some idea of whether the project is worth working on even if their environment is randomly different from yours. This is good.


    It's also proof that you got the thing to compile, so it's at least releasable as alpha code. I won't name the author or package of the really cool widget that would have been extremely useful for teaching the people I work with useful skills and giving them a testbed for trying things out, but once I got version 0.4 to compile (with a bit of help from the author on what packages he used), I couldn't get it to operate past the first step, and from reading the source code I'm not convinced there's any way the author could have done so either. So I suspect that either there's an old distribution around and I need to find the newer one (unlikely - it's on Freshmeat), or the author posted the thing a year ago and hasn't done anything with it since. (Sigh... that's not what Abandonware is supposed to mean. :-)

  • I'm writing my own Linux game, which will eventually (one can hope) be released sometime next year. I not only plan to show screenshots, but I also plan to release binaries. And yes, they will be x86 only.

    It's not because I have anything against people who aren't on an x86, it's because I don't have any of the other platforms required :)

    I'd like to make this clear when I release the game. Would something along the lines of "I'm providing x86 binaries only because that's all I have. If you'd like a PPC (or alpha, or sparc) package, and you have such a machine, contact me and I'll make you the official package maintainer" work?

    I guess I'm saying that there's no reason for large companies who can afford to buy this stuff not to release a PPC binary; however I personally can't release one because I don't have the cash :)

  • > they are talking about works-in-progress

    I fail to see the difference. Linux is a work in progress. There are test kernels being "released" all the time. New features are added. More compatibility is added (e.g. Pentium IV). Etc.

    > that is the way it used to be and it was a pain in the ass

    I know. Now, how many of your garden-variety users are going to make a journey to hell and back (assuming they're even able to) to try out an operating system that they probably won't like anyway? If Linux is to gain any kind of marketshare in the desktop world outside of a rounding error, your average user is going to have to be able to use it. GUIs are a step in the right direction, but if Yooser can't compile GNOME or KDE, and he doesn't have binaries, his reaction is probably going to be one of two:

    "This sucks!"

    or

    10 Enter chat room/message board
    20 Whine
    Goto 10
  • I second the motion.

    Source and Binaries reference different (overlapping) markets. Source is for the very adventuresome programmer, or the paranoid user. Object is for people who are more interested in seeing if it works. Otherwise said: some people like to break things -- some like to fix them. Even pre-alpha has a use for both groups.

    In some cases, even non-technical users can notice things that are much easier to fix before a product leaves alpha.

    In my own case, I will often download the binary out of laziness, and then, if I find a bug, I may either
    1.) report it to the appropriate authorities, or
    2.) download a current ({non-.}recent) source.

    But, as far as I'm concerned, there's no need to download the source until I find a problem/future feature that I think that I can meaningfully fix/contribute to in that way. Until then, I'll submit bug reports -- whether I have the source or not.
    `ø,,ø`ø,,ø!

  • The whole tenor of this discussion serves to amplify the notion that /. open source persons are pasty-faced guys who live in darkened rooms with a glowing screen, can't get a date, don't get out enough, and have little or no experience of the real world in which ordinary people live.

    Most computer users are not programmers. They don't know programming. They don't know "build". They buy a magic CD from Microsoft or whoever, stuff it in the drive, and it works. Mostly.

    If you want open source software to gain popularity among the masses, you'd better ship working binaries. With Installshield or whatever.
    While you play about in your own backyard with a pile of incomprehensible source code and even less meaningful makefiles (quiet at the back, hackers! -- we're talking about ordinary folks here), you will not see mass take-up of open source projects -- except among programmers who can't get a date and who can, therefore, while away their empty evenings finding out how to build someone else's half-baked code.
  • I think any package that wants to be of production quality should do both. There are some people who just want the binary, and they should be catered to. There are admins out there who are just okay; that's a fact of life for any IT job. If you have an elitist view about who should be a box admin, then you don't have any right to complain about more companies not using Linux. You can't have it both ways. It's not like making a binary is a big task in the grand scheme of the development cycle.

    However, that being said, source is also a good thing. For one, some applications may work better if they are compiled certain ways. When it comes to squeezing every ounce of speed out of an app, having the ability to compile for a certain CPU, or to use static libs, is generally a good thing.
  • Ahh, come on, goatse.cx isn't so ba-ba-ba-ba-bad. ;-)

    It can get under your skin but then on a certain level it can be funny too (I've seen some really good puns involving it, like stories about fat download pipes, shitty interfaces, etc.).


    --

  • "Precompiled binaries"? As if there were some other kind of binary?

    Easy: BASIC bytecode. Early versions of BASIC stored programs in RAM as bytecode to save space. For example, PRINT was stored as a single-byte token, and some systems even let the user enter the tokens directly as a keystroke saver - hence the common GW-BASIC shortcut ? for PRINT.

    Another kind of non-precompiled binary is heavily obfuscated C code used in portable yet proprietary "Unix programs." There are several automatic obfuscators for C code that remove comments, shorten variable names, and turn keywords into line noise.

    Yet another is the system used by many Alpha compilers. The Alpha architecture is notoriously hard to generate efficient jumps for; many Alpha compilers store RTL (an intermediate format used internally by compilers) in object files so that they can do additional code optimizations at link time, when jumps are easier to handle.


    Tetris on drugs, NES music, and GNOME vs. KDE Bingo [pineight.com].
  • How did you build it? Apparently your compiler was binary too.
  • And here we have the #1 reason Linux will fail.

    Not a flame - a fact. Too bad, too. Some people like myself enjoy downloading programs and trying them out without needing to track down 150 libraries and spend a day and a half trying to compile something.

    Oh, that's right: if you aren't a complete geek, you don't have any business using this OS.

    Very sad that people still think like this.

    ________

  • Source code is great, but so are binaries. Not everyone can understand or even compile code. Releasing source-only won't really encourage more pairs of eyes to look at your code; it's likely to do just the opposite, IMHO. I'm not sure it's a good idea to force things on people who got interested in open source because they wanted to get AWAY from that kind of abuse.

    People who would go through the code for themselves probably would have done so anyway even if there were binaries available. Those who can't help in that capacity could STILL help if they had binaries to run, test and provide feedback.

    I'd also classify screenshots as a good thing, as they give potential users some idea of what they're getting into. Aesthetics aren't everything, but they're not completely without merit, either.
    ---
    Where can the word be found, where can the word resound? Not here, there is not enough silence.
  • ipchains -A output -d 209.242.124.241 -j REJECT

    works as well.

  • Releases for Windows:
    Executable install program, which decompresses and installs the program, and sets up the registry entries.

    Releases for Linux:
    A project.tar.gz file, which you tar zxvf into your home directory, then ./configure, then make install.

    Binary releases are pretty useless, since not everyone runs the same CPU and libraries. So you have to release stacks of binary packages. Just release a well-set-up source distribution that compiles on all (supported) platforms, with prominent information in an INSTALL file that points out what platforms it will and won't compile on. Where's the problem? Linux gets the arguments from the RPM crowd and the apt-get crowd, but as far as I'm concerned you can't beat ./configure and make install.
  • This is not a constitutional democracy. This is a website run by people who can choose what goes and does not go on it.

    As I said before, it's free speech to make your point, even make it a few times repeatedly. But when your point is, in my opinion, dumb, and you repeat it constantly, one begins to wonder if we're arguing free speech or just giving stupid people ammunition to piss others off.

  • My reason for wanting binaries (preferably RPMs) is simple: I don't want to develop Linux apps, only use them. There is a whole host of people who dabble in Linux because they like developing for it, and I understand that. By the same token, however, I'm not at all interested in determining what variables are used in KCalc.

    It may have something to do with being raised on Windows machines (or, even before that, a TI-99/4A), but I've never had an interest in seeing the code of "professional" applications. Writing code for my own little apps is a different matter (also, whenever I produce something open source, I compile it as binaries for a few major distributions).

  • Open source moves inexorably towards World Domination(tm). Of course, to achieve world domination will require dominating the world. The population of the world is somewhat larger than the number of programmers/hackers who live in it. In order to achieve world domination, open source must dominate the entire population of the world. This, by definition, must include both my mother and my mother-in-law. What do my mother and mother-in-law want? They want simple. Simplicity is elegant. Current open source is not simple: download, extract, configure, compile, debug, curse, bang head into keyboard, get new version of some obscure library, repeat. It seems that to achieve its destiny, open source must embrace those who are not programmers - maybe even AOL users... maybe.
  • I am not a coder, but I am a user who can at least compile programs on my own. I have seen both sides of the fence, and there is a place for both.
    Remember, the original question included a mention of being able to demonstrate the program. That says Binary to me, whereas getting people to test the program needs to include compiling it.

    My short answer is therefore this: keep producing tarballs of source, but if you have a demo or similar, make an RPM to show how wonderful the program is!

    There endeth today's sermon.

    --

    Scientists today discovered signs of intelligent life on planet Earth.
    They believe the species died out last year.

  • Say the Linux distributions didn't want to release precompiled binaries.
    That's not the point. Sure 90+% of users want binaries, but then 90+% of users don't use pre-beta software, which is what we're talking about here. Release early and often, sure, but how to release? Personally I think source only is fine, if the project is not ready for the end user. If it's almost functionally complete (i.e. late alpha or beta quality, such as Evolution and Nautilus) then binaries should also be available.

    Of course there's no hard and fast rule, after all the Linux kernel still only has source releases.

  • Well, one normally checks to see if the user is mature enough to see the binaries!
  • As a PPC Linux user, I find that for any reasonably large package, if no one's made a binary package for my platform available, there's about a 10% chance that that package will compile and run successfully.

    Now, if you follow good programming practice you can avoid the common mistakes (usually endian errors and assumptions about the number of bits in certain types), but most people don't follow good programming practice. So releasing binary packages for multiple platforms is a good indication that the programmers are on the ball.
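
    To illustrate (a hypothetical fragment, not from any particular package): reading a 32-bit on-disk value by casting assumes both the host's byte order and the width of long, while assembling the value from individual bytes assumes neither.

        /* Non-portable: wrong on big-endian hosts, and reads 8 bytes
           wherever 'long' is 64 bits. */
        unsigned long read_u32_bad(const unsigned char *buf)
        {
            return *(const unsigned long *)buf;
        }

        /* Portable: explicit width, explicit (little-endian) byte order.
           Assumes only that int is at least 32 bits. */
        unsigned int read_u32_le(const unsigned char *buf)
        {
            return (unsigned int)buf[0]
                 | ((unsigned int)buf[1] << 8)
                 | ((unsigned int)buf[2] << 16)
                 | ((unsigned int)buf[3] << 24);
        }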

  • I'm far from being a hardcore programmer type (I'm really not that good at it...), but if you use GNU tools like make and autoconf, it shouldn't be all that difficult for even a casual user to figure out how to compile something. (./configure --prefix=/usr/local/stow/packagename-version && make && make install && stow /usr/local/stow/packagename-version )

    (Sidenote: stow is your friend! GNU Stow [freshmeat.net])

    The advantage to this is that the end user has to do more than untar the binaries. This is good because some binary packages may contain newer versions of libraries than the user has installed. Usually the result won't be more than something wanting to be updated or recompiled, but you can break your system fairly easily.

    Also, with source you have a bit more flexibility in which library versions the user has installed. (You need version 1.4.5 or above, rather than exactly the version the binary was compiled against.) You'll probably get fewer flame-style e-mails whining about "Why doesn't this work on my l33t system?"

    With software that isn't release quality, source-only distribution is acceptable and, IMHO, the best option. Keep the riff-raff away until it's closer to release quality.

    Screenshots - well, you always have to have screenshots. I often just look at screenshots and say, "Not quite ready; probably not worth my time yet. But gee, it sure looks cool..."
  • Bug reports and feature suggestions are immensely valuable. In fact, arguably, much of the value and enhancements to MS Office and similar products are the result of such feedback. And a good way of getting such feedback is by releasing binaries.
  • It's very rare that I see binaries that do me any good at all, as I run Linux on a nonstandard platform, LinuxPPC. So on the one hand, I feel that if you are going to make binaries at all, you should make them for as many platforms as possible. On the other, I think developers would do better to focus their energies on making the source easy to compile.

    So, if you are able to provide lots of platform support with your binaries (like Netscape, Seti@Home, etc...) go for it. Otherwise, go over your makefiles with a fine-toothed comb.

    -Alec

  • I third. Hey, let's put this on the next poll.


  • A good number of users, even power users, are not programmers. Many people are more interested in using the available tools than in writing fancy new ones. In order for some people to be effective programmers, power users, etc., they need pre-existing toys to improve and adapt.

    Of course, once you invest a large amount of time and effort into databases and scripts that rely on the quirks of some tool (either commercial or open source), you are not keen on tossing the whole lot out for new and better tools. Especially if the new and better tools do not fit into the hole the old one did.

    Many of us just like to unwrap the present and play with it, not spend half the afternoon making the toy and then seeing if it fits our needs. Even if the compiled binary were only 85% effective, it would give us an idea of what it does. {sarcasm}The largest software manufacturer in the world makes 85% effective software, and you fill in the other 15%!{/sarcasm}

  • If you install GNU Emacs on a classic Mac (pre-OS X), you can use M-x shell (ESC x shell) to drop down to a shell. Then you have the very primitive ability to crawl around in the Macintosh filesystem with a few commands (ls, cd, a few others). I suspect the command line could be extended with some Lisp code.

    Yep, that's one way to get to the command line (and be able to textually move stuff around) on a Macintosh.
  • by JoeBuck ( 7947 ) on Monday December 18, 2000 @01:25PM (#550576) Homepage

    There seems to be an unspoken assumption in this thread that open source programs are intended to run only on 386-compatible Linux boxes, so that one set of binaries will suffice for all users. Releasing binaries only for that standard vanilla platform is a nice way of reminding users of other systems that they are second class, and also this kind of development is a nice way to make sure that your software is less portable than it otherwise should be.

    For this reason I think it's better in the early days of a project to release source only. Binaries can wait until the software's in shape for use by non-programmers.

  • by Xerithane ( 13482 ) <xerithane.nerdfarm@org> on Monday December 18, 2000 @01:00PM (#550577) Homepage Journal
    Does it really cause huge havoc to release binaries? Let's see: it's not that much more work - it's easy and requires hardly any additional effort beyond releasing a properly set up tarball.

    The benefit: if I think the idea behind a project is cool and it's been in an active development state, and I download the source tarball, try to compile it, and it fails, I will probably desert the project. However, if I can look at something that actually runs, see that the time they've spent has been put to good use, and that it looks pretty solid and has a good start, then I'll wrestle with getting their CVS/alpha/pre package to compile and build on my machine.

    Whoever thinks binaries are a bad thing with no good merits - I feel sorry for them. However, any application that I use on a regular basis will be compiled from source and optimized for my platform if possible.

  • I for one just want the best OS for me, I don't care who else uses it... If to attain that we need to get the masses using it, so be it. But it's not a goal in itself.
    That's a bit short-sighted and simplistic in my (less than humble) opinion. Linux users want popular games, software, DVDs, ad nauseam. Much of this won't happen until the OS hits "critical mass" - that is, until it reaches a point where the companies making these things see that it is an OS they can make money from.

    I desperately would like to see Linux and/or BSD become more user friendly and more used, so that I can ditch Windows solutions completely and use one or both exclusively.
  • Developers aren't the only needed participants in Open Source development. Taking the attitude that only developers can contribute is very bad. Consider the guy who is certainly not an engineer, but a mere coder (he can hammer and saw, but cannot build a house).

    A non-developer that has access to a binary can:

    a) Write documentation, tutorials, etc.

    b) Exercise the application in ways that a user would, thus finding bugs that a developer would not.

    c) Ensure that the program does what it is supposed to. If there are reqs or specs, then they can be tested against; if not, at least it can be tested against the "web page".

    d) Get excited about the project and tell all of his developer friends.
  • by lsd ( 36021 ) on Monday December 18, 2000 @01:16PM (#550580) Homepage
    There are plenty of apps out there that are in a constant state of development, and yet are usable because of milestone builds. Two good examples are Mozilla and Evolution - both (IMHO) excellent apps which, thanks to having easily installable binaries (both are apt-able), I can easily use in a production environment at work.

    However, it's important to note that source is available for both. Before I had a full-time job (and less than 512Mb of RAM :) ), I used to compile moz milestones myself, so I could ditch the mail/news component and other cruft I didn't use. Milestone binaries are A Good Thing (TM), but they need to come with source to be really worthwhile.
  • by printman ( 54032 ) on Monday December 18, 2000 @04:51PM (#550581) Homepage
    I've found that releasing binaries is essential to making an open source project successful.

    That said, I usually don't release binaries for alpha releases or early betas that are likely to contain bugs - better to let the more experienced hackers (in the true sense of the word) run into any problems and report (or, even better, fix!) them than to spend days with a newbie only to find that they haven't found a problem but are using the thing wrong.

    Once you know the code is stable enough for mere mortals to use, get the binaries out! A lot of inexperienced users (and experienced ones, too! :) don't want the hassle of compiling the software themselves if they don't have to.
  • by Junks Jerzey ( 54586 ) on Monday December 18, 2000 @01:53PM (#550582)
    This assumes you are writing code with gcc. If you are using one of the myriad other compiled languages available, everything from Lisp to OCaml to Pascal to Eiffel, then you most certainly want to distribute binaries. Not everyone has the latest versions of those compilers sitting around. There are sound software engineering reasons not to use C and C++ for everything.
  • by gfxguy ( 98788 ) on Monday December 18, 2000 @01:12PM (#550583)
    I agree with one thing here, and that's that I like to put binaries where I think they should go (too many people think their stuff actually belongs in /usr/bin!).

    But the big problem I see with asking people to compile programs - and this goes for binaries too, since they are typically GPL'd and you need to have the source code available anyway - is the dependencies.

    In other words, who here has compiled Enlightenment with all the options? How many packages from other people do you need? How many image libraries and crap do you need? It gets ridiculous, because often those libraries have dependencies of their own. I like keeping things simple myself, so I just don't do it. Enlightenment is a good example of something that I might want to try - but not enough to spend hours online downloading dependencies and then wading through it all to make sure everything gets compiled in the right order. So I tried what came with my Linux distribution, thought it was kind of bloated for my liking, and saved myself a LOT of time.


    ----------

  • by 11thangel ( 103409 ) on Monday December 18, 2000 @12:57PM (#550584) Homepage
    I like the idea of releasing binaries AND source. Some programs, such as GNOME or X, I just don't have the patience to compile from scratch, so RPMs are convenient. Other things, such as GAIM, I update from CVS daily. As for screenshots, I like to use those when first downloading a program, to see if the program is just a GUI or is actually filled with a few features. Sometimes they are misleading, but they are good for hooking new users. It's all personal preference.
  • by yerricde ( 125198 ) on Monday December 18, 2000 @03:06PM (#550585) Homepage Journal

    Distribution of binaries is of the utmost importance for platforms like Windows, where a compiler does not come with the operating system, and the compilers that are readily available are often non-free.

    So what if MinGW [mingw.org] or Cygwin [redhat.com] doesn't come with the system? They're both easy to download and install, and they're both GPL'd free software (based on GCC and other GNU stuff). Or you can use the (non-free but free-as-in-beer) LCC [virginia.edu] compiler. Mac OS 9 systems (which can't run OS X because they don't have a G3 mobo and 128 MB of RAM), on the other hand, don't even have a command line; good luck getting GNU anything to work.


    Tetris on drugs, NES music, and GNOME vs. KDE Bingo [pineight.com].
  • by Chops ( 168851 ) on Monday December 18, 2000 @02:08PM (#550586)
    echo 127.0.0.1 goatse.cx >> /etc/hosts

    Problem solved.

  • by update() ( 217397 ) on Monday December 18, 2000 @01:11PM (#550587) Homepage
    The one drawback, I think, is that binaries may generate a lot of unhelpful bug reports. (See this one [kde.org], for example.)

    Binaries are susceptible to all sorts of little inconsistencies between installations that source can pave over. The result is a flood of "I get this error on Storm Linux with XFree86 and GTK whatever" mails. Also, releasing source-only creates a small barrier to entry that restricts distribution to people who understand what "pre-alpha" means.

    Screenshots, on the other hand, seem like they're always a good thing.

  • by Falkenberg ( 234137 ) on Monday December 18, 2000 @01:06PM (#550588) Homepage
    Let us not lose sight of the fact that we are talking about the creation of software, whose ultimate goal is to be useful. Some people do not wish to program, but are willing to supply a list of the bugs they have found. This is an important part of the development process, even in the alpha stages. It is said that a bug caught in the early stages of development can be fixed in an hour, but may take weeks to fix if found at a later stage. So yes, allow the binaries to be downloaded. You may lose a programmer or two, but in the long run you are allowing a greater range of people to contribute through commentary and bug reports.

    Falkenberg
  • The truth is that it is in the packaging. Any time that something becomes difficult to install, many people will drop it. For example, I was just trying to get Bochs [bochs.com] up and running last night. When I tried to compile the code, the compile would fail every time. I went in and found that some of the header files had a definition of NULL that the compiler didn't like. So I changed it and tried again. Now it was something else. So I found the binary for my system and tried that. Of course, libstdc++ just said "symbol __ti9exception not found". Finally, I just downloaded a BSD ports package which installed and ran without issue, even though it was source.
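
    (To give a flavor of the header problem - this is a made-up illustration, not the actual Bochs code - the usual fix is a guarded, language-aware definition of NULL:)

        /* Hypothetical header fragment: define NULL only if the system
           headers haven't already, and give C++ a definition it will
           accept, since void * does not implicitly convert to other
           pointer types in C++. */
        #ifndef NULL
        #  ifdef __cplusplus
        #    define NULL 0
        #  else
        #    define NULL ((void *) 0)
        #  endif
        #endif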

    The reason why binaries are more popular is that they are generally easier to package. The dependencies of source are such that you must have the right compiler, the right linker, the right header files, and (sometimes) the right platform. The dependencies of binaries are such that you must have the right platform and the right libraries. The libraries can often be included or automatically installed via a packaging system. This makes binaries far easier to get running.

    So what's the moral of the story? Package your binaries to meet the needs of the target audience, and package your source to meet the needs of its audience. Between these two, your customers (paying or not) will be far more pleased.
  • by QuMa ( 19440 ) on Monday December 18, 2000 @01:04PM (#550590)
    Remember, we want linux as a desktop for the masses, right?
    We do? I for one just want the best OS for me; I don't care who else uses it. If attaining that means getting the masses to use it, so be it. But it's not a goal in itself.
  • by gfxguy ( 98788 ) on Monday December 18, 2000 @01:06PM (#550591)
    Being into graphics, I really need to see screenshots before downloading some of these graphics applications - but that's not the only thing I consider, of course.

    I also download both the binaries and the source (nice to have a cable modem) depending on the program. As has been mentioned, I'm not interested in compiling a spreadsheet or word processor. Trying to force me to do something a developer should be doing isn't going to make me want to help.

    Everybody has their own areas of interest, and in those areas I'll look at the source, and maybe change and compile it, but not for other things. It's ridiculous to expect everyone to compile every program. Don't we want to encourage the use of Open Source across all demographics of users?

    In any event, not giving binaries will open the doors for new websites, maybe ad sponsored, that let you download the binaries anyway.
    ----------

  • by hipokrit ( 131173 ) on Monday December 18, 2000 @01:01PM (#550592)
    I believe that the lack of binaries being released for linux is one of the major drawbacks keeping it from becoming a major OS competitor. Many of my tech-friendly friends have installed linux, but lacked the programming skills to install a lot of the software they wanted. The difficulty of compiling software has actually made some of my friends reject linux as an operating system. I do believe that source code is important, but for linux to become a viable operating system, binaries will have to be released more often - or at the very least, there needs to be an easier way to install software.
  • by Fervent ( 178271 ) on Monday December 18, 2000 @01:15PM (#550593)
    This is "Offtopic". Moderaters, lay off.

    Is anybody else on Slashdot tired of these childish goatsex links? It really is a distraction, even after I set my threshold to +1 and above (occasionally I want to dip down to see what ACs say, and most of what I read are these links).

    Two suggestions. First: ACs who post this, get a new hobby. Even the juvenile posts about grits were better than this (at least there was no image to fill up my workscreen).

    Second, Rob, Hemos, whoever's in charge of these decisions: ban the dumb link. It's one thing if it's "freedom of expression". It's another thing to see the same damn picture over and over and over again. If you cry "first amendment right", let me just say we heard you the first time, poster. Now grow up.

  • by Tumbleweed ( 3706 ) on Monday December 18, 2000 @01:08PM (#550594)
    If I can see a screenshot, I can get an immediate idea of how the interface of the program works. As a UI designer/developer, I'm SUPERPICKY about interfaces on the apps I use.

    There are many FTP clients, for instance, and most of them will do everything most people expect them to be able to do. The difference for most of them is in the _interface_.

    Downloading a screenshot lets you know right away whether this looks like the kind of interface you'll be happy with, without the trouble of downloading a full binary and installing it, much less the time and trouble of downloading the source to an app, compiling it, installing it, etc. If all you want is an idea of the interface concepts being used, a screenshot is the ONLY sane thing to use.

    Mind you, that's about ALL it'll tell you - but the interface is all-important. It doesn't matter what an app is capable of if you can't figure out how to use it. What kind of life do you lead if you're willing to put up with annoyingly-designed software all the time?

    Screenshots could also be used by savvy app developers to find out what people think of their interface. If you have the binary or source available on your site, and a screenshot or two, take note of how many people check out the screenshot versus how many download the app. Take a look at the ratio and get a clue about your interface. There's a REASON KDE & Gnome exist.
  • Provide early binaries, as soon as you are ready for non-programmers to help you find bugs. Compile them with -g and make sure they clear the core-dump-size limit when they start execution, so that you can get a valid core dump.
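
    (A minimal sketch of that startup step, assuming a POSIX-ish system - error handling omitted:)

        #include <sys/resource.h>   /* getrlimit, setrlimit, RLIMIT_CORE */

        int main(void)
        {
            /* Raise the soft core-dump-size limit to the hard limit, so a
               crash leaves a full core file that users can send in with
               their bug reports. Compile with -g so the dump has symbols. */
            struct rlimit rl;
            if (getrlimit(RLIMIT_CORE, &rl) == 0) {
                rl.rlim_cur = rl.rlim_max;
                setrlimit(RLIMIT_CORE, &rl);
            }

            /* ... the rest of the program ... */
            return 0;
        }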

    People who want source will click for source. Certainly I've debugged many a Debian program starting only with a binary, and then downloading the Debian source package.

    Thanks

    Bruce

  • by Kreeblah ( 95092 ) on Monday December 18, 2000 @12:57PM (#550596)
    OK. Say the Linux distributions didn't want to release precompiled binaries. Say they wanted to make their users truly understand how the various distros work. How many of you would still have tried Linux if you had to compile all the binaries yourselves?
  • Eric Raymond, in his seminal work The Cathedral and the Bazaar [tuxedo.org], stated that one of the ways to create a successful Open Source project is to release something that developers can use and find useful. As a developer, it is easy for me to run a program and decide whether I think it has potential; on the other hand, it's a pain for me to look at 10 - 100 source files trying to figure out if the design is good and why I can't compile it.

    Another good thing about releasing binaries is that it gives the developers more incentive to fix bugs and create milestones than if they just released source and makefiles at random because it means they have to make the software run as smoothly as possible and tackle usability/configuration problems early.

    In my opinion, screenshots are not as useful, but they still serve a purpose, such as enticing people who are just browsing through projects on SourceForge to take a closer look at yours.

    Grabel's Law
  • I've been programming in C and using unix systems for over 10 years, and linux since kernel rev 0.99pl14 (a few months before 1.0). The days of POSIX and linux are much better than the bad old days, when you'd often have to edit the source and change <strings.h> to <string.h>, plus dozens of other minor (and many not-so-minor) tweaks that I'm thankful are only a distant memory. When I was a grad student at OSU [orst.edu], I'd spend a lot of late nights trying to get code (usually written at Berkeley) for SunOS to compile on HP/UX (HP has a major presence in Corvallis, which is otherwise a college town), 'cause the free code from Berkeley tended to work a lot better than the bloated crap from a major EDA vendor [mentorgraphics.com] located about 70 miles to the north (that was their 8.0 release, which basically didn't work at all, it had so many bugs).

    Today's world is so much nicer: "./configure", "make", "make install" (well, I'm a bit wary of that last part, as it usually needs root). When this very nice process doesn't work, usually the configure script tells you what you need to do. Pretty cool.

    Still, there are source-only distributions that fail to build. Now, I can understand this if it's from an up-to-the-minute CVS, but from a tarball on a web page or ftp server, that's not so cool. As a programmer, the software needs to be something pretty special for me to go dig in and fix the build process. It's just not fun work (particularly for a large project), and unless you've got quite a bit of experience, it can be nearly impossible.

    So if you're an open/free source author and you don't offer binaries, make sure the code builds on the systems you're hoping your users have.

  • by bcrowell ( 177657 ) on Monday December 18, 2000 @01:11PM (#550599) Homepage
    I'm not sure I even buy the argument that a project should reach a certain point of maturity before one releases the binaries.

    Several reasons:

    1. It seems to be a Law of Nature that open-source projects attract the most help when they need it the least -- i.e. once they're mature. At the beginning, it makes sense to do everything you can to encourage people to participate, including enticing them with binaries.
    2. Early on is when people are most likely to encounter problems compiling through no fault of their own. Come on, how many software projects are designed to be perfectly platform- and compiler-independent from the ground up? (See the sketch after this list.)
    3. If you're starting out with a one-person project, you have the luxury of waiting as long as you want before you even open-source the code. Suppose you write an initial version that is full of security holes, but that does demonstrate some key functionality. You might want to release a binary, then spend a month fixing the security, then open-source the project.
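
    (A made-up fragment, not from any particular project, showing the kind of platform wart I mean - byte-order macros live in different headers on different systems:)

        /* Hypothetical portability hazard: each platform needs its own
           header for the byte-order macros, so any platform the author
           never tested simply fails to compile. */
        #if defined(__linux__)
        #  include <endian.h>
        #elif defined(__FreeBSD__)
        #  include <sys/endian.h>
        #else
        #  error "No byte-order header known for this platform - port me."
        #endif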

    I think it depends a lot on the project. My only open-source project [lightandmatter.com] is an applet that shows the planets in the night sky. I've gotten lots of help from strangers with translating it into various languages, and that's actually the full extent of other people's involvement since I open-sourced it. I don't think any of those people would have known or cared about the project if it hadn't already been an applet that was sitting there on my web page and was actually useful for something.
