GPL and Project Forking 178

Norm writes "Linuxcare is running an informative article about project forking and how the GPL serves to prevent forking in many cases; a good exposé on a timely subject, given recent fears about kernel forks, etc." Being at Comdex, I've heard a lot of people wondering why Linux stays together, questions about what the GPL means and how it works. There seems to be a lot of confusion about what different distributions actually mean.
  • by Signal 11 ( 7608 ) on Thursday November 18, 1999 @08:01AM (#1521284)
    I'm not confused - not a single little tiny bit. Alan Cox has an entire "fork" of the Linux source sitting on his hard drive, and he syncs in changes from other developers on a regular basis. Eventually Linus will put those patches in. I've seen a plethora of Linux patches on the 'net doing everything from hardening the kernel against hacking attempts to running it on hardware I didn't even know existed until they released Linux on it!

    Forking is not the big issue people think it is. Usually the fork is necessary to, say, test out a new idea (egcs, for example), or because the current development has gotten stale (gimp). In both cases there was negligible impact on the users. egcs has superseded gcc, and has done so without incident. There are major differences between the internals of gcc and egcs. Gimp development has been "revived" (some question if it was ever dead!), and everything is happy in Linux land.


    --
  • by Denor ( 89982 ) <denor@yahoo.com> on Thursday November 18, 1999 @08:12AM (#1521287) Homepage
    That is, somebody may eventually propose to the Linux kernel team some extension that's simply outside the scope of the project, and yet it builds enough support, and has enough reason to exist, that it proceeds anyway.

    I think reviewing this [slashdot.org] old Slashdot feature ("TurboLinux Releases "Potentially Dangerous" Clustering Software?") in the light of this newer article is particularly interesting - folks were worried that TurboLinux might have a clustering solution. A clustering kernel is pretty specialized, and would have pretty much the same qualities that the article recognized as being required for a code fork.
    Question: If the code does fork, do they still call it Linux, or is that just going to create confusion?
  • by afniv ( 10789 ) on Thursday November 18, 1999 @08:13AM (#1521288) Homepage
    I seem to recall, say about a year ago, the answer to everything regarding the benefits of open source was essentially: just fork it!

    You have the source code, you have your bright ideas (so you think), and you want to make some software better. So open source, being open source, would promote better software by allowing anyone to fork the code to increase its quality. I think even Netscape was used as an example. If you don't like the browser, build your own from the pieces of Netscape that do work (can you do this with the Netscape license?).

    But lately, with folks talking about forking the actual Linux kernel, forking seems to be the bad answer.

    I think instead of arguing over forking, the argument should be over freedom of choice. Whether you fork or not, it's GPL. If someone likes your forked code better, isn't that success? If someone doesn't like your code and stays with the original, isn't that success?

    Who gives a hoot about forking? I have the Linux that I have. I have all the source code I need. If I ever learned programming, I could do more about it. Since I don't, I can hire somebody to make changes for me. If there is a better feature, I'll just add it. But, I figure, if it's really that cool of a feature, someone else will add it to the current Linux kernel or other open source software as well.

    The short answer or question is: Who's going to take the software (or OS) away?

    Still, I'd like to count how many times forking was discussed as a benefit of open source before it seemed to become a dirty word.

    ~afniv
    "Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"
  • by _ska ( 114561 ) on Thursday November 18, 1999 @08:16AM (#1521290)
    This article is quite well done. The scope is not that large, so he doesn't get bogged down in all the usual license bickering. He does make a reasonably good case for why the GPL dissuades certain types of forks, and why OS in general dissuades forking overall. I find the comments on the BSDish license promoting (a) fork interesting but underdeveloped.

    The idea that a lot of non-OS people are having difficulty getting is that for a fork of an OS project to be effective, there must be some sort of 'collective' agreement that it is a good idea by (a significant part of) the community using that project. In many cases this is simply not going to happen. But it does allow the fork to occur when sufficient people believe it is necessary (i.e. gcc->egcs->gcc).


    I think the examples he gives are useful for neophytes and others who wonder what a fork is all about. I'm glad he resisted the urge to go into the muddled history of some of them in great depth --- that can be found elsewhere on the net/usenet if you really need to read about how obnoxiously some people behave in the name of protecting their favorite project...

    S.
  • If the code does fork, do they still call it Linux, or is that just going to create confusion?
    Yes, it's still called Linux, and maybe it'll cause confusion. Doesn't matter.
    Red Hat is different from Debian is different from SuSE; they're all fundamentally the same but they do have their differences. A 'true' fork is the same thing but on a larger scale.
    If everyone likes Corel's new install, we'll see it appearing elsewhere. If they don't, we won't. If this clustering software from TurboLinux is that shit hot, it'll be assimilated. If something is cool but not Free, it may enjoy usage but it won't become part of Linux.

    That's a bit of a ramble, so I'll sum up by agreeing with what others have said: forks don't matter.
  • Seeing as little of this article will be news to most delving into this thread, here's a link on the flagship Queen Elizabeth 2 [odin.co.uk] to help you appreciate that analogy while increasing cultural literacy.
  • by nuintari ( 47926 ) on Thursday November 18, 1999 @08:22AM (#1521293) Homepage
    Why is it that every person I have ever heard proclaiming the usefulness and flexibility of Linux has claimed that forking the code is a possibility, and a great idea, but whenever it makes Slashdot news people rise like religious zealots to knock it down? "Don't mess with our source!" they say. Well, it's my source too, it's all of ours, and we can change it to fit our needs, and we should. If we change it for the better, it may cease to be a fork; if we change it for the worse, it'll probably just die off. Code forking is at the heart of the GPL, and almost crucial to its effectiveness.
  • by EngrBohn ( 5364 ) on Thursday November 18, 1999 @08:24AM (#1521294)
    At a recent presentation by SGI (Linux University Road Tour), I think it was put best (paraphrasing):
    "Like all evolving open-source projects, Linux will always appear to be on the verge of forking, due to the constant experimentation going on. This is a healthy thing for Linux."
    Christopher A. Bohn
  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Thursday November 18, 1999 @08:25AM (#1521295) Homepage
    Is there a worthwhile feature that anyone has developed, debugged, and perfected that the Linux guys have said no to?
    The following items are not all of identical merit, but looking at things through overly-rosy glasses is unwise.
    • Support for ANSI C, and thereby the ability to compile the kernel using EGCS?

      We still have the situation described in GCC v2.95 Or Higher Still Out Of Favor For 2.2.3pre11 [linuxcare.com]

      (That has changed recently, but Linus was refusing "EGCS compatibility" patches for rather a while there...)

    • Removal of arguably-obscene language [linuxcare.com] took a long time to be accepted...
    • ACLs haven't yet seen implementation [linuxcare.com] (although nobody has seriously tried)
    • devfs [linuxcare.com] has been tracking Linux kernels for quite some time, but has never been included.

      Recently discussed. [linuxcare.com]

      This debate first came up in Issue #25, Article 2. This time it started innocently enough with a discussion of USB device number allocation. Pavel Machek pointed out that USB was finally starting to get useful, which meant it was time to allocate /dev entries for various USB devices. He allocated 32 entries for 16 devices, and Steffen Grunewald asked about other USB devices like monitors, speakers, etc., and Dan Hollis replied, "The desperate need for devfs becomes all more clear." At this point there was no turning back. The debate raged for about a week and a half, generating over 600 posts. Linus, although back from vacation, posted nothing in any of the related threads, while Alan addressed his posts strictly to peripheral, technical details.
  • by Tokyo Joe ( 102302 ) on Thursday November 18, 1999 @08:27AM (#1521296) Homepage
    There is a lot of misunderstanding in the Windows world about the different distros.

    In my place of employment, people think software has to be re-written for each distro, or at the least re-compiled. They get this idea from industry magazines that mindlessly push the M$ FUD.

    I try to point out that it's no worse than the software differences between NT and 95 etc., but that at least I can re-compile if I need to...

  • by _ska ( 114561 ) on Thursday November 18, 1999 @08:30AM (#1521297)
    I think it is just you ;)

    Seriously, I have always considered forking of an OS project to be a last resort. If you have the 'bright ideas' and the source code, you can become a *contributor*. I think you will find that this is (and always was) a pretty common outlook on the subject. The ability to fork is great. The necessity of forking is unfortunate.

    The point behind (successful) forking was always that if a group of people irreconcilably disagree about what constitutes a 'bright idea'... they can split up and each go after what they see as best. If you and your bright ideas are operating in a vacuum, it isn't much of a fork. Another possibility is that a project has fallen into disrepair, or hasn't been updated as rapidly as people want --- but the people involved refuse to do anything about it or let you help. Then OS means you can pick up where they left off.

    Both these situations are pretty extreme though. In general, everyone will win if you can figure out a way to patch up your differences and all push in the same direction, no?

    S.
  • Forking is only a problem when the fork tries to create a new standard or refuses to obey the old standards and interfaces. That's why open standards are so important. So long as people are writing code that obeys a well-defined interface/protocol, there is no problem with forks, because people can still use pieces of the new and the old as they see fit (see the sketch below).
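    A minimal sketch of that point, in C. The codec interface and both implementations below are hypothetical, invented purely for illustration: so long as the original project and its fork honor the same agreed-upon interface, a caller can plug in either one without caring which fork it got.

        #include <stdio.h>

        /* The agreed-upon interface: the original project and the fork
           both implement this, so callers can mix and match. */
        struct codec {
            const char *name;
            int (*encode)(const char *in, char *out, int n);
        };

        /* The original project's implementation: a plain copy. */
        static int old_encode(const char *in, char *out, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                out[i] = in[i];
            return n;
        }

        /* The fork's implementation: flips the case of ASCII letters. */
        static int new_encode(const char *in, char *out, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                out[i] = in[i] ^ 0x20;
            return n;
        }

        int main(void)
        {
            struct codec codecs[2];
            char buf[4];
            int i;

            codecs[0].name = "original"; codecs[0].encode = old_encode;
            codecs[1].name = "fork";     codecs[1].encode = new_encode;

            for (i = 0; i < 2; i++) {   /* either one plugs right in */
                codecs[i].encode("abc", buf, 3);
                buf[3] = '\0';
                printf("%s: %s\n", codecs[i].name, buf);
            }
            return 0;
        }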
  • I think he was to the point. In my opinion forking works well with the GPL, because those forks that are useful to the community as a whole are incorporated into the main source. Yes, you may have specialized forks, but wasn't that the purpose of having the code be open, so that you could adapt it to your own needs? Having the source available allows you to experiment and try out new ideas built upon the old; the good code stays around or is incorporated into new projects, and the bad or stale falls by the wayside.

    ********

  • I personally like forks.

    Not everyone's requirements are the same. It's not possible for one package to meet those requirements when they are fundamentally incompatible.

    This gives us 3 solutions:

    1. Don't meet some people's requirements.
    2. Have multiple very different packages.
    3. Have multiple similar packages.

    Personally, out of this list, I'd much rather have multiple similar packages than either of the alternatives.

  • The argument in the article is 100% on the mark. However, I was just thinking... all this only works if the GPL is actually enforced. A large company could easily come along, take GPL code, add stuff to it, and make a proprietary product out of it (ignoring the GPL). If nobody takes legal action against this, it would just result in the "bad" forking scenario seen among the Unices. Does the FSF enforce the GPL this way (not just for their own software)? Would we have enough funding (and manpower?) to enforce the GPL should the need arise?

    (Note: I'm not being critical of the FSF or the GPL or whatever, this is just my consideration.)

  • by Anonymous Coward on Thursday November 18, 1999 @08:36AM (#1521304)
    If all you care about is "They can't take it away from me", then obviously you don't care about forking.

    If, on the other hand, you actually want Linux to be a viable, commercially acceptable commodity, widely installed in homes and businesses... you had better damn well care about avoiding forking.

    People have gone through all kinds of obscene contortions and backflips just to avoid dealing with more than one vendor, or more than one version of ANYTHING.

    This "one vendor über alles" mentality by the CONSUMER is a huge reason why Microsoft got so entrenched. So if you want Linux to succeed in the same areas, pay attention to what has worked in the past.


  • I don't believe that the GPL has ever been tested in court. Does anyone have any examples/information about this?
  • Now that you guys are going public, all the ppl at /. should be able to afford this:

    http://www.amazon.com/exec/obidos/ASIN/020530902X/o/qid=942953894/sr=8-1/002-9418028-7768260

    Please buy it! Cripe, gimme your addresses and I'll buy it for you ppl!
  • I wouldn't call Alan's version a 'fork'... except in the trivial sense that every multi-developer project has a bunch of forks on different people's drives. Probably ditto for the small patches --- I think the existence of these is a feature of the development model, certainly not a fork in the sense of emacs->xemacs or BSD->SunOS.

    The capability to fork in the above sense is both necessary and mostly undesirable --- but not something to freak out about. I wonder how much of the current mindshare GPL forks have is due to FUD from various commercial interests?

    S.
  • Agreed. The various kernel patches (FreeSWAN, ReiserFS, the real-time patches, the various ACL projects, etc.) all constitute "forks" in the sense that they are development off the main tree.

    Yet, despite all these "forks", there are no world wars going on, no chaos and confusion, aircraft aren't falling out of the sky - even UserFriendly is surviving, despite the recent soap opera.

    It's very true that forks allow for fresh development. Indeed, I can't think of a single major project which -doesn't- have a development tree and a "stable" tree, which is in itself a fork!

    The issue is pure FUD, in its most classic form. Not necessarily deliberately so, but it wouldn't surprise me if certain people who would *cough* appreciate Linux getting a bad rep, ummm, encouraged a certain level of hostility.

  • Yes, but even then, with two open compilers, there would have been competition between the two camps, fueling innovation and a desire to be better. This would have been good, and the better ideas could be incorporated by both.

    Forking happens all the time, but the source is available. The best code can always be built on and improved. Like in Zen, perfection is always striven for, but never achieved. It's a "Good Thing". :)

    ********

  • by mr ( 88570 )
    Sun OS was the BSD product.

    Solaris is the result of Sun paying a one time licence to AT&T and then making changes/bringing in BSD compatibility.

    (hence all the hate and discontent of some Sun users when Sun OS 4.x was dumped for Solaris)
  • Uh, you might want to read the referenced article, which covers both the egcs/gcc split and the xemacs/emacs split.

  • Now the tweaked, buggy egcs has been foisted off as the ONLY upgrade to gcc. Which means sane, reliability-minded people who loved the old gcc now have nowhere to turn.
    Unless, of course, you take the last "pure" gcc version and fork it to ACgcc.
    Christopher A. Bohn
  • in the article, the example is given of a hypothetical Linux fork called "fooware os" and the additional hypothetical that the perpetrator of fooware is a crack ninja programmer with an army of software ninjas straight out of a bruce lee film.

    one alternative not explored: suppose the fooware ninjas come up with a cool thing and Linus says, "no way, that's not going into the kernel." in this case, the coolness of the cool thing increases pressure on the system to either accept the patches into Linux, or switch over from Linux to Fooware.

    what's lovely is how OpenSource routes around intransigence. we're all human, nobody's perfect, and the character traits that make us great software developers also cause us to get in our own way. when that happens, the fooware os fork becomes a Good Thing.

    Also, hidden in the fork between gnu/emacs and xemacs were the different programming styles. the procedural-versus-OO split indicates that team-based projects will probably stick with OO or easily modularized designs, and gnarlie "keep it all inside one head" projects will be one-great-man projects. the risk of the gnarlies is the death or disinterest of the one great man. note how OO and componentized strategies favor open source teaming.

    sorry, i'm stating the obvious.
  • I love Linux and I hate it. You know why I love it. Now listen: the Windows UI paradigm provides some Good Stuff. Shift-selection. Standard cut-n-paste hotkeys. Control-tab for MDI window cycling. Etc. There are a bunch of desktops for Linux, and they're mostly damned good. Unfortunately, they all use different paradigms. Therefore, Linux lacks the app/desktop standardization that gives me "moderate proficiency" in any new Windows app that I happen to pick out of the trash bin. The "design fork" in the Linux desktop paradigm (if I may use the p-word without sounding like a PHB) makes it seem unlikely that I will see this kind of stuff as Standard Feature stuff in Linux apps anytime soon. None of this keeps me from running Debian on 2 of my 3 machines, but the third one is the one that I do all my work on. Flame away. I hope to learn that I'm wrong.
  • Well, if you've ever submitted an article, then you'd know that the user types out the headlines. I can't say whether Hemos edited this at all, but I think flaming someone for that is senseless.

    I was going to moderate you down a point, but I decided just to reply.

    Maybe if you think Slashdot is so bland, you should start submitting interesting articles?

    tyler
  • by smileyy ( 11535 ) <smileyy@gmail.com> on Thursday November 18, 1999 @09:00AM (#1521324)

    I'd suggest a read of Jakob Nielsen's column on writing microcontent [useit.com]. Some useful snippets:

    Online headlines are often displayed out of context: as part of a list of articles, in an email program's list of incoming messages, in a search engine hitlist, or in a browser's bookmark menu or other navigation aid. Some of these situations are very out of context: search engine hits can relate to any random topic, so users don't get the benefit of applying background understanding to the interpretation of the headline.
    Even when a headline is displayed together with related content, the difficulty of reading online and the reduced amount of information that can be seen in a glance make it harder for users to learn enough from the surrounding data. In print, a headline is tightly associated with photos, decks, subheads, and the full body of the article, all of which can be interpreted in a single glance. Online, a much smaller amount of information will be visible in the window, and even that information is harder and more unpleasant to read, so people often don't do so. While scanning the list of stories on a site like news.com, users often only look at the highlighted headlines and skip most of the summaries.

    Also, the impact of good headlines can be seen in this article on the cost of poor information on intranets [useit.com]; it's relevant to anything that has a large number of readers, though the economics aren't as direct.

    Consider, for example, the impact of violating the guidelines for microcontent authoring in writing the headline for a news item on an intranet home page. For a company with 10,000 employees, the cost of a single poorly written headline on an intranet home page is almost $5,000. Considerably more than the cost of having a good home page editor rewrite the headline before it goes up.

    If Hemos spends 5 extra minutes writing a clear, concise headline, and that saves 10,000 Slashdot readers 5 seconds of scanning and thinking each, then that's a net gain of 49,700 seconds for the /. community (10,000 × 5 s = 50,000 s saved, minus the 300 s he spent).

  • Not all forking is bad. Where two groups intend to take a project in mutually incompatible directions, there should be a fork. For example, if one group wants to make the Linux kernel work well in multi-processor scenarios, and another wants to make the Linux kernel into an RTOS, there might be changes that each would need to make that would be incompatible with the other. In a case like this, there should be two different versions of the kernel, because they are justified by the very different goals of the two groups (before I get flamed, this is just a hypothetical scenario -- as far as I know an RTOS multiprocessor kernel is perfectly feasible -- but there must be some situations where incompatible goals spawn incompatible code).

    What open source development discourages is bad forking. For instance, if I went into the Linux kernel and made a bunch of trivial changes to suit my tastes, without any real benefit to others, my forked kernel would sit there gathering dust -- no one else would work on it. That would be a bad fork. A good fork is one which is justified for a good reason, and for that reason it is supported by a community of developers willing to work on it.

    Just my random thoughts on the matter.

    -Steve

  • I think it's an internal argument between the "One Big Tent" people and the "Celebrate Diversity" folks.

    The thing is, there's enough room in the big tent for all sorts of diverse activities.
  • I might just be missing the point, but it seems that if the kernel or any other open source project forks, it will be up to the users to decide which versions keep being developed. If no one uses the new fork when it becomes stable, the developers of that fork will most likely stop development on it.

    As well, while reading over that article, it struck me that a lot of the things Linux users rely on are themselves forks, so forking might not be the worst thing in the world.

    Patryn
  • The original post (and all those succeeding it) are off-topic. It'd be nice to have a "meta" flag that could be turned on for posts that talk about the post itself rather than its contents; that way, they could be filtered out. Also, a forum for discussing the mechanics of /. might be nice, so people can be on-topic when flaming Hemos for his English skills. =)

  • IMHO, the biggest legal issue with the GPL is that the user does not necessarily agree with the licensing terms: that is, they didn't actually sign anything.

    The irony is, if a commercial shop uses this to break out of the GPL, the same legal precedent can be used to break all shrinkwrap licenses.

  • by Anonymous Coward
    Gosh, you're a quick one, aren't you? Saw the headline, read Hemos's blurb, and pulled not one but TWO examples of forking right off the top of your head! Very, very good! Now what those of us who actually READ the article know is that those are only two of SEVERAL examples which the author, an excellent writer, used to demonstrate his point about code forks. Now that you've proven that you understand his plot devices, why don't you stop by LinuxCare and read the guy's conclusion too? It's very good, I promise.
  • by sethg ( 15187 ) on Thursday November 18, 1999 @09:15AM (#1521332) Homepage
    The GPL hasn't been tested in court, but companies with a strong financial incentive to violate it, and whose IP lawyers could find any loophole in the GPL worth exploiting, have decided that they'd rather comply with it than challenge it in court.

    See, for example, the "Pragmatic Idealism" [gnu.org] essay on the FSF's Web site. NeXT made an Objective-C front end to the GNU C compiler, and wanted to make this front end proprietary. The FSF's lawyer told them this would violate the GPL, and NeXT gave in.

  • Can someone explain to me who these "Linux guys" are specifically? Who does make the decision to put something in?
  • Look at SGI's contributions [sgi.com]. Linus nixed a few and incorporated others. No offense intended to Linus, but I'd certainly trust SGI to have a lot more knowledge of what makes an OS scalable than Linus does. That's their territory. It's not his fault, though. Those machines cost $$$, so no developer is going to independently purchase one just to develop a more scalable SMP kernel.
  • I believe that the desktop development forking is causing pain today, but will improve Linux in the long run.

    I would not even call the multiple desktop problem a true "forking" issue, since I don't think that the desktops started from a common source.

    In the short term, you have a host of competing desktops, all trying to be The One True Desktop. However, since it is more professional pride/ego than dollars motivating development, the competition is more likely to be a footrace than a demolition derby. That is, I don't expect the GNOME and KDE guys to put any work into keeping the other from working well.

    What will happen? Binary Darwinism. The poor interfaces will die out, and their good features and good developers will be at least partly absorbed by the better ones. Eventually, there will be either One True Desktop, or Several True Desktops that the user can choose from.

    The Open Source community can afford to "burn" effort making multiple attempts to solve the same problem; indeed, I think that we can't afford not to. The diversified desktops of today will show us what a good desktop would be like, and the myriad will merge back to one or several.

  • I assumed from (your?) 'reliable and stable' comment that 2.7 was meant.

    2.8.1 wasn't reliable *or* stable, for me, on Suns, Alphas or i386s... I don't know what the earlier post was referring to. 2.95.1 has been quite good for me --- but I have been using more i386 boxes lately than when I was playing with 2.8.x, so that may skew the results a bit.

    Note that I am not (primarily) a Linux user either --- that being said, I can sympathise with the pgcc/egcs crowd's frustration with the fairly pathetic i386 support in 2.8.x ....

    S.
  • SGI is writing modules to provide ACLs in Linux file systems (likely concentrating on their XFS-for-Linux implementation) as part of their effort to bring Government-standard B1 and C2 "Orange Book" security to Linux.

    You can see a presentation (recorded in Washington DC at "Linux University") at http://www.sgilinux.org/ [sgilinux.org]

  • by DragonHawk ( 21256 ) on Thursday November 18, 1999 @09:28AM (#1521340) Homepage Journal
    Think of the ability to fork as a fire hose. You generally don't want to use a fire hose, because the huge volume of water will cause significant damage. But if you have a bad fire in a building, the fire damage is a bigger problem than the water damage would be. So you turn on the water. The analogy isn't perfect, but it works. The GPL allows the code to be forked if things need it badly enough. However, the overhead of having an entirely separate project often is not worth it. Thus, people generally approach forking with caution.

  • Question: If the code does fork, do they still call it Linux, or is that just going to create confusion?

    Not unless Linus licenses the name to them... Linus owns the 'Linux' name...

    -joev

  • From the GPL:

    "You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License."
  • Unfortunately, in order for your example to happen, Torvalds and Cox would have to turn into blithering idiots. Coolness escalates the likelihood of patch inclusion. If our vaunted tree-keepers ever get that way, I hope we do indeed fork. ;-)

    There is another possibility. If the team of ninja Foo programmers initiated cool changes at a much more rapid pace than the main Linux development tree, a fork could also result. Unfortunately, I find it hard to believe that even the most targeted team of caffeine-junkie, black-magic-wielding Ninjitsu programmers could ever outpace the main kernel, with its several hundred programmers. (I think this falls under the 'blithering idiot' case to a certain extent.)
  • Forking may come off as a bad thing when it happens, but as long as everyone adheres to a set of respectable practices (follows licenses, doesn't resort to FUD or other Microsofty tactics, etc.), it is good in the long term. It is like genetic variation in evolution -- it creates a split whereby the fittest fork will eventually survive. Sucky in the short term (especially if you aren't the most fit), but it works out in the end.
  • The article says that Solaris is under the SCSL. Is this even true? I can't find any indication of that on Sun's website, and I would think that I'd have heard something about this in various places. Slashdot would have had a story, and bugtraq would have a flurry of Solaris security holes posted once the source is being looked over for holes by a few thousand people wanting to make them public, rather than a few hundred people wanting to keep them secret.
  • by Per Bothner ( 19354 ) <per@bothner.com> on Thursday November 18, 1999 @09:44AM (#1521348) Homepage
    While I fully agree with the thrust of the article, and it provides several good examples of forking, unfortunately the article is full of mistakes and misleading statements.

    For example, the article states that "Lucid Emacs" was proprietary, and implies that it predates the GPL. Both are false: Lucid Emacs was based on GNU Emacs 18, and Lucid Emacs and XEmacs have always been released under the GPL. And the article left out one major reason why a merge would be very difficult: the XEmacs people do not require copyright assignments for donated code, and Stallman does require such paperwork.

    The history of gcc/egcs/pgcc is also very misleading.

    Finally, Stallman did not write glibc. The original author/maintainer was Roland McGrath; the current author/maintainer is Ulrich Drepper.

    The mention of non-free BSD-based commercial Unixes implies that these implementations came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.

  • This is a great article! I think in fact it's a must-read for anyone who's interested in doing some serious Open Source hacking (OTOH, most serious Open Source hackers probably know almost everything in this article already) or just generally interested in the miracles of Open Source and the advantages of its development process.

    Recently I have talked about Open Source with lots of people, mostly non-programmers who wanted to know about that new thing called Linux everyone is talking about. Their first reaction to my explanations of the meaning of the word free (speech, not just beer!) in this context was: "Oh, but you'll get a lot of code forking then..." (well, they didn't state it this way, but what they meant was exactly that), so I carried on to explain to them (1) why this happens so seldom and (2) how, at the same time, the POSSIBILITY of code forking is a good thing.

    Basically what I told them was just a subset of this article. I was really amazed to find *EVERY* bit I told them in this article, with the article even having the same structure as those monologues I gave to my friends (first some examples, then the "analysis" with the two main conclusions I mentioned above). Only the article is much more complete and convincing than anything I ever came up with. Thank you, Rick!

    After reading it I am more than ever convinced that we do not have to fear code forking! It can happen, it will happen, but the new branch will survive if AND ONLY IF the advantages outweigh the disadvantages.
  • by Jamie Zawinski ( 775 ) <jwz@jwz.org> on Thursday November 18, 1999 @09:52AM (#1521352) Homepage

    Sun OS was the BSD product. Solaris is the result of Sun paying a one time licence to AT&T and then making changes/bringing in BSD compatibility.

    Sorry, you're wrong. (If you're going to pick nits, get the facts right!)

    • SunOS is and always has been the name of Sun's Unix operating system. That's why uname says what it does.
    • Solaris is the name of a particular bundle of software from Sun: it is the environment which includes SunOS, OpenWindows, and a handful of other crap.

    • Solaris 1.0 was SunOS 4.1.1B plus OpenWindows 2.0.
    • Solaris 1.0.1 was SunOS 4.1.2 plus OpenWindows 2.0.
    • Solaris 1.1 was SunOS 4.1.3 plus OpenWindows 3.0.
    • Solaris 2.0 was SunOS 5.0 plus OpenWindows 3.0.1.
    • Solaris 2.1 was SunOS 5.1 plus OpenWindows 3.1.
      (and so on, through 2.6 = 5.6.)
    • Solaris 1.1.2 was SunOS 4.1.4 plus OpenWindows 3.0
      (released long after Solaris 2.0 due to customer backlash).

    • SunOS 4.x was BSD.
    • SunOS 5.x is SYSV.

    • OpenWindows is X11 plus OpenLook plus some other crap (sometimes NeWS, sometimes DPS, sometimes SunView, sometimes Motif.)
    • OpenWindows 2.0 - 3.2 were X11R4.
    • OpenWindows 3.3 was X11R5.
    • OpenWindows 3.6 was X11R6.

    Oh, and

    • SGI had t-shirts that said, ``Irix 5.1: it's not the best operating system, but it is the best one numbered 5.1.''

    Hope that helps...

  • by somebody ( 49542 ) on Thursday November 18, 1999 @09:53AM (#1521353)
    The article was very well written and insightful, but it didn't convince me that forking isn't really a threat. It also minimized the impact on productivity that forking has caused.

    Today, the different Linux distros can cause a headache for people dealing with product installation issues, usually with scripts. This isn't so bad because most UNIX people are already used to that. But it does scare off software companies. Think about it, for Windows, you just buy InstallShield or Wise and most of the problems of OS differences are taken care of. Not true for Linux today.

    It gets worse at the API level. If the Linux kernel forks and the APIs contain minor annoying incompatibilities, it will be just as bad as the UNIX days of old.

    I'm a strong advocate of Linux mainly because it is Open Source. I feel the advantage of this is huge, but mainly for developers. Developers need to be able to trust that the APIs they are using a) work as advertised, b) can be fixed quickly when they don't, and c) aren't subject to the whims of a particular profit-driven organization. Open Source, and in particular GPL'ed code, guarantees those things. Nothing else does.

    These benefits aren't immediately visible to the consumers (i.e. the non-programmers who just use a computer to get something done). But the benefits do trickle down, when the code they use can be made more reliable and can safely incorporate innovations. The time spent reinventing the wheel for minor variations of operating systems could be spent on useful innovations.

    Realistically, freeware will never replace commercial applications, and I don't want it to. What I want to see is new products with genuinely new features, and I'm willing to pay for them, with or without source. Those new products will come a lot faster if there is a common API to work with. There will always be competing versions of products, but at some point there will be features we come to expect of all of them, and the advantages to the different versions of the products become trivial. At that point, it makes more sense to standardize on a freeware version, and forget all the others. I believe at this point in time, there are not enough technical advantages to the competing operating systems to warrant their existence. It is a detriment to everyone's productivity. Therefore, it's time for an Open Source OS to move to the forefront, and Linux is the closest of any to doing that.

    Right now there is one major fork in the Linux world, and that's GNOME vs. KDE. This is particularly nasty, because there really is no way to develop software that supports both. (I mean totally supports both, not just using some common subset of features.) This is a long-term threat to the viability of the OS for commercial development. Let's get real: they both are trying to accomplish mostly the same goals, namely a common look and feel for graphical applications. As long as they both fight for mindshare, that won't happen! I really hope at some point in time one or the other surrenders, and concentrates their efforts on taking the useful innovations they have and putting them into the other, so we can all get on with things.

    If you really want Linux to replace Windows, stop arguing over petty differences and work together to build an OS that truly offers all the advantages that Windows currently offers.

  • I mean "design forking," not code forking. I haven't been around long enough to know what the original X desktop was, but I bet there was one. And someone wrote a better one. Someone else, too. That's great, because now there are three good desktops, and each will make the others better. It also sucks, because each one has a camp of devoted followers, each developing to a different paradigm. It's not the division of labour that concerns me. That's almost always productive, useful, and good. The struggle for user-mindshare, however, is always bad. I don't want to learn to use apps, I want to use apps. Binary compatibility is not enough -- I, Joe Sixpack, want ergonomic compatibility too. This is not a question of Linux "winning." Linux doesn't need to win. It simply is. My concern is simply about Linux making my life easier, sooner. Your point about E-Darwinism is a good one. I must, however, quote Keynes' refutation of Adam Smith: "In the long run, we are all dead." That said, I hope you're right. Autumn
  • Yes it did! It did, it did!

    Forking of the Unices hurt Unix really badly, and for a very long time. I tell you three times -- if not for the fact that the world standardized on Linux, I would be running IP masquerade on NT right now.

  • by sethg ( 15187 ) on Thursday November 18, 1999 @09:58AM (#1521358) Homepage
    IMHO, the biggest legal issue with the GPL is that the user does not necessarily agree with the licensing terms: that is, they didn't actually sign anything.

    The GPL isn't a contract, in the sense that shrink-wrap licenses want to be contracts.

    Through the GPL, the author of a program is unilaterally granting permission for the recipient to copy the program -- under certain circumstances. If the recipient doesn't want to abide by the terms of the GPL, that's fine -- but then the recipient, under copyright law, has no right (except for the usual fair-use conditions) to copy the GPL'ed program.

    By contrast, shrink-wrap or click-wrap licenses try to give a software vendor more power than mere copyright law grants. That's why they have these "if you click this button you are agreeing to these ten pages of fine print" messages. They (might) create a contract between the vendor and the consumer: in exchange for the privilege of using the vendor's software, the consumer agrees not to reverse-engineer it, not to benchmark it, not to install it on more than one computer, etc., etc. Under copyright law, the courts would laugh at restrictions like this, but if clicking on the appropriate button does create a contract, then the vendor can enforce the license through contract law.

    (As contracts, click-wrap licenses are iffy, because by the time you see the license, you've already coughed up your money and taken the disk home, and the click-wrap license is now trying to renegotiate the terms of a sale that's already taken place. But that's an argument for another thread.)

    Disclaimer: IANAL.

  • There, I've said it. :-)

    I don't disagree with you philosophically, but take Red Hat (please!): RPMs, directories in different places, etc., etc. Granted, not "kernel" mods, but "different" -- and significantly different (IMHO) from any other Linux distro. Yet, nobody but nobody has adopted their modifications. I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too. Hey, maybe I'm wrong.

    Anyway, I am not saying that what Red Hat is doing is a Bad Thing (tm), but at the same time clearly they (RH) have no intention of letting their mods die, and no one else has any intention of adopting them. What we end up with is a distro of Linux that you must know in order to administer; e.g. you can't be a Debian admin and just walk off the street and admin a Red Hat box. To me, that represents the Bad Thing (tm) in forking.
  • If someone released a bunch of songs under a GPL-like license, would that be protecting their rights? No, because they couldn't make money on the CD sales or the live performances of the songs! Please don't try to bullshit us about it protecting freedom. I am all for OSS, but I think the GPL is too extreme. Of course the SCSL is terrible, but frankly the BSD license is the best popular license right now. Allowing the GPL fanatics to define OSS is as logical as allowing Puritans or Jehovah's Witnesses to define Christianity.
  • I think that if Stallman were here, he would say that songs are different from computer software and so that the same thinking doesn't apply.

    Songs are not source code that is translated into machine code. We generally do not have access to the ``source'' for the music, only to the performance, which is captured by recording the sound waves.

    The GPL is special in its requirements related to the relationship between the source code and compiled code; in other respects it is a license that permits free distribution of something.

    If it is the free distribution that you object to, then it's meaningless to have a debate about the relative merits of various freeware licenses, all of which permit free redistribution of the source.

    Anyway, the GPL protects primarily the freedom of _users_, not the freedoms of those who want to profit by making software proprietary. Stallman has argued that this is not really freedom, but the exercise of power. (As in power == control over things that affect others, freedom == control over things that affect yourself).
  • Yeah, XEmacs devs will be happy that Linuxcare considers their project dead...

    And the gcc story is almost revisionism: gcc died because nobody was maintaining it, and facing the corpse, RMS had to do something.

    The author also missed the point that very often one of the two projects has to die...

    Is that avoiding forking? :-))

    BTW, I definitely find that forking is more a personality issue than a license issue.

  • Today, the different Linux distros can cause a headache for people dealing with product installation issues, usually with scripts. This isn't so bad because most UNIX people are already used to that. But it does scare off software companies. Think about it, for Windows, you just buy InstallShield or Wise and most of the problems of OS differences are taken care of. Not true for Linux today.
    You've definitely hit the nail on the head there. For all the ivory-tower surrealists repeat their myth about linux=o/s=kernel, the stark reality is that those who must produce, test, distribute, install, and administer regular software applications (not drivers) have absolutely no choice but to count most of the innumerable different Linuxes out there as different operating systems. Self-serving sophistry aside, these people all have real work to do, and they can't get it done by pretending there is one coherent thing called "the Linux Operating System". Sure, there's a Linux kernel, but this is but a small part of the many significant platform considerations that producers and consumers of applications must keep aware of.

    And yes, it makes this stuff hard, because it becomes a combinatoric nightmare. If people would

    1. stop repeating this nonsense of there being One True Linux
    2. recognize that the vendors like Redhat, Corel, SuSE, and all the rest of them will always try to differentiate themselves from one another
    3. admit that for all intents and purposes of the people who are making and installing these applications, it is for them a different OS
    If those steps could happen, then perhaps progress can be made. I don't think that they will be, however, because too many people have too much ego wrapped up in the myth. Which is a crying shame, because defining a problem out of Platonic existence rather than admitting its reality and repercussions helps nothing but the propaganda machine. It interferes with real people trying to get real work done. And it's so obviously a half-truth as to make plenty of folks look a lot more closely at other assertions held as Gospel.
  • by Anonymous Coward
    I believe the question was: "Is there a worthwhile feature that anyone has developed, debugged, and perfected that the Linux guys have said no to?" None of your examples met these criteria. Sure, there are ideas that Linus doesn't think are implemented well or completely enough to include (your examples: EGCS compatibility, ACL, devfs, along with ISDN support and probably many, many other related and unrelated items). That's a different issue entirely.
  • Well, if by X desktop you mean the non-open CDE (which may be an open standard that you can buy, but is not open source), yes, there may be an original X desktop.

    But X users have not been able to agree on a window manager: Motif, OpenLook, fvwm, tvwm, WindowMaker, dtwm (CDE's) and so on. Most well-behaved X programs will be usable under any window manager, so people pick the one they like best.

    Sun has a desktop environment of their own and offers CDE; IBM used to have their own but forced everyone to switch to CDE or just use plain Motif; I think HP did something similar; NeXT had a desktop which predated CDE (and which a number of the Linux desktops and window managers mimic).

    The point is, there was no one X desktop environment to fork. Had the X server itself forked on Linux, that could become a serious problem. (X is already forked; every vendor's X is proprietary and closed source. XFree86, XFree68, and so on are the only open source X servers I know of... that can actually render on a display.)

    The X base code is free, but derivative works do not have to be free. Since the base code does not support any display hardware, we have vendor forks for every UNIX, plus the XFree* forks.

    Fortunately, people only mess with the device driver side, and so X programs continue to work across many window managers, and display properly on different remote systems.
  • The Alan Cox example is a micro-fork; so is, say, my version of BidWatcher (it has some stuff I added and submitted patches for, but they're not folded into the release).

    What the article was concerned with is forking on a larger scale.

    _Deirdre
  • by Kaz Kylheku ( 1484 ) on Thursday November 18, 1999 @10:44AM (#1521370) Homepage
    Somewhere out there is a company collecting the best part of the GNU/Linux work, adding their own code with intent to repackage. This is what the open source movement should be afraid of.

    In today's anti-piracy climate, woe to whoever is caught! The horribly bad publicity arising out of discovery alone would make it not worth it.

    Let's look at this closely: suppose that someone does take GNU code and incorporates it into a proprietary product. Does a truly cutting-edge company need to steal code? You are playing catch-up if you need to steal.

    Secondly, what if someone does that? At best, they will buy themselves reduced development time on some isolated project. The real benefits of the code, namely openness, will be lost. People using the main development stream will get the latest features and bugfixes, and the pirates will be locked into playing catch-up. They can't openly advertise that they have stolen code, so if the code really has a great reputation, they can't boast of it. They can't actively participate in the development process.

    Thirdly, no serious company is going to risk it. I know that in my company, nobody would even want to hear of such a thing as GPL'ed code being incorporated into our products. If we use free software, we evaluate the licenses carefully. It would be foolhardy to do otherwise.

    There is plenty of useful code out there which has licenses that are more permissive than the GPL. Particularly things that provide some generic, low-level functionality such as, say, compression.
  • humphrm wrote:

    I don't disagree with you philosophically, but take Red Hat (please!): RPMs, directories in different places, etc., etc. Granted, not "kernel" mods, but "different" -- and significantly different (IMHO) from any other Linux distro. Yet, nobody but nobody has adopted their modifications.


    Other than Mandrake, Macmillan, LinuxPPC, and a horde of other distributions. Last I checked, the LSB project had determined that RPM will be the standard file format for Linux packaging. That's why Debian is working on becoming less package-format dependent.


    I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too.

    Red Hat generally ships its kernel with the AC patches compiled in. Most of the elements of the AC patches find their way into the main kernel tree eventually.


    e.g. you can't be a Debian admin and just walk off the street and admin a Red Hat box.

    It's certainly easier to go between the various Linux distros than it is to go between the various commercial Unixes. I had little problem going from Slackware to Red Hat, personally. I don't see how a Debian->Red Hat or Red Hat->Debian migration would be harder than that, likely it will be easier.

    ----
  • Granted, not "kernel" mods, but "different" -- and significantly different (IMHO) than any other Linux distro. Yet, nobody but nobody has adopted their modifications.

    Um, what about Mandrake, PPC Linux, and MkLinux? These are all RPM-based distros.
    I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too.

    Only in the fact that the precompiled default binary is not the same in every distro. It's all still Linux, but with various different modules loaded. I would only call it a fork when it is something such as PPC Linux or MkLinux. ;)
  • The article says that Solaris is under the SCSL.

    It does? I certainly didn't mean to imply that. Clearly, Sun Microsystems is contemplating such a move, but has not released the source code except possibly under NDA to some of its close business partners.

    If I did imply that, then I must have been rather sloppy. Understand, please, that the whole thing got written on a laptop machine last Saturday, to occupy my mind as I waited in a hospital waiting room for my girlfriend to get medical attention. And I was seriously ill with a case of the 'flu. I'm surprised it came out as well as it did.

    -- Rick M.
  • He does make a reasonably good case for why the GPL dissuades certain types of forks

    I'd say he makes a better case that open source dissuades forks, or encourages remerging of forks. Specifically singling out the GPL is inappropriate, since there is no example given of a BSD-licensed app having problems with a proprietary fork.
  • Fork puns can be fun! For example, an application that wishes to detach from its tty can...fork off and die.

    Of course, forking causes children. (The classic idiom is sketched below.)
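    For anyone who hasn't met the idiom behind the pun, here's a minimal sketch of "fork off and die" -- assuming a POSIX system, with error handling abbreviated and the daemon's real work left as a stub:

        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>

        int main(void)
        {
            pid_t pid = fork();      /* forking causes children... */

            if (pid < 0)
                exit(EXIT_FAILURE);  /* fork failed */
            if (pid > 0)
                exit(EXIT_SUCCESS);  /* ...and the parent "dies"... */

            /* ...while the child lives on. Becoming a session leader
               detaches it from its controlling tty. */
            if (setsid() < 0)
                exit(EXIT_FAILURE);

            /* real daemon work would go here */
            return 0;
        }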

  • If this clustering software from TurboLinux is that shit hot, it'll be assimilated

    AFAIK, the problem with clustering software is that the US government won't allow exports. Linus won't let anything into the kernel that isn't exportable, so such a fork will remain until the US decides clustering tech isn't a weapon.

    Man's unique agony as a species consists in his perpetual conflict between the desire to stand out and the need to blend in.

  • If those steps could happen, then perhaps progress can be made.

    Absolutely. It is important to recognize the inherent tension between vendors trying to differentiate themselves and vendors avoiding scaring off application developers due to the difficulty of targeting (in effect) multiple platforms. This conflict creates a tough problem.

    I don't think that they will be, however, because too many people have too much ego wrapped up in the myth.

    I don't follow you here. Which academics "repeat their myth about linux=o/s=kernel" and why do they do it? I'm not disagreeing with you - I just have not heard many people making this claim.

    John Regehr

  • There is no doubt that human civilization will keep growing until there are comfortably 60-70 billion people on the planet. We have such diverse cultures even now; imagine what it will be like then. Anyway, I have no doubt we'll see future kernel forks. No doubt.

    Most of the more serious Linux users have reconfigured their kernel to better fit the way they use their computer, and sooner or later people with similar habits will form groups. Future Linux users will have a much greater variety of system software to choose from.

    I guess Linux, in a way, could provide a cultural 'norm.' Everyone will use it, just different varieties. I could see it.
  • For example, the article states that "Lucid Emacs" was proprietary, and implies that it predates the GPL.

    Oops, you're right. I was thinking of Gosling emacs, not Lucid/xemacs. That's what I get for not double-checking my work.

    The history of gcc/egcs/pgcc is also very misleading.

    Unfortunately, the facts are somewhat murky, and more than a little disputed. I notice that my brief account does, for whatever it's worth, match the one given at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/cathedral-bazaar-15.html [tuxedo.org].

    Finally, Stallman did not write glibc.

    Not guilty. I made no such claim.

    The mention of non-free BSD-based commercial Unixes implies that these implementations came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.

    Ditto. I implied nothing of the kind.

    -- Rick M.
  • Liked the article a lot, found it very clear and well-written. However...

    I don't think the facts are quite straight regarding FSF Emacs vs XEmacs. Questions of history aside, the main thing preventing a merge right now is that the XEmacs folks don't have legal papers for all their contributions. The FSF requires these for code it releases; hence, no merge into FSF Emacs, at least not in such a way that the FSF will serve the result from their servers.

    The statements about the current maintainability of XEmacs vs FSF Emacs are also suspect. I don't think FSF Emacs is so complex as to require a "genius level" maintainer -- and, as far as I know, Stallman is not in fact its primary maintainer right now, although he is involved. (The fact that multiple people are involved in FSF Emacs's maintenance is a testament to its maintainability... as is its 20-year age!)

    Also, with regards to GCC->EGCS->GCC, I don't believe Stallman ever did anything to delay the arrival of Pentium optimizations. Rather, the GCC maintainer (who was not Stallman) did not fold in patches he received for these optimizations. Out of impatience, the EGCS team forked and made their own version of GCC. Eventually the FSF (i.e., Stallman) decided to make EGCS the official FSF "GCC". Far from being a resister who tried to keep the optimizations out of GCC, Stallman was responsible for a major step in the EGCS->GCC transition.

    Well-written article, just don't want to see Stallman get blamed for stuff he's not responsible for.

    -Karl
  • by Anonymous Coward

    X itself has thrived because the API has been standard from day 1. Writing software that directly uses Xlib is portable and low-maintenance (see the sketch at the end of this comment). I doubt anybody would even dream of introducing a new windowing system for UNIX.

    So why do people continually insist on making new window managers? Because the original "standard" window manager (twm) sucked, and it never got updated. So every programmer with an itch for some hot new feature took twm and used it as a basis for a new window manager. This has led to some great ideas, but it hasn't made Joe end-user's life any easier.

    As for CDE and the other supposedly common desktop environments, they all came about too late, and they weren't all that advanced when they were introduced.

    Why do people insist that they need to be able to choose among different window managers, when all they really want is to be able to customize one to look and act like whatever they are already used to? This sort of immaturity gives fuel to the Open Source detractors. Hopefully, one of the new themed window managers will settle down all the competition.
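
    For what it's worth, here is roughly what "directly uses Xlib" looks like. A minimal sketch -- the window geometry, colors, and the file name hello.c are arbitrary choices, but every call in it has been stable Xlib since the X11 API was frozen. Build with something like "cc hello.c -o hello -lX11":

        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy;
            Window win;
            XEvent ev;
            int screen;

            dpy = XOpenDisplay(NULL);   /* connect to the X server */
            if (dpy == NULL) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }
            screen = DefaultScreen(dpy);
            win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                      10, 10, 200, 100, 1,
                                      BlackPixel(dpy, screen),
                                      WhitePixel(dpy, screen));
            XSelectInput(dpy, win, ExposureMask | KeyPressMask);
            XMapWindow(dpy, win);       /* make the window visible */

            for (;;) {
                XNextEvent(dpy, &ev);   /* blocks until the next event */
                if (ev.type == KeyPress)
                    break;              /* any keypress quits */
            }
            XCloseDisplay(dpy);
            return 0;
        }

    The same code runs against any X11 server on any platform, which is exactly the stability described above.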
  • by Gleef ( 86 ) on Thursday November 18, 1999 @11:26AM (#1521382) Homepage
    ska wrote:

    I find the comments on the BSDish license promoting (a) fork interesting but underdeveloped.

    The article doesn't say the BSD license is more prone to forking than the GPL, but it does comment on the fork from 386BSD to FreeBSD/OpenBSD/NetBSD/BSDi. It says that fork is stable because of the different focuses of the different forks. GPL programs are just as likely or unlikely to have forks which persist due to such differences in focus.

    Note BSDi in the above list, however. It points out one reason to fork that BSD permits but the GPL doesn't: license change. If someone wants to fork a project because they want to distribute their modifications with more license restrictions than the original, BSD allows it and GPL doesn't.

    [Disclaimer: I am not saying that either BSD or GPL are better because of this difference, only that this is a notable difference]

    ----
  • Sun OS was the BSD product.

    Solaris is the result of Sun paying a one-time licence to AT&T and then making changes/bringing in BSD compatibility.

    (hence all the hate and discontent of some Sun users when Sun OS 4.x was dumped for Solaris)

    You're of course quite right, except that this obligatory ritual conversation goes on to say that Solaris still is based on SunOS in some odd Sun Marketing sort of way, with cut-and-paste excerpts from login screens to prove it.

    All of which was so far from the article's topic that I decided it's better not to go into it.

    -- Rick M.
  • The devfs code has been tracking the kernel for a goodly amount of time now; it is most certainly developed, considerably debugged, and perfection is certainly a matter in the eye of the beholder.

    EGCS compatibility is somewhat different; it represents cases where Linux was not conforming to the official standard for how C is supposed to work (largely regarding treatment of pointer aliasing, I believe -- see the illustration at the end of this comment). Linux has arguably been made buggy by relying on things that C isn't supposed to support, but which GCC used to accept.

    Conformance to standards isn't a "feature" in the way many kernel facilities are...

    Consider that I didn't mention GGI [hex.net] as an example; it is an example of something that was initially rejected, and quite legitimately so, as the proposed implementation was not, three years ago, developed, debugged, and perfected. Interestingly, the recent framebuffer support is now the GGI support; they don't need to integrate big gobs of stuff as they have the crucial interface that they do need, with the benefit that it has made supporting some of the more obscure systems ( e.g. - Atari ST [hex.net] and such) easier.
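
    The promised illustration -- not actual kernel code, just a made-up sketch of the kind of type-punning that ISO C's aliasing rules forbid but that older GCCs compiled "as expected":

        #include <stdio.h>

        int main(void)
        {
            float f = 1.0f;
            /* Reading a float's storage through an int pointer is
               undefined behavior in ISO C (this also assumes 32-bit
               ints and IEEE floats). */
            unsigned int *p = (unsigned int *)&f;

            *p |= 0x80000000u;  /* try to flip the float's sign bit */

            /* Old gcc printed -1.000000 here; an optimizer that
               enforces the aliasing rules (egcs with
               -fstrict-aliasing) is free to keep f in a register
               and print 1.000000 instead. */
            printf("%f\n", f);
            return 0;
        }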

  • I agree. Usually the two groups can reconcile and find something that benefits the project as a whole.

    But for some projects, if the maintainer isn't capable (or is absent), a fork is usually necessary.
  • Which academics "repeat their myth about linux=o/s=kernel" and why do they do it?
    It's not really academics who are guilty of this, although it is a somewhat academic perspective to call whatever's in kernel space an operating system [slashdot.org] and nothing else. I'm sure I've been guilty of the same thing, especially back when I was doing a lot of kernel hacking. That's just how we think.

    That's not the problem. It's not academics, really. Rather, the fault lies with those zealots who claim that anything running a Linux kernel is Linux, as though that were all that mattered. Remember how they like to flame the BSD people for having 4 different operating systems while they steadfastly claim to have just one? It's a political stunt with no basis in matters practical. In fact, this whole "distro" jazz is a veiled euphemism to hide the fact that there are a zillion different Linux operating systems out there. Sometimes they're just repeating what they've heard, not understanding that "distro" is a cutesy dodge to avoid saying "operating system".

    But to someone trying to develop, produce, test, distribute, configure, install, and administer applications software, they are different operating systems. Stop playing games to make your team seem less splintered than it is. The benefits of pretending the Emperor is wearing lovely new clothes are not, in my opinion, of greater import than the real-world ramifications of living a lie. People are trying to get honest work done, and this kind of crap just doesn't fly when you get down in the trenches.

  • PB: The history of the gcc/egcs/pgcc is also very misleading.

    RM: Unfortunately, the facts are somewhat murky, and more than a little disputed. I notice that my brief account does, for whatever it's worth, match [Eric Raymond's].

    Well, your account goes well beyond ESR's, for example in emphasising the Pentium optimizations as being the major factor in the fork. The real problem with pgcc was that it was very poorly written, not Stallman's stubbornness!

    PB: Finally, Stallman did not write glibc.

    RM: Not guilty. I made no such claim.

    Let me quote your words directly: For the GNU Project, Richard M. Stallman's (remember him?) GNU Project wrote the GNU C Library, or glibc, starting in the 1980s.

    PB: The mention of non-free BSD-based commercial Unixes implies that these implementation came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.

    RM: Ditto. I implied nothing of the kind.

    Perhaps not directly. But the way the article is written, one gets that impression, because the commercial Unixes are mentioned in the same paragraph as (and after) the mention of the BSDs and the lawsuit. That is what I mean by "misleading statements". Perhaps I should have said "misleading exposition": when you put two facts next to each other in a historical background piece, the natural assumption is that the events follow in that order, or that there is some logical connection. Careful writing (and proof-reading) means minimizing such misleading inferences.


  • I guess my wording was a little incomplete. I was referring to the section:

    2. BSD --> FreeBSD, NetBSD, OpenBSD, BSD OS, MachTen, NeXTStep (which has recently mutated into Apple Macintosh OS X Server), and SunOS (now called Solaris)
    ...AT&T did notice, panicked, and sued. That, too, is a long story best omitted. Under the stress of the lawsuit, freeware BSD split into three..

    But I should have been more specific. What I meant was that it would have been interesting if he had expanded a bit on what difference he thought a GPL-type license would have made, or on the difference between the effect of this fork into proprietary and free branches and that of a pure OS fork.

    I wasn't trying to suggest a "GPL good, BSD bad" slant.
    This probably answers the other poster as well...

    That's not the problem. It's not academics, really. Rather, the fault lies with those zealots who claim that anything running a Linux kernel is Linux, as though that were all that mattered .... Sometimes they're just repeating what they've heard, not understanding that "distro" is a cutesy dodge to avoid saying "operating system".


    Yes - sometimes the problem is ignorance, and sometimes it's zealotry. You might say that a zealot is just the kind of person who takes any difference ("there's only one Linux kernel") and thinks that it's an argument in favor of his favorite system.

    Thanks for clarifying,

    John Regehr
  • Quite right. If you try to become really good at *everything* you will become the master of *none*. BSD forking is fine by me: I am sure if you ask some user on an obscure platform they will tell you NetBSD is a good source fork. I am sure if you ask some user with really high security concerns they will tell you OpenBSD is a good source fork. I am sure if you ask some user with a shed full of 80x86 servers they will tell you FreeBSD is a good source fork. Anyhow -- do you really eat source with a fork -- wouldn't a spoon be easier?!? :)
  • That capacity to reduce diversity to a single monoculture is the _main_ reason Windows viruses are so nasty. It's the reason most Windows people are using one particular interface that is jack of all trades, master of none. It is not a good thing at all.
    The only reason people think it's a good thing is that until recently there has been no evidence to suggest you could survive doing anything else. Hence, the emotional reaction is 'we must all do this or die!'. First of all, Linux is still around without doing that (and indeed growing, and indeed there are still other options that are neither Windows nor Linux), and secondly, this assumption was formed by observing a monopoly at the top of its form, making every effort to kill off everything resembling 'diversity'.
    If even this has not made the 'monoculture', everybody-runs-Windows approach safe and beneficial to all, what good would it be to try and make Linux a monoculture, with all the disadvantages it brings, but with none of the ability to exert corporate power and influence and throw around huge sums of money?
    Seriously. That'd be a _phenomenally_ bad tactical move.
  • by JoeBuck ( 7947 ) on Thursday November 18, 1999 @12:18PM (#1521393) Homepage

    Rick, you're someone I respect, but you richly deserve flaming, maybe more so than anyone I've encountered in a while. When you're wrong, fess up. Don't post bogus defenses.

    I'm sorry, but the sloppiness of your history simply can't be justified, especially since you repeatedly just make assertions that are completely bogus. Almost everything you say about gcc is complete nonsense. Your defense that "the facts are somewhat murky" is so weak as to be embarrassing. There is no murkiness at all; tons of people have been involved and know all about it. Your history is not wrong because of differences of opinion. It is wrong because it is wrong. If you got any of this nonsense from ESR, you need to get him educated. ESR hasn't been involved in gcc development in at least the past four years or so, but this hasn't stopped him from declaiming authoritatively about it.

    Example: the origin of pgcc. Stallman didn't "ignore repeated requests" for Pentium-specific optimization; the necessary information was a trade secret at the time. The gcc maintainers simply did not have the technical information required, and the folks in a position to work on gcc full-time (mainly at Cygnus) had no contracts to do PC work, so both information and resources were lacking. When the Pentium first came out, one had to sign a nondisclosure to get the needed information, and the gcc developers couldn't do that. Intel gradually released more information in dribs and drabs, but at the time, instruction scheduling on the Pentium was very tricky and not well understood outside of Intel.

    Finally, some Intel engineers did a hack of gcc version 2.4.0 to do Pentium optimization. Unfortunately, they made no attempt to honor the structure of the compiler, e.g. the distinction between the front end (machine-independent) and the back end (machine-dependent). This meant that it wasn't possible to integrate their work. pgcc is based on that work. Thanks to Marc Lehmann and others, the distance between pgcc and egcs/gcc has been steadily reduced; pgcc is currently maintained as a patch against first egcs and then gcc, and the size of the patch has steadily been reduced. Cygnus never had anything to do with pgcc; the Cygnus developers I know considered the original pgcc to be misdesigned and buggy, though it has gotten much better since then.

    egcs started as a branch off of the gcc2 development tree, in the long period between 2.7.2 and 2.8.0. I was involved in the discussions that led to the project. When we started egcs, we talked to the pgcc people because we were seeking to decrease the number of gcc forks in the world; in addition to pgcc there were the Cygnus customer releases, which were ahead of the FSF release at the time. I won't go into all the breakdowns that had stalled gcc development at the time, but it had become a mess. We were definitely motivated by ESR's CatB paper. However, we did, in the process of egcs development, demonstrate that one of ESR's contentions is false: copyright assignments are not a barrier to the bazaar model. egcs/gcc is closer to the bazaar model than the Linux kernel, as far more developers have the right to check in code.

    With the assistance of Intel, Cygnus has produced a new ia32 backend, which should be out in gcc 2.96; this will finally make pgcc obsolete and hopefully complete the reunification of gcc.

    On other topics:

    Your history of Lucid Emacs/XEmacs is, as Per has pointed out, completely wrong -- it was GPLed from day one, and much of the code in XEmacs was written by RMS. Go talk to Jamie Zawinski, father of XEmacs, for some education (you might have heard of him ;-). Copyright assignments were one issue that kept the fork from healing, but technical differences between RMS and the XEmacs maintainers were also a factor.

    Your history of the Unix forks isn't as bad, but it appears that you have the chronology confused a bit. The splits in the BSD camp predated the AT&T/BSDI/UC Berkeley lawsuit, for example.

  • It's not possible to develop closed-source commercial software for KDE without reimplementing everything. Therefore all the closed-source applications will be Gnome applications.

    Will this make a difference? I think so; many of us, especially in commercial environments, have to use at least one binary-only tool (even if we wish we could get rid of it).
  • Get the word right.
  • I don't think the facts are quite straight regarding FSF Emacs vs XEmacs.

    No, they absolutely aren't. I really was meaning to write about Gosling emacs there, but somehow got sidetracked onto xemacs. Apologies to users of emacsen everywhere, and especially to Jamie.

    At the time I wrote that, I was a bit distracted, as I was typing it into a laptop next to my girlfriend's hospital bed, last Saturday. But I should have caught that gaffe before sending it to Linuxcare. (The original version was my parting-gift essay for Linuxcare's sales staff: I had resigned from that firm the previous day.)

    Questions of history aside, the main thing preventing a merge right now is that the XEmacs folks don't have legal papers for all their contributions.

    Yes, collecting the copyrights would be an additional obstacle. However, the ones listed were those I recalled from Ben Wing's talk at SVLUG on July 1, 1998.

    The statements about the current maintainability of XEmacs vs FSF Emacs are also suspect.

    They are some combination of Ben Wing's SVLUG presentation and my own surmises. But, of course, differences of opinion are what give us horse races.

    Also, with regards to GCC->EGCS->GCC, I don't believe Stallman ever did anything to delay the arrival of Pentium optimizations.

    I did not mean to imply personal intransigence on Richard's part, just that FSF was dragging its feet.

    -- Rick M.
  • Yes it does. The copyright holder can do ANYTHING with GPL software he wrote -- sell it as a closed product, or whatever. What you speak of is "public domain" software. They are not the same thing.
  • Your history of the Unix forks isn't as bad, but it appears that you have the chronology confused a bit. The splits in the BSD camp predated the AT&T/BSDI/UC Berkeley lawsuit, for example.

    You're exactly right. The timetable of BSD [netbsd.org] shows Rick is incorrect. If I remember correctly (and whatever I say is from reading around; I wasn't into UNIX then), Bill Jolitz took the BSD code and began removing AT&T code. He then released his implementation, 386BSD, under a free license so that BSD would no longer be held back. However, he lost interest in the project and wouldn't maintain it (he could have been the Torvalds of BSD, perhaps), and both FreeBSD and NetBSD sprang up, started by other developers. 386BSD eventually died, and of course OpenBSD sprouted off of NetBSD (Theo's archive of why seems to give him good reason).

    Here's where I'm a bit murky on things. I thought Bill Jolitz told the free developers that they couldn't use his code, and there was a scramble to move FreeBSD to 4.4BSD-Lite.

    I am glad Rick pointed out that the BSD splits had very good reasons, and that if such reasons existed for Linux, it would split too. I don't believe the GPL prevents forking, because the reason Rick noted (that any improvements would be integrated back into Linux) holds for BSD as well. However, I believe the GPL reduces forking by creating the idea that there is a dictatorship, while the BSD groups create whole bodies to look after the code. People think of Alan Cox and Linus Torvalds when they think of who looks over the kernel, while they think of FreeBSD, Inc., when they want to improve that system. Both have core members; just the way they present themselves looks different (while it might not be).
    You're exactly right. The timetable of BSD shows Rick is incorrect.

    That might be a valid criticism if my article had included any sort of chronology. But it did not: I omitted any mention of when BSD OS, SunOS, and Jolix forked off from BSD 4.x because it simply was not germane to the point of the article.

    I in fact ran 386BSD 0.1 -- downloaded by modem and written out to floppies. But that would be a completely different article.

    -- Rick M.
  • you richly deserve flaming....

    Oh, don't worry. I won't take it personally. People seem to get very worked up over these matters. Why, I don't know.

    There is no murkiness at all; tons of people have been involved and know all about it.

    Unfortunately, they do not appear to agree, as can be seen by examining the comments here, if not elsewhere. A situation that isn't aided by people going out of their way to misread what I wrote, and read into it meanings I never stated.

    Example: the origin of pgcc. Stallman didn't "ignore repeated requests" for Pentium-specific optimization, the necessary information was trade secret at the time.

    Oh, at one time, they were indeed. And then, later, they were available, but not accepted into gcc. Which is what I said.

    Cygnus never had anything to do with pgcc.

    What I remember saying is that individual Cygnus staffers were involved with pgcc. Is this not correct? I'm pretty sure I verified that, back when I was running a Stampede Linux beta.

    Your history of Lucid Emacs/XEmacs is, as Per has pointed out, completely wrong.

    Indeed. I had it confused, at the time I wrote that, with Gosling emacs.

    -- Rick M.
  • Solaris is the name of a particular bundle of software from Sun: it is the environment which includes SunOS,

    Sorry. If Sun OS is included in Solaris, why does software that worked with Sun OS 4.x need to have system calls changed to work with Solaris?

    Uh, because SunOS 4.x and SunOS 5.x aren't totally compatible? You should have said "...to work with SunOS 5.x", not "...to work with Solaris." Because Solaris 1.x and Solaris 2.x are incompatible in the same way as SunOS 4.x and SunOS 5.x.

    Saying ``SunOS software doesn't work on Solaris'' is obviously false, because Solaris 1.0 was SunOS 4.1.1. So your use of the word ``Solaris'' is wrong.

    You should have specified version numbers. :-)

    No, ``Solaris includes SunOS'' is true without qualification. You should have said SunOS 5 when you meant SunOS 5, not Solaris (which can mean either SunOS 4 or 5, despite what the ambiguous common usage is.)

    Not that anyone really cares... ``Solaris'' means ``SYSV'' in most people's minds just like ``hacking'' means ``breaking into computers.''

  • A BSD license potentially helps non-free software.

    This is true.

    If the goal is to have lots of free software, then the GPL is necessary.

    This does not follow. If the goal is to have only GPLed software and no non-GPLed software, then the GPL is necessary.

    You assume that to have lots of free software, there must be no non-free software, which is obviously not the case. If 10% of the world's software is free and 90% is non-free, then increasing the amount of software in the world would increase both the amount of free and non-free software (assuming the ratio held; see the worked numbers at the end of this comment). This is called ``growing the market'' and it benefits all who participate in that market, even if they are direct competitors.

    So there are cases where non-free software can help the goal of having lots of free software. In fact, given that very nearly every piece of free software is a clone of some piece of non-free software, one could argue that non-free software is a necessary catalyst for free software! Would GIMP be as good today if they didn't have Photoshop to mine for ideas? Much though I respect the GIMP developers, I most sincerely doubt it.

    Also note that the GPL is not the only license that is considered a ``free software'' license, even by GPL advocates. Don't say ``free'' when you really mean ``GPLed.''
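
    (The worked numbers, with made-up figures: suppose there are 1000 programs today, 100 free and 900 non-free. Double the market with the ratio held and you get 2000 programs: 200 free and 1800 non-free. The amount of free software doubled even though non-free software grew too.)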

  • This OS is all about what you want!
    More communist programmers, more indignant code, and less code tyops!

    We have forked the code, and are in the process of patching in more rude comments! Yeah!

    On top of that, we have an exclusive (but GPLed) program for finding and fixing common tyops in the code. This is already hard at work on our copy of the kernel for ICO.

    On top of that, to make sure no-one can steal our One True OS, we are making it completely incompatible! HAhah! Red Hat, your pathetic efforts to make your distro un-de-forkable are NOTHING compared to our pissing on Linux(r) standards! Want the source to the kernel? No problem, go look in /src/usr/kernel (haha!). Lots of sed time must be spent to fix a dir name like that, and we have many more dir names that also violate the LSB, as well as general common sense (much like our IPO plans).
    Not to mention our amazing new package format -- but we won't even tell you about it, as you'll never be able to use it (and you thought RPM's false dependency messages were evil).

    As a sign of our affection for our users, we have posted this to Slashdot. We wait for you to abandon Linux(r) for ICO!

    Btw, we are IPOing in a few months. As we have the assorted quota of buzzwords (slashdot, GPL, Linux(r), fork, Red Hat, kernel), we expect to have a strong opening price (around 100$ a share).

    IPO keyword highlighting via Silly_Linux_IPO_Bot v1.3
    ---
  • by Jamie Zawinski ( 775 ) <jwz@jwz.org> on Thursday November 18, 1999 @08:10PM (#1521437) Homepage

    It would be better (on a philosophical level, at least) to live in a world with NO software than in a world with proprietary software.
    [...]
    I don't care what anyone says, nobody could possibly want to be restricted when they have the choice to be free. AS long as non-free software exists, someone is living in a limited form of slavery.

    It's so much fun to be a teenager.

  • Maybe his conclusions are correct, but in that case they definitely aren't based on his understanding of history, which is non-existent.

    1. Unix.

    Basically ok, as a kid's version of history.

    2. BSD

    MachTen, NeXTStep, SunOS belong to the "Unix" camp above. OpenBSD split from NetBSD, and can thus not be characterized as a splinter project from 386bsd.

    3. Emacs

    Totally crap. GNU Emacs was always under a "remain free" license. The non-free Emacsen were all written from scratch. Lucid Emacs was not only free, but the code ownership was assigned to the FSF for merging back. However, later Sun contributed code which was still GPL, but *not* assigned to the FSF.

    The real story is that RMS wanted to keep total control over Emacs development, and refused to release Emacs 19 when it suited Lucid commercial interests. Today a merge is prevented partly because of the control issue, and partly because of the unassigned Sun code the FSF don't want to use.

    4. httpd

    I never followed that one.

    5. gcc

    Crap. *Intel* made a Pentium-optimized port of gcc, and released it under the GPL. They did *not* assign the code to the FSF. The version released by Intel did *not* work on any other platforms. So there was never any possibility of a remerge. Intel has since paid Cygnus to develop a new Pentium II-optimized backend, which *will* be in gcc 3.0.

    Egcs was created because gcc 2.8 never seemed to materialize, and in particular the C++ frontend of gcc 2.7.2 was embarrassingly old and buggy compared to what the Cygnus engineers had developed. Also, the Cygnus engineers found it very hard to attract outside developers with the closed development model of gcc. Egcs was created as an experiment, with RMS's blessing, to demonstrate the efficiency of a more open development model. It was a success.

    6. glibc

    Ok, except that the split probably wasn't a mistake at the time. Linux needed a working libc *then*; they couldn't wait until glibc was finished.

  • It's so much fun to be a teenager.

    Yes. Don't waste it by spending all your time worrying about proprietary software.

    --
  • The three BSDs have different goals, so a merge is not a solution, but the three projects actively follow each other's changes, and merge what interests them into their own tree.

    Subscribe to source-changes for any of these projects, and you'll see countless commit messages with "(from NetBSD/FreeBSD/OpenBSD)" in them.

  • I do believe it's distributions which are the real issue here, rather than kernel forking. The article, which does a good job making the point it sets out to make, factual errors aside, seems to consider kernel forking the main issue and the equivalent of the "Unix Wars", which I believe is a mistake. From an ISV point of view, the distro differences are closer to the issues of the Unix wars than any kernel variations between distros are.
    That's right. It's not as though this flavor of Unix has read(2), write(2), and open(2), but that one doesn't. The kernel API wasn't so mutated. It was all the stuff in user-space, admin and set-up issues. And that's why all these versions of Linux make it hard for a lot of folks. I've had experiences similar to yours with regard to RPMs, but that's a tirade I'll save for later.

    Since (if?) Mandrake Linux is a branch off of Redhat Linux, and Redhat Linux is a branch of Linux, and Linux is a branch of Unix, the family tree is a huge collection of highly ramified dialects. There are places where these hundred million flowers all blooming with varied scents are a burden, and others where they are a blessing. Let's just not call the violet a rose, because sweet though it is, it's not quite the same aroma.

  • Sheet music is not to a performance what source code is to binary code.

    The relationship between sheet music and a musical performance is more like the relationship between program text and the performance of a program, but even that is a bit of a stretch.

    The performance of sheet music has as its ingredients the information in the sheet music itself, plus the artist's interpretation, which endows the performance with meaningful nuances.

    The performance of a program, likewise, is derived from the program's code and whatever input goes into it, which may include static data, real-time inputs, etc.

    Not all musical performance comes from notation, just as not all data is the output of a program.

    The relationship between source and binary code is more like the relationship between a musical score and some lower-level representation of the musical score, like a piano roll. A piano roll is just like machine language: it has binary words consisting of punched holes. These trigger a ``control unit'' which drives the hammers that strike strings. Admittedly, it's a very horizontal encoding. :)

    It would be reasonable to distribute piano rolls under a GPL-like license requiring that the sheet music be available to the pianist. But a key motivation for this isn't there: namely, the need to modify. Neither form allows for easy modification, and few players are interested in rewriting a piece. There is a pragmatic need to be able to modify programs to suit changing requirements which doesn't apply to expressive works.

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...