GPL and Project Forking
Norm writes "Linuxcare is running an informative article about project forking and how the GPL serves to prevent forking in many cases; a good exposé on a timely subject, given recent fears about kernel forks, etc." Being at Comdex, I've heard a lot of people wondering about why Linux stays together, questions about what the GPL means and how it works. There seems to be a lot of confusion about what different distributions actually mean.
Confusion? (Score:4)
Forking is not the big issue people think it is. Usually the fork is necessary to, say, test out a new idea (egcs for example), or because the current development has gotten stale (gimp). In both cases there was negligible impact on the users. egcs has superseded gcc, and has done so without incident. There are major differences between the internals of gcc and egcs. Gimp development has been "revived" (some question if it was ever dead!), and everything is happy in linux land.
--
Totally forked (Score:3)
I think reviewing this [slashdot.org] old slashdot feature ("TurboLinux Releases "Potentially Dangerous" Clustering Software?") in the light of this newer article is particularly interesting - folks were worried that turbolinux might have a clustering solution. A clustering kernel is pretty specialized, and would have pretty much the same qualities that the article recognized as being required for a code fork.
Question: If the code does fork, do they still call it Linux, or is that just going to create confusion?
Is it just me, or have attitudes changed? (Score:4)
You have the source code, you have your bright ideas (so you think), and you want to make some software better. So open source, being open source, would promote better software by allowing anyone to fork the code to increase the quality of the software. I think even Netscape was used as an example. If you don't like the browser, build your own from the pieces of Netscape that do work (can you do this with the Netscape license?).
But lately, with folks talking about forking the actual Linux kernel, forking seems to be the bad answer.
I think instead of arguing over forking, the argument should be over freedom of choice. Whether you fork or not, it's GPL. If someone likes your forked code better, isn't that success? If someone doesn't like your code and stays with the original, isn't that success?
Who gives a hoot about forking? I have the Linux that I have. I have all the source code I need. If I ever learned programming, I could do more about it. Since I don't, I can hire somebody to make changes for me. If there is a better feature, I'll just add it. But, I figure, if it's really that cool of a feature, someone else will add it to the current Linux kernel or other open source software as well.
The short answer or question is: Who's going to take the software (or OS) away?
Still, I'd like to count how many times forking was discussed as a benefit to open source before it seemed to become a dirty word lately.
~afniv
"Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"
GPL and forks (Score:3)
The idea that a lot of people outside the open-source world have difficulty getting is that for a fork of an OS project to be effective, there must be some sort of 'collective' agreement that it is a good idea by (a significant part of) the community using that project. In many cases this is simply not going to happen. But it does allow the fork to occur when sufficient people believe it is necessary (i.e. gcc -> egcs -> gcc).
I think the examples he gives are useful for neophytes and others who wonder what a fork is all about. I'm glad he resisted the urge to go into the muddled history of some of them in great depth --- that can be found elsewhere on the net/usenet if you really need to read about how obnoxiously some people behave in the name of protecting their favorite project...
S.
Re:Totally forked (Score:2)
Yes, it's still called linux, and maybe it'll cause confusion. Doesn't matter.
Redhat is different from Debian is different from Suse; they're all fundamentally the same but they do have their differences. A 'true' fork is the same thing but on a larger scale.
If everyone likes corel's new install, we'll see it appearing elsewhere. If they don't, we won't. If this clustering software from Turbolinux is that shit hot, it'll be assimilated. If something is cool but not Free, it may enjoy usage but it won't become part of Linux.
That's a bit of a ramble, so I'll sum up by agreeing with what others have said, forks don't matter.
Offtopic, the QE2 (Score:1)
We can, but we shouldn't?!?! (Score:4)
Re:Confusion? (Score:4)
"Like all evolving open-source projects, Linux will always appear to be on the verge of forking, due to the constant experimentation going on. This is a healthy thing for Linux."
Christopher A. Bohn
Re:Why Linux doesn't fork (Score:4)
We still have the situation described in "GCC v2.95 Or Higher Still Out Of Favor For 2.2.3pre11" [linuxcare.com].
(That has changed recently, but Linus was refusing "EGCS compatibility" patches for rather a while there...)
Recently discussed. [linuxcare.com]
Different Distros (Score:3)
In my place of employment, people think software has to be rewritten for each distro, or at the least recompiled. They get this idea from industry magazines that mindlessly push the M$ FUD.
I try to point out that it's no worse than the software differences between NT and 95, etc., but that at least I can recompile if I need to...
Re:Is it just me, or have attitudes changed? (Score:4)
Seriously, I have always considered forking of an OS project to be a last resort. If you have the 'bright ideas' and the source code you can become a *contributor*. I think you will find that this is (and always was) a pretty common outlook on the subject. The ability to fork is great. The necessity of forking is unfortunate.
The point behind (successful) forking was always that if a group of people irreconcilably disagree about what constitutes a 'bright idea', forking lets each camp pursue its own.
Both these situations are pretty extreme though. In general, everyone will win if you can figure out a way to patch up your differences and all push in the same direction, no?
S.
Forking & Open Standards (Score:1)
Good article (Score:1)
********
Re:Is it just me, or have attitudes changed? (Score:2)
Not everyone's requirements are the same. It's not possible for one package to meet those requirements when they are fundamentally incompatible.
This gives us 3 solutions:
Personally, out of this list, I'd much rather have multiple similar packages than either of the alternatives.
Forking works only if GPL is *enforced* (Score:2)
The argument in the article is 100% on the mark. However, I was just thinking... all this only works if the GPL is actually enforced. A large company could easily come along, take GPL code, add stuff to it, and make a proprietary product out of it (ignoring the GPL). If nobody takes legal action against this, it would just result in the "bad" forking scenario among the Unices. Does the FSF enforce the GPL this way (not just for their own software)? Would we have enough funding (and manpower?) to enforce the GPL should the need arise?
(Note: I'm not being critical of FSF or GPL or whatever, this is just my consideration.)
Re:Is it just me, or have attitudes changed? (Score:3)
away from me", then obviously you dont care about
forking.
If on the other hand, you actually want linux
to be a viable commercially acceptible commodity,
and widely installed in homes and businesses...
you had better damn well care about avoiding forking.
People have gone through all kinds of obscene
contortions and backflips, just to avoid dealing
with more than one vendor, or more than one version of ANYTHING.
This "one vendor uber alles" mentality by the
CONSUMER, is a huge reason why microsoft got
so entrenched.
So if you want linux to succeed in the same
areas, pay attention to what has worked in
the past.
Re:Forking works only if GPL is *enforced* (Score:1)
Re:Not such a great headline (Score:1)
http://www.amazon.com/exec/obidos/ASIN/02053090
Please buy it! Cripe, gimme your addresses and I'll buy it for you ppl!
Re:Confusion? (Score:1)
The capability to fork in the above sense is both necessary and mostly undesirable --- but not something to freak out about the possibility of. I wonder how much of the current mindshare GPL forks have is due to FUD from various commercial interests?
S.
Re:Confusion? (Score:2)
Yet, despite all these "forks", there are no world wars going on, no chaos and confusion, aircraft aren't falling out of the sky - even UserFriendly is surviving. Despite the recent soap opera.
It's very true that forks allow for fresh development. Indeed, I can't think of a single major project which -doesn't- have a development tree and a "stable" tree, which is in itself a fork!
The issue is pure FUD, in its most classic form. Not necessarily deliberately so, but it wouldn't surprise me if certain people who would *cough* appreciate Linux getting a bad rep, ummm, encouraged a certain level of hostility.
Re:Forking does happen with gpl (Score:1)
Forking happens all the time, but the source is available. The best code can always be built on and improved. Like in Zen, perfection is always strived for, but never achieved. It's a "Good Thing". :)
********
Minor Nit (Score:1)
Solaris is the result of Sun paying a one time licence to AT&T and then making changes/bringing in BSD compatibility.
(hence all the hate and discontent of some Sun users when Sun OS 4.x was dumped for Solaris)
Re:Forking does happen with gpl (Score:1)
Re:GPL and forks (Score:1)
Christopher A. Bohn
Fooware OS' ninjas (Score:2)
one alternative not explored was: suppose the fooware ninjas come up with a cool thing and Linus says, "no way, that's not going into the kernel." in this case, the coolness of the cool thing increases pressure on the system to either accept the patches into Linux, or switch over from Linux to Fooware.
what's lovely is how OpenSource routes around intransigence. we're all human, nobody's perfect, and the character traits that make us great software developers also cause us to get in our own way. when that happens, the fooware os fork becomes a Good Thing.
Also, hidden in the fork between gnu/emacs and xemacs were the different programming styles. the procedural versus OO split indicates that team-based projects will probably stick with OO or easily modularized designs, and gnarlie "keep it all inside one head" projects will be one-great-man projects. the risk of the gnarlies is the death or disinterest of the one-great-man. note how OO and componentized strategies favor open source teaming.
sorry, i'm stating the obvious.
Forking is impeding progress Right Now (Score:1)
Re:Not such a great headline (Score:1)
I was going to moderate you down a point but I decided just to reply.
Maybe if you think Slashdot is so bland you should start submitting interesting articles?
tyler
Re:Not such a great headline (Score:3)
I'd suggest a read of Jakob Nielsen's column on writing microcontent [useit.com]. Some useful snippets:
Also, the impact of good headlines can be seen in this article on the cost of poor information on intranets [useit.com]; the point is relevant to anything that has a large number of readers -- though the economics aren't as direct.
If Hemos spends 5 extra minutes writing a clear, concise headline, and that saves 10,000 slashdot readers 5 seconds of scanning and thinking each, then that's a gain of 49,700 seconds for the /. community.
GPL Makes Good Forking Likely (Score:2)
Not all forking is bad. Where two groups intend to take a project in mutually incompatible directions, there should be a fork. For example, if one group wants to make the Linux kernel work well in multi-processor scenarios, and another wants to make the Linux kernel into an RTOS, there might be changes that each would need to make that would be incompatible with the other. In a case like this, there should be two different versions of the kernel, because they are justified by the very different goals of the two groups (before I get flamed, this is just a hypothetical scenario -- as far as I know an RTOS multiprocessor kernel is perfectly feasible -- but there must be some situations where incompatible goals spawn incompatible code).
What open source development discourages is bad forking. For instance, if I went into the Linux kernel and made a bunch of trivial changes to suit my tastes, without any real benefit to others, my forked kernel would sit there gathering dust -- no one else would work on it. That would be a bad fork. A good fork is one which is justified for a good reason, and for that reason it is supported by a community of developers willing to work on it.
Just my random thoughts on the matter.
-Steve
Re:Is it just me, or have attitudes changed? (Score:1)
The thing is, there's enough room in the big tent for all sorts of diverse activities.
Missing the point? (Score:1)
As well, while reading over that article, it seems that a lot of the things used by Linux users are forks, so forking might not be the worst thing in the world.
Patryn
Re:Not such a great headline (Score:1)
The original post (and all those succeeding) are offtopic. It'd be nice to have a "meta" flag that could be turned on for posts to talk about the post itself, rather than the contents. That way, things could be filtered out by that. Also, a forum for the discussion of the mechanics of /. might be nice. So people can be on-topic when flaming Hemos for his English skills. =)
Re:Forking works only if GPL is *enforced* (Score:2)
The irony is, if a commercial shop uses this to break out of the GPL, the same legal precedent can be used to break all shrinkwrap licenses.
READ first, THEN post! (Score:1)
testing the GPL (Score:4)
See, for example, the "Pragmatic Idealism" [gnu.org] essay on the FSF's Web site. NeXT made an Objective-C front end to the GNU C compiler, and wanted to make this front end proprietary. The FSF's lawyer told them this would violate the GPL, and NeXT gave in.
Re:Why Linux doesn't fork (Score:1)
Re:Why Linux doesn't fork (Score:2)
Re:Forking is impeding progress Right Now (Score:2)
I would not even call the multiple desktop problem a true "forking" issue, since I don't think that the desktops started from a common source.
In the short term, you have a host of competing desktops, all trying to be The One True Desktop. However, since it is more professional pride/ego than dollars motivating development, the competition is more likely to be a footrace than a demolition derby. That is, I don't expect the GNOME and KDE guys to put any work into keeping the other from working well.
What will happen? Binary Darwinism. The poor interfaces will die out, and their good features and good developers will be at least partly absorbed by the better ones. Eventually, there will be either One True Desktop, or Several True Desktops that the user can choose from.
The Open Source community can afford to "burn" effort making multiple attempts to solve the same problem; indeed, I think that we can't afford not to. The diversified desktops of today will show us what a good desktop would be like, and the myriad will merge back to one or several.
Re:GPL and forks (Score:1)
(I assume from the earlier comment that 2.7 was meant.)
2.8.1 wasn't reliable *or* stable, for me, on Suns, Alphas or i386s... I don't know what the earlier post was referring to. 2.95.1 has been quite good for me --- but I have been using more i386 boxes lately than when I was playing with 2.8.x, so that may skew the results a bit.
Note that I am not (primarily) a linux user either --- that being said, I can sympathise with the pgcc/egcs crowd's frustration with the fairly pathetic i386 support in 2.8.x.
S.
ACL's (Re:Why Linux doesn't fork) (Score:1)
SGI is writing modules to provide ACLs in Linux file systems (likely concentrating on their XFS for Linux implementation) as part of their effort to provide Government-standard B1 and C2 "Orange Book" security to Linux.
You can see a presentation (recorded in Washington DC at "Linux University") at http://www.sgilinux.org/ [sgilinux.org]
Can do != should do (Score:3)
No free beer. (Score:1)
Not unless Linus licenses the name to them.. Linus owns the 'Linux' name...
-joev
Re:Forking works only if GPL is *enforced* (Score:2)
"You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License."
Re:Fooware OS' ninjas (Score:2)
There is another possibility. If the team of ninja Foo programmers initiated cool changes at a much more rapid pace than the main Linux development tree, a fork could also result. Unfortunately, I find it hard to believe that even the most targeted team of caffeine junkie, black magic wielding Ninjitsu programmers could ever outpace the main kernel, with several hundred programmers. (I think this falls under the 'blithering idiot' case to a certain extent.)
Forking evolutionary! (Score:1)
Solaris under the SCSL? (Score:1)
Article full of errors (Score:4)
For example, the article states that "Lucid Emacs" was proprietary, and implies that it predates the GPL. Both are false: Lucid Emacs was based on GNU Emacs 18. Lucid Emacs and XEmacs have always been released under the GPL. And the article left out one major reason why a merge would be very difficult: the XEmacs people do not require copyright assignments for donated code, and Stallman does require such paperwork.
The history of the gcc/egcs/pgcc is also very misleading.
Finally, Stallman did not write glibc. The original author/maintainer was Roland McGrath; the current author/maintainer is Ulrich Drepper.
The mention of non-free BSD-based commercial Unixes implies that these implementations came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.
Great article (Score:1)
A great read for anyone interested in doing some serious Open Source hacking (OTOH, most serious Open Source hackers probably know almost everything in this article already) or just generally interested in the miracles of Open Source and the advantages of its development process.
Recently I have talked about Open Source with lots of people, mostly non-programmers who wanted to know about that new thing called Linux everyone is talking about.
Their first reaction to my explanations of the meaning of the word free (speech, not just beer!) in this context was: "Oh, but you'll get a lot of code forking then..." (well, they didn't state it this way, but what they meant was exactly that), so I carried on to explain to them (1) why this happens so seldom and (2) how, at the same time, the POSSIBILITY of code forking is a good thing.
Basically what I told them was just a subset of this article. I was really amazed at finding *EVERY* bit I told them in this article, and at the article having the same structure as those monologues I gave to my friends (first some examples, then the "analysis" with the 2 main conclusions I mentioned above). Only the article is much more complete and convincing than anything I ever came up with.
Thank you Rick!
After reading it I am more than ever convinced that we do not have to fear code forking! It can happen, it will happen, but the new branch will survive if AND ONLY IF the advantages outweigh the disadvantages.
Re:Minor Nit (Score:3)
Sorry, you're wrong. (If you're going to pick nits, get the facts right!)
(and so on, through 2.6 = 5.6.)
(released long after Solaris 2.0 due to customer backlash).
Oh, and
Hope that helps...
Forking is a double-edged sword, both edges cut! (Score:5)
Today, the different Linux distros can cause a headache for people dealing with product installation issues, usually with scripts. This isn't so bad because most UNIX people are already used to that. But it does scare off software companies. Think about it, for Windows, you just buy InstallShield or Wise and most of the problems of OS differences are taken care of. Not true for Linux today.
It gets worse at the API level. If the Linux kernel forks and the APIs contain minor annoying incompatibilities, it will be just as bad as the UNIX days of old.
I'm a strong advocate of Linux mainly because it is Open Source. I feel the advantage of this is huge, but mainly for developers. Developers need to be able to trust that the APIs they are using a) work as advertised, b) can be fixed quickly when they don't and c) aren't subject to the whims of a particular profit driven organization. Open Source, and in particular GPL'ed code guarantees those things. Nothing else does.
These benefits aren't immediately visible to the consumers (ie. the non-programmers who just use a computer to get something done). But the benefits do trickle down, when the code they use can be made more reliable and can safely incorporate innovations. The time spent reinventing the wheel for minor variations of operating systems could be spent doing useful innovations.
Realistically, freeware will never replace commercial applications, and I don't want it to. What I want to see is new products with genuinely new features, and I'm willing to pay for them, with or without source. Those new products will come a lot faster if there is a common API to work with. There will always be competing versions of products, but at some point there will be features we come to expect of all of them, and the advantages to the different versions of the products become trivial. At that point, it makes more sense to standardize on a freeware version, and forget all the others. I believe at this point in time, there are not enough technical advantages to the competing operating systems to warrant their existence. It is a detriment to everyone's productivity. Therefore, it's time for an Open Source OS to move to the forefront, and Linux is the closest of any to doing that.
Right now there is one major fork in the Linux world, and that's GNOME vs. KDE. This is particularly nasty, because there really is no way to develop software that supports both. (I mean totally supports both, not just using some common subset of features) This is a long-term threat to the viability of the OS for commercial development. Let's get real, they both are trying to accomplish mostly the same goals: a common look and feel for graphical applications. As long as they both fight for mindshare, that won't happen! I really hope at some point in time, one or the other surrenders, and concentrates their efforts on taking the useful innovations they have and putting them into the other, so we can all get on with things.
If you really want Linux to replace Windows, stop arguing over petty differences and work together to build an OS that truly offers all the advantages that Windows currently offers.
Re:Forking is impeding progress Right Now (Score:2)
Re:Forking is impeding progress Right Now (Score:1)
Forking of the unices hurt unix really badly, and for a very long time. I tell you three times -- if not for the fact that the world standardized on Linux, I would be running IP masquerade on NT right now.
copyright and contract (Score:3)
Through the GPL, the author of a program is unilaterally granting permission for the recipient to copy the program -- under certain circumstances. If the recipient doesn't want to abide by the terms of the GPL, that's fine -- but then the recipient, under copyright law, has no right (except for the usual fair-use conditions) to copy the GPL'ed program.
By contrast, shrink-wrap or click-wrap licenses try to give a software vendor more power than mere copyright law grants. That's why they have these "if you click this button you are agreeing to these ten pages of fine print" messages. They (might) create a contract between the vendor and the consumer: in exchange for the privilege of using the vendor's software, the consumer agrees not to reverse-engineer it, not to benchmark it, not to install it on more than one computer, etc., etc. Under copyright law, the courts would laugh at restrictions like this, but if clicking on the appropriate button does create a contract, then the vendor can enforce the license through contract law.
(As contracts, click-wrap licenses are iffy, because by the time you see the license, you've already coughed up your money and taken the disk home, and the click-wrap license is now trying to renegotiate the terms of a sale that's already taken place. But that's an argument for another thread.)
Disclaimer: IANAL.
What about Red Hat? (Score:1)
I don't disagree with you philosophically, but take Red Hat (please!): RPMs, directories in different places, etc. etc. Granted, not "kernel" mods, but "different" -- and significantly different (IMHO) than any other Linux distro. Yet, nobody but nobody has adopted their modifications. I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too. Hey, maybe I'm wrong.
Anyway, I am not saying that what Red Hat is doing is a Bad Thing (tm), but at the same time clearly they (RH) have no intention of letting their mods die, and no one else has any intention of adopting them. What we end up with is a distro of Linux that you must know in order to administer; e.g. you can't be a Debian admin and just walk off the street and admin a Red Hat box. To me, that represents the Bad Thing (tm) in forking.
The GPL doesn't protect developers' freedom (Score:1)
``Bunch of songs'' under GPL. (Score:2)
Songs are not source code that is translated into machine code. We generally do not have access to the ``source'' for the music, only to the performance, which is captured by recording the sound waves.
The GPL is special in its requirements related to the relationship between the source code and compiled code; in other respects it is a license that permits free distribution of something.
If it is the free distribution that you object to, then it's meaningless to have a debate about the relative merits of various freeware licenses, all of which permit free redistribution of the source.
Anyway, the GPL protects primarily the freedom of _users_, not the freedoms of those who want to profit by making software proprietary. Stallman has argued that this is not really freedom, but the exercise of power. (As in power == control over things that affect others, freedom == control over things that affect yourself).
Re:Article full of errors (Score:1)
Yeah XEmacs devs will be happy that linuxcare considers their project dead...
And the gcc story is almost revisionism; gcc died because nobody was maintaining it, and facing the corpse RMS had to do something.
The author also missed the point that very often one of the two projects has to die...
Is that avoiding forking? :-))
BTW I definitely find that forking is more a personality issue than a licence issue.
Re:Forking is a double-edged sword, both edges cut (Score:4)
And yes, it makes this stuff hard, because it becomes a combinatoric nightmare. If people would
Re:Why Linux doesn't fork (Score:1)
Re:X has always been a mess (Score:2)
But X users have not been able to agree on a window manager: Motif, OpenLook, fvwm, tvwm, WindowMaker, dtwm (CDE's) and so on. Most well-behaved X programs will be usable under any window manager, so people pick the one they like best.
Sun has a desktop environment of their own and offers CDE; IBM used to have their own but forced everyone to switch to CDE or just use plain Motif; I think HP did something similar; NeXT had a desktop which predated CDE (and which a number of the Linux desktops and window managers mimic).
The point is, there was no one X desktop environment to fork. Had the X server itself forked on Linux, that could become a serious problem. (X is already forked; every vendor's X is proprietary and closed source. XFree86, XFree68, and so on are the only open source X servers I know of... that can actually render on a display.)
The X base code is free, but derivative works do not have to be free. Since the base code does not support any display hardware, we have vendor forks for every UNIX, plus the XFree* forks.
Fortunately, people only mess with the device driver side, and so X programs continue to work across many window managers, and display properly on different remote systems.
Re:Confusion? (Score:1)
What the article concerned was forking on a larger scale.
_Deirdre
FUD (Score:3)
In today's anti-piracy climate, woe to whoever is caught! The horribly bad publicity alone arising out of discovery wouldn't be worth it.
Let's look at this closely: suppose that someone does take GNU code and incorporates it into a proprietary product. Does a truly cutting edge company need to steal code? You are playing catch-up if you need to steal.
Secondly, what if someone does that? At best, they will buy themselves reduced development time on some isolated project. The real benefits of the code, namely openness, will be lost. People using the main development stream will get the latest features and bugfixes, and the pirates will be locked into playing catch-up. They can't openly advertise that they have stolen code, so if the code really has a great reputation, they can't boast of it. They can't actively participate in the development process.
Thirdly, no serious company is going to risk it. I know that in my company, nobody would even want to hear of such a thing as GPL'ed code being incorporated into our products. If we use free software, we evaluate the licenses carefully. It would be foolhardy to do otherwise.
There is plenty of useful code out there which has licenses that are more permissive than the GPL. Particularly things that provide some generic, low-level functionality such as, say, compression.
Re:What about Red Hat? (Score:2)
I don't disagree with you philosophically, but take Red Hat (please!): RPMs, directories in different places, etc. etc. Granted, not "kernel" mods, but "different" -- and significantly different (IMHO) than any other Linux distro. Yet,
nobody but nobody has adopted their modifications.
Other than Mandrake, Macmillan, LinuxPPC, and a horde of other distributions. Last I checked, the LSB project has determined that RPM will be the standard file format for Linux packaging. That's why Debian is working on becoming less package-format dependent.
I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too.
Red Hat generally ships its kernel with the AC patches compiled in. Most of the elements of the AC patches find their way into the main kernel tree eventually.
e.g. you can't be a Debian admin and just walk off the street and admin a Red Hat box.
It's certainly easier to go between the various Linux distros than it is to go between the various commercial Unixes. I had little problem going from Slackware to Red Hat, personally. I don't see how a Debian->Red Hat or Red Hat->Debian migration would be harder than that, likely it will be easier.
----
Re:What about Red Hat? (Score:1)
Um, what about Mandrake?
I'm not a kernel geek, but I'd be willing to bet that there are kernel differences too.
Only in the fact that the precompiled default binary is not the same in every distro. It's all still Linux, but with various different modules loaded. I would only call it a fork when it is something such as PPC Linux and MkLinux.
Re:Solaris under the SCSL? (Score:2)
The article says that Solaris is under the SCSL.
It does? I certainly didn't mean to imply that. Clearly, Sun Microsystems is contemplating such a move, but has not released the source code except possibly under NDA to some of its close business partners.
If I did imply that, then I must have been rather sloppy. Understand, please, that the whole thing got written on a laptop machine last Saturday, to occupy my mind as I waited in a hospital waiting room for my girlfriend to get medical attention. And I was seriously ill with a case of the 'flu. I'm surprised it came out as well as it did.
-- Rick M.
Re:GPL and forks (Score:2)
I'd say he makes a better case that open source dissuades forks, or encourages remerging of forks. Specifically singling out the GPL is inappropriate, since there is no example given of a BSD-Licensed app having problems with a proprietary fork.
Re:Fork you (Score:1)
Fork puns can be fun! For example, an application that wishes to detach from its tty can... fork off and die.
Of course, forking causes children.
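For anyone who hasn't met the idiom, here's a rough, minimal sketch in C of the classic "fork off and die" trick the pun refers to; the details (minimal error handling, a sleep() standing in for real daemon work) are purely illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* parent and child both continue from here */
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid > 0)
            return EXIT_SUCCESS;         /* the parent "dies", handing the shell its prompt back */

        if (setsid() < 0) {              /* the child becomes a session leader, detaching from its tty */
            perror("setsid");
            return EXIT_FAILURE;
        }
        chdir("/");                      /* don't keep a mounted filesystem pinned */
        /* a real daemon would also close or redirect stdin/stdout/stderr here */

        sleep(60);                       /* stand-in for whatever the daemon actually does */
        return 0;
    }

And yes, the "children" part of the joke is literal: every fork() creates a child process, and a parent that exits without wait()ing leaves its orphans to be adopted by init.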
Re:Totally forked (Score:1)
If this clustering software from Turbolinux is that shit hot, it'll be assimilated
AFAIK, the problem with clustering software is the US government won't allow exports. Linus won't let anything in the kernel that isn't exportable, so such a fork will remain until the US decides clustering tech isn't a weapon.
Man's unique agony as a species consists in his perpetual conflict between the desire to stand out and the need to blend in.
academics and linux distributions (Score:1)
Absolutely. It is important to recognize the inherent tension between vendors trying to differentiate themselves and vendors avoiding scaring off application developers due to the difficulty of targeting (in effect) multiple platforms. This conflict creates a tough problem.
I don't think that they will be, however, because too many people have too much ego wrapped up in the myth.
I don't follow you here. Which academics "repeat their myth about linux=o/s=kernel" and why do they do it? I'm not disagreeing with you - I just have not heard many people making this claim.
John Regehr
the fragmented future of Linux. (Score:1)
Most of the more serious Linux users have reconfigured their kernel to better fit their style of computer use, and sooner or later people with similar habits will form groups. Future Linux users will have a much greater variety of system software to choose from.
I guess Linux, in a way, could provide a cultural 'norm.' Everyone will use it, just different varieties. I could see it.
Re:Article full of errors (Score:2)
For example, the article states that "Lucid Emacs" was proprietary, and implies that it predates the GPL.
Oops, you're right. I was thinking of Gosling emacs, not Lucid/xemacs. That's what I get for not double-checking my work.
The history of the gcc/egcs/pgcc is also very misleading.
Unfortunately, the facts are somewhat murky, and more than a little disputed. I notice that my brief account does, for whatever it's worth, match the one given at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/cathedral-bazaar-15.html [tuxedo.org].
Finally, Stallman did not write glibc.
Not guilty. I made no such claim.
The mention of non-free BSD-based commercial Unixes implies that these implementation came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.
Ditto. I implied nothing of the kind.
-- Rick M.
Good article, but facts wrong w.r.t. Stallman (Score:1)
The article is good and well-written. However... I don't think the facts are quite straight regarding FSF Emacs vs XEmacs. Questions of history aside, the main thing preventing a merge right now is that the XEmacs folks don't have legal papers for all their contributions. The FSF requires these for code it releases; hence, no merge into FSF Emacs, at least not in such a way that the FSF will serve the result from their servers.
The statements about the current maintainability of XEmacs vs FSF Emacs are also suspect. I don't think FSF Emacs is so complex as to require a "genius level" maintainer -- and, as far as I know, Stallman is not in fact its primary maintainer right now, although he is involved. (The fact that multiple people are involved in FSF Emacs's maintenance is a testament to its maintainability... as is its 20-year age!)
Also, with regards to GCC->EGCS->GCC, I don't believe Stallman ever did anything to delay the arrival of Pentium optimizations. Rather, the GCC maintainer (who was not Stallman) did not fold in the patches he received for these optimizations. Out of impatience, the EGCS team forked and made their own version of GCC. Eventually the FSF (i.e., Stallman) decided to make EGCS the official FSF "GCC". Far from being a resister who tried to keep the optimizations out of GCC, Stallman was responsible for a major step in the EGCS->GCC transition.
Well-written article, just don't want to see Stallman get blamed for stuff he's not responsible for.
-Karl
Re:X has always been a mess (Score:1)
X itself has thrived because the API has been standard from day 1. Writing software that directly uses Xlib is portable and low-maintenance. I doubt anybody would even dream of introducing a new windowing system for UNIX.
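To make that point concrete, here is a minimal sketch of a bare Xlib client; the window geometry and the compile line (something like cc hello.c -lX11) are just illustrative assumptions, but the handful of calls shown is exactly the kind of code that runs unchanged under any window manager or vendor's X server:

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);          /* connect to the server named by $DISPLAY */
        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));

        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);                       /* whichever window manager is running decorates and places it */

        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);                   /* block until the next event arrives */
            if (ev.type == KeyPress)                /* any key press: quit */
                break;
        }

        XCloseDisplay(dpy);
        return 0;
    }

The window manager only reparents and decorates that window; the Xlib calls themselves never change, which is what keeps such code portable and low-maintenance across the zoo of desktops discussed below.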
So why do people continually insist on making new window managers? Because the original "standard" window manager (twm) sucked, and it never got updated. So every programmer with an itch for some hot new feature took twm and used it as a basis for a new window manager. This has led to some great ideas, but it hasn't made Joe end-user's life any easier.
As for CDE and the other supposedly common desktop environments, they all came about too late, and they weren't all that advanced when they were introduced.
Why do people insist that they need to be able to choose different window managers, when all they really want is to be able to customize it to look and act like whatever they are really used to using? This sort of immaturity gives fuel to the Open Source detractors. Hopefully, one of the new themed window managers will settle down all the competition.
GPL and BSD and forks (Score:3)
I find the comments on the BSDish license promoting (a) fork interesting but underdeveloped.
The article doesn't say the BSD license is more prone to forking than the GPL, but it does comment on the fork from 386BSD to FreeBSD/OpenBSD/NetBSD/BSDi. It says that fork is stable because of the different focuses of the different forks. GPL programs are just as likely or unlikely to have forks which persist due to such differences in focus.
Note BSDi in the above list, however. It points out one reason to fork that BSD permits but the GPL doesn't: license change. If someone wants to fork a project because they want to distribute their modifications with more license restrictions than the original, BSD allows it and the GPL doesn't.
[Disclaimer: I am not saying that either BSD or GPL are better because of this difference, only that this is a notable difference]
----
Re:Minor Nit (Score:1)
Sun OS was the BSD product.
Solaris is the result of Sun paying a one time licence to AT&T and then making changes/bringing in BSD compatibility.
(hence all the hate and discontent of some Sun users when Sun OS 4.x was dumped for Solaris)
You're of course quite right, except that this obligatory ritual conversation goes on to say that Solaris still is based on SunOS in some odd Sun Marketing sort of way, with cut-and-paste excerpts from login screens to prove it.
All of which was so far from the article's topic that I decided it's better not to go into it.
-- Rick M.
No code is "perfect," but some are developed (Score:2)
EGCS compatibility is somewhat different; it represents some examples of cases where Linux was not conforming to the official standards of how C is supposed to work. (Largely regarding treatment of pointer aliasing, I believe...) Linux has arguably been made buggy in doing things that C isn't supposed to support, but which GCC used to.
Conformance to standards isn't a "feature" in the way many kernel facilities are...
Consider that I didn't mention GGI [hex.net] as an example; it is an example of something that was initially rejected, and quite legitimately so, as the proposed implementation was not, three years ago, developed, debugged, and perfected. Interestingly, the recent framebuffer support is now the GGI support; they don't need to integrate big gobs of stuff as they have the crucial interface that they do need, with the benefit that it has made supporting some of the more obscure systems ( e.g. - Atari ST [hex.net] and such) easier.
Re:Is it just me, or have attitudes changed? (Score:1)
But for some projects, if the maintainer isn't capable (or absent), a fork is usually necessary.
Re:academics and linux distributions (Score:3)
That's not the problem. It's not academics, really. Rather, the fault lies with those zealots who claim that anything running a Linux kernel is Linux, as though that were all that mattered. Remember how they like to flame the BSD people for having 4 different operating systems while they steadfastly claim to have just one? It's a political stunt with no basis in matters practical. In fact, this whole "distro" jazz is a veiled euphemism to hide the fact that there are a zillion different Linux operating systems out there. Sometimes they're just repeating what they've heard, not understanding that "distro" is a cutesy dodge to avoid saying "operating system".
But to someone trying to develop, produce, test, distribute, configure, install, and administer this application software, they are different operating systems. Stop playing games to make your team seem less splintered than it is. The benefits of pretending the Emperor is wearing lovely new clothes are not, in my opinion, of greater import than the real-world ramifications of living a lie. People are trying to get honest work done, and this kind of crap just doesn't fly when you get down in the trenches.
Re:Article full of errors (Score:1)
RM: Unfortunately, the facts are somewhat murky, and more than a little disputed. I notice that my brief account does, for whatever it's worth, match [Eric Raymond's].
Well, your account goes well beyond ESR's, for example in emphasising the Pentium optimizations as being the major factor in the fork. The real problem with pgcc was that it was very poorly written, not Stallman's stubbornness!
PB: Finally, Stallman did not write glibc.
RM: Not guilty. I made no such claim.
Let me quote your words directly: For the GNU Project, Richard M. Stallman's (remember him?) GNU Project wrote the GNU C Library, or glibc, starting in the 1980s.
PB: The mention of non-free BSD-based commercial Unixes implies that these implementation came after the release of the free BSDs and the AT&T lawsuit; they long pre-date both.
RM: Ditto. I implied nothing of the kind.
Perhaps not directly. But the way the article is written, one gets that impression, because the commercial Unixes are mentioned in the same paragraph (and after) the mention of the BSDs and the lawsuit. That is what I mean by "misleading statements". Perhaps I should have said "misleading exposition": when you put two facts next to each other in a historical background piece, the natural assumption is that the events follow in that order, or that there is some logical connection. Careful writing (and proof-reading) means minimizing such misleading inferences.
Re:GPL and BSD and forks (Score:1)
I guess my wording was a little incomplete. I was referring to the section:
2. BSD --> FreeBSD, NetBSD, OpenBSD, BSD OS, MachTen, NeXTStep (which has recently mutated into Apple Macintosh OS X Server), and SunOS (now called Solaris)
But I should have been more specific. What I meant was it would have been interesting if he had expanded a bit on what he thought the difference a GPL type license would have been, or the difference between the effect of this proprietary+free branches fork and a pure OS fork.
I wasn't trying to suggest a GPL good BSD bad slant.
This probably answers the other poster as well...
Re:academics and linux distributions (Score:1)
That's not the problem. It's not academics, really. Rather, the fault lies with those zealots who claim that anything running a Linux kernel is Linux, as though that were all that mattered
Yes - sometimes the problem is ignorance, and sometimes it's zealotry. You might say that a zealot is just the kind of person who takes any difference ("there's only one Linux kernel") and thinks that it's an argument in favor of his favorite system.
Thanks for clarifying,
John Regehr
Re:Is forking so bad? (Score:1)
I don't. (Score:2)
The only reason people think it's a good thing is because until recently there has been no evidence to suggest you could survive doing anything else. Hence, the emotional reaction is 'we must all do this or die!'. First of all, Linux is still around without doing that (and indeed growing, and indeed there are still other options that are neither Windows nor Linux), and secondly, this assumption was formed from observing a monopoly at the top of its form and making every effort to kill off everything resembling 'diversity'.
If even this has not made the 'monoculture', everybody-runs-Windows approach safe and beneficial to all, what good would it be to try and make Linux a monoculture, with all the disadvantages it brings, but with none of the ability to exert corporate power and influence and throw around huge sums of money?
Seriously. That'd be a _phenomenally_ bad tactical move.
Re:Article full of errors (Score:3)
Rick, you're someone I respect, but you richly deserve flaming, maybe more so than anyone I've encountered in a while. When you're wrong, fess up. Don't post bogus defenses.
I'm sorry, but the sloppiness of your history simply can't be justified, especially since you repeatedly just make assertions that are completely bogus. Almost everything you say about gcc is complete nonsense. Your defense that "the facts are somewhat murky" is so weak as to be embarrassing. There is no murkiness at all; tons of people have been involved and know all about it. Your history is not wrong because of differences of opinion. It is wrong because it is wrong. If you got any of this nonsense from ESR, you need to get him educated. ESR hasn't been involved in gcc development in at least the past four years or so, but this hasn't stopped him from declaiming authoritatively about it.
Example: the origin of pgcc. Stallman didn't "ignore repeated requests" for Pentium-specific optimization, the necessary information was trade secret at the time. The gcc maintainers simply did not have the technical information required, and the folks in a position to work on gcc full-time (mainly at Cygnus) had no contracts to do PC work, so both information and resources were lacking. When the Pentium first came out, one had to sign a nondisclosure to get the needed information and the gcc developers couldn't do that. Intel gradually released more information in dribs and drabs, but at the time, instruction scheduling on the Pentium was very tricky and not well understood outside of Intel.
Finally, some Intel engineers did a hack of gcc version 2.4.0 to do Pentium optimization. Unfortunately, they made no attempt to honor the structure of the compiler, e.g. the distinction between the front end (machine-independent) and the back end (machine-dependent). This meant that it wasn't possible to integrate their work. pgcc is based on that work. Thanks to Marc Lehmann and others, the distance between pgcc and egcs/gcc has been steadily reduced; pgcc is currently maintained as a patch against first egcs and then gcc, and the size of the patch has steadily been reduced. Cygnus never had anything to do with pgcc; the Cygnus developers I know considered the original pgcc to be misdesigned and buggy, though it has gotten much better since then.
egcs started as a branch off of the gcc2 development tree, in the long period between 2.7.2 and 2.8.0. I was involved in the discussions that led to the project. When we started egcs, we talked to the pgcc people because we were seeking to decrease the number of gcc forks in the world; in addition to pgcc there were the Cygnus customer releases, which were ahead of the FSF release at the time. I won't go into all the breakdowns that had stalled gcc development at the time, but it had become a mess. We were definitely motivated by ESR's CatB paper. However, we did, in the process of egcs development, demonstrate that one of ESR's contentions is false: copyright assignments are not a barrier to the bazaar model. egcs/gcc is closer to the bazaar model than the Linux kernel, as far more developers have the right to check in code.
With the assistance of Intel, Cygnus has produced a new ia32 backend, which should be out in gcc 2.96; this will finally make pgcc obsolete and hopefully complete the reunification of gcc.
On other topics:
Your history of Lucid Emacs/XEmacs is, as Per has pointed out, completely wrong -- it was GPLed from day one, and much of the code in XEmacs was written by RMS. Go talk to Jamie Zawinski, father of XEmacs, for some education (you might have heard of him ;-). Copyright assignments were one issue that kept the fork from healing, but technical differences between RMS and the XEmacs maintainers were also a factor.
Your history of the Unix forks isn't as bad, but it appears that you have the chronology confused a bit. The splits in the BSD camp predated the AT&T/BSDI/UC Berkeley lawsuit, for example.
But KDE is GPL, not LGPL ... (Score:2)
Will this make a difference? I think so; many of us, especially in commercial environments, have to use at least one binary-only tool (even if we wish we could get rid of it).
Fucked, not forked (Score:1)
Re:Good article, but facts wrong w.r.t. Stallman (Score:1)
I don't think the facts are quite straight regarding FSF Emacs vs XEmacs.
No, they absolutely aren't. I really was meaning to write about Gosling emacs there, but somehow got sidetracked onto xemacs. Apologies to users of emacsen everywhere, and especially to Jamie.
At the time I wrote that, I was a bit distracted, as I was typing it into a laptop next to my girlfriend's hospital bed, last Saturday. But I should have caught that gaffe before sending it to Linuxcare. (The original version was my parting-gift essay for Linuxcare's sales staff: I had resigned from that firm the previous day.)
Questions of history aside, the main thing preventing a merge right now is that the XEmacs folks don't have legal papers for all their contributions.
Yes, collecting the copyrights would be an additional obstacle. However, the ones listed were those I recalled from Ben Wing's talk at SVLUG on July 1, 1998.
The statements about the current maintainability of XEmacs vs FSF Emacs are also suspect.
They are some combination of Ben Wing's SVLUG presentation and my own surmises. But, of course, differences of opinion are what give us horse races.
Also, with regards to GCC->EGCS->GCC, I don't believe Stallman ever did anything to delay the arrival of Pentium optimizations.
I did not mean to imply personal intransigence on Richard's part, just that FSF was dragging its feet.
-- Rick M.
Re:The GPL doesn't protect developers' freedom (Score:2)
Re:Article full of errors (Score:2)
You're exactly right. The time table of BSD [netbsd.org] shows Rick is incorrect. If I remember correctly (and whatever I say is from reading around; I wasn't into UNIX then), Bill Jolitz took the BSD code and began removing AT&T code. He then released his implementation under a free license so that BSD wouldn't be held back. However, he lost interest in the project and wouldn't maintain it (he could have been the Torvalds of BSD, perhaps), and both FreeBSD and NetBSD were started by other developers. 386BSD eventually died, and of course OpenBSD sprouted off of NetBSD (Theo's archive of why seems to give him good reason).
Here's where I'm a bit mucky on things. I thought Bill Jolitz told free developers that they couldn't use his code, and there was a scramble to move FreeBSD to 4.4BSD-Lite.
I am glad Rick pointed out that the BSD splitting had very good reasons, and that if such existed for Linux, it would split too. I don't believe the GPL prevents forking, because the reason Rick noted (that any improvements would be integrated back into Linux) is the same with BSD. However, I believe the GPL reduces forking by creating the idea that there is a dictatorship, while BSD groups create whole bodies to look after the code. People think of Alan Cox and Linus Torvalds when they think of who looks over the kernel, while they think of FreeBSD, Inc., when they want to improve that system. Both have core members; just the way they present themselves looks different (while it might not be).
Re:Article full of errors (Score:2)
You're exactly right. The time table of BSD shows Rick is incorrect.
That might be a valid criticism if my article had included any sort of chronology. But it did not: I omitted any mention of when BSD OS, SunOS, and Jolix forked off from BSD 4.x because it simply was not germane to the point of the article.
I in fact ran 386BSD 0.1 -- downloaded by modem and written out to floppies. But that would be a completely different article.
-- Rick M.
Re:Article full of errors (Score:2)
you richly deserve flaming....
Oh, don't worry. I won't take it personally. People seem to get very worked up over these matters. Why, I don't know.
There is no murkiness at all; tons of people have been involved and know all about it.
Unfortunately, they do not appear to agree, as can be seen by examining the comments here, if not elsewhere. A situation that isn't aided by people going out of their way to misread what I wrote, and read into it meanings I never stated.
Example: the origin of pgcc. Stallman didn't "ignore repeated requests" for Pentium-specific optimization, the necessary information was trade secret at the time.
Oh, at one time, they were indeed. And then, later, they were available, but not accepted into gcc. Which is what I said.
Cygnus never had anything to do with pgcc.
What I remember saying is that individual Cygnus staffers were involved with pgcc. Is this not correct? I'm pretty sure I verified that, back when I was running a Stampede Linux beta.
Your history of Lucid Emacs/XEmacs is, as Per has pointed out, completely wrong.
Indeed. I had it confused, at the time I wrote that, with Gosling emacs.
-- Rick M.
Re:Thus computer history becomes a quagmire (Score:2)
Uh, because SunOS 4.x and SunOS 5.x aren't totally compatible? You should have said "...to work with SunOS 5.x", not "...to work with Solaris." Because Solaris 1.x and Solaris 2.x are incompatible in the same way as SunOS 4.x and SunOS 5.x.
Saying ``SunOS software doesn't work on Solaris'' is obviously false, because Solaris 1.0 was SunOS 4.1.1. So your use of the word ``Solaris'' is wrong.
No, ``Solaris includes SunOS'' is true without qualification. You should have said SunOS 5 when you meant SunOS 5, not Solaris (which can mean either SunOS 4 or 5, despite what the ambiguous common usage is.)
Not that anyone really cares... ``Solaris'' means ``SYSV'' in most people's minds just like ``hacking'' means ``breaking into computers.''
Re:application framework forking (Score:2)
This is true.
This does not follow. If the goal is to have only GPLed software and no non-GPLed software, then the GPL is necessary.
You assume that to have lots of free software, there must be no non-free software, which is obviously not the case. If 10% of the world's software is free and 90% is non-free, then increasing the amount of software in the world would increase both the amount of free and non-free software (assuming the ratio held.) This is called ``growing the market'' and it benefits all who participate in that market, even if they are direct competitors.
So there are cases where non-free software can help the goal of having lots of free software. In fact, given that very nearly every piece of free software is a clone of some piece of non-free software, one could argue that non-free software is a necessary catalyst for free software! Would GIMP be as good today if they didn't have Photoshop to mine for ideas? Much though I respect the GIMP developers, I most sincerely doubt it.
Also note that the GPL is not the only license that is considered a ``free software'' license, even by GPL advocates. Don't say ``free'' when you really mean ``GPLed.''
Indignant Communist OS (ICO) (Score:2)
More communist programmers, more indignant code, and less code tyops!
We have forked the code, and are in the process of patching in more rude comments! Yeah!
On top of that, we have an exclusive (but GPLed) program for finding and fixing common tyops in the code. This is already hard at work on our copy of the kernel for ICO.
On top of that, to make sure no-one can steal our One True OS, we are making it completely incompatible! HAhah! Red Hat, your pathetic efforts to make your distro un-de-forkable are NOTHING compared to our pissing on Linux(r) standards! Want the source to the kernel? No problem, go look in
Not to mention our amazing new package format -- but we won't even tell you about it, as you'll never be able to use it (and you thought RPM's false dependency messages were evil).
As a sign of our affection for our users, we have posted this to Slashdot. We wait for you to abandon Linux(r) for ICO!
Btw, we are IPOing in a few months. As we have the assorted quota of buzzwords (slashdot, GPL, Linux(r), fork, Red Hat, kernel), we expect to have a strong opening price (around 100$ a share).
IPO keyword highlighting via Silly_Linux_IPO_Bot v1.3
---
Re:application framework forking (Score:3)
It's so much fun to be a teenager.
List of errors in the article (Score:2)
1. Unix.
Basically ok, as a kids version of history.
2. BSD
MachTen, NeXTStep, SunOS belong to the "Unix" camp above. OpenBSD split from NetBSD, and can thus not be characterized as a splinter project from 386bsd.
3. Emacs
Totally crap. GNU Emacs was always under a "remain free" license. The non-free Emacsen were all written from scratch. Lucid Emacs was not only free, but the code ownership was assigned to the FSF for merging back. However, later Sun contributed code which was still GPL, but *not* assigned to the FSF.
The real story is that RMS wanted to keep total control over Emacs development, and refused to release Emacs 19 when it suited Lucid commercial interests. Today a merge is prevented partly because of the control issue, and partly because of the unassigned Sun code the FSF don't want to use.
4. httpd
I never followed that one.
5. gcc
Crap. *Intel* made a Pentium-optimized port of gcc, and released it as GPL. They did *not* assign the code to the FSF. The version released by Intel did *not* work on any other platforms. So there was never any possibility of a remerge. Intel later paid Cygnus for developing a new Pentium II optimized backend, which *will* be in gcc 3.0.
Egcs was created because gcc 2.8 never seemed to materialize, and in particular the C++ frontend of gcc 2.7.2 was embarrassingly old and buggy compared to what the Cygnus engineers had developed. Also, the Cygnus engineers found it very hard to attract outside developers with the closed developing model of gcc. Egcs was created as an experiment, with RMS's blessing, to demonstrate the efficiency of a more open development model. It was a success.
6. glibc
Ok, except that the split then probably wasn't a mistake. Linux needed a working libc *then*, they couldn't wait until glibc was finished.
Re:application framework forking (Score:2)
Yes. Don't waste it by spending all your time worrying about proprietary software.
--
You forget one point (Score:2)
The three BSDs have different goals, so a merge is not a solution, but the three projects actively follow each other's changes, and merge what interests them into their own tree.
Subscribe to source-changes for any of these projects, and you'll see countless commit messages with (from NetBSD/FreeBSD/OpenBSD).
Re:Differentiation (Score:2)
Since (if?) Mandrake Linux is a branch off of Red Hat Linux, and Red Hat Linux is a branch of Linux, and Linux is a branch of Unix, then the family tree is a huge collection of highly ramified dialects. There are places where these hundred million flowers all blooming with varied scents are a burden, but others where they are a blessing. Let's just not call the violet a rose, because sweet though it is, it's not quite the same aroma.
Scores, sources, performances and piano rolls. (Score:2)
The relationship between sheet music and a musical performance is more like the relationship between program text and the performance of a program, but even that is a bit of a stretch.
The performance of sheet music has as its ingredients the information in the sheet music itself, plus the artist's interpretation, which endows the performance with meaningful nuances.
The performance of a program, likewise, is derived from the program's code and whatever input goes into it, which may include static data, real-time inputs, etc.
Not all musical performance comes from notation; just like not all data is the output of a program.
The relationship between source and binary code is more like the relationship between a musical score and some lower-level representation of the musical score, like a piano roll. A piano roll is just like machine language: it has binary words consisting of punched holes. These trigger a ``control unit'' which drives the hammers that strike strings. Admittedly, it's a very horizontal encoding.
It would be reasonable to distribute piano rolls under a GPL-like license requiring that the sheet music be available to the pianist. But a key motivation for this isn't there: namely the need to modify. Neither form allows for easy modification, and few players are interested in rewriting a piece. There is a pragmatic need to be able to modify programs to suit changing requirements which doesn't apply to expressive works.