Squaring the Open Source/Open Standards Circle

Andy Updegrove writes "Before there was Linux, before there was open source, there was of course (and still is) an operating system called Unix that was robust, stable and widely admired. It was also available under license to anyone who wanted to use it, and partly for that reason many variants grew up and lost interoperability - and the Unix wars began. Those wars helped Microsoft displace Unix with Windows NT, which steadily gained market share until Linux, a Unix clone, in turn began to supplant NT. Unfortunately, one of the very things that makes Linux powerful also makes it vulnerable to the same type of fragmentation that helped to doom Unix - the open source licenses under which Linux distributions are created and made available. Happily, there is a remedy to avoid the end that befell Unix, and that remedy is open standards - specifically, the Linux Standard Base (LSB). The LSB is now an ISO/IEC standard, and was created by the Free Standards Group. In a recent interview, the FSG's Executive Director, Jim Zemlin, and CTO, Ian Murdock, creator of Debian GNU/Linux, tell how the FSG works collaboratively with the open source community to support the continued progress of Linux and other key open source software, and to ensure that end users do not suffer the same type of lock-in that traps licensees of proprietary software products."

  • Fear of fork. (Score:5, Interesting)

    by killjoe ( 766577 ) on Tuesday May 30, 2006 @06:53AM (#15427358)
    The article summary is a bit of flamebait. In order for a product to fork, two forces must be at work:
    1) Licensing that allows a fork.
    2) Frustrated users who feel like they can't shape the future of the product via existing channels.

    This is why there are at least three forks of Java and none of Perl. I suppose one could argue that the forks of Java are not true forks but attempts at re-engineering, but the end result is the same.

    Will Linux fork like Unix? Well, in a way it already has: there is a real-time kernel, there are different kernels for devices, etc., but not in the way the article talks about it. The article isn't talking about forks per se; it's talking about distros. The author seems to have missed the point that the Unix forks were actual forks of the kernel, not "just" distros.

    Weird article really. Kind of pointless too.
  • by wandm ( 969392 ) on Tuesday May 30, 2006 @06:56AM (#15427368)
    I'd like to support the non-fragmentation of Linux - as I guess many would. But the LSB 3.0 certified list at http://freestandards.org/en/Products [freestandards.org] shows just Red Hat, SUSE and Asianux. Are these all the choices I have?

    Could someone please explain this to me?
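
    (For what it's worth, most distros ship an lsb_release tool that reports which LSB version the system claims to conform to, whether or not the distro appears on the formal certification list - something like the lines below, though the exact output varies from distro to distro.)

    lsb_release -a    # prints the claimed LSB version plus distributor ID, release and codename
    lsb_release -v    # just the LSB module/version string, e.g. something like core-3.0-ia32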
  • Unix never died (Score:4, Interesting)

    by Steeltoe ( 98226 ) on Tuesday May 30, 2006 @07:53AM (#15427499) Homepage
    From the submission:
    Unfortunately, one of the very things that makes Linux powerful also makes it vulnerable to the same type of fragmentation that helped to doom Unix - the open source licenses under which Linux distributions are created and made available.

    I believe fragmentation had very little to do with the doom of UNIX. My top three reasons are:

    1) Price of purchase
    2) Expensive/hard to administer
    3) Stagnation in development

    Users want the cheapest, easiest and most feature-filled solution. It's pretty straightforward actually, and a Personal Computer with Windows was the first to fill the niche, if you leave out Apple.

    Apple lost because they wanted a monopoly on _both_ hardware and software, while Microsoft only wanted to control the OS (in the beginning). More importantly, Microsoft was better at hyping/marketing their next generation, something that Apple has learned to do better in recent years.

    UNIX and IBM lost because they failed to scale down to personal computers, which is where the commoditization of computing happened in the '90s. IBM and the other mainframe dealers refused to understand the personal computer (too much vested in big contracts), so the clones took over along with Microsoft Windows while the dinosaurs waited it out.

    Without the IBM PC clone, the computing world would probably look very different today. In those days it was very attractive to be able to upgrade the PC, exchange parts and use commodity hardware for the whole rig. Many tasks that rented expensive CPU time on UNIX mainframes were moved over to PCs during the '90s.

    Fragmentation, no doubt, can be very bad for development, but it is also a boon, since it leaves developers free to explore different avenues regardless of politics and limitations. I think once a system becomes popular enough, like "Linux", the demand for standardization will pull it together. Hey, even the BSDs keep compatibility with "Linux".

    What killed UNIX was lack of creativity, lack of focus, lack of commoditization, too much control and, maybe most importantly, arbitrarily high prices just to milk customers.

    Linux may have killed off UNIX (oh, the irony), but NT had been beating the crap out of it for many years. Linux and UNIX never actually competed on even terms, because UNIX had already been pretty much abandoned for a long time - its owners only keeping it around to milk the last drops.

    My pet peeve with bash and the GNU utilities is the lack of standards, and the lack of further development of the command line. In that regard, I hope "Linux" can progress without having to be beaten by Microsoft releasing a better command line.

    POSIX is really an antique joke compared to what could be possible via the command line. So the trap "Linux" might fall into is the same as for UNIX: stagnation, because most users drool over eye candy and not the actual back-end implementation. However, maybe the cost of switching command lines is not worth the gain; time will tell.

  • by Dan Ost ( 415913 ) on Tuesday May 30, 2006 @08:05AM (#15427534)
    However, if in a specific instance the Windows method is better, shouldn't it then be preferable?

    Only if it can be added in such a way that it has zero impact on those of us who are not interested in it. Nothing pisses me off more than when I have to relearn how to configure fundamental subsystems because they've been changed to make things easier for users of software that I don't use.

    Out of curiosity, why didn't you show your girlfriend the find command? If that wouldn't have increased her geek cred, then nothing would have. Also, isn't it trivial to make an rpm give you the installed manifest of its contents?
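
    (For the record, both are one-liners - the file pattern and package name below are just stand-ins:)

    find /home -iname '*resume*'            # find files by name, case-insensitively, under a directory
    rpm -ql somepackage                     # list every file an installed package put on disk
    rpm -qlp somepackage-1.0-1.i386.rpm     # same thing, but for a not-yet-installed .rpm file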
  • by Dan Ost ( 415913 ) on Tuesday May 30, 2006 @08:16AM (#15427552)
    Support standardization where it makes sense.

    For distros that have a regular release cycle, something like the LSB makes sense. For distros that are moving targets by design (Gentoo, Arch, Debian), any standard that specifies specific versions of libraries and compilers would reduce the value of those distros, so they're better off ignoring those parts of the standard (and thus will never be certified).
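
    (As a rough illustration of what "specific versions" means in practice: if I understand it right, an LSB-conforming binary is linked against the LSB program interpreter and a fixed set of versioned libraries, which you can inspect with the ordinary ELF tools. The ./someapp below is just a placeholder.)

    readelf -l ./someapp | grep interpreter   # an LSB 3.x ia32 binary requests /lib/ld-lsb.so.3 rather than the native ld-linux.so.2
    readelf -d ./someapp | grep NEEDED        # the shared libraries it expects at run time (objdump -p shows the symbol versions too)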
  • by midicase ( 902333 ) on Tuesday May 30, 2006 @08:34AM (#15427616)
    The overlap of functionality between NT and Linux is, really, quite small. There aren't many cases for which Linux is a good solution, where NT could also be (and vice versa).

    That doesn't matter to the manager who wants a particular OS deployed for a particular solution. A few years ago I migrated a Netware printing system that handled tens of thousands of documents per day to an NT solution. It ended up requiring 16 NT servers to replace 3 Netware servers. Of course NT was not the correct solution, but management insisted on making it work.

    Many times developers are limited to a particular OS, particularly in enterprise systems where there is only "one approved platform".
  • by ortholattice ( 175065 ) on Tuesday May 30, 2006 @10:02AM (#15427960)
    For distros that are moving targets by design (Gentoo, Arch, Debian)...

    Perhaps it's a matter of opinion, but I'd hardly call Debian stable (plus security updates, of course) a "moving target". Isn't the real reason that the LSB requires RPM? (Not wanting to start a flame war, but the greatest benefit I found when I switched from Red Hat to Debian was no longer having to use RPM. That's just my personal preference, I guess.) In fact, a search leads us to "Red Hat package manager for LSB package building" [debian.org], which says: "This is a version of rpm built to create rpm v3 packages as used in the Linux Standards Base. You should need this package only if you are developing LSB packages; you do not need it to install or use LSB packages on Debian."
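
    (In practice, a Debian user who just wants to install an LSB-style .rpm - rather than build one - usually goes through the lsb compatibility package or alien. The package file name below is made up.)

    apt-get install alien                               # tool that converts .rpm packages into .deb packages
    alien --to-deb --scripts someapp-1.0-1.i386.rpm     # produces something like someapp_1.0-2_i386.deb
    dpkg -i someapp_1.0-2_i386.deb                      # install the converted package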

  • by HighOrbit ( 631451 ) * on Tuesday May 30, 2006 @10:39AM (#15428159)
    You shouldn't see different binaries for different distros. A Linux app should be a Linux app, period.

    Amen! Not only is it frustrating figuring out where all the config files are, but having an app fail to install or work because of dependency or library versions is also frustrating. I remember having fits trying to install Oracle 8i (circa 1999-2000) and having the install fail because the linker was choking on libc version incompatibilities and LD_ASSUME_KERNEL settings. Of course, all the problems could be resolved by tinkering and patching, but that turns a 30-minute install into a 2-hour install. Installs should be a clean process, not a tinkerthon.

    I saw another post here that mentioned apgcc [autopackage.org]. This was the first I'd heard of it, but from the description on its website it looks like a good idea. Basically, it aims to enforce lowest-common-denominator libraries and static linking. I like the idea of a fat binary. I also like the idea of self-contained app directories (I've never owned a Mac, but I've been led to believe that is the way it works). Disk space is cheap nowadays. I don't see why everything needs to be dynamically linked.

    Now let me digress into the config file issue. This seems to be a favorite flame topic, so let me don my asbestos suit and jump in. IIRC, the canonical UNIX practice is to install everything not part of the core OS into /usr/local, including the config files. Most Linux distributions seem to throw everything including the kitchen sink into /usr/bin and then put the config files in /etc. My own personal feeling on the matter is that nothing but core OS components should go into /usr/bin and /etc (and no, GNOME and AfterStep are not core OS components). Everything else (including config files) should go into /usr/local or /opt.

    If Apple can do universal binaries across architectures, you'd think all of us Linux whiz kids could get a cross-distribution (and cross-version) system of binaries working. Of course, Apple has unitary leadership and direction, instead of "herding piss-ants".
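
    (To see what the dynamic-linking tangle looks like for any given program - the path below is just an example - the standard tools spell it out; a statically linked fat binary would have nothing left to resolve at install time.)

    ldd /usr/bin/someapp     # lists every shared library the binary needs and which file currently satisfies it
    file /usr/bin/someapp    # reports, among other things, whether it is dynamically or statically linked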
  • by KWTm ( 808824 ) on Tuesday May 30, 2006 @04:04PM (#15430813) Journal
    This is one thing I've been trying to figure out.

    So different distros will put their files in different places. (Actually, I can't believe programs have the library locations hard-coded in, but whatever; I'll accept that the alternatives have some disadvantages.) So Ubuntu will store its WonderfulLibrary.so in /lib/UbuntuLib/, and Slackware will put it in /var/log/opt/etc/usr/lib/. So why can't we just massively symlink the bloody directories together? Someone could create a script file with two hundred and ninety-six lines of:

    ln -s /var/log/opt/etc/usr/lib /lib/UbuntuLib
    ln -s /var/log/opt/etc/usr/lib /usr/lib/ObscureSuSEdirectory
    ln -s /var/log/opt/etc/usr/lib /some/other/RedHat/lib/location
    ... [etc, and vice-versa]

    or whatever, and just make sure every possible library directory is symlinked to every other library directory, and we'll be done! It sounds like this way a distro can meet the file-location requirements of the Linux Standard Base and still be backward-compatible. And we could actually have packages from one distro installing on another! Wouldn't that be great?

    It seems so simple and so logical that I must be missing something. Someone please tell me why we're not taking advantage of that epitome of what makes the POSIX filesystems better than the Microsoft filesystems, the symlink?
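
    (If anyone wants to try it, the two-hundred-and-ninety-six-line version collapses into a loop - every directory here is made up for illustration, and it assumes the alternate locations don't already exist as real directories:)

    # point each distro-specific library path at the one real directory (all paths hypothetical)
    for dir in /lib/UbuntuLib /usr/lib/ObscureSuSEdirectory /some/other/RedHat/lib/location; do
        ln -s /var/log/opt/etc/usr/lib "$dir"
    done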
  • by Lemmy Caution ( 8378 ) on Tuesday May 30, 2006 @05:23PM (#15431266) Homepage
    Your story, and others like it, ring far too true. For my part, I find Linux and other community-made open-source OSes suitable when I have a stable list of things I expect from the system: file sharing, print sharing, routing, firewall services, web services, etc.

    When I want a computer as a flexible environment, however, in which I will install and uninstall games, media players, various productivity applications that I may be trying out, and the like, I just can't imagine going back to Linux. In the 4+ years I tried to use a Linux desktop, I went through more blind alleys, false promises, aborted projects, package inconsistencies, etc. than I care to recount. When I relegated my Linux system to serving as a general home-office server and moved to a (hardened) Windows desktop, I actually regretted the time I had spent trying to make Linux work for me as a desktop system. Granted, by the time I moved to Windows, W2K was out and XP was around the corner, so I probably missed Windows' most painful years.

    Now, I prefer my Linux headless, or tiny (as in my iRiver).
