
Linux Standard Effort Edges Ahead 138

ErikPeterson writes "The Free Standards Group has released its third version of the Linux Standard Base, an effort to unify some of the workings of the open-source operating system. The LSB is designed to make it easier for those producing higher-level software to support different versions of Linux. Pledges to conform to the requirements of Version 3 are Red Hat, Novell's Suse Linux, Asianux and Debian."
This discussion has been archived. No new comments can be posted.

  • by TripMaster Monkey ( 862126 ) * on Wednesday September 21, 2005 @10:59AM (#13613774)

    Pledges to conform to the requirements of Version 3 are Red Hat, Novell's Suse Linux, Asianux and Debian.

    Four down, only 458 [lwn.net] to go.
  • Standards (Score:4, Insightful)

    by Tomchu ( 789799 ) <tomchu@NoSpAm.tomchu.com> on Wednesday September 21, 2005 @11:06AM (#13613830) Homepage
    A standard is a standard when everyone is using it. Just calling it one doesn't make it so.
    • by Anonymous Coward
      Then, by that definition, neither metric nor imperial are standard systems of measurement. Why? Because not everyone is using them.

      You lose. Have a nice day.
    • Re:Standards (Score:2, Informative)

      by Anonymous Coward
      No, that's a de facto standard. We're talking de jure, as in planned and drafted.
    • do they care? (Score:3, Interesting)

      by newr00tic ( 471568 )
      I mean, do software developers usually keep to this standard, or is it more like them cutting some slack due to numerous distros not adhering to it?

      What if those releasing the libs/support files (Qt/GTK2, etc.) _only_ allowed you to use them for free _if_ the end product adhered to the LSB specs? It'd force developers to be less sloppy, and some form of unity might come sooner than expected.

      Yes, an arrogant idea, but just read it as a "what-if" kind of thing.
    • "A standard is a standard when everyone is using it. Just calling it one doesn't make it so."

      That's why Microsoft says .doc is a standard document format. Just because a majority use something doesn't make it a standard.

    • A standard is a standard when everyone is using it. Just calling it one doesn't make it so

      Tell that to NIST and ANSI.

    • Re:Standards (Score:2, Insightful)

      by __aajwxe560 ( 779189 )
      Whoever flagged you as insightful should be ashamed of themselves. A standard provides a common goal for whomever is interested to work towards. Whether or not the majority work towards the goal does not make it any less of a standard. The short-lived DIVX player from Circuit City was a standard, just as there exist many different standards for DVDs. Whether or not the majority of devices embrace such a standard does not refute the fact that it exists. Many different types of standards start out in the mi
    • Maybe you will learn to enjoy Linux when you turn 15
      • Nah, I very much enjoy my Mini running OS X, laptop running XP, and server running Server 2003. My time has value. I can't be wasting it compiling kernels and editing config files all day long.
        • Re:Standards (Score:1, Redundant)

          by sloanster ( 213766 )
          LOL, I haven't compiled a kernel in years. I've been using a vendor distro, and as it turns out, they supply a kernel.

          Apparently my time has even greater value than yours. I don't use ms windoze because I can't be bothered futzing around with constant virus updates, spyware removal, "hot fixes", reboots etc. With Linux, I just get the job done, and don't have to spend any time or effort trying to keep the whole thing afloat as would be the case with ms windoze.

          Agreed, the mac mini is nice. I bought a mini a f
  • Package management (Score:5, Insightful)

    by Anonymous Coward on Wednesday September 21, 2005 @11:08AM (#13613844)
    What the LSB should, imo, do is make autopackage the format of choice for installing applications, and then have the default package manager (rpm, deb, and so on) download the dependent libs and keep the base system up to date. That way, everyone's happy. The newbies get their easy program installers and the seasoned veterans get their apt. But, alas, it's apparently not to be.
    • by Benanov ( 583592 )
      Autopackage isn't quite integrated with the default package manager yet (per their website, http://autopackage.org/ [autopackage.org]) Once that happens I can see that being very likely. ;)
    • How is autopackage any better than rpm (which is what it uses)? LSB packages don't have dependencies, that's the whole point of the LSB, so what're the advantages, and do they outweigh the difficulties of switching?
      • Autopackage generates distribution-specific packages from generic autopackage'd sources. All you need to do is add support for your distribution/package manager to autopackage in order to be able to install any autopackage'd source on your machine as a native package - it is automatically generated by autopackage.
        • Autopackage requires running vendor supplied binaries as root. It is a *very* *very* *bad* idea.
          • So what? RPM and Debian packages both run their pre- and post-installation scripts as root. If you don't trust the software provider, you are in hot water no matter where you stand.
            • > RPM and Debian packages both run their pre- and post-installation scripts as root.

              Yeah, and the LSB shouldn't standardise on them either (except if they specify that pre- and post-installation scripts do not have to be supported).

              > If you don't trust the software provider, you are in hot water no matter where you stand.

              Not true at all. You are only in hot water if you run the software as root. Most application software *never* gets run as root, so why should the vender be trusted with RootPower for th
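For concreteness, a minimal sketch of the scriptlets this subthread is arguing about (package name and paths hypothetical): rpm executes %post and %preun with root privileges during install and removal, which is the trust problem being described.

```spec
# Hypothetical spec fragment, for illustration only.
Name:    example-app
Version: 1.0
Release: 1
Summary: Demonstrates install-time scriptlets
License: GPL

%description
Shows that packaging scriptlets run as root.

%post
# rpm runs this as root right after the payload is installed
echo "installed by $(id -un)" >> /var/log/example-app.log

%preun
# ...and this as root just before the files are removed
rm -f /var/log/example-app.log
```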
              • Not true at all. You are only in hot water if you run the software as root.
                We'd like that to be true, but in reality there probably isn't a single version of Linux that is not vulnerable to a root exploit from an unprivileged process. Anyway, you could use trust to decide whether packages signed by this person/company should be able to execute code as root on your machine. It doesn't have to be all or nothing.
                • We'd like that to be true, but in reality there probably isn't a single version of Linux that is not vulnerable to a root exploit from an unprivileged process.

                  Do you have any evidence for this? The local root in the 2.4 kernel was fixed pretty darn quick, and I haven't heard of any being discovered since then. Anyway, security is about reducing risk to an acceptable level; it's impossible to eliminate it entirely. It's a lot harder to escalate privileges and then do something nasty than just write a program th

                  • There were several root exploits as late as 2.6.10. That is a hell of a lot of vulnerable kernels out there. And as soon as another one is found, it starts all over again.

                    Bluetooth socket exploit [frsirt.com]
                    LSM exploit [iu.edu]
                    uselib() exploit [frsirt.com]
                    stack growth exploit [frsirt.com]

                    • There were several root exploits as late as 2.6.10. That is a hell of a lot of vulnerable kernels out there.

                      Only one of those you list would have affected my system, I'd forgotten about the uselib() one but that's all. I think it's far from given that a typical cracker/script kiddie/virus writer would be able to get root on a typical system, and as soon as the exploit became known it would be fixed, meaning the malware would stop affecting any patched systems which hadn't already been broken into and would

                    • I agree with you regarding the likelihood of such an exploit, but my thesis still stands: the source of the software must be trusted to ensure security. So far, a Linux system secure against hostile user code does not exist because it hasn't been proven to exist, and because there is a sufficient body of evidence to show that no amount of claiming Linux is secure will make it secure. In the realm of installing software, this means you are forced to trust the program itself due to privilege escalation, jus
                    • So far, a Linux system secure against hostile user code does not exist because it hasn't been proven to exist, and because there is a sufficient body of evidence to show that no amount of claiming Linux is secure will make it secure.

                      You're treating security as a binary thing here. Yes, a Linux system isn't absolutely secure when faced with hostile user code, but it is more secure than when faced with hostile root code.

                      In the realm of installing software, this means you are forced to trust the program itse

                    • We're just going to have to disagree on the security issue. If I know most systems running my software will have a Linux 2.6 kernel and that a vulnerability exists that has only been patched recently, I am going to be able to get root on most of them no matter who runs the script. This scenario has been constant throughout the history of Linux. Just at the time a previous exploit was finally becoming ancient history, a new vulnerability is found, so the window of opportunity for privilege escalation expl
                • > We'd like that to be true, but in reality there probably isn't a single version of Linux that is not vulnerable to a root exploit from an unprivileged process.

                  There's a big difference between a system that's vulnerable due to a bug, and one that's vulnerable by design. Primarily, the one that's vulnerable by design has to stay vulnerable (since everyday stuff breaks if you fix it), the one with bugs can be fixed and then attackers have to find a new vector.
        • All you need to do is add support for your distribution/package manager to autopackage in order to be able to install any autopackage'd source on your machine as a native package - it is automatically generated by autopackage.

          And all you need to do is make your system LSB compliant and then you can install any LSB-compliant package on it. And all the big vendors seem to be moving toward compliance. How does autopackage handle things like different names and locations for libraries?

          • And all you need to do is make your system LSB compliant and then you can install any LSB-compliant package on it.

            No. A package needs to be built against the same versions of shared libraries that are on the target system or breakage will occur. This is what leads to vendors having to ship 20 different RPMs for a single application.

            How does autopackage handle things like different names and locations for libraries?

            Since the distribution provides autopackage itself, I would presume that the distribution pa

            • No. A package needs to be built against the same versions of shared libraries that are on the target system or breakage will occur. This is what leads to vendors having to ship 20 different RPMs for a single application.

              And that's why the LSB standardises on which shared libraries are available and where they will be located, as well as what changes can occur to them. If the distributions are LSB-compliant, vendors don't have to worry about it.

              Since the distribution provides autopackage itself, I would pr

              • And that's why the LSB standardises on which shared libraries are available and where they will be located, as well as what changes can occur to them.

                What? Maybe for core libraries like libc (see section II and III), but it most certainly does not help you with anything further than core libraries. Example: one system has a c102 version of QT3 and another has a version compiled with an older C++ compiler. Yes, the vendor does have to worry about this and ship a separate application for the two distribut

                • Yes, the vendor does have to worry about this and ship a separate application for the two distributions.

                  Read what you quoted just after this: "If an application cannot limit itself to the interfaces of the libraries previously listed, then, to minimize runtime errors, the application must either bundle the nonspecified library as part of the application, or it must statically link the library to the application." If they build their package the LSB way, it will work on any LSB distro. That's the whole point.

                  A
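A self-contained sketch of the second option in the quoted LSB text, statically linking a nonspecified library. The library here is a made-up stub, purely to show the mechanics; real builds would point `-L`/`-l` at the vendor's actual archive:

```shell
# Build a stub "nonspecified" library as a static archive, then link it
# into the app so the binary carries no runtime dependency on it.
cat > extra.c <<'EOF'
int extra_answer(void) { return 42; }
EOF
cat > myapp.c <<'EOF'
#include <stdio.h>
int extra_answer(void);
int main(void) { printf("%d\n", extra_answer()); return 0; }
EOF
gcc -c extra.c && ar rcs libextra.a extra.o   # static archive, not a .so
gcc -c myapp.c && gcc -o myapp myapp.o -L. -lextra
./myapp        # prints 42; ldd shows no libextra dependency
```

With only the `.a` present, the linker folds the library's code into the binary, which is exactly why an LSB-built package stops caring which version of that library a given distro ships.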

    • I agree that it should be the applications which have to conform, not the distros.

      For starters, applications should not assume a certain directory layout, and should just install to the appropriate places based on the distro. Yes, this means that package managers might need to be slightly smarter than they currently are. But existing source-based installation already works for the majority of packages.

      Case in point: GoboLinux [gobolinux.org]. Now, those guys have introduced a more intuitive filesystem hierarchy that a

  • ... apart from Ingo
  • LSB Website (Score:5, Informative)

    by Grey_14 ( 570901 ) on Wednesday September 21, 2005 @11:11AM (#13613865) Homepage
    Why doesn't the blurb link to the LSB website at all? It's here [linuxbase.org]. Anyways.
  • What's the timing issue mentioned in TFA?
  • WOW (Score:3, Insightful)

    by TampaDeveloper ( 834876 ) on Wednesday September 21, 2005 @11:14AM (#13613890)
    Wow. I'm very happy. LSB might actually make Linux useful for those of us trying to make a living off of software development...
    • Re:WOW (Score:5, Interesting)

      by Miniluv ( 165290 ) on Wednesday September 21, 2005 @11:32AM (#13614052) Homepage
      Don't get your hopes up. While the LSB appears to be a very useful standard, as many have noted there are some real holes, and the test suite is by all accounts utterly useless. Further, the testing isn't tightly controlled, so it appears at least some vendors are doing bizarre things to be compliant without waivers, despite tests that don't run in real-world situations.
      One example that Ulrich Drepper of RedHat pointed out is the thread test, which won't run on an SMP box. The LSB people's response? Run it on a slow uniprocessor. What's the point of this again?
        That's too bad. Hopefully the vendors themselves will realize the importance, even if just to promote their own distributions, and send in additional intellectual muscle. Until then... what? A bad standard is better than no standard? I can't decide. I think a bad standard is better than none. I'm not sure why the world is getting dumber. Maybe it's just the United States... Maybe we've got a food culprit... Time to start keeping better track of those preservatives and sugar substitutes.
        • Re:WOW (Score:5, Interesting)

          by Miniluv ( 165290 ) on Wednesday September 21, 2005 @01:57PM (#13615320) Homepage
          I'm torn on whether a bad standard is actually better than none. I don't think the problems lie so much in the LSBv3 standard itself as in the poor management of the standard by such a young standards body.
          Red Hat is really the company which needs to drive this standard, and while so far they've been doing a lot to do so, it's not really in their best competitive interests. Consider that all the major "enterprise" products that folks would want on Linux (WebSphere, Oracle, WebLogic, etc.) specify Red Hat as their supported distro.
          I think we need to heap scorn on the crappy test suite now, to try and force them to clean up their act before they engender too much negative press and reputation. Once we hit a certain point where the negative reputation builds up, the standard will be doomed forever.
          • Re:WOW (Score:2, Insightful)

            I certainly understand why RedHat would see it as a conflict of interest. But I think they need to start thinking about the viability of the Linux community as a whole. There are two MAJOR competitors just now coming up to speed: OpenSolaris and OS X for x86. Then there's OpenBSD. OpenBSD moves slower than Linux because nobody likes to sit around waiting for quality, but in the Unix race, quality seems to win in the long haul. So if the leaders in the Linux community can't wrangle things together, I think
          • Re:WOW (Score:3, Interesting)

            by syousef ( 465911 )
            A bad standard will ensure everyone suffers and everyone does things equally poorly.

            I'd rather no standard. People are then not pressured into doing stupid things. Eventually the software people judge to be better prevails and becomes a pseudo-standard. If nothing's significantly better than anything else out there I'd still rather we were forced to deal with the headache of incompatibility than have everyone use a system that is bad and will eventually die.

  • LSB (Score:5, Funny)

    by gustgr ( 695173 ) <rondina AT gmail DOT com> on Wednesday September 21, 2005 @11:17AM (#13613918) Homepage
    The article gets funnier when you read LSB as Least Significant Bit.
  • Release notes (Score:3, Informative)

    by Spy der Mann ( 805235 ) <spydermann.slashdot@gmail . c om> on Wednesday September 21, 2005 @11:17AM (#13613921) Homepage Journal
    Here are the LSB 3.0 Release notes [nyud.net]. I'd appreciate it if somebody explained whether there is anything significant or revolutionary in it. Thank you.
  • Are these four distributions releasing this as a standard, or have only these four agreed to follow it?

    Because in the former case, it will never be a standard in the first place. And in the latter, well, how are we going to ensure it is followed by others too?

    • Re:Whose standard? (Score:3, Insightful)

      by LnxAddct ( 679316 )
      Well, considering that Red Hat, Fedora, Novell, and Debian together account for about 3.5 million servers according to Netcraft (as of last March), those are the only players that really matter. Red Hat has about 1.8 million, Fedora 400,000, Novell 400,000, and Debian around 800,000. I haven't read the report in a while, but at the time Fedora was expanding at 120% every few months, whereas the next fastest distro (I think it was Gentoo with 60,000) was growing at 40% over the same time, and all the other distros
    • Re:Whose standard? (Score:2, Informative)

      by Anonymous Coward
      I think this page [opengroup.org] might be of interest to you.
  • Like it or not ... (Score:4, Insightful)

    by b3x ( 586838 ) on Wednesday September 21, 2005 @11:18AM (#13613927) Journal
    This sort of thing is a necessity. With the variety of distros, each having its own idea of where things should be, there is a lot of unnecessary confusion. Regardless of whether the confusion is legitimate or slightly hyped by bullet points in paid research docs, it exists.
    • Why does Linux even need its own set of standards? Aren't the standards of UNIX in general good enough? Simply by developing carefully and smartly, I've written programs that work fine on Linux, BSD, and several other flavors of UNIX. And in some cases, people have reported they work even on Windows (which I had made no effort to support). I think maybe the biggest area of confusion is software developers who just don't know how to write portable code.

  • I'm impressed (Score:4, Insightful)

    by beforewisdom ( 729725 ) on Wednesday September 21, 2005 @11:35AM (#13614072)
    I'm impressed that Red Hat has signed on.

    With two other of the more established distros also on board, this standard has a chance.
  • 64bit status? (Score:5, Interesting)

    by gr8_phk ( 621180 ) on Wednesday September 21, 2005 @11:43AM (#13614138)
    I know the debian port for AMD64 decided to make the 64bit arch a first class citizen. i.e. there is a /lib directory. Fedora OTOH uses a /lib64 directory. This is like saying there is something special about 64bit libraries on a 64bit arch. Does the new LSB specify how this should be handled? Who will have to change, debian or Red Hat? I run Fedora and am disappointed to have a /lib64 full of stuff and /lib that is almost empty. Thoughts on this?
    • Re:64bit status? (Score:3, Interesting)

      by MobyDisk ( 75490 )
      rpm supports multiple architectures out of the box and knows how to install them to the proper location. Apt does not. This is actually very frustrating because, as a Fedora user, I prefer using apt to yum. But for Debian users, you aren't supposed to even be able to have 64-bit and 32-bit binaries co-existing on one system.

      This article on FC4 [lwn.net] had some interesting information.
      • But for Debian users, you aren't supposed to even be able to have 64-bit and 32-bit binaries co-existing on one system.
        The 64 bit Sparc [debian.org] port must be an example of my overactive imagination then... it's been running 32 bit userland with specific 64 bit programs for quite some time.

        • Debian SPARC is not a general-purpose 64-bit/32-bit mixed architecture. It is a 64-bit kernel that runs 32-bit applications. There is some limited, incomplete support for 64-bit applications. You can't just install a 64-bit package and a 32-bit package side by side on that system and expect it to work.

          Fedora Core supports having 32-bit applications/libraries and 64-bit applications/libraries running side by side simultaneously. The packaging system and the linker know the x86_64 packages from the i386 pac

    • Where libraries live on a Linux system should not be handled by each individual application. Just move all the /lib64 ones to /lib, and remove /lib64 from /etc/ld.so.conf. As far as I know everything should still work, unless Fedora does something queer with libraries.

      P.S. don't forget to run ldconfig ;)
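A hedged sketch of the steps this comment describes. Actually consolidating /lib64 into /lib can break a multilib system, so this version only prints what would happen rather than doing it; the filenames are hypothetical:

```shell
# plan_moves: print, without executing, the commands that would fold
# the given /lib64 files into /lib, per the parent comment's recipe.
plan_moves() {
    for f in "$@"; do
        echo "mv $f /lib/$(basename "$f")"
    done
    echo "ldconfig"   # the P.S. step: refresh the dynamic linker cache
}

plan_moves /lib64/libfoo.so.1 /lib64/libbar.so.2
```

Removing /lib64 from /etc/ld.so.conf (and then running ldconfig for real) would be the remaining step; on a real Fedora box the /lib64 path is also baked into binaries' ELF interpreter paths, which is one reason this is riskier than it looks.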
    • The LSB officially supports at least ia32-64 and ia64 for 64bit architectures. Here is some info on multiple architectures: http://www.linuxbase.org.nyud.net:8090/LSBWiki/MultiArch [nyud.net].
  • Since many distros are based on Red Hat and Debian, they too will inherit the base system and be very similar to the LSB, if not entirely compliant.
  • Ubuntu (Score:3, Interesting)

    by saterdaies ( 842986 ) on Wednesday September 21, 2005 @11:53AM (#13614241)
    It would be nice if Ubuntu committed to it, seeing as they've become the 10,000-pound gorilla of Linux distributions.

    Note: this isn't anti-Ubuntu. I run Ubuntu.
  • Mandates RPM (Score:3, Insightful)

    by Anonymous Coward on Wednesday September 21, 2005 @11:58AM (#13614292)

    I don't see how a standard that uses RPM as the mandatory package format [freestandards.org] will ever gain enough consensus to be successful.

    What kind of a standard is this anyway? For example:

    Applications are also encouraged to uninstall cleanly.

    Um, that's great. Where's the definition of "cleanly"? Where's the rationale? Where are the implementation notes? This thing reads like a few people got together and jotted down some notes on what they'd like to see. This ain't a specification. Sure, they go into great detail about the format of the RPM file, but that's already an established format that they don't need to explain.

  • 3.4.x still marked as an unstable ebuild last time I checked.
  • by Anonymous Coward
    Now, THIS move by the "Penguin People"? A smart one...

    Why, imo?

    WELL, it will hopefully STOP the "fragmentation at the binary level" that UNIX (the predecessor, currently inferior to Linux in many ways, imo) encountered!

    (Which is, imo, the ONLY real reason we are not all running some form of UNIX on our PCs today instead of Windows, Mac, or Linux.)

    Kudos on such things happening to the "Penguin crowd", because it's needed imo. The Linux 2.6.x core is IMPRESSIVE (most impressive) & KDE rocks to
  • ... but they can stifle innovation. I heard somewhere that KDE had problems with freedesktop.org, because they wanted to do some fairly sensible things, but it wasn't in the standard... Maybe someone else can fill in the details...
  • by sjvn ( 11568 ) <sjvn@v[ ].com ['na1' in gap]> on Wednesday September 21, 2005 @12:17PM (#13614438) Homepage
    And, I might mention, I think it matters A Lot.

    http://www.eweek.com/article2/0,1895,1861272,00.asp [eweek.com]

    From where I sit, Red Hat's Drepper

    http://linux.slashdot.org/article.pl?sid=05/09/19/1128201 [slashdot.org]

    wants to throw the baby of open standardization out with the bathwater of LSB standardization testing, which could still stand a lot of improvement.

    Without open standardization, Linux could go the way of Intel Unix--shudder!

    Steven
  • ... it seems like most experienced Linux users prefer to get source distributions, right?

    After migrating from Slackware -> Caldera -> SuSE, I am now a happy Ubuntu user.

    Really, except for a few developer tools, just about everything that I need is in the main distribution, and can be trivially installed.

    I actually have a small point here: for developers and experienced Linux users, running ./configure ; make ; sudo make install is no problem, so the exact placement of deployed application files seems
