
 




What is UNIX, Anyway? 218

Lieutenant writes "Technology professionals have loosely used the term "UNIX" since the first person had to explain the difference between the Berkeley and AT&T flavors, so it's not surprising to find as many UNIX standards as there are versions of the operating system. Peter Seebach wades through the wellspring of UNIX standards and sorts them out for you, concluding that the rumors of the death of UNIX are (as usual) greatly exaggerated."
This discussion has been archived. No new comments can be posted.

What is UNIX, Anyway?

  • Not a bad article. (Score:3, Interesting)

    by Anti-Trend ( 857000 ) on Sunday March 12, 2006 @04:46AM (#14901539) Homepage Journal
    This editorial definitely seems to be for marketing purposes, being both hosted by IBM and directly confrontational about Microsoft. Still, interesting enough article; it's always tough to be brief and to the point about such a complicated subject. I especially like the author's point about the liquidity of the Microsoft "standard" API which is so touted as a counterpoint to *nix implementation -- DOS, Win16, OS/2, Win32, WinNT, WinXP, .NET, Vista... versus POSIX. Yeah, he's right, it sounds pretty ridiculous when you put it that way. That being said, the article's pretty light on the details. For those rare individuals interested in reading more than TFA, here's a little more info on UNIX [wikipedia.org] and the POSIX standard. [wikipedia.org]
    • by (Score:1) ( 181164 ) on Sunday March 12, 2006 @04:48AM (#14901544)
      For the history of Unix (timeline), read this one:
      http://www.levenez.com/unix/ [levenez.com]
    • by seebs ( 15766 ) on Sunday March 12, 2006 @05:56AM (#14901678) Homepage
      Hosted by IBM just because it's a regular column on standardization. In all the years I've written for IBM, the only edit they've ever made on such grounds is that they changed the word "Belkin" to the name "Company X" in my article about Belkin's packet-hijacking routers. Oh, wait; I think they disliked a couple of comments I made about Verisign once. Mostly, if there's no obvious liability, they don't get involved.
    • by ROOK*CA ( 703602 ) *
      I especially like the author's point about the liquidity of the Microsoft "standard" API which is so touted as a counterpoint to *nix implementation -- DOS, Win16, OS/2, Win32, WinNT, WinXP, .NET, Vista... versus POSIX.

Good point, I think the most distinguishing factor is marketing; Microsoft has consistently been able to map out a clear transition from API to API (as well as inserting a dash of FUD when required), even if customers and/or ISVs knew there was going to be transition pain, Microsoft
      • by drsmithy ( 35869 )
        Good point, [...]

        Not really.

        Firstly, because that list is artificially inflated ("Win32, WinNT, WinXP" are all the same thing - Win32).

        Secondly, because the unix side is just as bad, if you compare apples to apples (ie: throw X and associated libs into the mix - how many widget libraries can you name ?).

        Thirdly, because binary compatibility on Windows is very well maintained. It's not uncommon for those twenty year old DOS and Win16 binaries to run unmodified on Windows XP or 2003 (and probably Vista).

        • by peragrin ( 659227 )
          Um what are you smoking?

16-bit windows and dos apps only work if they are non-trivial. I can't upgrade our work computers out of Windows 95 and Windows 98 because we depend on non-trivial software that doesn't run on the NT line. So MSFT is just as bad as everyone else. Try running some of the early versions of Word in XP or win2k3: it doesn't work, and the modern versions can't read the older formats. Been there, tried that, considered it lost.

          Most linux problems aren't binaries but library and their va
          • by sasdrtx ( 914842 )
            I presume you mean "...apps only work if they are trivial".

I hate Microsoft as much as anyone, but if you have apps that run on Win95 but not on NT-based systems, it's because they violate some fundamental rules of Windows' APIs. Win95/98 were notoriously lax about enforcing those rules, which was certainly a big factor in the unreliability of Win95/98.

• Yep, the rule they violate is that they require a Novell NetWare server to run, and MSFT kept breaking and changing the networking API so much that Novell couldn't keep up with the ever-shifting target.

              It's why I give the Samba guys credit. MSFT has purposely broken code in their networking to slow down others from using it outside of windows. Listen to the Samba guys talk about it sometime. Even with XP and win2k3 msft is still at it.

      • The biggest difference between Microsoft's APIs and others, I think, is that at any given time one and only one API is considered the canonical one in which to write all future software. If I want to sit down today and write software to be released in a year or so, Microsoft will tell me what API to write to unequivocally. With "UNIX", although it's getting better, there have been competing APIs for threads, networking, GUIs, configuration, user handling, etc., often co-existing and sharing the "this is t
    • by drsmithy ( 35869 )
      I especially like the author's point about the liquidity of the Microsoft "standard" API which is so touted as a counterpoint to *nix implementation -- DOS, Win16, OS/2, Win32, WinNT, WinXP, .NET, Vista... versus POSIX. Yeah, he's right, it sounds pretty ridiculous when you put it that way.

      Particularly since the "Win32, WinNT, WinXP" part of it are all the same thing, so the real progression is:

      DOS, Win16, OS/2, Win32, .NET.

      • by Anonymous Coward on Sunday March 12, 2006 @11:25AM (#14902399)
        Clearly you were never forced to program anything to the Win32 API.

        There's a common subset of functions available on both 9x and NT flavors of Windows. (With different bugs and sometimes different supported flags, different restrictions on use, etc). Then there's a bunch of functions that only work on NT-based flavors of Windows, not 9x-based. And the opposite is also true. Then XP came along, then Server 2003, each adding a bunch of new stuff to the API that Microsoft (unfortunately) did not go back and also add to the earlier versions of Windows.

        There really are at least 3 distinct flavors of the Win32 API, and you have to be careful what functions you use if you want your program to run on all three of them.

        For an example, check out the documentation for the CreateWindowEx function [microsoft.com].

        If you scroll to the bottom, they describe several of the differences in the behaviour of this function on different versions of Windows ranging from 95 to XP.

        This situation could have been avoided if Microsoft had had the foresight to separate the Win32 API implementation from the rest of the OS so it could be upgraded independently.
        • by Quantam ( 870027 ) on Sunday March 12, 2006 @02:54PM (#14903103) Homepage
          Then there's a bunch of functions that only work on NT-based flavors of Windows, not 9x-based. And the opposite is also true. Then XP came along, then Server 2003, each adding a bunch of new stuff to the API that Microsoft (unfortunately) did not go back and also add to the earlier versions of Windows.

I can only think of one feature that's available on Windows 9x but not NT, which isn't part of the Internet Explorer toolkit, and it's a very rarely used feature (although it's just the kind of thing I use). Almost universally, the API on NT is a superset of that available on 9x; though it is true that occasionally some small implementation details differ between the two.

          Then XP came along, then Server 2003, each adding a bunch of new stuff to the API that Microsoft (unfortunately) did not go back and also add to the earlier versions of Windows.

          Correct. The Windows API evolves over time, adding new and often useful features to new versions, often involving new features of the kernel. In nearly all cases these changes are backwards compatible.

          There really are at least 3 distinct flavors of the Win32 API, and you have to be careful what functions you use if you want your program to run on all three of them.

          Windows 9x, Windows NT, and..? Well, I suppose you could call the ANSI/Unicode versions different, even though the differences between the implementations are usually very clear-cut (i.e. path strings are always handled in certain different ways).

          For an example, check out the documentation for the CreateWindowEx function.

          If you scroll to the bottom, they describe several of the differences in the behaviour of this function on different versions of Windows ranging from 95 to XP.


          That serves as an excellent demonstration of what I've said: the differences are usually minor enough to not be a concern, and that new features are added in a backward compatible way. Take a look: one of those differences refers to a feature that was added in XP (WS_EX_COMPOSITED), another refers to a kernel limitation of 9x, and the third refers to a feature that was added in 2000. Of those, the only "serious" one is the 9x kernel limitation, and even then it's not particularly important.
        • by Anonymous Coward
          Clearly you were never forced to program anything to the Win32 API.

          Have you ever programmed on Unix/BSD/Linux systems? When writing non-trivial applications, there are substantial differences among them. Why do you think GNU autoconf was created?

Having programmed on both, I can say that Win32 is, and always has been, much more uniform across variants of the system than Unix/BSD/Linux. That doesn't mean it's better, or more consistent on any given implementation. I generally prefer the Unix/BSD/Linux API
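The differences autoconf smooths over typically surface in the source as conditional compilation on feature macros that the configure script defines. A hedged sketch of the usual shape: HAVE_STRLCPY is the conventional macro name an autoconf check would generate for strlcpy(), a BSD extension historically absent from glibc, and the fallback below is a simplified stand-in, not any project's actual code:

```c
/* Portability shim of the kind autoconf-based projects generate:
 * configure tests whether the platform provides strlcpy() and defines
 * HAVE_STRLCPY if so; otherwise the project supplies a replacement. */
#include <stddef.h>
#include <string.h>

#ifndef HAVE_STRLCPY
/* Simplified fallback: copy at most size-1 bytes, always
 * NUL-terminate, and return the length of src so the caller can
 * detect truncation (return value >= size means truncated). */
size_t strlcpy(char *dst, const char *src, size_t size)
{
    size_t srclen = strlen(src);
    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}
#endif
```

Multiply this by every function that exists on some platforms and not others, and the result is the familiar forest of HAVE_* checks in any portable Unix program.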

      • OLE
        COM
        COM+
        DCOM
There are probably more, but it seems like MS changes APIs every few years.
    • DOS, Win16, OS/2, Win32, WinNT, WinXP, .NET, Vista

Kind of not true also. One of the main drivers behind .NET is that developing in .NET allows applications to move between different Windows OSes as long as you have the .NET runtime. Microsoft wants you to develop for Vista in .NET just like they want you to develop for Windows Server 2003 and XP in .NET.

      Also I really don't think Microsoft ever wanted anyone going to OS/2.
      • There was a time when Microsoft's official answer to "What API should I use" was "OS/2". This was before they found out that Windows 3.1 was commercially successful enough that they didn't care that they couldn't figure out how to engineer it.

        As to .NET, I seem to recall someone complaining about compatibility between versions.
• I'll admit .NET as it is supposed to work and the way it truly does work are somewhat disparate at best. Many of the functions rely on underlying COM objects, so on some OSes they don't work; for example, some things don't exist in 98, so they fail. The core features work on all, and I suspect that you could port something that works in .NET on Win 2000/XP to Vista without error as long as it is completely based on .NET and does not create any objects directly from COM. Like Java is supposed to be run anywhere,
  • it's ok (Score:5, Funny)

    by rayde ( 738949 ) on Sunday March 12, 2006 @04:47AM (#14901542) Homepage
    i don't take any reports of UNIX's death as fact without a Netcraft confirmation.
    • Re:it's ok (Score:3, Funny)

      by narkotix ( 576944 )
      ....no but apparently some whacko reckons aix aint made by ibm and its all part of the big plan...u know the krill files

      The IBM Unix variant, AIX, is rumored to have been developed by space aliens
    • Re:it's ok (Score:4, Funny)

      by NitsujTPU ( 19263 ) on Sunday March 12, 2006 @05:13AM (#14901593)
      Have they said anything about Slashdot dying?

      There are 11 whole posts (so far) on a story where geeks get to stroke their egos by showing their ignorance and calling everything in sight a version of Unix.

      Heck, I didn't even see anybody post a *BSD is Dying troll.
  • by Gopal.V ( 532678 ) on Sunday March 12, 2006 @04:55AM (#14901561) Homepage Journal
    As a programmer, that's what I really consider as Unix - sus v3 [unix.org].

I code for this API and the sources end up being source compatible. But then there are library paths and stuff, which is why even something as homogeneous as Linux was forced to create the LSB [linuxbase.org] standard. The API standard, OTOH, is crystal clear - look at the API tables [unix.org] in terms of availability. And yeah, my project is called Portable.net [gnu.org], so I've put in my time writing portable code for various platforms (even BeOS [dotgnu.info] and SkyOS [osnews.com]). Wish the threading models worked the same, that's all :)

    There is just *nix ... just *nix and VMS - everything else is somewhere in between.
    • by Anonymous Coward on Sunday March 12, 2006 @05:37AM (#14901634)
      I code for this API and the sources end up being source compatible.

      Oh boy, you haven't deployed any code in the real world, have you?

      The total number of conformant implementations of SuSv3 (or even v2) is zero. None. Zip. Zilch. Nada.

      Everything, including the linux/glibc, BSD, and proprietary unix-like platforms, differs from the spec in subtle and complicated ways. SuS and POSIX are paper standards, not things that you will encounter in software. They're fodder for managers and marketing; they have little or no engineering value. And the differences are important to the point where you have to modify the source of your program to support other platforms, once the program becomes sufficiently complicated. As a rule, a complex program with no platform-specific hacks is a complex program that has bugs on some platforms which have not been found/fixed yet.

      This isn't likely to change in a useful manner. Most of the platforms approximate SuS/POSIX as closely as they can without breaking existing applications. Successive revisions of SuS/POSIX become more vague in order to encompass more of the things that happen in the real world. So a good way to look at these two is to consider them an inefficient and fairly inaccurate attempt at documenting the common features of a set of platforms. If this process was completed perfectly, the resulting document would be so vague and cover so many platform-specific hacks that it would be of limited value. Since the documents get updated much more slowly than the software, they will probably never be completed to a satisfactory level of accuracy.
      • by Anonymous Coward on Sunday March 12, 2006 @10:23AM (#14902210)
        Well, that's certainly a negative way to put it, but what if anything could they do any better? It's not like it would be particularly practical or reasonable for unix vendors at this stage of unix history to break backwards compatibility for the sake of future compatibility.

So, the unix vendors do the next best thing: they make whatever changes they can to bring their platforms to uniformity without breaking backwards compatibility, and they maintain a common standards document covering the cross-platform compatible functionality. When they inevitably make mistakes in the documentation process, they remove specifications that they cannot implement compatibly on all unix systems.

        The most important point here is the intent of the unix vendors: They are working towards compatibility wherever they can, and they are striving for accurate documentation of the compatible functionality. There's nothing to disparage in their actions, even if they make the occasional mistake -- at least they are improving all the time.

        Even linux developers are known to deviate from the SUS occasionally, but they too do strive to implement the standard wherever possible. Yes, the Single Unix Specification is incomplete and flawed, but it's the best thing we've got.
    • just *nix and VMS - everything else is somewhere in between.

      MSDOS is between *nix and VMS?

  • First Sale Doctrine (Score:5, Interesting)

    by David Hume ( 200499 ) on Sunday March 12, 2006 @04:57AM (#14901564) Homepage
    FTFA:
    A single programmer who wants a copy of the POSIX specification would have to pay US$974 for it. That gets a one-year subscription; you are not licensed to continue referring to the standard thereafter.
    What about the first sale doctrine [wikipedia.org]? Do they really contend that you cannot "refer" to the standard after one year? Do they do a mind wipe? Or is just that your subscription for updates lapses after one year?
    • by Anonymous Coward
      They'll probably try to stop you from using it by applying the Patriot Act. I think in section 3.14.a.2.2.b it says that a terrorist is someone who uses standards documentation without renewing their license.
    • by larry bagina ( 561269 ) on Sunday March 12, 2006 @06:01AM (#14901688) Journal
      You can (legally) get it for free at unix.org and opengroup.org. An individual paying a $974 annual fee for it has more money than brains.
      • More useful than the specification itself is Advanced Programming in the UNIX Environment [amazon.co.uk]. This is my absolute favourite reference for UNIX programming. Not only does it cover the POSIX spec (and SUS and a few others), it also tells you which bits have been implemented, and with what limits, in Solaris, FreeBSD, Linux and Mac OS X. It's slightly out of date (obviously, since it wasn't published today); for example it says that OS X 10.3 doesn't support most SysV IPC mechanisms which, while true, is not p
  • UNIX is not UNIX ! Hmm wait... no sorry I heard that or something close somewhere else.

  • old paradigms (Score:5, Insightful)

    by jonastullus ( 530101 ) * on Sunday March 12, 2006 @05:06AM (#14901582) Homepage
    isn't unix:

    - everything is a file
    - every file is a stream of bytes
    - do one thing and one thing well, Keep It Simple Stupid
    - human readable/editable config files
    - principle of least privilege
    - services as daemon processes
    - clear separation of kernel and userland (although this one is debatable)
    - multi-user environment (despite the name)
    - remote access facilities
    - console/automation oriented, powerful shells
    - ./configure && make && make install

    ?

    well, that's just a few things that come to my (linux/bsd slanted) view of what (a modern) unix is...
    • Re:old paradigms (Score:5, Insightful)

      by grahamlee ( 522375 ) <graham@iamlUUUeeg.com minus threevowels> on Sunday March 12, 2006 @05:35AM (#14901628) Homepage Journal
      You've used a couple of Plan 9 and Sprite paradigms, some things which never applied to Unix[*], a load which apply to operating systems in general and an implementation artefact of GNU autoconf. I really hope that's not Unix....

      [*]"least privilege" - MACs would predate setuid() if that were the case. For instance
      • yes, the "./configure" bit might have been a bit misplaced. but i was less "defining" unix and more trying to capture what frequent users of unix systems would have in common. and the "./configure" idea factored into it for me, although this might well be a GNU invention.

        don't many programs under BSDs do it like that too, BTW?

        i don't quite understand your MAC commentary. if i had meant MAC i would've said so.

        The principle of least privilege requires that a user be given no more privilege than necessary to p
        • Re:old paradigms (Score:4, Interesting)

          by grahamlee ( 522375 ) <graham@iamlUUUeeg.com minus threevowels> on Sunday March 12, 2006 @06:17AM (#14901710) Homepage Journal
          To take the specific point of MACs, if UNIX was about giving you the least privilege necessary to get your job done, then the concept of setuid (which gives you *all* the privileges available) would never have existed. Tools like sudo, solaris profiles, SEDarwin/SEBSD and the like have come up to try and plug this privilege leak but fundamentally, Unix has a binary privilege model. You either have none, or you have them all. More generally, I think it's hard to fundamentally sum up Unix (without using one of the technical definitions, such as "something which implements SUS"); when it comes down to it it's a C language API and a set of tools which implement that API, running a multiuser multitasking OS. I think a good description would be "an OS that one person can grok"...
          • ...the concept of setuid (which gives you *all* the privileges available)...


            The setuid bit on an executable file gives you the privileges of the owner of the file. It is mostly used as setuid-root, but doesn't have to be.

setuid(2) is a different matter, of course: an unprivileged process can only switch among its real, effective, and saved user IDs; you need uid 0 to change to an arbitrary user.
            • It is mostly used as setuid-root, but doesn't have to be.

              I use setuid all the time, but never as root. In particular, running game servers, I always have crond check to make sure all is well, and restart if needed as a regular user. If there is ever a buffer overflow issue with the game daemon, at least the access would only be as the user, not root.
    • You failed to mention:

Hierarchical file system

      attach new filesystem anywhere in the old one

      networking (UUCP, Ethernet, mail, etc)

      User name based login accounts (with 8 char limit :-)

    • isn't unix:

      - everything is a file


No. Not everything is a file in Unix (exceptions started piling up as hacks for originally unintended devices and the like); that's why there is Plan 9 - where everything is a file - from the original creators of Unix at Bell Labs.

      http://cm.bell-labs.com/wiki/plan9/plan_9_wiki/ [bell-labs.com]
  • by Quirk ( 36086 ) on Sunday March 12, 2006 @05:12AM (#14901588) Homepage Journal
    In some cases, existing practice in a field reflects a decision a college student at Berkeley made at 3 AM.

    "There were only two things to come out of Berkeley in the 60's, LSD and Unix. I doubt that is a coincidence."

• IBM (http://www-128.ibm.com/developerworks/power/library/pa-spec13/?ca=dgr-lnxw01UnixStandard [ibm.com]):
    Our apologies The IBM developerWorks Web site is currently under maintenance. Please try again later. Thank you.
Coral Cache (http://www.ibm.com.nyud.net:8090/developerworks/power/library/pa-spec13/?ca=dgr-lnxw01UnixStandard [nyud.net]):
    Error: 500 Internal Server Error Server CoralWebPrx/0.1.16 (See http://coralcdn.org/ [coralcdn.org]) at 216.165.109.81:8090
    Makes one smile :-)
  • by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Sunday March 12, 2006 @05:12AM (#14901590) Journal
    "The nice thing about standards [wikipedia.org] is that there are so many to choose from." -- Andrew S. Tanenbaum, author of Minix.
  • The Spirit of UNIX (Score:5, Interesting)

    by murdie ( 197627 ) on Sunday March 12, 2006 @05:13AM (#14901591)
    Probably the oldest standard that people still refer to is AT&T's 1985 System V Interface Definition (SVID).

    I routinely use printed Seventh Edition (Bell Labs Research) UNIX manuals, even when writing C for Linux. It also helps one remain blissfully ignorant of the 'cat -v' option and similar excrescences. Also the Tenth Edition UNIX manuals. I have to remember the changes introduced by Standard C and the like, but it's convenient to have the essence of the modern-day manual in printed form. Of course, there are some people out there who delight in using Fifth, Sixth, Seventh etc Editions on PDP-11s etc - see the PDP-11 UNIX Preservation Society, http://minnie.tuhs.org/PUPS/ [tuhs.org]. I wish I had a larger garage! How much would a PDP-11/40 cost me now, anyway?

Peter Salus' book "A Quarter Century of UNIX", Addison-Wesley, 1994 (corrected 1995), ISBN 0-201-54777-5, is a good informal UNIX history.

    "Those who do not understand UNIX are condemned to reinvent it -- badly."
                                                      -- Henry Spencer
  • Answer (Score:5, Funny)

    by ceeam ( 39911 ) on Sunday March 12, 2006 @05:51AM (#14901663)
    Unix is not GNU.
  • by mumblestheclown ( 569987 ) on Sunday March 12, 2006 @05:51AM (#14901666)
    Unix hater's handbook [simson.net]

    it's funny AND true.

    / seriously thinks UNIX like systems need to go the way of VAXen.
// well, actually not so much the systems themselves, but the asinine UNIX mentality of "harder is better" and "more documentation eliminates the need for good design," which set Computer Science departments and academia 15 years behind industry.
    /// fortunately, one of the unintended side-effects of Linux is that the mentality, at least amongst Linux users, is slowly, ever so slowly, fading away.

    • by MROD ( 101561 ) on Sunday March 12, 2006 @07:15AM (#14901826) Homepage
// well, actually not so much the systems themselves, but the asinine UNIX mentality of "harder is better" and "more documentation eliminates the need for good design," which set Computer Science departments and academia 15 years behind industry.
      /// fortunately, one of the unintended side-effects of Linux is that the mentality, at least amongst Linux users, is slowly, ever so slowly, fading away.


      Hmm.. yes, in /// you say that Linux programmers are going away from //. They are, they're just not doing the documentation. ;-)
      • by AlexMax2742 ( 602517 ) on Sunday March 12, 2006 @07:25AM (#14901854)
        Hmm.. yes, in /// you say that Linux programmers are going away from //. They are, they're just not doing the documentation. ;-)

        Which is why we have BSD.

      • by mumblestheclown ( 569987 ) on Sunday March 12, 2006 @01:45PM (#14902864)
        I think it comes down to this: when a user's input results in some unexpected output or if the user was unable or found it difficult to tell the computer what he wanted to do, the UNIXine (and this applies to GNU stuff, Linux stuff, and BSD stuff equally) response for many years was "the user made an error" or "the user's lack of knowledge is the core of the problem."

This attitude was (and to a great degree still is, though somewhat less than before) the single most cancerous and evil mode of thinking in computer science, and yet it went widely accepted ("unchallenged" would be wrong) in Unix circles and associated hanger-on CS departments for years. The correct attitude should have been "if users are making the same mistakes and being tripped up in the same places over and over again, then clearly the fault lies with the tools themselves."

        Now, I'm sure if I go through the usual examples of this theory, I'll get back the usual result: some unenlightened idiot telling me that EMACS and/or the CLI are faster at the end of the day and therefore better, and that the problem is simply "more training." Thankfully, in 2006, I hope I don't have to explain why this mode of thinking is outdated (well, never right in the first place) nonsense, since most of you have finally woken up to these facts:

• Usability and speed are orthogonal to each other. You do NOT need to give up speed to gain more usability, and vice versa. The trick is something called GOOD DESIGN. Bad design simply trades off one for another. Good design at least improves on one front without diminishing another.
        • A long manual is a hallmark of bad design. Did you need to have a manual to start using, say, a web browser? No. Why should, say, a text editor be any different?
• i) The UNIX philosophy of "make tools small and atomic" is not necessarily bad from a deep technical standpoint, but this doesn't mean the user necessarily has to directly interact with those tools, and ii) one doesn't have to be a "Windows for Dummies"-esque user to benefit from well-built tools. There are lots of real-life examples of progress in this, from the steady emergence of (still often highly flawed, but far better than what came before) high-level languages/environments like PHP, Perl, Gnome, KDE, and so forth. There is ABSOLUTELY no reason why I can't be a UNIX guru and not have the slightest idea what the command-line arguments to 'tar' are off the top of my head.
Bring on the 'yesbuts...' from the dinosaurs and self-anointed high priests...
        • You make a few good points, but seem to have a misplaced faith in designing out problems. Let's take a look at the 'real world', which has been around a lot longer than the computer world, and see whether good design has triumphed to make the world around us work without danger and reliance on human memory or judgement.

          Let's look at powertools, cars, airplanes, guns, knives or other things that modern people use. All of them have measures designed to stop people from hurting themselves and require no t
  • Maybe it would be easier to see what Unix is by pointing out the weaknesses, reading "The Unix Hater's Handbook" for instance:

    http://web.mit.edu/~simsong/www/ugh.pdf [mit.edu]

Which, despite the name, is not a mindless bashfest and is interesting.

    --Plan9/Inferno and Lisp Machine advocate--
    • It is a mindless bashfest. Nevertheless, it is interesting. There is some truth in their madness. But they themselves admit that it's over the top and to be taken with a grain of salt. At least the book, I'm not sure about the mailing list/newsgroup.
• I'll disagree - it's a series of articles by people who worked with Unix (back then) and had other systems to compare it to; I consider many of the articles to surpass the typical +5 posts here on slashdot ^_^
  • by layer3switch ( 783864 ) on Sunday March 12, 2006 @08:29AM (#14901960)
    Yup. UNIX isn't an OS. It's a trademark and a standard. And Linux is a kernel, not an OS.

    http://www.unix.org/ [unix.org]
    http://www.kernel.org/ [kernel.org]

Also, Windows isn't an OS. It's an opening constructed in a wall or roof that functions to admit light or air.

Lastly, Apple is not a company. It's a god damn fruit. Why is it that MacOS users especially don't seem to get that Apple computers are PCs?!? Try asking a MacOS user this: "Do you have a PC?" I bet 99% of them will say "No, I don't have a PC, but I have a Mac." WTF??
    • If Linux is a kernel, where is Solaris/Linux (in the same misbegotten naming scheme as GNU/Linux)? That makes no sense, of course, since the operating system *is* the kernel plus whatever runs on it. Linux is Linux, GNU is GNU, and Solaris is Solaris. Name/modifier is crap/shit. Just say no to crap/shit.
      • Linux is Linux, GNU is GNU, and Solaris is Solaris.

In which case, what is Nexenta [nexenta.com] making? They're using Debian GNU/Linux but with the OpenSolaris kernel - they themselves can't decide between "NexentaOS" and "GNU/OpenSolaris". The result doesn't appear to fit your taxonomy, but it still works really well. I think we'll see more and more of these pairings of userland and kernel appearing. Easy assumptions about the superiority of $KERNEL or $USERLAND are way overdue for a challenge.

      • > If Linux is a kernel, where is Solaris/Linux [?]

I don't know, I don't care, I wouldn't be surprised if someone was working on it, but I wouldn't be particularly interested in the project. I've been using GNU since the mid/late-eighties, and it's the part I really care about. I started with GNU on DOS and OS/2 and XENIX and SunOS, and preferred it because it gave me the highest level of portability I could find at the time. Frankly, I no longer care what kernel I'm using, as long as I'm using a GNU-b
  • still an amazing OS (Score:4, Interesting)

    by yagu ( 721525 ) * <yayagu@[ ]il.com ['gma' in gap]> on Sunday March 12, 2006 @10:07AM (#14902157) Journal

    I've been working with Unix/Solaris/SunOS/Linux/AIX/A/UX/BSD/AT&T Unix, et al. now for over twenty years. I mostly love the environment, I'm self-taught, and have never stopped discovering new and cool (and sometimes amazing) things about how Unix works.

    I've pretty much always been able to sit down and immediately be productive in a Unix environment. Things are stored and arranged in a surprisingly consistent way (not always in the same places, but one of a few organizations (/etc vs. /usr/etc)), and for those hard-to-find arrangements you need only know "find".

    Considering how many different Unixes there are it's actually impressive how compatible and consistent they are across the Unix universe. It's only my opinion, but I find adapting and adjusting to the Unixes far easier than the various versions of Windows.
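    The "you need only know find" tip above can be sketched as a one-liner (the file name and candidate directories are illustrative; which ones exist varies by Unix):

    ```shell
    # Hunt for a config file when you don't know which of the usual
    # directories this particular Unix keeps it in; errors from
    # nonexistent paths are discarded.
    find /etc /usr/etc /usr/local/etc -name 'inetd.conf' -print 2>/dev/null
    ```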

  • "With rare exceptions, porting hassles between UNIX systems are long forgotten."

    Yeah right. We're down to complaining about porting apps between versions of the same distribution of Linux, and here's a guy claiming that porting hassles between UNIX systems are long forgotten. Come on, you can't claim that with a straight face even if you are working for Microsoft.

    It's not like your 1993 binary of wolfenstein will work out of the box on win XP but the chances of binaries from that era doing something are a lot
    • Last week I installed Erlang/OTP on an IRIX box. The build system didn't work with IRIX make and the code didn't compile with the MIPSPro C compiler. Building GNU make was fairly easy. Building GCC took most of a day; it would configure fine and then fail in the build step with an uninformative error. Eventually I resorted to building it without C++ support. Next, building Erlang was fun. The configure script (well, actually a configure script invoked by a recursive make, which took a while to find) w
  • heheheh I'm beginning to think that finding a mention of Amiga in articles covering aged, long-standing, or break-through technologies or philosophies, or just places of honor in computing history, is almost like a "Where's Waldo" using web pages as the pictures :)

    (Uh, are we a cult yet?)
  • Linux vs UNIX (Score:3, Interesting)

    by argoff ( 142580 ) on Sunday March 12, 2006 @01:43PM (#14902857)
    In all fairness, it all came from the same tradition - but when AT&T took back the copyright on their original UNIX implementation - that's when it started to seriously fragment into AIX, HP-UX, A/UX, DG/UX, Solaris, and the BSDs. Evolution slowed down drastically and left the UNIX community wide open enough for Microsoft to drive a train through. To compensate, the UNIX community tried to force through all these standards initiatives (remember CDE? Motif?), but they always failed to stem the tide.

    Then Linux came along and started to undo the damage that the copyright fragmenting caused to begin with, because it was under the GPL. Ever since then it has been the beginning of the end for Microsoft: Linux has taken off in the server space and now it's getting ready to attack the desktop. Moral: free markets are about freedoms and not markets. When you have freedoms the markets will take care of themselves, but when you sacrifice freedoms for markets - you will eventually lose both.
  • isn't it? That's what I was told...
  • A lady struck up a conversation with me on an airplane.
    • Her: "And where are you going?"
    • Me: "I'm going to San Francisco to a UNIX convention."
    • Her: "Eunuchs convention? I didn't know there were that many of you."

    From http://rinkworks.com/stupid/cs_comeagain.shtml [rinkworks.com]
  • So true (Score:2, Funny)

    One of the best quotes I've ever heard was from a colleague of mine,

    "Unix isn't."
  • So What is UNIX?! (Score:3, Insightful)

    by znx ( 847738 ) <znxster@@@gmail...com> on Sunday March 12, 2006 @06:48PM (#14904028) Homepage
    I think that the whole discussion can be summed up, just as the article says, with:
    "We reject kings, presidents and voting. We believe in rough consensus and running code." -- Dave Clark

    So in answer to "What is UNIX?": UNIX is code that runs based on general agreement of the masses. This is why it will not die. Even the LSB, which is discussed in the article and rightly so, falls into the same category: a loosely held standard that defines what the general mass of Linux distributions use.

    No hard and fast standard would ever survive in the *nix world; every system is unique to its purpose.

    Nice article. IBM churns them out, and every so often a good one turns up.
