Unix Operating Systems Software

What Makes A UNIX System UNIX? 417

ian asks: "Since there are now so many different flavors of UNIX out there (Linux, FreeBSD, AIX, Solaris, AT&T UNIX, etc...) what do they all have in common that lets them all be called UNIX? Programs written for one flavor of UNIX typically cannot be ported to another without considerable effort. The features offered by the different implementations vary widely: some are more secure than others, some cluster better than others, some offer journaling file systems, some are more robust. The differences between the different kinds of UNIX seem to be as great as the differences between any particular implementation and other OSs. Could one port all the standard command line utilities to NT, clone one or two of the popular shells, set up the directory structure in the standard UNIX layout and call it Microsoft UNIX?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Sorry, in NT the GUI is just too intertwined with the underlying OS. It's a Windows/Unix hybrid, neither one nor the other. Too many things run with the equivalent of root that shouldn't, which destroys the advantages of process independence and the protection that keeps non-superuser tasks from severely messing up the running OS. That MS actually teaches its MCSE students "not to enable the screensaver" on NT only proves this. And NT lacks simple Unix amenities like being able to telnet in to a shell and do remote administration with the stock OS (no, add-on remote admin tools don't count).

    A/UX, on the other hand, was a true Unix OS at the core with the "Macintosh" shell running on top of it. I really liked A/UX. Whatever happened to it? When did Apple officially abandon it?

  • Therefore the following are official 'unix' flavors:

    Unix.
    Linux.
    Ultrix.
    Irix.
    Miltics. (variant spelling, but has the same 'x' sound)
    Xenix.
    even A/UX.

    Because the following have no 'x' sound, they are mere Unix wannabees or early protozoan forms thereof, and not True Unix Clones at all.

    Windows NT.
    SunOS/Solaris.
    BSD/NetBSD/FreeBSD.

  • NT qualifies under UNIX98 branding as a Unix system when running the compatibility overlay previously known as Interix/OpenNT. Covered in this ZDNet article [zdnet.com]. MVS has also qualified for Unix branding, IIRC.

    Not that I think of NT as Unix.

    What part of "Gestalt" don't you understand?
    Scope out Kuro5hin [kuro5hin.org]

  • Wasn't Linux descended from the MINIX source code?

    No. Linux was written from scratch. The Minix FS was the first filesystem supported, which accounts for the mistaken impression in your post.

    -Doug

  • Programs written for one flavor of UNIX typically can be ported to another by simply recompiling them.

    I would add "written ... with portability in mind". It's easy to be gratuitously nonportable, especially if all you know is one flavor.

    -Doug
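    A concrete example of that gratuitous nonportability: echo -n behaves differently from shell to shell, while POSIX printf is specified everywhere. A minimal sketch (the function name is just for illustration):

```shell
# Nonportable: some shells treat -n as "suppress the newline",
# others print "-n" literally; POSIX leaves echo's flag handling open.
#   echo -n "hello"

# Portable: printf's format-string behavior is fully specified.
portable_no_newline() {
    printf '%s' "$1"
}

portable_no_newline "hello"
```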

  • Formally speaking, what makes a UNIX system is adhering to the spec which defines UNIX.

    A draft of this is available online and makes a handy reference [unix-systems.org].

    The Single UNIX Specification covers not only the library and system call interfaces, but also the shell commands and utilities, including the command language formerly known as Bourne. ;)

    Of course, what we actually understand as UNIX is deeper; one cannot understand what UNIX is outside of the surrounding computing culture.
  • The Icons ran an early QNX????

    Yup. I've got one. The hardware is an 80186-based system (more an embedded computer) with some kind of weird-ass token-ring network. I contacted QNX for information about these systems, as I have done with Unisys, but neither was able to help. Is there anyone here with hardware information on those old Unisys ICON computers? I have a few I'd like to play with...

  • These I believe were ICON 2s (80186 CPU, Arcnet-type network, diskless, color screens with blue background).

    The teachers watched us *very* closely with those things. I remember getting in shit for trying to get to a prompt...

  • IIRC, Windows NT only complies with POSIX.1, and it doesn't do a great job at that. POSIX.1 is a very very small part of POSIX. I would not consider POSIX.1 itself to be enough to warrant something to be "POSIX compliant", but I'm not associated with POSIX, so my opinion wouldn't really count much. Oddly enough, though, Windows NT is probably closer to POSIX.4 than Linux is (though this may have changed somewhat with RT-Linux).
  • They also recommend you only use a 16 bit color display adapter (640 * 400), because it was written by some NT Guru.

    Clarification: Unaccelerated 16-color (4-bit), 640x480. In other words, base VGA.

  • After the official rights to Unix were given to the OpenGroup by SCO, any operating system that could pass the POSIX tests could be legally branded. This is a change from the bad old days when only products licensed from the official copyright holders could be called Unix - Solaris was licensed from AT&T originally, so it could call itself Unix; HP-UX was implemented independently and could not call itself Unix. Thankfully, those days are over.

    As for making Windows NT into a Unix, it's already been done; for a while it was being sold as Open NT. Essentially, Windows NT has the capability to host different subsystems; the 16 bit Windows 3.1 compatibility stuff is one, the Win32 level another. By adding a complete POSIX subsystem, Windows NT can be considered Unix. Bill Gates was once quoted "In some ways, NT is a Unix". For more on the history of Unix, I recommend the book A Quarter Century of Unix by Peter Salus, the USENIX bookworm. It has an excellent explanation of the genealogy of Unix and Unix-alike systems.


  • 1. The file system interface. By this, I mean inodes, ugo/rwx permissions, and a single hierarchy rooted at "/".


    A number of non-UNIX OSes have borrowed this paradigm (INMOS's Helios and Be's BeOS are two examples; many real-time OSes also borrow from UNIX here).


    2. There is one user (root) that has full access to the machine; all other users are limited to a small "sandbox".

    Some secure unices eliminate the omnipotent root user and compartmentalise privileges further.

    Other indicators of an operating system's UNIXness would be:

    • The system call API. The more it looks like POSIX, BSD or SysV the more UNIXy something is. In general, if it doesn't have UNIXlike calls for file operations, process management, &c., it's not a real UNIX.
    • The everything-is-a-file paradigm as mentioned by another poster; under UNIX devices are files, accessed with file I/O calls, mmap() and ioctl(). Lesser systems such as Windows and MacOS have a bizarre custom API for each set of operations (raw disk I/O, sound I/O, console I/O, etc).
    • Further from the field of system design and into the realms of abstract philosophy and user interface, a fundamental characteristic of UNIX is that you can perform complex tasks by using many simpler components in cooperation (i.e., shell scripts and command pipelines). Contrast this with Windows, where the norm is huge, monolithic applications, each with a defined range of operations.
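    The pipeline point can be made concrete: a word-frequency counter built entirely from single-purpose POSIX tools, none of which knows anything about word frequencies. A small sketch:

```shell
# tr splits the input into one word per line, sort groups duplicates,
# uniq -c counts each group, and sort -rn ranks by count.
printf 'to be or not to be\n' \
    | tr ' ' '\n' \
    | sort \
    | uniq -c \
    | sort -rn
```

    Each stage is independently replaceable, which is exactly the "many simpler components in cooperation" property being described.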
  • Look at the texinfo files distributed with said packages. 'info autoconf' and 'info m4' should bring up the respective manuals.
  • The way I recognize if something is Unix, is through its boot sequence. If I can see the familiar, kernel, init, Sys V, etc., then I feel right at home. Granted the GNU utilities can be added to NT, but the low level interface of NT is just plain different and looks like some green alien from outer space. Where is /dev, /etc? How do you easily change the services by writing to a text file? Can I pipe stuff around between devices and files? I just can't imagine NT as a Unix, except for a superficial shell and utilities.
  • "POSIX" is a good start towards being a flavor of UNIX, but it's not enough.

    Via its Interix purchase, Microsoft now has the technology to make Windows NT/2000 totally POSIX.1 and POSIX.2 compliant, complete with X Windows! A free alternative to that is Cygnus's Cygwin environment, which is a complete port of the GNU tools (POSIX.2), along with an API layer that translates POSIX.1 calls to Win32.

    But just use a Windows NT box running Interix or Cygwin: it's obviously not Unix.

    To be Unix, the following have to be true:

    1. POSIX has to be the best way to get something done on that system. If you have Interix or Cygwin installed on your Windows NT box, you still don't spend a whole lot of time trying to find a POSIX tool where an equivalent native Win32 tool exists.

    2. POSIX.1 and POSIX.2 don't cover issues like "changing a user's info in /etc/passwd". Despite the lack of a standard, all Unixes work similarly in this regard. But even if a Unix-like user info mechanism were emulated on my Windows NT/Cygwin box, I would still use NT's User Manager, if only because Windows NT has user features that don't have a direct analogue under Unix.

    These guidelines might seem unnecessarily exclusionary. But, just look at other OSes with POSIX layers. I've already shot down Windows NT with addons, so what about BeOS and MacOS X? Both of these are/will be POSIX.1 and POSIX.2 compliant. But like the WinNT/Interix combo, they fail both the guidelines above.

    In all these "POSIX-but-not-Unix" systems, the Unix functionality is secondary to the "native" functionality. Most of the time, there's another way to do something, a way that makes more sense on that system than the Unix way.

    --

  • The definition of what a "UNIX" system is is set by the Open Group, because they own the trademark registration. There is a specification and certification process that goes well beyond the existence of a few shells and command line tools.

    There are UNIX 95 and 98 specifications along with delineations between server and workstation class machines.

    You can view version 2 of the specs on line at this URL:

    http://www.opengroup.org/online-pubs?DOC=007908799

    API tables can be viewed here:

    http://www.UNIX-systems.org/apis.html

    They are useful for distinguishing between what is BSD, SVR4, POSIX, and modern UNIX.

    If you read some of the specification docs, it states what C-language system calls must be implemented and draws the boundaries between what features must exist and what features are up to the discretion of the implementor.

    Note that it is possible for systems that are not traditional UNIX to get the certification. I think DEC did this with OpenVMS. The Interix product also has this certification.

    There is an effort to bring UNIX and POSIX closer together. Information can be found here:

    http://www.opengroup.org/austin/
  • I see them on my Slowlaris box, but not on my Linux box.

    ...and you probably won't see them on a BSD box, either:

    % ifconfig -a
    fxp0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500

    ...

    % ls -l /dev/fxp*
    ls: No match.

    The /dev entries for devices are a DLPIism; they weren't in SunOS prior to 5.0, and there are a number of UNIXes that lack them.

  • The basic cause of the backspace problem is a different ASCII code for the key in DEC terminals (where it's 0x7F) and others such as Sun systems (where it's 0x08).

    The original character used to "[delete] the character to the left of the cursor" in UNIX was...

    ...'#'.

    Yup, '#'. It was what Multics used, just as Multics used '@' to erase the line, and UNIX from Bell Labs followed in its footsteps. That's notwithstanding the fact that Multics largely ran terminals in a mode where echoing was done by the terminal (in fact, as I remember, you didn't have a choice about it on at least the IBM 2741 Selectric-typewriter-based terminal; the special option to allow the host to turn off echoing worked, as I remember, by the terminal mechanically blocking the Selectric typeball from hitting the ribbon and the paper), whereas UNIX ran the terminals in a mode where echoing was done by the host.

    DEL (0x7F) was typically the interrupt character, to send a SIGINT to the currently running program; on CRTs, people may have chosen BACKSPACE (0x08) as an erase character - at least it would be echoed as a backspace, even if it didn't actually remove the character from the display, or work well if you were erasing a TAB character.

    Some folks made the tty driver more like those in DEC's operating systems, where DEL erased the most recently typed character and either erased it from the screen on a CRT or echoed the erased characters inside backslashes on printing terminals, control-U erased the line (possibly erasing the entire line from the screen), and control-C was the interrupt character; BSD did so, and that tended to make DEL the erase character (even on Suns; as I remember, on Sun keyboards until the Type 4 keyboard, the big key on the top row sent DEL, not BACKSPACE; the Type 4 went more PC-like in what I remember being in part an attempt to make the PC users they hoped would pick up on the Sun386i happier).

    So the basic cause of the backspace problem, in the sense of BACKSPACE (0x08) not being the standard erase character, was that the AT&T folks emulated Multics and the Berkeley folks emulated DEC. The problem of the big key on the top row of the main keyboard not erasing the previous character is a result of it sending (or, on workstations/PCs, not being interpreted as) DEL on some terminals. (Digital tended to make it send DEL on their terminals because that's what their OSes used as the erase character; I forget what other older terminals did, but some later terminals may have made it send BACKSPACE either because, well, that's where the backspace key goes on a typewriter or because that's where it goes on a PC. I guess the PC has it as a BACKSPACE key because the original IBM Personal Computer was made by a company that also made typewriters :-))
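    The two byte values at issue are easy to inspect with stock tools. A quick sketch, assuming a POSIX od (the stty lines are illustrative and need a real terminal attached):

```shell
# DEL and BACKSPACE are distinct ASCII codes; od shows the byte:
printf '\177' | od -An -tx1    # DEL: byte 7f
printf '\b'   | od -An -tx1    # BACKSPACE: byte 08

# Which one the tty driver treats as the erase character is a
# per-session setting (these require a real tty):
#   stty erase '^?'    # erase with DEL, the DEC/BSD convention
#   stty erase '^H'    # erase with BACKSPACE, the PC convention
```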

  • If the codebase can be traced back to AT&T then you can call it Unix

    If you're speaking in the strict legal sense, then, once upon a time, AT&T did, as I remember, impose such a restriction on the use of the trademark - except that it was much stricter, i.e. you had to have made minimal changes to the source code, just enough to make it run on your hardware. (That's why "Sun UNIX 4.2BSD Release 3.x" became "SunOS 4.x" - Sun, and Berkeley, had made rather a lot of changes to the code base that had nothing to do with making it run on Suns, and we figured that'd keep AT&T from yelling at us.)

    However, the UNIX trademark is now owned by The Open Group, and anything that passes one of the UNIX test suites (e.g., the current one, the UNIX 98 test suite) can, in theory, be called "UNIX" even if it lacks any AT&T code whatsoever.

    If, however, you're thinking of the general "look and feel", I don't care whether the code is AT&T-derived or not - if it looks like a UNIX system when I use it or develop code for it, I'll call it UNIX (even if that upsets either The Open Group or the "Linux is not UNIX" crowd).

    Binary compatibility would be such a waste of system resources.

    Depends on the type of "binary compatibility". Many UNIXes (or "UNIX-flavored OSes", for the benefit of those who piss and moan about thinking of Linux as a UNIX) include the ability to run binaries for other UNIXes running on the same instruction set architecture, and this capability can come in handy if some program is available only in binary form and you want to run it on an OS other than one for which its binaries are available.

  • IIRC, what makes a brand of Unix, Unix... is that the source code has to descend from the original Unix for the PDP-7. So such variants as Linux are *NOT* considered Unix. As well, I believe the word Unix is copyrighted, and owned by AT&T (somebody tell me if I'm wrong?).

    OK, you're wrong.

    UNIX is a trademark of The Open Group; it used to be a trademark of AT&T.

    At least at one point when it was a trademark of AT&T, to be able to use that trademark for your software it had to be based on System V (which contained not a line of code descended from PDP-7 UNIX, given that said PDP-7 UNIX code was almost all if not all PDP-7 assembler code; it may have been philosophically influenced by it, but, well, so was Linux and the userland code put atop it...), with the only changes being those necessary to port to the hardware on which it ran.

    However, now, you can get a license for the UNIX trademark if you pass one of The Open Group's test suites, even if there's not a line of System V-derived code in your OS.

    And I consider Linux to be a flavor of UNIX, in the sense that, when I log into a Linux system, and when I develop code to run on (among other platforms) a Linux system, it feels as much like using or developing code for some AT&T-derived UNIX as using or developing code for one of those AT&T-derived UNIXes feels like using or developing code for another of those AT&T-derived UNIXes.

    I tend to consider the sine qua non for being "real UNIX" to be the administrative interfaces, in that UNIX-compatible or UNIX-like environments atop other systems don't change the way you administer those systems, so it's still significantly different from UNIX, but administering a Linux system feels much like administering an AT&T-derived UNIX system (heck, on most Linux distributions, the rc files are more like System V than are the rc files on the ultimately AT&T-derived BSDs...).

  • There's a standard called POSIX which defines just what the core of UNIX is, including the C library, file handling, the sockets interface, that sort of thing.

    There is a set of standards called POSIX, and in the core POSIX API standard, IEEE Std 1003.1-1990, sockets are not specified.

    There's another 1003.x standard group that is, I think, working on standardizing a low-level network programming API, but I don't think it's a final standard yet (and it may get swallowed up by the Austin Group work mentioned in the next paragraph).

    The Austin Common Standards Revision Group [opengroup.org]

    is a joint technical working group established to consider the matter of a common revision of ISO/IEC 9945-1, ISO/IEC 9945-2, IEEE Std 1003.1, IEEE Std 1003.2 and the appropriate parts of the Single UNIX Specification.

    and it appears that standard will contain a lot of stuff not in 1003.1, including networking interfaces such as sockets.

    The point here is that there's more to UNIX than just POSIX; there are APIs not standardized by POSIX but that are (more or less) common to many UNIX-flavored OSes and that are important for many applications.

  • Linux however(my favorite) is not Unix because its kernel is different. The scheduling is different, the threading is different as well as the interaction between user space and kernel space and other things.

    ...but those things are the same on all "UNIXes"? I'd be extremely skeptical of such a claim; I suspect that you'll find a fair number of kernel differences between, say, AIX 4.x, SunOS 5.x, and Digital/Tru64 UNIX, for example.

    But, for all practical purposes, [Linux] looks like Unix, it feels like Unix, it smells like Unix and sounds like Unix.

    ...which is why I call it a UNIX, even if it's not AT&T-derived (especially given that a fair bit of AT&T code has, I suspect, been rewritten or replaced in the kernel and userland of even AT&T-derived UNIXes).

  • Didn't
    one already do that and it was called "Xenix"?

    Nope. "One", i.e. Microsoft, didn't "port all the standard command line utilities to NT, clone one or two of the popular shells, set up the directory structure in the standard UNIX layout and call it Microsoft UNIX", they took V7 UNIX and ported it to various platforms and added the usual set of enhancements ("usual" in the sense that pretty much everybody with a version of UNIX they sold did so; that flavor of "embrace and extend" was hardly unique to Microsoft).

    (...just in case anybody in the audience doesn't think Microsoft ever sold a Real AT&T-Derived UNIX. They most definitely did....)

  • NT qualifies under UNIX98 branding as a Unix system when running the compatibility overlay previously known as Interix/OpenNT. Covered in this ZDNet article.

    They don't seem to be listed on The Open Group's page listing UNIX 95-branded products [opengroup.org], but I don't know if that page lists everybody who got the brand (that being the brand the ZDNet article says they went for).

    It is, however, from the stuff on Interix on the Interix Web site [interix.com], a lot more UNIX-compatible than is the native POSIX subsystem on NT.

    MVS has also qualified for Unix branding, IIRC.

    Well, OS/390 did [opengroup.org], but "OS/390" is just the latest in the series of names assigned to various descendants of OS/360, MVS being an earlier such name for the descendant that's now OS/390.

  • From what I remember, either correctly or incorrectly, NT was based off of VMS.

    I've heard the claim that it's VMS-derived, but I've not heard any evidence sufficient to make me believe that claim. At least some stuff is VMS-like internally (the I/O subsystem, according to the Inside Windows NT books, resembles the I/O subsystem described in VMS internals books), but that could be nothing more than the result of Dave Cutler being, I think, in charge of the development of both.

    When MS decided to slap the GUI onto it around 3.x it became a good deal more like the NT of today.

    "3.x" was the first release of NT - 3.1, to be specific. I guess Microsoft wanted to give it version numbers that resembled the version numbers for Windows OT, so they started with 3.1 rather than 1.0.

    wouldn't this allow NT to trace its roots somewhere back to a true Unix?

    Given that VMS wasn't "a true UNIX" (as in "had an API and a command-line interface that wasn't particularly like that of any UNIX"), I'd say it wouldn't.

    Speaking of which...would Mac OS X be considered a true Unix (I'm not sure if Darwin passes on the POSIX stuff or not)?

    If

    1. it exports a UNIX-compatible API as one of the APIs (I have the impression that you can get at it, although you may have to work harder to do so than to get at the MacOS Classic, Carbon, or Cocoa APIs);
    2. can be convinced to give you a UNIX-compatible command-line interface (which I also have the impression it can be convinced to do);
    3. has a UNIX-style administrative interface (which I also have the impression it does, down at the bottom - the XML-based configuration files are a bit different, but I suspect other UNIXes differ enough in their configuration files that this may not be sufficient for me to deem it not UNIX);

    then I'd consider it a UNIX.

  • Solaris was licensed from AT&T originally, so it could call itself Unix; HP-UX was implemented independently and could not call itself Unix.

    Umm, as far as I know, HP-UX, at least on the 68K and PA-RISC machines, is ultimately derived from AT&T UNIX (there was one other box they made with a UNIX built atop some special kernel they did, on a processor that was neither a commodity processor nor a PA-RISC processor - but even there I think most of the userland stuff, at least, was probably derived from AT&T UNIX), as is Solaris (and Solaris is, as far as I know, far from just being vanilla SVR4).

    As for making WIndows NT into a Unix, it's already been done; for a while it was being sold as Open NT.

    And now it's being sold as Interix [interix.com].

  • Can I change the runlevel of w2k?

    I doubt you can, even with Interix installed...

    ...but you can't do that on the BSDs, either, unless you've installed a System V-style init on them (which may require you to write one, or port one from a Linux system, say). That is, unless you count changes between single-user and multi-user mode, which is all you get with the traditional init, as changes to the runlevel (which I don't, given that the notion of multiple run levels first showed up, at least in an AT&T UNIX release that AT&T sold publicly, in System III, not in the original Research UNIX).

    Does it have virtual consoles (Ctrl-Alt-Fx)?

    I suspect not, even with Interix...

    ...but I suspect you can't do that on a headless Linux/BSD/Solaris/SCO/... system with a dumb terminal on a serial port as a console, either. (There may be equivalents for dumb terminals, though.)

    I agree that NT+Interix is probably not UNIXish enough for me to think of it as Real UNIX (although for many purposes it may be Good Enough), but those particular criteria are a bit too restrictive, in that they rule out systems that I suspect most would think of as Real UNIX. (And I think of Linux distributions as being Real UNIXes even though their code largely can't be "traced back to AT&T or BSD UNIX".)

  • Ever try to remove/rename a NTFS file while it is open in some process?

    (grin) try this on HP-UX with a running executable ... you get "segment busy".

    That actually dates back to V7 UNIX; it's not an HP-UXism. As I remember, the rationale was that if you did that, and the machine crashed before the program running that image exited, you'd have an "orphaned" file...

    ...but I consider that a lame rationale, as

    1. the same applies to a file you have open, but they didn't prohibit that (probably because programs depended on that as a way of, for example, creating temporary files that get removed when the application exits, even if it exits because it's killed);
    2. it makes it a pain in the neck to install new versions of software if the old version is still running (you can't just unlink the old version and install the new one, you have to rename it and then clean it up later);
    3. with a reasonable file system salvager run on reboot, e.g. fsck, or a file system that remembers what was unlinked but not removed and removes the files on remount, it's not an issue.
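    The behavior point 1 relies on (an open-but-unlinked file staying usable) is easy to demonstrate from the shell. A minimal sketch, using an arbitrary scratch path:

```shell
tmp="/tmp/unlink-demo.$$"        # arbitrary scratch file
printf 'still here\n' > "$tmp"

exec 3< "$tmp"    # open the file on descriptor 3
rm "$tmp"         # unlink it; the inode survives while fd 3 is open

cat <&3           # still reads "still here"
exec 3<&-         # close fd 3; only now is the storage reclaimed
```

    This is exactly the classic temporary-file idiom: create, open, unlink immediately, and let the kernel clean up when the process exits.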
  • 1. no DLLs

    Yeah, calling them ".so"s instead makes a big difference. :-)

    Presumably by "no DLLs" you mean something other than "no dynamically-linked libraries", given that most modern UNIX systems these days do have dynamically-linked libraries.

    Given that, to which particular feature or features of Microsoft's implementation of dynamically-linked libraries are you referring?

  • Try searching for "OpenDK" in Slashdot comments - and then go to the second page on the "DNA Testing to solve history's mysteries?" page, or whatever it's called.

  • by Utter ( 4264 )
    OpenGroup has set up a standard called UNIX98:
    http://www.opengroup.org/prods/xxm0.htm

    Interestingly, Linux is not a fully compliant UNIX system as you might think.
  • Huh? Yes. CDE is a royally inefficient pain in the ass, and it's not exactly performing voodoo under all those pretty pictures. I never use it. dtterm is my friend.

    --
  • It's a well-kept secret that the NT Executive has a single-rooted hierarchy, containing device nodes, named pipes, and filesystem roots.

    Yes, but unlike Unix, where Everything Is A File, in NT only some things are files. There are also various mysterious base system objects which you can apply an ACL to, but which do not exist on the filesystem.
    --
  • As others have said, the GUI stuff is not required to run a UNIX system.

    However, inclusion of CDE and Motif is required as part of the "Single UNIX(tm) Specification", so you could consider the windowing systems as part of UNIX(tm). This is one main reason that free Unix clones will never be certified -- CDE is not free software, and nobody has any interest in cloning it.
    --
  • I don't think that this is definitive of the 'spirit' of UNIX, but the idea of treating all things as files is a characteristic shared by all UNICES.

    /etc/passwd is a file.
    /dev/null is a file.
    a socket is a file.

    All things are interacted with in the same manner, and this consistent abstraction, along with a common API and tool set, are what makes it easier to go from one flavor to another.

    I don't think any OS, except maybe MULTICS (don't know it), which is UNIX's daddy, did this before.
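    That uniformity is visible from the shell: the same tools, using the same read calls underneath, work on a regular file, a device node, and a pipe alike. A small sketch:

```shell
# One utility, three very different things behind the descriptor:
wc -c < /etc/passwd      # a regular file
wc -c < /dev/null        # a device node: always 0 bytes
echo hello | wc -c       # a pipe: 6 bytes ("hello" plus newline)
```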

  • posters have already answered the question about what a Unix really is (it has much to do with POSIX compliance) but I'm wondering something. From what I remember, either correctly or incorrectly, NT was based off of VMS. When MS decided to slap the GUI onto it around 3.x it became a good deal more like the NT of today. NT 3.51 was the first fully 32-bit MS platform IIRC. If I'm not having a neurological meltdown wouldn't this allow NT to trace its roots somewhere back to a true Unix? I'm not trying to advocate NT, I'm just wondering if I'm full of shite. A real server OS doesn't need a GUI. Speaking of which...would Mac OS X be considered a true Unix (I'm not sure if Darwin passes on the POSIX stuff or not)?
  • I mean, it's the differentiator between Unix systems and pretty much everything else....

    Also, think of the user experience. For the most part, end users of a unix system experience the same behaviour. cd is cd is cd, cdup is cdup is cdup. cp, mv, vi/pico/emacs are all there, etc. etc. It's only on the back end that the Unices vary so widely. I've moved my old website from HPUX to Sun to Linux to OpenBSD, with others in between and the only major changes were due to differing security/CGI settings and the path to perl...
  • Eunuchs are males who have select I/O ports reconfigured early in their development such that they may maintain a stable state throughout their lives and therefore be highly compliant as Opera components and supporting systems.

    Non-eunuchs develop normally, but lose the ability to be Opera compliant (except at a very bass level). Furthermore, non-eunuchs gain the ability to really dick people over, as witnessed Feb. 17th.
  • I once attempted to compile sshd under cygwin. Unfortunately, I didn't manage to get it working for some strange reason I've now forgotten... However, the client works fine...
    --The knowledge that you are an idiot, is what distinguishes you from one.
  • Isn't cygwin a certified UNIX? (For those who don't know - cygwin is a libc and POSIX lib set and a set of traditional UNIX and GNU tools for Windows).

    Anyway, I would say that a requirement for a UNIX is the two basic UNIX philosophies: pipes and the file tree. The former being that everything is done in small programs linked to each other to do complicated tasks, and the latter being that all communication with the outside world is done through the filesystem via device files.

    But if the latter is a criterion, most unices aren't unices - the network interfaces are rarely real device files.

    Another aspect of UNIX may be that everything is done in human-readable format - you'd rather patch and recompile a source file than binary-patch or relink.
    --The knowledge that you are an idiot, is what distinguishes you from one.
  • Anyone remember the ixemul Amiga shared library? Provided a pseudo-POSIXy layer (but.. er... lacking fork()...you could use vfork() though...) on top of AmigaOS (and later BeOS)

    Pretty much all of the GNU tools, and X windows, that are commonly found in a base linux install were then compiled into a distro called GeekGadgets to run on top of the ixemul.library on top of AmigaOS...

    A similar approach could, I suppose, be used on top of virtually anything.

    It used to live at www.ninemoons.com/GG/ [ninemoons.com]

  • Exactly.

    And (Windows NT && cygwin && bash && grep) != Unix

  • Uh, with the exception of grep, those aren't unix tools, they are GNU tools. And GNU's Not Unix :-) And Cygwin ain't a unix tool under any definition.

    sh and csh *are* unix tools because one can count on them being installed by default on every unix. Ditto for vi.
  • with proper rights can call it unix. Unix is a trademark, I believe.

    As to something being written for one unix being 'difficult' to port.. this is not really true. Most things these days port rather easily.
  • The unix philosophy has allowed me to do things that no one has ever really been able to do in NT without breaking out Visual C++. I mean seriously; things like sed, awk, grep, emacs and other things are godsends for being able to do almost anything you want.

    NT can surprise you. I once wrote a long, complicated shell script that would install and upgrade objects in databases. All our clients would use it. It used all kinds of shell tricks, and tools like sed, awk and grep. It was developed on Solaris. Then I had to port it to HP, which required some changes. (I remember that grep under Solaris had options not supported by HP's grep; but they both claimed to be POSIX compliant.) That program was later ported to NT. It required one change: a different directory than /tmp was used as scratch space.

    There can be a zillion reasons to hate NT (I would never use it myself). But lack of standard Unix tools isn't an appropriate reason - they have been ported.

    -- Abigail

  • On a different note, UNIX has a "philosophy": everything is a file,

    Except of course that not everything in Unix is a file. That's Plan 9. If everything was a file, you would not have open and pipe and socket with friends. If everything is a file, you'd have one API. Unix doesn't.

    -- Abigail

  • For the most part, end users of a unix system experience the same behaviour. cd is cd is cd, cdup is cdup is cdup. cp, mv, vi/pico/emacs are all there, etc. etc.

    $ which cdup
    $ which emacs
    $ which pico
    $ uname
    Linux
    $

    Does that make Linux a Windows variant?

    -- Abigail

  • I expect that the 'UNIX' tools you used under NT were the GNU tools.

    Wrong.

    Had you used the GNU tools on Solaris and HP, you'd be better off for compatibility.

    That was not an option. And even if it was, I'd have preferred to use the out-of-the-box solution rather than be forced to keep sources around for several years on the off chance someone might demand them.

    -- Abigail

  • Further from the field of system design and into the realms of abstract philosophy and user interface, a fundamental characteristic of UNIX is that you can perform complex tasks by using many simpler components in cooperation (i.e., shell scripts and command pipelines). Contrast this with Windows, where the norm is huge, monolithic applications, each with a defined range of operations.

    So, if Microsoft would start porting their software to Linux, does that mean Linux is no longer a Unix?

    -- Abigail

  • Stability can of course be achieved in operating systems other than Unix, and a system does not have to be stable to be called Unix. It is just a common trait of most Unixes that they are very stable. User experience isn't everything either. It could be argued that Mac OS X is a variant of Unix, because of the kernel and POSIX compatibility, but the user experience would probably be very different. What makes Unix Unix is probably conforming to standards (POSIX), the basic architecture, and the philosophy ("everything is a file"). You could emulate the Unix interface through a sort of virtual machine in Windows NT. Windows NT would then feel like Unix, but it really isn't. The virtual OS probably IS :-)

  • If everything is accessed as a file, it's probably a Unix. No special hidden "registries", no extended invisible attributes, just files.

    Ethernet on /dev/le0, /dev/eth1, etc...

    Entire physical or logical disk drives accessible under one filename (/dev/hda, /dev/sda, /dev/c0t0d0s0 (or whatever))..

    Serial ports as a file, printers as a file, etc...

    That's UNIX. Accept no substitute.


    If that's the case, then Linux is not a Unix... the network interfaces have - traditionally - not shown up in /dev under Linux.

    Just so as not to confuse anything...

  • >you mean apart from the fact that CE's kernel is based on NT's

    Wrong. If I recall my history correctly, when faced with a new set of processors and a new type of system (in terms of memory hierarchies, I/O capabilities, etc.) MS tried two different approaches in parallel. One was to port NT, and one was to write a new OS originally called Pegasus. The latter approach won out.

    I got this information from the intro of a book called Essential Windows CE Application Programming, by Robert Burdick. It may therefore not be totally authoritative, but it seems a little more believable than an unsubstantiated claim on Slashdot. ;-)
  • I'm talking really old QNX here, at least 10 years ago. There was only 1 system with hard drives, and that was the Icon 3; all the rest were diskless and mounted the Icon 3's drives as local, via NFS I think, and each hard drive seemed to have a logical [x] number, and then nodes seemed to have an x number also, so you ended up with [logical drive]node:/path I *think*..

    -- iCEBaLM
  • Yup. I've got one. Hardware is an 80186-based system (more embedded computer) with some kind of weird-ass token ring network.

    The icons were the definitive network computer.. We had farms of them in our schools around here (belleville quinte area). There were 3 types, the Icon 1 was really old looking and clunky, the Icon 2 was newer looking but seemed to have the same hardware, and the Icon 3 was nothing but a (3|4)86 with a "Unisys Icon" sticker on it.

    The Icon 3, running QNX, would serve the OS and apps to the other Icon 1 and 2's, which were diskless, on the network, but I remember it being a 10base2 network, not token ring...

    I had great fun with these, I eventually found out how to get to shells, and I'd go into other students' directories and check out their homework :) Little did I know then that this was my first experience with unix...

    It's interesting to note, however, that these early versions of QNX did NOT have a flat file system; they were segmented into logical drives, [1]1:/path if I remember correctly.

    -- iCEBaLM
  • As much as I agree with you :-) I have some additions.

    A Windows UI blitz-quiz:

    - When do you press Ctrl-/ and when Ctrl-A to select all items in a list?
    - How do you explain that to a *user*?
    - What will happen to a file when you drag it from one folder to another?
    - How do you explain that to a *user*?
    ...
  • Yes.

    My job is being part of a team who manages over 400 sun servers. Do we have monitors hooked up to each of them though? No. We use a serial terminal server to get consoles on them. Sure the use of a GUI is nice from time to time, but to do maintenance on a system it is definitely not necessary, and I think that is the main reason the person who listed out possible criteria for unices stated that a windowing environment was not required.

    siri

  • You could say that Unix systems are those that mostly conform to a Posix standard, I suppose. Unix systems have some things in common (off the top of my head):
    • Multi-user, with different access permissions for each user. By contrast, you can log in as anything you want in Windows 98 and still have access to everything.
    • Separate address spaces. Some simpler OSes provide separate threads all running in the same address space. Separate address spaces are really implied by the multi-user requirement.
    • Monolithic kernel. The kernel is not only responsible for managing processes and IPC, but also the filesystem, scheduling, virtual memory, etc. These things are typically in "server" processes in a microkernel architecture.
    • Priority-based scheduling, with priority boosts for interactive processes. One of the distinguishing features of Unix when it was developed was that it was good at handling interactive applications by detecting their usage pattern and reducing their scheduling latency by boosting their priority.
    • File semantics. The permission structure: read-write-execute for user, group, others. Reference counted deletion: delete a file while someone's using it, and they get to keep using it; the file disappears when they're done. Directory structure: single tree-shaped namespace (modulo hard-links) with devices mounted on certain branches. Symbolic links which act just like the actual file they name (as opposed to Windows shortcuts, which don't).
    • Virtual memory. This is usually invisible to the user, so it wouldn't matter anyway. But unix also provides such things as mmap whose semantics would be very hard to duplicate without a real virtual memory system.
    There are probably lots of things I have missed, but maybe that will get things rolling...
    --
    Patrick Doyle
  • 1) Microsoft did have a version of Unix: Microsoft XENIX. In August of 1980, Microsoft announced the Microsoft XENIX OS, a portable operating system for 16-bit microprocessors. It was an interactive, multi-user, multi-tasking system that ran on Intel 8086, Zilog Z8000, Motorola M68000, and DEC PDP-11 series.

    2) Linux is not Unix. It is a free Unix-type operating system.

    Chris Hagar

  • In that case, Windows NT is *nix. It has a POSIX compliant subsystem. (MS had to write a POSIX subsystem in order to comply with arcane govt. software purchasing guidelines)



    --GnrcMan--
  • That's a good point, but occasionally there is just one incompatible device, with drivers only available for Windows OSes, and that can be the only reason not to use a *nix. Bad example here, but let's say I've got a weird SCSI interface that has drivers only for NT. I can't use *nix then, can I?

    Sometimes there are reasons to give up a bit of stability and speed.

    --
    linuxisgood:~$ man woman

  • Actually, under your definition, BeOS too is a UNIX. (The GUI can be removed as it is just a part of the app server.) I certainly doubt you would call BeOS a UNIX (you better not!) Actually, aside from the GUI part, even NT is a UNIX. I think there is a design thing: treating everything as a file, having a fairly non-modular system (Linux, modular, yeah right) (even a microkernel like Mach can fit this definition, because they usually just have a big BSD system server). In the end, it's one of lineage. Is it mainly a derivation of a classical UNIX, or is it just influenced by UNIX design?
  • Actually, that close a connection between the OS and the language does not "feel right" for a lot of people (I'd go so far as to say the majority). Anything that tightly restricts the evolution, expandability, and scope of two systems by tying them to each other does not "feel right."
  • Could one port all the standard command line utilities to NT, clone one or two of the popular shells

    This has been done [cygnus.com]. Trust me, it doesn't make NT feel very much like Unix -- it just makes it a nicer place to work (at least, for someone familiar with the Unix command line).

    For me, there are two main areas that distinguish "Unixness":

    1. The file system interface. By this, I mean inodes, ugo/rwx permissions, and a single hierarchy rooted at "/".
    2. There is one user (root) that has full access to the machine; all other users are limited to a small "sandbox"
    Note that no simple user-level gloss is going to make an NT box have these features.

    -y

  • > Solaris and IRIX are pretty attached to their windowing systems.
    > I wouldn't count this one as a meaningful criterion.

    Perhaps, although I wouldn't know, because I've never sat at either a Solaris or IRIX system that used a graphical terminal. You can do a great deal of useful work in either one by attaching with telnet or ssh and never going through a graphical window. (Granted, my experience with both is limited, but I've always ssh'd into them rather than going through an attached console.)

    In my mind this is another thing that separates Unix from NT. Granted there are ways to attach to NT remotely (e.g. Terminal Server and VNC) but they're not the same thing.
    --
  • From what I remember MS added a set of POSIX utilities to the NT resource kit so they could claim POSIX compliance for whatever government contract they were trying to land. But, they are pretty minimal. (Presumably Win2K has these as well, but I don't have a copy at hand to check.)

    The Cygwin command line utilities are much better and make the Windows environment a little more comfortable, and let you do useful things like ls -1 | grep 'txt$', but it still ain't Unix.
    --
  • Well, hardware improvements have raised runtime efficiency to high levels (compare a 486 to a Pentium III), so most people won't notice the difference. The problem was with the old and slower machines, where the speed difference was noticeable, not the newfangled boxen. I can still see the difference on my box (P200 MMX)
  • <i>remember my first time on Unix, I was in grade 9 in high school. The OS was actually called QNX on a 8086 machine called the Icon that was made here in Canada.</i>

    The Icons ran an early QNX???? All I remember about them is that my elementary schools had 3 Icon boxes and that I was *really* frustrated in grade 7 because the BASIC interpreter was incompatible with Commodore 64 BASIC (I was pretty naive back then :) )

    Dana
  • Actually, having worked with NT systems for 3 years, I can say that hardware/driver issues cause most BSODs I have seen. Bad/flaky/subpar RAM is the most frequent cause of intermittent BSODs. I've got 25+ Proliant 1600R NT servers in my network, all running in 800x600x64K with a Cirrus video card, and not a single BSOD from any machine. Ever. So I'm a fool?
  • > It was developed on Solaris. Then I had to port it to HP. Which required some changes. (I remember that grep under Solaris had options not supported by HP's grep; but they both claimed to be POSIX compliant).

    POSIX compliance has nothing to do with all utilities being the same. They are compliant about a sub-set of options. They share their own features. What's your point?
  • > I suppose if M$ was to actually provide a near full implementation of a unix shell, filesystem, and command line utilities, it could be argued that NT would be, indeed, a UNIX.

    Yes, by people who don't understand the difference between UNIX(tm) and Unix-like, along with the fundamentals of what a Unix-like OS does.
  • Actually it's more correct to say that uid 0 = power.
    You can easily change a single field location in /etc/passwd and rename the uid 0 account to god or whatever and get equivelent actions.
  • The unix philosophy has allowed me to do things that no one has ever really been able to do in NT without breaking out Visual C++. I mean seriously; things like sed, awk, grep, emacs and other things are godsends for being able to do almost anything you want.
    Linux has allowed me to use shitty hardware that NT just plain wouldn't run on (well, OK, the new versions of NT; I had a CD of NT 3.1 and it worked on my machine, but I took it off 2 hours later). A GUI interface is nice, but it doesn't make up for the lack of efficiency and power that makes NT evil.
  • You know, this is almost really funny. People have always said that native code is much faster and that there are certain advantages to it. Why would everyone start using Tcl, perl, and Java all of a sudden? All the things that I do with so-called "platform independent" languages make the thing dog slow. Case in point: the gimp. I can tell a big difference between native executables and perl on my system. When I do something perl-related that is even built in for the gimp, it takes roughly 2-3 minutes of HD grinding. Also, things like dpkg, which relies heavily on perl, take up a lot of time as well. Tcl is in the same boat. Java is not exactly something that I use on a regular basis, nor are there many apps for linux that actually have java as their main code included in almost any distribution at all. All the java stuff I have seen runs dog slow, and this is stuff that is supposed to be highly optimized.
    I think that C++ is closest to what you could consider "cross platform": there is an ANSI standard, and a compiler for almost any system. I routinely write and run programs between win32 and linux all the time and never see a problem. Plus my code is much faster.
  • Ah, evangelism. It's so...evangelistic.

    Maybe Sun doesn't want to "get it" yet, because they're still making lots of money with their _increasing_ market share! (besides which, Linux is terribly immature compared to Solaris, HP-UX, and so help me, AIX)

  • Different flavors of unix share a lot of the same design elements. Sure, you can stretch Windows so that it superficially resembles Unix, but you cannot recreate the startling formal elegance of a unix system just by adding ls.exe, grep.exe, and awk.exe to your c:\winnt\system32 directory.

    The thing about Unix that is most clear, that sets it apart from other OS's, is its well thought-out design. It was noticeably not designed so that any newbie could use it. I'm sure that Microsoft came to realize the high cost of 'newbie usability' when it had to resort to releasing another successor to Win98 instead of merging the 9x and NT OS lines with the release of 2000.

  • by elflord ( 9269 ) on Saturday March 25, 2000 @09:13AM (#1173171) Homepage
    I'm not clear on whether high-level porting really is that hard -- if it's from one UNIX to the next. Of course, porting something like GCC that requires a lot of low level code is nontrivial.

    Provided you are working high level, cross-platform APIs ( which should behave the same on all UNIXs ), and you don't try to code to several different APIs [1] then I don't see why it should be so hard. There are differences in the way different UNIXs handle many things [2] but there are toolkits like Qt and glib that take care of the low-level portability stuff and provide their own data-types and functions.

    Just for fun, I grepped for #ifdef directives in krn and there were hardly any ( and the ones I saw weren't about portability issues ).

    [1] this is why the vim code is so complex -- they are simultaneously coding for motif, gtk, athena, ncurses, and win32 without using any high level portability tools ( because there's none that are portable across all target platforms ).

    [2] An example -- many C++ compilers still have either nonfunctional or incomplete STL implementations. Another example -- some UNIXs ship with different DBMs ( ndbm,dbm,gdbm ).

  • by V. Mole ( 9567 ) on Saturday March 25, 2000 @09:00AM (#1173172) Homepage

    Well, not totally, but there are so many errors and misconceptions in the question I'm not sure how to answer.

    Since there are now so many different flavors of UNIX out there (Linux, Free BSD, AIX, Solaris, AT&T UNIX, etc...) what do they all have in common that lets these all be called UNIX? Programs written for one flavor of UNIX typically cannot be ported to another without considerable effort.

    Wrong. Well-written programs can be easily ported from one unix to another. The problem is that many programs are written by people who have a relatively poor understanding of how to write portable code, and no idea how to distinguish the C standard, the POSIX standard, stuff that's not POSIX but almost universally available on Unix and Unix-like systems, and stuff that is specific to a particular implementation.

    The features offered by the different implementations vary widely: some are more secure than others, some cluster better than others, some offer journaling file systems, some are more robust.

    None of which has anything to do with writing portable code. For example, the C interface to the file system (fopen(), fprintf()) and the POSIX interface (open(), read(), write()) make the underlying file system transparent. Of course, if you're writing a tool to manage a cluster, it's going to be implementation specific.

    The differences between the different kinds of UNIX seem to be as great as the differences between any particular implementation and other OSs.

    Sorry, you've obviously never actually had to write programs that worked on a wide variety of systems. I used to work on a large product that ran on a variety of Unix systems, OpenVMS, and NT. IIRC, the only difference between the Unices was different options to mmap() and dealing with an old version of SunOS that didn't implement POSIX signals quite correctly. (Of course, there were many differences in the system headers, but those are supplied by the vendor, and are there precisely so that you don't have to worry about internal differences.) FWIW, the OpenVMS version was pretty easy, because even though the system calls were totally different, most of the concepts mapped pretty well. NT, on the other hand, was a complete pain in the ass.

    Now obviously, one *can* write code that is specific to a particular Unix implementation. And no one will argue that at the admin level, things aren't chaos.

    Could one port all the standard command line utilities to NT, clone one or two of the popular shells, set up the directory structure in the standard UNIX layout and call it Microsoft UNIX?

    No, because you haven't changed the syscall interface. Cygwin comes a lot closer, but was also a lot more work than what you've proposed.

  • by platypus ( 18156 ) on Saturday March 25, 2000 @06:44AM (#1173173) Homepage
    You can find many of the popular unix tools for windows at http://www.gnusoftware.com [gnusoftware.com].

    You'll find bash, grep, cygwin, emacs ...

    They've got some nice things there, look at geoshell [geoshell.com]'s screenshots, you wouldn't believe it's windows.
  • by DragonHawk ( 21256 ) on Saturday March 25, 2000 @03:08PM (#1173174) Homepage Journal
    Anything that tightly restricts the evolution, expandability, and scope of two systems by tying them to each other does not "feel right."

    You are completely correct in that statement, but it has nothing to do with C or Unix. The evolution, expandability, and scope of the two systems in question have been in no way restricted. Unix has continued to expand and evolve well beyond the original scope of the C language as it eventually was defined in the ANSI standard. Likewise, C is considered the most portable language ever written, with implementations for just about every platform that has a compiler, and it has continued to evolve and be expanded past the original Unix API. It has even been expanded to include object support in C++, which was in turn used to write BeOS, a fact I'm sure you are no doubt aware of, given your handle.

    So, if you have a legitimate complaint, by all means voice it, but otherwise, keep the FUD to yourself, K?

    (Note to moderators: Before moderating this down as "Flamebait", check out this guy's posting history. He makes rabid Linux supporters seem tame by comparison.)
  • by Brento ( 26177 ) <brento@@@brentozar...com> on Saturday March 25, 2000 @06:35AM (#1173175) Homepage
    Who gives a rip about the command line utilities - just make NT as stable as Unix, and you can call it Grandma Pearl's Home-Brewed Operating System with Extra Apples, for all I care!

    Seriously, while you're at it, you should ask exactly what Windows is. It's in the same boat - there's several flavors of a single OS that really don't have much in common. Windows CE and Windows NT don't share much except a start button, when it comes down to brass tacks.
  • by GauteL ( 29207 ) on Saturday March 25, 2000 @06:38AM (#1173176)
    This is just right off the top of my memory, so I could be mistaken, but as far as I remember you have to pay someone (The Open Group?) for the right to call something Unix. It involves certification. I've heard that it needs to have evolved from the UNIX codebase, but this doesn't make very much sense, as it would then be impossible to write a new Unix from scratch that still conforms to the POSIX standards and is source-compatible with the traditional Unix variants. Perhaps someone can enlighten me here, but would it be possible to pay that organization for certification of Linux? Not that I think it would matter, because Linux is now bigger than Unix anyway, and it seems very important for other Unix variants to include some sort of compatibility layer to be able to run Linux binaries. I did think it mattered 2 years ago, though.
  • by jonathanclark ( 29656 ) on Saturday March 25, 2000 @12:53PM (#1173177) Homepage
    Recently having ported software between Linux, FreeBSD and NT, I have developed my own idea of what a ``UNIX'' is. It's simplicity, most of all. Everything is a file, and most calls that work on file descriptors work on _all_ file descriptors. You can make a select() call on a file, a pipe, and a socket. This is impossible on NT, because files are HANDLEs, sockets are SOCKETs, and HANDLEs from anonymous pipes cannot be used for select()-like calls (only named pipes there). Why this diversity in the API?

    Being a long time Unix and Windows developer, I don't see any advantage to the developer to having one ioctl() that works with all devices. While it makes sense on the kernel (one api to export), it is total nonsense for the end developer.

    Type 'man ioctl': there is absolutely no useful information given. It's a generic function, and the developer must explicitly know what kind of device he is talking to and somehow find the right parameter values to pass in to make it do what he wants. ioctl documentation can never be complete.

    I would argue that it's better to have more functions that are strongly typed and can be documented separately. If the developer wants abstraction, let him build it into his application - don't force it on him.
  • by Inoshiro ( 71693 ) on Saturday March 25, 2000 @02:55PM (#1173178) Homepage
    Windows has many "features" that are consistent across the various implementations:
    • Features no one understands
    • Backwards compatibility with software written 20 years ago
    • CP/M-like file system hierarchies
    • Physical representation of hardware over that pansy "abstraction" stuff
    • A browser in every DLL and app via IE
    • Closed source for increased security

    But don't just go by my word. Go spend a few K on a Windows NT server licence, hardware, and documentation, then play with it. You'll know it's Windows because it feels "right," and because it flashes its monitor a very pretty sky blue to let you know it wants your attention. Surveys say that 60% of everyone's favourite colour is blue.
    ---

  • by TummyX ( 84871 ) on Saturday March 25, 2000 @07:07AM (#1173179)
    Windows CE and Windows NT don't share much except a start button, when it comes down to brass tacks.

    you mean apart from the fact that CE's kernel is based on NT's, and that Windows CE basically supports 95% of the Win32 APIs?
  • by Money__ ( 87045 ) on Saturday March 25, 2000 @08:07AM (#1173180)
    from the knowing-your-OS dept.
    Money__ asks: "Since there are now so many different flavors of Windows out there (1.0, 3.1, 95, 98, NT, 2K , etc...) what do they all have in common that lets these all be called Windows? Programs written for one flavor of Windows typically cannot be ported to another without considerable effort. The features offered by the different implementations vary widely: some are more secure than others, some cluster better than others, some offer journaling file systems, some are more robust. The differences between the different kinds of Windows seem to be as great as the differences between any particular implementation and other OSs. Could one port all the standard GUI utilities to GNOME, clone one or two of the popular GUI features, set up the directory structure in the standard Windows layout and call it WINDOWS/UNIX?"

    Hmmmmmm
    _________________________

  • by Heretik ( 93983 ) on Saturday March 25, 2000 @06:34AM (#1173181)
    I suppose if M$ was to actually provide a near full implementation of a unix shell, filesystem, and command line utilities, it could be argued that NT would be, indeed, a UNIX. The one thing I wonder about is the kernel. UNIX kernels are just that.. UNIX kernels. I'm no kernel hacker by any means, but it seems to me that for a kernel to be a UNIX kernel, it has to have a certain structure and interface, which makes it a UNIX kernel. So in that respect, NT could never really be a UNIX.
  • by codealot ( 140672 ) on Saturday March 25, 2000 @06:59AM (#1173182)
    ...though some (including Microsoft) have tried. It may pass POSIX compliance tests, it may run UNIX software, but there are certain differences that will be difficult or impossible to overcome.

    First is the security issue. Unix has the concept of a superuser (root). NT has role-based security: "Administrator" is not special other than having more rights than user accounts, including the right to manage user rights.

    The key differences are that an NT Administrator cannot open an arbitrary NTFS file regardless of its mode bits, as a Unix superuser can, and that the Administrator account cannot arbitrarily pose as other users. There is no working setuid() call. (POSIX requires setuid to exist, but it doesn't have to work... it may return EPERM unconditionally. That is just what Microsoft's POSIX subsystem does.)

    Second is the executable file format. Most modern OSes have standardized on ELF. NT is one of the holdouts, still using PE (a variant of COFF). Shared libraries on NT (i.e. DLLs) are loaded and relocated by the OS, not in user space as ELF shared objects are. And DLLs have annoying limitations, like requiring data symbols to be imported/exported.

    Third, file semantics on NT are tailored for Win32. Ever try to remove/rename a NTFS file while it is open in some process? You can't. On Unix, linking and unlinking of open files is permitted, and many utilities depend on that behavior.

    While it may be possible to certify NT as "POSIX-compliant" (or even get Unix branding, I don't know), it will never work truly like a Unix system. There are just too many core differences.
  • by yebb ( 142883 ) on Saturday March 25, 2000 @07:06AM (#1173183)

    The philosophy is a result of more than twenty years of software development and has grown from the UNIX community instead of being enforced upon it. It is a de facto style of software development. The nine major tenets of the UNIX Philosophy are:

    1. small is beautiful
    2. make each program do one thing well
    3. build a prototype as soon as possible
    4. choose portability over efficiency
    5. store numerical data in flat files
    6. use software leverage to your advantage
    7. use shell scripts to increase leverage and portability
    8. avoid captive user interfaces
    9. make every program a filter

    The Ten Lesser Tenets

    1. allow the user to tailor the environment
    2. make operating system kernels small and lightweight
    3. use lower case and keep it short
    4. save trees
    5. silence is golden
    6. think parallel
    7. the sum of the parts is greater than the whole
    8. look for the ninety percent solution
    9. worse is better
    10. think hierarchically
  • by Paul Komarek ( 794 ) <komarek.paul@gmail.com> on Saturday March 25, 2000 @07:07AM (#1173184) Homepage
    I can't believe I didn't see this while scanning the comments--maybe I missed it. A huge part of UNIX is the filesystem. The UNIX filesystem has got to be among the most beautiful device abstraction layers ever built. I don't need to know how many drives are in the machine--or I can add extra drives, change the physical location of data, and keep the same paths and filenames.

    I can access my printer as a file, my serial ports as a file, my memory as a file. Heck, I can mkfs /dev/ram and make filesystems in files on my filesystem. Then there are links, especially symlinks. And networked filesystem types that look local to the user (slow with lots of latency, and some race conditions, but local). You can even access the kernel via the filesystem.
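    That device abstraction is easy to show in a few lines of C (a sketch of mine, not Paul's; /dev/zero and /dev/null are safe devices to poke at, and the same read() and write() calls work on them as on any regular file):

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[16];

        /* A device node opens just like a regular file. */
        int fd = open("/dev/zero", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* read() neither knows nor cares that this is a device. */
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read %zd bytes from /dev/zero\n", n);
        close(fd);

        /* Writes to /dev/null vanish -- same write() call as for any file. */
        fd = open("/dev/null", O_WRONLY);
        write(fd, buf, sizeof buf);
        close(fd);
        return 0;
    }
    ```

    Swap in /dev/lp0 or a tape device and the calls don't change, which is the whole point.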

    NTFS and VFAT don't have symlinks and the hardware is up-front and ugly (ever try moving an app from C: to D:?). I'm sure there's a lot more, but these things alone are horrible. What happened to device abstraction, that I have to track my hardware when naming files?

    -Paul Komarek
  • by Master Switch ( 15115 ) on Saturday March 25, 2000 @12:47PM (#1173185) Homepage
    Hmmm, you ask a tough question, but here's a go at an answer.

    UNIX encompasses more than just a set of API's, and user shells. UNIX has more to do with an OS design philosophy than with any particular implementation. At the outset, UNIX strives to provide an environment made of small utilities that can easily be used together to accomplish a task. This is true from the kernel design, to the typical C API's, out to the actual user environment. From a kernel perspective, most UNIX's have a virtual file system, virtual memory system, and a process swapper. There is much more than this, but these are the three main components. By utilizing features of each of these basic parts, it is possible to implement interprocess communication (pipes, shared memory, etc), security (mostly via the VFS, since most everything in UNIX is seen as a file), and time sharing (via the process swapper).
    The next layer would be the C libraries that most UNIX's come with. While not directly tied to the OS itself, most C libraries will heavily reflect the OS they live on, since they are the common gateway for making system calls, the main way to invoke system functions. It is with system calls that a program can interact with files and access system resources. The OS will handle the scheduling and access control from behind the scenes. The C library is a way to ask for resources.

    But, what most people think of when they think of UNIX, is the user environment. Unfortunately, this is misleading. Everything from Windows to OpenVMS, to OS/390 MVS can be made to look and smell like UNIX, but that doesn't make them UNIX.
    What the user sees is a simplified (if you can believe that grep, sed, and awk, ad nauseam are simple :)) playing field, made of lots of tools that easily allow the user to interact with files, and hence, the computer. Since this is the layer a user spends most of his or her time with, this is what people have come to identify when they say UNIX.

    The moral of the story is that UNIX really is a "handle" that is used to encompass an approach to OS design. UNIX is a way of viewing how things should be shared and managed. That is why UNIX runs on almost every platform. UNIX is really a meme that dictates how system resources are controlled, not one of how a system should be architected from a hardware perspective. UNIX is a way for multiple users to access a single system in a quasi-realtime, interactive fashion. It lays out such things as scheduling, access control architecture, and resource presentation. If an OS follows this design philosophy, then that makes it more a UNIX than what its user shells look like.
    That is why Linux is a UNIX, without having any blessed UNIX C code in it. It's an architecture built on the UNIX philosophy. Just remember to never let a purist hear you call Linux UNIX though. That, unfortunately, is a matter of religion, best left for another discussion.
  • by int69h ( 60728 ) on Saturday March 25, 2000 @06:45AM (#1173186)
    IMHO the biggest thing all flavors of Unix have in common is the "everything is a file" design. So the answer to your question about porting all sorts of things to NT and calling it MS Unix is no. Everything in NT is an object.
  • by ctj2 ( 113870 ) on Saturday March 25, 2000 @09:11AM (#1173187) Homepage

    I've been working with Unix now for 20+ years. I've worked in the kernels of Cray supercomputers and been driven batty by a BSDish system running on a no-name box for a network logger. The equipment has ranged from a 386 that took 5 minutes to boot Linux to an SGI supercomputer cluster. And there are a few things that they all had in common. These common "features" are, in my not so humble opinion, what makes something "Unix".

    1. A command line interpreter, and a way to get to it. Be that a TTY login, rlogin, telnet, or XDM giving me an xterm.
    2. Pipes. The ability to trivially feed the output of one program to another. Where the default input is "stdin" and the default output is "stdout" (see the non-Unix OSs from IBM/CDC/Cray, where each program required you to define an input and output file)
    3. A tools based approach to solving problems. Rather than loading a file into a spreadsheet and then clicking away for a few hours. Instead we have "sed ... | awk | lpr"
    4. File based access to devices. /dev/* is a very powerful access method. fsck requires no special kernel support: given access to the special device, a user-land program can do a filesystem repair. Just think of it this way, 'dd', 'cat', 'fmt', 'awk', 'sed' all can write tapes.

      For that matter, the user can't tell the difference between an IDE cdrom and a SCSI cdrom.

    5. A method of attaching extra storage as part of the existing filesystem. "mount /dev/da7s1 /usr/homes/diskhog" vs "Yo, Mr Hog, please use g: for your files now"
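    Point 2 above is worth a concrete sketch: pipe() plus fork() plus dup2() is essentially all the shell itself does to wire something like `echo | wc` together. A minimal example (my own illustration, with arbitrary data and an arbitrary command):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        pipe(fds);               /* fds[0] = read end, fds[1] = write end */

        if (fork() == 0) {
            /* Child: make the pipe's read end its stdin, then run wc. */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]);
            close(fds[1]);
            execlp("wc", "wc", "-c", (char *)NULL);
            _exit(127);          /* reached only if exec fails */
        }

        /* Parent: feed data into the pipe, exactly as the shell does. */
        close(fds[0]);
        const char *msg = "hello, pipe\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);           /* closing the write end is the child's EOF */
        wait(NULL);
        return 0;
    }
    ```

    The child never knows its input came from a pipe rather than a file or a terminal; it just reads stdin.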

    While the API is important, and I wouldn't want to give up my X windows or bash/tcsh, I've run on Unix boxes where those are different or missing. In some cases my tools of choice were missing and I had to bootstrap 'gcc' into place and from there add the extra tools (sed/awk/tcsh/bash) that made my life comfortable.

    Unix is a style. A way of solving problems. We don't think in terms of "monolithic (Office)" programs that do almost what we want, but instead we think in terms of solving a piece of the problem, chipping away till there is nothing left. And in the process we find that we have a dozen new tools that people will use in ways we never considered.

    Perl was never intended to be a web server, but the tool is so powerful that it now is. That and much more. Every tool that Unix has is used in ways that its creator never intended. Microsoft Office will never be more than what it is, What You See Is ALL You Get.

    I leave you with this quote then: "Like a precious metal, UNIX is being beaten and molded into whatever forms are most pleasing to the owners." -- Lee A Butler

  • by mattdm ( 1931 ) on Saturday March 25, 2000 @07:17AM (#1173188) Homepage
    The technical answer is found at http://www.unix-systems.org/ [unix-systems.org]. Check out What is Unix [unix-systems.org], the FAQ [unix-systems.org], and the register of products [opengroup.org]. Nothing else is technically Unix.

    What is Linux, then? Well, it's "unix-like", despite the FAQ's claim that this term is a trademark violation. Personally, I'm sceptical of the continued validity of the "UNIX" mark (once something enters common usage, it can't be a trademark), but there've been no court challenges, so it stands.


    --

  • by Oestergaard ( 3005 ) on Saturday March 25, 2000 @07:10AM (#1173189) Homepage
    Others have already answered the question with reference to POSIX, but I just thought I'd throw in my two Euro as well...

    Recently having ported software between Linux, FreeBSD and NT, I have developed my own idea of what a ``UNIX'' is. It's simplicity, most of all. Everything is a file, and most calls that work on file descriptors work on _all_ file descriptors. You can make a select() call on a file, a pipe, and a socket. This is impossible on NT, because files are HANDLEs, sockets are SOCKETs, and HANDLEs from anonymous pipes cannot be used for select()-like calls (only named pipes can). Why this diversity in the API?
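    That uniformity takes only a few lines to demonstrate (my own sketch): one select() call watches a pipe and stdin together, with no per-type wait primitive, whereas Win32 would need different machinery for each handle type.

    ```c
    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        pipe(fds);
        write(fds[1], "x", 1);          /* make the pipe's read end readable */

        fd_set set;
        FD_ZERO(&set);
        FD_SET(fds[0], &set);           /* a pipe... */
        FD_SET(STDIN_FILENO, &set);     /* ...and whatever stdin happens to be */

        struct timeval tv = { 0, 0 };   /* poll, don't block */
        select(fds[0] + 1, &set, NULL, NULL, &tv);

        /* One wait primitive, no matter what kind of descriptor it is. */
        printf("pipe readable: %s\n", FD_ISSET(fds[0], &set) ? "yes" : "no");
        return 0;
    }
    ```

    The same loop extends to sockets unchanged; the descriptor's type never leaks into the waiting code.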

    Running a daemon simply requires you to start up the executable with an "&" if it doesn't support forking to background by itself. I have a routine that does this on UNIX as well as on NT. The UNIX code is perhaps 5-10 lines, and the NT code is roughly 150. On NT you must specifically talk to the service control manager, ask it to ask your program to ask it to ask the program to do all kinds of strange things. Horrible, and inflexible to the extreme.
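    For comparison, the entire Unix side of such a routine is roughly the classic daemonization idiom below (a sketch of mine under the usual assumptions, not the author's actual code; a real daemon would enter its service loop where this one exits):

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0)
            return 1;

        if (pid > 0) {
            /* Parent: report and return to the shell; the child lives on. */
            printf("daemon launched as pid %d\n", (int)pid);
            return 0;
        }

        /* Child: detach from the controlling terminal and session... */
        setsid();
        chdir("/");

        /* ...and point stdio at /dev/null. */
        int fd = open("/dev/null", O_RDWR);
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);

        /* Service loop would go here; this demo just exits. */
        return 0;
    }
    ```

    No service control manager, no registration protocol: the process simply arranges its own environment and keeps running.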

    The system calls on UNIX take a minimum necessary number of arguments, and perform some action fairly much as you would expect it to. An average system call on UNIX probably has, say, 3-4 arguments. I haven't counted the average on NT, but I'd guess it's 6-7 arguments there. This is not because NT is more advanced, but because IMO the design of Win32 is flawed.

    As an example, to start up a program on UNIX, you'll write something like (dumbed down):
    if (!fork())
        exec("my_program", arguments, environment);
    Two system calls that perform a very clear and simple task each, taking 0 and 3 arguments.
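    Un-dumbing that only slightly gives a complete, runnable version (the command run here is arbitrary):

    ```c
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        if (fork() == 0) {
            /* Child: replace ourselves with the new program. */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);   /* reached only if exec fails */
        }

        /* Parent: wait and collect the exit status. */
        int status;
        wait(&status);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }
    ```

    Because fork() and exec() are separate, the child can rearrange descriptors, signals, or its environment between the two calls -- that gap is where pipes and redirection live.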

    The similar code on NT is something along the lines of a call to CreateProcess() which takes 10 (ten!) arguments. These arguments are: program name, command line and environment as on UNIX. Besides, you'll have to pass arguments such as current directory, process security attributes, thread security attributes, filehandle inheritance flag, creation flags, etc. etc. My favorite is that you have to pass an argument specifying the Window Size (!!) of your new process... I nearly pissed my pants when I saw that.

    UNIX system calls are fairly consistent. Usually they will return either a 0 pointer, or the integer -1, if the call fails (depending on whether the call returns a pointer or an integer). On NT the calls return either INVALID_PROCESS_HANDLE, INVALID_FILE_HANDLE, INVALID_SOCKET, FALSE, 0, -1, or whatever the failure-return-code-of-the-day was when some call was last rewritten. Error codes can be retrieved from errno on UNIX, while on NT they come from GetLastError() usually (with some exceptions I've happily forgotten about), unless of course it was a socket call that failed, in which case the errorcode can be retrieved from WSAGetLastError(). Sigh.

    The point I'm trying to make here is, that to me UNIX stands for a _fairly_ consistent and well designed API. Win32 is neither well designed nor consistent, and porting command line utilities to NT is not going to make it UNIX, not by a long shot.
  • by raph ( 3148 ) on Saturday March 25, 2000 @10:53AM (#1173190) Homepage
    A more serious followup. While the tone of this comment was flip, the content was actually serious.

    The basic cause of the backspace problem is a different ASCII code for the key in DEC terminals (where it's 0x7F) and others such as Sun systems (where it's 0x08). Unix is configurable (using the stty command) to deal with either, so that it didn't really matter too much.
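    That configurability lives in the terminal driver: the erase character is a single byte of per-terminal termios state, which is exactly what `stty erase` changes. A small sketch (mine, not raph's) that reads it back, falling back gracefully when stdin isn't a terminal:

    ```c
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;

        /* The erase key is per-terminal state, not hard-wired into Unix. */
        if (tcgetattr(STDIN_FILENO, &t) == 0)
            printf("erase char is 0x%02x\n", (unsigned char)t.c_cc[VERASE]);
        else
            printf("stdin is not a terminal\n");
        return 0;
    }
    ```

    On a DEC-convention terminal that prints 0x7f; on a Sun-convention one, 0x08 -- which is precisely the mixture of traditions being described.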

    A modern Unix-like[1] system is a blend of many different threads of development, with both Sun and DEC well represented. So, the Linux console follows the DEC convention, while the xterms in most distributions follow Sun (Debian is the major exception here).

    So, in a system with the backspace problem, you have many factors coming together:
    • A mixture of traditions from many different branches of Unix development
    • A chain of concern for backward compatibility reaching back to the days of the VT-100
    • Configurability to make it work most of the time

    And, no from-the-ground-up reconsideration of the system, during which the backspace problem would certainly be fixed.

    If that doesn't define Unix, I'm not sure what does.

    [1] Yes, I know it's an "abuse of the trademark." The Open Group can stuff it.
  • by raph ( 3148 ) on Saturday March 25, 2000 @08:14AM (#1173191) Homepage
    You know it's Unix when the backspace key often performs an action other than deleting the character to the left of the cursor.

    When (actually if) this problem is fixed, the system will have changed so much from Unix that it probably wouldn't be recognizable to Thompson and Ritchie.
  • by GeorgeH ( 5469 ) on Saturday March 25, 2000 @07:56AM (#1173192) Homepage Journal
    A unix system is like an orgasm. When you have one, you'll know it.
    --
  • I think the most important thing to realize is that no single thing defines what makes Unix be Unix instead of Just Another Operating System. Sure, there is POSIX, but POSIX doesn't cover everything. Sure, there are the "standard" Unix shell tools, but those can be ported. You can't nail Unix down in any easy definition. Yet, Unix inevitably feels like The Right Thing to those who know it. If you like cliches, Unix reminds me of the old line "I don't know art, but I know what I like."

    That being said, I'd like to touch upon a few things that make Unix what it is:

    The design of Unix is driven by synthesis. You don't create a specific tool to solve a particular problem; you break it down into smaller, general problems and write general tools to solve them. You then combine those tools to solve the original problem -- but you can continue using them afterwards.

    This leads us to The Unix Philosophy [hex.net]. Anything you call "Unix" or "Unix-like" will adhere to it. However, the Unix philosophy is a set of design goals, not a system definition. Something can follow those goals without being Unix. So that doesn't cover everything.

    Let's start with the filesystem. As others have said, a key element of Unix is the single filesystem. Unix must have a root filesystem mounted at /, and cannot function without it. It is more than not being able to do anything useful without the programs in the root; it is the fact that the Unix filesystem is a large part of the Unix API.

    Additional filesystems are spliced into this single presentation, not mounted as separate trees. System hardware is abstracted and presented using file system entries. These are things that cannot be done if your OS doesn't support them. Then you have the organization of the files in the Unix filesystem. Programs in /bin, configuration data in /etc, devices in /dev, temporary files in /tmp, "user files" in /usr. None of these are mandated by the kernel or the utilities, but they are definitely old friends to a Unix hacker like me.

    Unix processes also behave in a certain way. Process spawn overhead is low and context switching is fast. Signals and exit codes are used for IPC. fork() and exec() are separate system calls.

    Unix treats text as data and data as text. Configuration files are generally human-readable. You can "cat" a binary file without the OS doing end-of-line manipulations. Any particular meaning of a character (^D for EOF, e.g.) comes from the terminal driver, not the I/O mechanism.
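    One concrete consequence: on Unix, fopen(path, "r") and fopen(path, "rb") are the same call, because the I/O layer never rewrites bytes. A small sketch (the temp-file path is my own arbitrary choice):

    ```c
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/tmp/unix_text_demo";

        /* Write three bytes, including a CR and an LF. */
        FILE *f = fopen(path, "w");
        fwrite("a\r\n", 1, 3, f);
        fclose(f);

        /* Read them back in "text" mode: Unix hands back exactly 3 bytes. */
        char buf[8];
        f = fopen(path, "r");
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        remove(path);

        printf("read %zu bytes\n", n);  /* Win32 text mode would report 2 */
        return 0;
    }
    ```

    No end-of-line translation, no text/binary mode split: a byte in is a byte out, and any special meaning belongs to the terminal driver.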

    Lastly, Unix was implemented in C, and C was designed to implement Unix. Contrast this with other OSes, where the language you're programming in and the system library are generally completely separate things. This synthesis (to borrow from the Mazda ad campaign) "just feels right".

    While I'm on the subject, I'd like to address two things that Unix explicitly isn't:

    Unix is not a trademark. I'm sure The Open Group doesn't agree with me, but Unix was around before they were, and will continue to be around long after they are gone. They control who can legally put "UNIX" on their product, but that is a matter of lawyers and money, of which Unix cares about neither.

    Unix is not a particular source implementation. There are very Unix-ish things which have not one line of AT&T or BSD code in them, and there are things totally not Unix which contain BSD code. MS-Windows is one of them.

    I forget who said it, but if you're looking for one line answers, then this fits best:

    "Unix isn't so much an operating system as it is a painstakingly compiled oral history of the hacker culture."
  • by RNG ( 35225 ) on Saturday March 25, 2000 @06:42AM (#1173194)
    While there's probably no definitive answer to this question, I would suggest that a loose definition of UNIX would include the following:
    • POSIX compliant system API
    • Availability of traditional UNIX shell tools
    • Ability to run without a windowing system
    • Modular design
    The question is like asking "What's a car?" BMW, Mercedes, Honda, GM and lots of others fit the mold without there being a 'definitive' car. I think the same applies to UNIX: there's lots of variants, based on the same basic (and time-tested) design. I've done some software porting across different UNIX systems and in my opinion, the differences between these systems were blown out of proportion by the marketing machine from Redmond (and others). There are differences, but they're manageable ...

    Also, while this is not as true anymore as it was a few years ago, I would say that an important part of the UNIX philosophy is the fact that most configuration parameters (and other useful stuff) are stored in plain text files (yes, Solaris and AIX don't do this anymore, but the basic idea is still there). You could port all the shell utilities to NT only to find out that they're basically useless as everything is stored in some binary file that you can't really process with your trusted text tools. ASCII is portable and accessible; a fact that UNIX really drives (or at least should drive) home ...

  • by codealot ( 140672 ) on Saturday March 25, 2000 @08:52AM (#1173195)
    It's a well-kept secret that the NT Executive has a single-rooted hierarchy, containing device nodes, named pipes, and filesystem roots. It even supports mount points, sort of, via a concept called reparse points.

    Of course none of this does us any good, because the Win32 subsystem does its best to hide whatever elegance lives in the NT kernel. You can get glimpses of it... for example, try opening the file \\.\PhysicalDrive0 for reading (don't write to it, that's your raw disk sectors!).

    You get the feeling that NT could have been something better if they didn't have to make it compatible with Win9x.

    Oh, and BTW, NTFS on Win2k does symlinks too.
