GNU is Not Unix

Dr. Dobb's Journal On Hurd

wiredog pointed out an article currently running in Dr. Dobb's Journal that talks about the Hurd: what it is, what it is meant to do, and what's cool about it. The article starts off slow, but then gets into some good info.
  • by Anonymous Coward
    Thanks in advance.
  • by Anonymous Coward
    HURD would have been a really cool system, if Richard Stallman had got his priorities right in the first place and written the GNU kernel before he wrote all the system tools! No, GNU did it the other way around, and now they've spent what, 5 years? writing a kernel?!

    We all know full well what happened. Linus Torvalds waltzed right in, wrote the kernel for them single-handedly, and wrapped the GNU tools around it. Linux pre-empted HURD by a good 5 years, and it has (quite rightly) become a mainstream operating system.

    Starting the HURD just seems more like sour grapes on the part of Stallman, to be honest. When he realised how popular Linux and Linus were becoming, he decided to try and steal their thunder. Sadly, HURD is too complex and too late to make an impact. So now RMS resorts to calling Linux "GNU/Linux" to try and get back a little of the glory that Linux has.

    HURD? I wouldn't bother.
  • by Anonymous Coward
    Hey! Mach is already in use for OS X and has been in use for quite some time on NeXTs.
  • by Anonymous Coward
    There was talk recently about moving the Hurd to the L4 microkernel; L4 apparently runs faster and has more developers.

    It would be a big job, and it's just a discussion so far.
  • 1) Speed problems are irrelevant in the long run. Moore's law will overcome them. (This assumes that the basic design is good, so you don't have to keep piling on more and more complexity to correct basic flaws, like DOS and Windows.)

    2) You might be right about that. Breaking a big program written to solve a complex problem up into a lot of little modules doesn't give you a simple program, it gives you a complex program with more of the workings exposed. Computer Science professors will love that because it makes it easier to teach people about it, but it doesn't necessarily make the program any better.

    The advantages I see to the microkernel are (a) it should make it easier to get new volunteers to sign on to write or improve one daemon, and (b) you can choose not to load daemons you don't need. This doesn't mean much to a workstation or a full-purpose server, but it might really slash the overhead in a specialized application that doesn't need so many services. E.g., for a PDA you can dump the standard keyboard and video daemons, replacing them with something appropriate to the limited hardware, and you can lose a lot of stuff that isn't going to be used at all. But PDAs can't afford an inefficient kernel yet. And for a larger specialized appliance, how often will the savings be worth the trouble of using a non-standard, untested load?

    It will be interesting to see how this works out... Mark Moss
  • Can someone explain the trade-offs between the two models? From a very high level (I'm no kernel hacker), they seem like they both solve the issue of maintainability.
  • by Anonymous Coward on Wednesday November 01, 2000 @03:37AM (#658714)

    The HURD is an ambitious project which has had a rocky history, but there remain several black marks against it which seem to me to be fundamental flaws, inherent in what it is and what it wants to do.

    Firstly, there is the fact that it is based upon an implementation of the Mach microkernel, which has been the favourite of OS courses but which has been shown to be ridiculously inefficient in real-world situations where performance rather than elegance is a major factor. You need to have a fast kernel in any case, and Mach just can't cut it. If the HURD is to succeed it needs to move onto a more serious architecture rather than some ivory-tower toy kernel.

    Secondly, the current implementation of its server system is prone to an inordinate number of deadlocks and race conditions under heavy loads, partly due to the Mach kernel, partly due to some sloppy coding in some of the IPC code. This means that whilst the HURD is fine for the casual home user, under heavy loads (such as running a web server) you are likely to get a lot of system lag or even freezing.

    Until these serious flaws are sorted out, the HURD is still in the "hobbyist" category rather than the "real world" one. It's nice to study, but it needs to have a lot more work before it's ready for heavy use.

  • If you want to complain about that, why not point to the microkernel based AmigaOS, which may still rate as the most popular microkernel OS, despite not having been sold in nearly a decade.

  • No Further Message
    The real Threed's /. ID is lower than the real Bruce Perens'.

    --Threed
  • >>Mach has only seen limited maintenance over the last few years.

    ummm, what about Apple's Darwin, which is based on the Mach 3.0 microkernel? I realize that Apple is not a huge company, but that still seems like more than limited maintenance to me.
  • Hundreds of comments, and everyone here thinks Stallman hasn't written an OS. I guess no one has heard of emacs (EMACS Makes A Computer Slow...)

    I'm sickened by the current generation of slashdotters and their pathetic posts; will no one build a Beowulf Cluster of these things? Where are my GRITS??? :)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
    I've always had a hatred for Dr. Dobb's. In the olden days, it was small, but as time went on, I found more and more of their articles irritating. This article wasn't nearly as irritating as most. My biggest problem with their magazine is that they target it at programmers who work in bullshit fields, i.e., programmers who really don't care all that much about what they do. They assume professional == know what they're doing == care about their profession. If Dr. Dobb's was a real programmer magazine, their articles would NOT be predominantly Windows oriented. No respectable programmer prefers to work with, or enjoys working with, Windows. Neal Stephenson said it best (paraphrased): The day Microsoft makes a product that interests me is the day that I short their stock, because I am a market of one.

    The Hurd was an interesting project that I tried pursuing but only found myself abandoning. I had a running Hurd system for a while and even wrote a few utilities (this is what mount might look like: http://nyct.net/~mbac/_mount.c), but it really leaves a lot to be desired. The "micro"-kernel is about as big as the Linux kernel. The ext2fs driver is userspace and (last I checked) can only mount 1 GB filesystems because it mmap()s the partition. It's a clever idea, but the end result is not usable, and I'm worried that they'll layer hack upon hack to try to maintain the clever idea and still keep it useful. Every basic system interaction involves a server and a translator. All servers must be explicitly multi-threaded, and a running system can only go a few days before you've leaked all of your memory. This system is obviously still very beta, but these bugs are already plaguing it.

    The programmers who work on the Hurd are very talented, and if the system has this many problems in their hands, I worry about how less talented people are going to deal with it. That article and this comment should not be an alternative to actually playing with the system. Give it a try. The experience is worthwhile even if you end up deleting the system 2 hours later. Hey, you might like it and think I'm an idiot and make it badass enough to smash Linux. That'll show us. :)

    Also, the article mentions that the only way to add a new network protocol to Linux is by adding it to the kernel. This is untrue, as the raw sockets interface allows you to develop your own protocols; this is how routed, gated, and dhcpd function without requiring explicit kernel support. (A rough sketch follows below.)
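    As a rough illustration of that raw-sockets point (my own sketch, not anything from the article; protocol number 253 is just an otherwise-unassigned value picked for the example, and error handling is trimmed), a userspace program can send packets of an IP protocol the kernel itself knows nothing about:

        /* Hypothetical sketch: speaking a custom IP protocol from user space.
         * Needs root. The kernel builds the IP header; we supply the payload.
         * A peer would read these packets with a matching raw socket. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            int fd;
            struct sockaddr_in dst;
            const char payload[] = "hello from user space";

            fd = socket(AF_INET, SOCK_RAW, 253);   /* 253: unused protocol number */
            if (fd < 0) {
                perror("socket");
                return 1;
            }

            memset(&dst, 0, sizeof dst);
            dst.sin_family = AF_INET;
            dst.sin_addr.s_addr = inet_addr("127.0.0.1");

            if (sendto(fd, payload, sizeof payload, 0,
                       (struct sockaddr *)&dst, sizeof dst) < 0)
                perror("sendto");

            close(fd);
            return 0;
        }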

  • Yes, but since more OS functionality will be implemented in user space, you can expect to see fewer `system calls.'

    That's an exokernel. In a microkernel, you still have the IPC to the userspace daemon to contend with.

  • by sjames ( 1099 ) on Wednesday November 01, 2000 @05:17AM (#658721) Homepage Journal

    If so, why would I use Hurd? If I stick with Linux (a product that works in the here and now) I will eventually get all the advantages that they are talking about anyway.

    Kernel modules are not at all the same thing as a microkernel. For example, imagine a system where I (as a non-root user) can load modules at will. You can do the same. My modules don't affect you and vice versa, so there is no security problem. Thus, we can each load a different 'module' with different semantics to make a remote FTP site look like a mount point, and both of us will be happily unaffected by the other's choices.

    That is not to say that Linux won't still have its place. One of the trade-offs in a microkernel design is that 'system calls', which are implemented as IPC, tend to have a higher overhead. Linux will tend to run faster because of that. Also, some of the more interesting Linux kernel code may end up in a thin wrapper to serve as a HURD daemon and vice versa.

    That's one of the big advantages of Free Software: good and useful code gets reused.

  • Look for the WorkPlace OS project; IBM committed some serious money to building serious systems based on Mach.

    And note that both Digital (now Compaq) and NeXT Inc. released reasonably serious products based on Mach. (Albeit, in the case of OSF/1, something predating the "worship-of-microkernel" days of Mach...)

    It does not seem reasonable to merely characterize this as

    RMS and co got tricked by all the academics into thinking that their new OS would have to be a Microkernel in order to be taken seriously;
    the same was true of IBM, Sun, HP, Digital, and "everyone else" too.

    I'm not sure I'd go along with

    Besides, Linus has been an excellent ambassador for Free Software.
    either. Linus hasn't claimed that "portfolio;" he hasn't gone out evangelizing in that area; he seems to take a much more "pragmatic" standpoint, in fact suggesting on occasion that "free software" isn't of such crucial importance. Don't take that too far in the wrong direction, but I'm skeptical of the "ambassadorship"...
  • Apparently I didn't word that carefully enough:
    The problem is with the ingrates down the line that don't give credit where it's due
    whilst loudly demanding their piece of the action.

    And as for the "GNU code being only a fraction," it happens to be the fraction through which everything else happens to get activated. No GLIBC means no user space.

    XFree86 isn't crucial to the system in the same way; many of us have systems that are well and useful despite not including X at all.

    And it is most interesting that you chose to ignore the fact that I indicated that RMS makes himself look ungracious when he demands credit; it's as if you want to imply that I didn't recognize that...

  • The "L4" folks seem to have gotten at the performance issues more pointedly than anyone else; they built microkernels tiny enough that they could actually do useful experiments and metrics.

    And it turns out that it's harder to get good performance than anyone thought, easier to throw it away, and I expect it's pretty easy to throw away reliability when adding additional components...

    As you say, "loadable kernel modules" are likely to be good enough a whole lot of the time.

    The flip side is that if Hurd gets "usable," some of the special facilities like translators may provide a slick test bed to try out new things that would be neat to add to Linux. Performance may suck, but an AMD "Sledgehammer" should make Hurd not too unusable :-).

    There may not be an Oracle port, but it might provide a good place to prototype:

    • Namespaces (of "Plan 9" style)
    • Cool filesystems
    • A successor for NFS
    • A CORBA implementation that pretends to be part of the OS kernel

    The OS that I would sort of like to see get more attention is EROS; unfortunately it's so different that it is unlikely to be self-hosting any time soon. It's not Unix, and its merits would be discarded by pretending it were.

    I would suggest that a whole lot of the reason for the "death of OS research" is the giant shadow of Redmond; when Microsoft was pushing "NT Everywhere," research groups were running scared. It may be coming time for them to poke their heads out of the ground again...

  • At the time that Hurd efforts started...
    • Mach looked to be the "way of the future."

      It wasn't until Microsoft pulled Rashid and other critical researchers out of CMU, and IBM's WorkPlaceOS project failed, that the "glow" came off.

    • Linux was still just a "hack" for the 80386.

      At present, Hurd only runs on IA-32, but that hearkens back to the "immense aura of failure" surrounding Mach. Mach has only seen limited maintenance over the last few years.

    • As for the "inappropriate ordering," be well aware that in order to make a kernel self-hostable, you need to have the whole toolchain, including compilers, init, binutils, fileutils, and such.

      If you have no compiler and no other such tools, you can't build the kernel, you can't run the kernel, you can't use the kernel.

    No, they got the order straight.

    The problem isn't with RMS trying to steal the glory from Linus for building a kernel; it's not with Linus stealing the glory from RMS when he built a kernel using the tools RMS helped build.

    The problem is with the ingrates down the line that don't give credit where it's due.

    It is fair to say that just about everything at the layer sitting on top of the Linux kernel "comes from GNU." Between GLIBC (whether version 1 or 2), GCC, and BINUTILS, the layers that make Linux useful all do come from FSF efforts. It certainly does look less than graceful when RMS "demands credit;" that doesn't mean it's an outrageous state of affairs for him to think he can expect some credit.

    And the notion that Hurd is the all-important be-all end-all project of the FSF is pretty silly; the people that want to participate are participating, and it is not evident that the FSF is spending big bucks or otherwise putting big effort into its development...

  • RMS and co. got tricked by all the academics into thinking that their new OS would have to be a microkernel to be taken seriously. You see, the Hurd was started before Linus started working on Linux, but its advanced architecture made it much harder to debug.

    In waltzes Linus, some kid that wasn't worried about elegance, but instead wanted software that he could actually use, and all of a sudden it was possible to actually use the GNU system without any proprietary software. This was almost certainly a blow to the Hurd, but it has also been a tremendous win for Free Software. I am sure that even RMS would agree that it is better to have a completely free system with Linux than to have to continue to host GNU development on some other proprietary system. Besides, Linus has been an excellent ambassador for Free Software.

    That being said, you can hardly blame the FSF for finishing the Hurd. When finished it will have some very cool features, and they have put enough work into it that it doesn't make sense to chuck it all out the window.

    Now, I am not saying that RMS isn't interested in some of the credit for Linux's success, as that is clearly part of his goal. On the other hand, without the GNU tools Linux could not have been written, and it wouldn't have been useful when it was done, so he does have a point. Besides, RMS is clearly mostly interested in highlighting the fact that Linux systems are Free Software. To him this distinction is a very big deal.

    Feel free to call it "Linux," I generally do, but you should do some research before you malign people you don't know.

  • Thanks for a very informative post. I was aware that pretty much the entire industry fell in love with Microkernels, but I didn't realize that so many of them had actually sunk money into such osen.

    I suppose that you are even right about Linus not being an ambassador for "Free Software," except possibly by example. I consider myself an advocate of Free Software, so it is a little disheartening to think that the creator of Linux might be disqualified simply because he is "too pragmatic."

    Oh well, line me up with the rest of the zealots...

  • HURD to me is more of a research kernel. It's there to see if a Microkernel setup can go from a toy OS setup to a real OS setup. To my mind it is the alpha for the next operating system after Linux, or when they decide that adding functionality to the kernel isn't going to work anymore and they need to start moving stuff into user space and trimming the kernel down. When they do that, they can look over the work that HURD has done and the things that it has learned in the process. I see it as research for overhead issues and how to make a microkernel fast enough for real use, and how to debug and optimize a microkernel setup. The lessons learned with HURD will help elsewhere.

    The sad fact is I don't see HURD taking off, ever. It doesn't really offer enough significant advantages over Linux to convince people to move. The advantages are more from the development perspective than from a user perspective. Transparent FTP isn't quite enough to get people to move over, for example. Something like EROS (www.eros-os.org) is more likely to be the successor to Linux because of the incorporation of significant functionality into the kernel that could not be easily installed into Linux.

    In short HURD is a neat idea and the lessons that are learned from its development will most likely be harnessed elsewhere (e.g. we shouldn't try X, the HURD people had all these sorts of problems when they tried that approach).
  • Once again, please? What is the text missing before "This makes kernel modules faster"?

    Running things from within the kernel is faster because you don't have to do any context switches. A context switch is when the CPU swaps in a different process; this includes restoring all the CPU registers and doing some record-keeping. That's one of the major reasons that microkernels are generally slower -- drivers and file systems have to be swapped in before they can do their work. (A rough sketch of how to feel that cost follows below.)
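    To put a rough number on that (my own throwaway sketch, not from the article; the numbers vary wildly by machine and OS), two processes ping-ponging a byte over a pair of pipes force a couple of context switches per round trip:

        /* Crude microbenchmark of context-switch plus IPC cost: parent and
         * child bounce one byte back and forth through two pipes.
         * Error handling trimmed for brevity. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/time.h>
        #include <sys/wait.h>

        int main(void)
        {
            int ping[2], pong[2], i;
            const int rounds = 100000;
            char byte = 0;
            pid_t pid;
            struct timeval t0, t1;
            double usec;

            pipe(ping);
            pipe(pong);

            pid = fork();
            if (pid == 0) {                      /* child: echo every byte back */
                for (i = 0; i < rounds; i++) {
                    read(ping[0], &byte, 1);
                    write(pong[1], &byte, 1);
                }
                _exit(0);
            }

            gettimeofday(&t0, NULL);
            for (i = 0; i < rounds; i++) {       /* parent: send, wait for echo */
                write(ping[1], &byte, 1);
                read(pong[0], &byte, 1);
            }
            gettimeofday(&t1, NULL);
            wait(NULL);

            usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
            printf("%.2f microseconds per round trip\n", usec / rounds);
            return 0;
        }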

  • There are other servers on MacOS X as well, but the BSD server has the same memory space as the microkernel itself. The others do not.

    At least that's how it goes, AFAIK.
  • Interesting points. I will look into the HURD documentation. It can do really amazing things that will never be possible under Linux. Can it really do things now, or is this still vaporware? I ask only because I have been hearing about HURD since I started with Linux in 1995, but Linux is running my computer, not HURD. I'll check it out. Later

    "Fat, drunk, and stupid is no way to go through life."
  • by Johann ( 4817 ) on Wednesday November 01, 2000 @03:55AM (#658732) Homepage

    I disagree. The point about how there is no simple way to add new features into the kernel is a crock:

    When new OS functionality is required -- for example, support for some new networking protocol -- no framework exists in a monolithic-kernel-based OS for placing this functionality elsewhere, so it is simply added to the kernel.

    I am not a kernel hacker, but according to Linus' essay in Open Sources [oreilly.com], the kernel design is quite modular, thank you.

    As a user of the kernel, I understand the use of modules, and this seems to be a modular way to add features. AFAIK, the module feature of Linux is an interface. (See the sketch at the end of this comment.)

    Bottom line: how accurate is this article?

    "Fat, drunk, and stupid is no way to go through life."

  • by Johann ( 4817 ) on Wednesday November 01, 2000 @04:11AM (#658733) Homepage
    From Open Sources (pg. 108): Linus Torvalds
    With the Linux kernel it became clear very quickly that we want to have a system which is as modular as possible...

    With the 2.0 kernel Linux really grew up a lot. This was the point that we added loadable kernel modules. This obviously improved modularity by making an explicit structure for writing modules. Programmers could work on different modules without risk of interference. (emphasis mine).

    This would seem to contradict Mr. Dr. Dobb's 'eggspert' on the Linux kernel.

    "Fat, drunk, and stupid is no way to go through life."
  • The HURD does suck, and maybe you should wonder why it does.

    The fact is, microkernels are a fraud [tunes.org], and the HURD, by embodying the advertised model, is both the victim and the accomplice of this fraud.

    Conceptually, microkernels are an abstraction inversion: they force you to hand-implement run-time modularity using a non-modular low-level language, instead of auto-implementing modularity by compiling a modular high-level language into non-modular binary (see: SPIN, Fox, ML/OS, Squeak, and more).

    Modularity is a source-level concept. Enforcing it at the binary level is DUMB and EVIL. Binary level should be efficient. Linux got it right (as far as a C kernel can).

    If you want dynamically reconfigurable kernels, don't bother with HURD and microkernels; they do not provide any specific advantage there that "monolithic" kernels don't have. If you really want dynamism, use dynamic languages (LISP, Smalltalk) or at least modular languages (Modula-3, SML, OCaml), not C.

    BTW, VSTa is a free software microkernel that has actually worked for years, yet never attracted any specific interest: who cares how it works underneath? What matters is the high-level features it actually provides. In practice, nobody cares about the puny features provided by microkernels.

    -- Faré @ TUNES [tunes.org].org

  • by jjr ( 6873 )
    This is what competition is about. Even in the open source world, choices help streamline everything. I have the choice to use what I want and how I want it.
  • RMS has said that if the Linux kernel were available at the time that the Hurd started, the FSF wouldn't have bothered with it. But since a lot of work has already been done on it, he wants to see it finished.

    But the resources dedicated to the Hurd are not huge (if they were, it would probably be further along).

  • It doesn't have an equivalent number of coding hours because people aren't as enthused about working on it.

    They aren't as enthused about working on it because they realize it's got unrealistic political bullshit wrapped up in the technical design.

    -
  • So, by your logic, OpenBSD gets a lot less coding time and thus must have unrealistic political bullshit wrapped up in the technical design?

    "If A then B" does not in any way imply "if B then A".

    I suggest you consult an elementary logic text before pursuing this discussion any further.

    -
  • by Syberghost ( 10557 ) <syberghost@@@syberghost...com> on Wednesday November 01, 2000 @04:55AM (#658739)
    Kind of like the Linux-kernel was in the beginning then? :-)

    Sure; but how many years old was Linux when it was like that?

    Hurd isn't "in the beginning" in age, just in capabilities.

    -
  • What do you think the g in glibc stands for? Keep in mind that even libc5 was a fork of the GNU C library. I'll bet GNU code is run every time you type ls or cd, and when you run your shell, and....
  • By that same reasoning, what does that make FreeBSD, OpenBSD, NetBSD and the BSD-du-jour?
    --
  • Writing the kernel with what? Using a non-free editor, a non-free compiler, a non-free libc and other non-free tools? Would that be regarded as "free" by anyone?

    Pardon me... how did they write that editor? The compiler? The tools? They wrote them using "non-free" (for Stallman's definition of "free") versions of the same tools.

    Bootstrapping's always been a problem, even if (particularly if) you're willing to start from scratch. <sarcasm>And, hey - even if Stallman and crew had started from scratch... how in the world could you expect them to write a "free" OS and tools on top of a "non-free" BIOS?!? </sarcasm>

    "Non-free&a mp;quot; software pretty much had to be, and was, used to develop the first versions of Stallman's "free" software. The fact that HURD came later than the tools wasn't because of any need for "pure" tools to build the OS. I'd bet it was because working on the tools was more fun and more immediately rewarding than working on an OS. Now, saying "Linux was OK, it got us to the point where we could work on the real OS" - that's nothing more than a huge steaming heap of "not invented here" attitude using free software as an excuse to shield a bunch of fragile egos.

  • Even the Doctor knows this. He said it in his article. I think that if it were ready, or at least close, then it would be worth mentioning. The difference here is that Linux is available and has been usable for computer geeks for over 4 years. I've been using it since '96. The Hurd still does not have an easy install.

    If they want to attract developers or users or anything, then someone needs to make it easy to install. Like, there needs to be a Hurd distribution.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • by stx23 ( 14942 ) on Wednesday November 01, 2000 @04:21AM (#658746) Homepage Journal
    Officially, it's Debian GNU/Hurd.
    which is an anagram of hung and buried. Coincidence or what?
  • by DamnYankee ( 18417 ) on Wednesday November 01, 2000 @03:38AM (#658747) Homepage

    When I first heard of HURD back in, what, 1992 or 1993, I thought it sounded like a great idea. But now it's the better part of a decade later and the thing still isn't out of alpha testing!

    I'll stick with Linux, thanks...


    I was thinking of the immortal words of Socrates, who said, "I drank what?"

  • For NT 4.0 they basically moved everything but the kitchen sink into Kernel space, which helped increase the speed of the system at a huge cost in stability.

    So NT became less stable as it became more monolithic? The problem with microkernels is not stability, per se. It's performance.


  • Apple moved the BSD kernel into the same kernel space as the Mach microkernel. This means that they don't have the context switching overhead that traditional Mach based systems have.

    Do you have a link supporting this assertion? I really doubt Apple moved FreeBSD into the Mach kernel.


  • You do know that the HURD was started before the Linux kernel, don't you?

    Personally, I find development of the HURD much more interesting than development of the Linux kernel. I personally don't care too much about how popular the operating system becomes (either Linux or HURD) but I think it's fascinating that work on a design that actually shows some innovation is taking place in the public sphere.

    You basically seem to have a big problem with Richard Stallman. Say what you will about him, but I can guarantee that you wouldn't be using Linux right now if it weren't for him.

  • Well the Hurd, Hurd, Hurd,
    Hurd is the word!

    Have you heard about the Hurd?

    Well the Hurd, Hurd, Hurd,
    Hurd is the word!

  • You really should learn some history before you open your mouth. Linux is a kernel, not a full OS. Linux is very nice, but would be pretty useless without X or the GNU tool set, which includes all the C libraries. Linux without GNU would be unusable.

    Of course, XFree86 has nothing to do with GNU.

    So maybe we should be calling it GNU/Linux/XFree86? Perhaps LiGNUxX?

    For that matter, none of it would be the way it is without the C programming language or the TCP/IP standard (the standards, not the code), and it wouldn't have grown so quickly without the "killer app" Apache service, so maybe we ought to call it C/TCP/IP/GNU/Linux/XFree86/Apache?

    Personally, my Linux system wouldn't run the way it does without Perl, Vim, Netscape, and xterm (which together consume more cycles than any GNU code), so I suppose I ought to call what I'm running Perl/Vim/Netscape/xterm/C/TCP/IP/GNU/Linux/XFree86 , right?

    Things like short, catchy, popular names aren't important. It's only fair to acknowledge everybody's essential contribution, it's not as if one of these contributors is insisting on the lion's share of the credit when these are all about equally important. Oh, wait...

    --------
  • I do not count megabytes, I count the number of components that are GPLd.

    While GPL was originally designed for the GNU project, that doesn't mean that every project under the GPL is a GNU project. The popularity of the GPL is a combination of the obnoxious "thou shalt have no license before me" mutual-incompatibility clause of the GPL and Stallman's "only GPL is free" propaganda. But only projects originating from the FSF are GNU projects, by any reasonable definition, regardless of whatever GNU references are made in the names of the projects.

    KDE is by no means part of the GNU project, because it wasn't initiated by the FSF (it was, in fact, opposed by the FSF, and a pointless duplication-of-effort/division-of-resources encouraged, due to it being less than pure GNU/free).

    The GNU components are not so hard to replace; they were, after all, simply cloned from earlier proprietary versions. The reason they are used is that it doesn't make sense to rewrite them; they work fine, and the only reason to do so would be out of spite (or perhaps pure frustration about all this GNU/Linux garbage; while I've heard lots of talk in this direction, it's easier to ignore irritating rantings than to shut them up). If someone wrote a nice public-domain printf formatting function back in 1980, and it got used in Linux, would you think he'd be justified in insisting that Linux be called "printf/Linux" because "you can't run Linux without printf!"?

    The main credit for those cloned Unix tools should go to the original designers, not mere GPL-cloners, and ample tribute is already paid in the choice of name.

    All this "GPL/Linux" crap is one more attempt to make out the FSF, and therefore Stallman, to be the root of all software freedom. It's nonsense. Stallman's a bit player with a big mouth and a talent for attracting attention. Free software has been around as long as computers have been made in standard models, previously with the superior freedom of public domain. If the FSF GNU/free thing really was a moral issue, rather than an attempt by Stallman to make himself famous, he wouldn't care about the name (and the FSF webpage wouldn't be full of references to his personal life).

    --------
  • by TheDullBlade ( 28998 ) on Wednesday November 01, 2000 @07:06AM (#658758)
    Alix (parts quoted from the GNU homepage)
    The GNU kernel was not originally supposed to be called the HURD. Its original name was Alix--named after the woman who was my sweetheart at the time.

    A heartwarming sentiment, but not all was roses for GNU/Stallman:

    It did not stay that way. [... we] redefined Alix to refer to a certain part of the kernel--the part that would
    trap system calls [...]

    Clearly he was feeling smothered by the relationship, and expressed his discomfort in his code. If he couldn't control his love in real life, he could at least manipulate the code-surrogate he created to show his true feelings.

    Ultimately, Alix and I broke up [...] and this made the Alix component disappear from the design.

    Torn up by losing the love of his life, GNU/Stallman destroyed 90% of the kernel code in a drunken fit of despair-driven rage. Thereafter he insisted on a complete redesign, being unable to cope with any reminders of this tragic romance.

    And that's why we run Linux.

    --------

  • by hey! ( 33014 ) on Wednesday November 01, 2000 @04:52AM (#658760) Homepage Journal
    Isn't Apple's OS X Server based on Mach 2.5?

    I haven't heard of any major problems with it.

    -Matt
  • If you have ever read about OSes, this article is a waste of your time. To summarize, it simply states that Linux is a monolithic kernel and the Hurd is a microkernel. Then it proceeds to spiel about what monolithic and microkernels are, as if it came out of any OS book. It doesn't really tell you what Hurd is meant to do! It tells you about the functionality of a microkernel. Worst of all, it doesn't tell you what Hurd is now doing and not doing yet. I am not impressed. RIP Hurd.

  • by brokeninside ( 34168 ) on Wednesday November 01, 2000 @04:44AM (#658762)

    One place to read the debate is here. [oreilly.com]

    The quote from Linus you are likely remembering is this:

    True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.

    However, Linus had some other points as well, like multi-threaded filesystems on a microkernel being a hack:

    >A multithreaded file system is only a performance hack.

    Not true. It's a performance hack /on a microkernel/, but it's an automatic feature when you write a monolithic kernel - one area where microkernels don't work too well (as I pointed out in my personal mail to ast). When writing a unix the "obsolete" way, you automatically get a multithreaded kernel: every process does it's own job, and you don't have to make ugly things like message queues to make it work efficiently.

    My perception is that Linus' view at the time of this exchange was that microkernels have a theoretically superior design but monolithic kernels have a practically better design. I don't know if Linus' views on this have changed in the nine-year interval since this debate.

    have a day,

    -l

  • There's more to a microkernel than just modules. Sure, Linux has pluggable modules, but they run in kernel-space. HURD's use of "servers" is much different.
  • Take for instance the "problems" with some of the previous versions of the Linux kernel:
    • multi-threading wasn't as good as it could have been
    • scaling to large numbers of processors wasn't as good as it could have been.
    • the tcp/ip stack wasn't multithreaded

    In a microkernel architecture, if the tcp/ip implementation isn't as multithreaded as you want, you are free to grab any tcp/ip server around that fits your criteria.

    If your microkernel doesn't scale to the number of processors you want, just pop in another microkernel and away you go. No need to change the rest of your servers (assuming your old microkernel and your new microkernel use the same interface).

    Essentially, a microkernel is slower, but more flexible. A monolithic kernel is faster, but less flexible.

    Same old story...

  • Other companies poured money into Mach too, including Intel (for their supercomputers) and KSR (I think). Guess what --- all those projects are dead. Except for NeXT, I guess, which lives on in OS X.

    When I arrived here at CMU in '94, there was a joke going around that Mach was destroying everything it touched. (We'd abandoned it by then, heheh.)

    The reason microkernels are unfashionable and people have abandoned them is because, by and large, they suck. Note that there are in fact much better microkernels out there, such as IBM's L5 and K42 (both open source, I think). The big picture is that in 99% of cases, people either don't need the extensibility, or a Linux-style kernel module is good enough. It's just not worth engineering for the other 1%. After ten years of trying to find alternative extensible OS technologies that don't suck, the OS research community has mostly realized this fact, and extensible OS research is dying.
  • Mach was developed on RISC machines (IBM RTs, then MIPS). Microkernel performance sucked.
  • > microkernel modules can be anything

    WRONG. This is classic microkernel propaganda.

    > For example, while Linux has module interfaces
    > for supporting different file systems, there's
    > no way you can load a module for a real-time
    > scheduler.

    There is no way for a microkernel to plug in a module for real-time scheduling unless it is designed to allow different schedulers to be plugged in. The fact that "it's a microkernel" doesn't make everything magically pluggable.

    No matter what kind of kernel you've got, you can plug things in if and only if the kernel designers built in the right hooks and interfaces for the functionality you're trying to plug in. Whether the plugging in is implemented with kernel modules or user-level servers is largely irrelevant.

    (User-level servers can be more robust and may require less privilege, but very few people care about that.)
  • NT is not a microkernel OS. There is a HUGE amount of code running in kernel space, much more than (say) Linux.
  • It wasn't the greatest example, but microkernel modules can be anything, and are generally low level. For example, while Linux has module interfaces for supporting different file systems, there's no way you can load a module for a real-time scheduler. Any scheduler changes (and there are a lot of people interested in Linux real-time scheduling) would have to be made in the actual scheduler code itself. Similarly, things like memory management, which are typically low-level monolithic parts of the Linux kernel, could each be in their own module. This is what the Hurd is - lots of processes running, providing the OS services.



    Linux modules, on the other hand, have more in common with device drivers than microkernel architectures. Which is the way Linux wants it, of course...


  • I'll just use Linux, thanks :-)

    Cool! I've never been any good at using kernels without any software on top.
  • by isdnip ( 49656 ) on Wednesday November 01, 2000 @06:00AM (#658776)
    While I'm not the final expert on these matters, I suspect that the opposite of what you suggest is true. CISC machines tend to be much better at context switches. It does depend on the machine... but the VAX (about as CISC as you can get) could task-switch in a handful of cycles, while some RISC machines take hundreds.

    RISC machines are in general really fast at big monolithic tasks, like number crunching, not at task-switching. So if you go to a microkernel that needs to do a lot of context switches, RISC performance will probably be bad.

    If the microkernel had better message-passing within it (I'm told Chorus is good at this), then the frequent task switching becomes unnecessary and performance improves. But Mach got it wrong.
  • Let's see, we have one group of people who knows nothing about operating system design arguing that monolithic kernels are superior because they vaguely remember hearing that Linus Torvalds said that monolithic architectures are better than microkernels.

    On the other side, you have a group of people who knows nothing about operating systems arguing that microkernels are superior because of some kind of elegance factor they don't understand, plus they are annoyed that Linux has become successful (or at least that it has gone mainstream).

    This is right up there with goofy junior high arguments about the PlayStation 2 vs. Dreamcast that are based on sound bites picked from biased gaming news sites. Most amusing to watch!
  • The problem is that compared to a pared-down monolithic kernel, the savings aren't that good. If at all. NB: I'm making things up as I go along, so add salt to taste, and don't hesitate to flame and correct:

    I think Mach needs something like 4 meg of runtime memory to manage IPC (?). 'Thing is that since it is completely dynamic, each kernel metacomponent (like a filesystem driver) needs to have a very generic interface. These add up. Then you need to configure the various objects at boot time and store that in ram. A statically compiled kernel can use standard compiler tricks like dead code elimination (especially if you are allowed to make the whole program assumption) to axe out huge swaths of unneeded code.

    The real value of microkernels is stability. You can use a buggy driver, if you need to. On a production system. The driver can crash without killing the kernel. Just add a heartbeat monitor and you get life-support for buggy drivers. The boon for developers of said drivers is obvious.

    Now is where I start speculating: I'd like to see a pyKernel. Write low level performance critical parts of the code in C, but write the rest of the kernel in python. An example: the code that performs context switch is obviously in C, but the code that implements the policy is in python.

    I know there's a LISPos project, but IIRC it got stalled early on in bickering and overreaching (If you run code from a safe language, you can get rid of expensive process separation and run them all in one memory space). Any other projects out there?
  • Stupid argument. HURD has been more or less dormant for a long time. Even if it started in the early 90's, it certainly hasn't been developed actively for all that time. Development of the HURD only picked up steam recently; far more recently than Linux did. So give it an equivalent number of coding hours and see what results!
  • If you count the number of hours actually spent developing HURD compared to those spent developing Linux, you'll see that HURD has had FAR less development time. Sure it started earlier, but that doesn't mean it's been actively developed since the early 90's!
  • That's not totally true. On the BeOS at least, kernel calls are made directly; the kernel is mapped into each process's address space and the only overhead is that of the ring change (no context switch required). Calls to various servers are usually batched. For example, the Interface Kit will collect all drawing functions you call and send them in one big batch to the app_server. The resultant overhead from sending the message is far outweighed by the actual functionality offered by that one message.

    QNX's Photon works much the same way. According to QNX, drawing through Photon, which uses messaging, is about as fast as making calls directly into the graphics driver. It introduces more latency, but in terms of raw throughput/CPU usage, it is about the same. When you look at the real-world performance of these two systems, you'll notice that they are faster than most Unixes, microkernel or not.
  • If 50% of the code in your HelloWorld program is from Microsoft, then damn yes you should call it Microsoft/HelloWorld!
  • So, by your logic, OpenBSD gets a lot less coding time and thus must have unrealistic political bullshit wrapped up in the technical design?
  • by Spasemunki ( 63473 ) on Wednesday November 01, 2000 @04:51AM (#658786) Homepage
    I think that there is a little bit of a difference. The loadable modules that Linux uses right now are primarily device-driver style modules, and are loaded into kernel space when they load. What the article is talking about is the inability right now under Linux to add significant system-level features (the example given was networking stacks) outside of kernel space. So there is not a way to extend the kernel in Linux without directly fooling with kernel code, one way or another. This hurts the maintainability of the system, as the amount of stuff lying around inside the kernel grows and grows.

    Additionally, the monolithic style (according to the writer) lacks a nice strong abstraction barrier that makes it possible to alter components individually without breaking something in the system. If the line between things like basic kernel services and things like device interactions or networking is not clear in the code, then the odds are that if you alter one of them, you're going to end up having to alter them all. In an ideal world, the layer of abstraction insulates the rest of the system from changes. You want to rip out the old memory management code entire and put in a new one? Great. As long as you provide the same interface (API calls &c.), nothing else should need to be altered.

    This is, of course, an ideal situation. Systems that are designed that well are few and far between, but in general a little extra abstraction is going to save you maintenance headaches in the long run, which can be very important on a system like Linux, which is going to go through multiple iterations and have significant modification done to it on a semi-regular basis. Generally, this is a long-term problem and not an immediate one. No one is going to complain next week that non-modularity is ruining their life (okay, the guy who wrote the article), but in 5 years, when the code base has swollen more, and when advances in hardware make the performance ding due to using a more modular solution less noticeable, there may be some harried kernel hackers bemoaning the complexity (and ugliness) of a system that has grown without good abstraction and without a good non-kernel-space module system.

    "Sweet creeping zombie Jesus!"
  • It's Debian GNU/Hurd, in the same way as it's Debian GNU/Linux.

    By the way, people on the Hurd mailing lists are talking about leaving the Mach kernel. L4 is being considered as an alternative. Nothing has been settled yet, as far as I understand, but it's quite interesting to follow the conversation.
  • First I wrote: Another thing: I don't see how the GNU project can "steal" glory from the success of Linux since the success of Linux is dependent on the GNU project which provides the tools and the license. Calling the running Linux system (kernel and user applications etc.) "GNU/Linux" is the least people could do

    Then you wrote: So we should call all apps compiled with MS compilers "Microsoft/HelloWorld"?

    Do we call all applications compiled with GCC 'GNU/MyApp'? We're talking about a whole operating system with applications and documentation, not just a single application.

    When someone writes an operating system similar to Windows in architecture using Visual Basic and ports the Microsoft Office suite and all the "Visual" tools to it, that system (OS and tools) will certainly be called MS/Something (or maybe "MS-something").

  • You said: Me either, but what makes you think I use predominantly (or any) GNU utilities? Quite frankly GNU/KDE/XFree/Linux is a little too bulky for me, so I'll just stick to calling it Linux.

    You are using predominantly GNU utilities. That's a fact. Each time you type ls or cd you execute GNU code. The bash shell is GNU code.

    KDE and XFree are just a tiny part of the whole system when compared to the GNU utilities and the amount of other GPLd applications. It might be the part that shows most, but they're not essential to the system.

  • You said: You do realize that there are BSD versions of ls and cd, don't you? If you were as familiar with my system as you claim to be, you'd also realize that I use tcsh and not bash.

    The everyday commands distributed with most Linux systems are GNU utilities (I don't know about those utilities on e.g. BSD systems and if that is what you run then you are most likely to be correct when you say that they're not GNU). Bash is the default shell on most GNU/Linux system, tcsh is not. I believe that I am correct when I say that most GNU/Linux users use bash (wasn't there a poll about this?).

    You also say: Combined KDE and XFree take up more room than pretty much anything else on my system. So much for them being a "tiny part".

    You simply can't have a running Linux environment without GNU! *That's* my point. I do not count megabytes, I count the number of components that are GPLd. Please note that KDE is *one* of those components.
  • You: Pardon me... how did they write that editor? The compiler? The tools? They wrote them using "non-free" (for Stallman's definition of "free") versions of the same tools.

    Maybe they did it so that they immediately could start to develop the rest of the tools using their own free, reliable, extensible, portable compiler?

    You: "Non-free" software pretty much had to be, and was, used to develop the first versions of Stallman's "free" software.

    Of course. But only enough to develop the compiler.

    You: I'd bet it was because working on the tools was more fun and more immediately rewarding than working on an OS.

    There is nothing whatsoever wrong with that. Programming should be fun and rewarding.

    You: Now, saying "Linux was OK, it got us to the point where we could work on the real OS" - that's nothing more than a huge steaming heap of "not invented here" attitude using free software as an excuse to shield a bunch of fragile egos.

    Ok, so let's say that everyone in the GNU project has a tiny fragile ego. Do you have an ego big enough to explain in exactly what way this should stop them from continuing to work on the Hurd?

  • You: While GPL was originally designed for the GNU project, that doesn't mean that every project under the GPL is a GNU project.

    I don't believe I made that claim. What I did say was "You simply can't have a running Linux environment without GNU!" and I still think that this is a valid claim.

    If you are a Linux user (and therefore also a GNU utility user and user of a vast amount of GPLd software), you should think more than once before saying the GNU project is nonsense.


  • You: Again, despite its use on most Linux systems, I see no reason to start calling Linux GNU/Linux. You seem to have a difficult time understanding that point.


    I see no reason to stop calling it GNU/Linux.

  • You: Let's face it, you were trying to be a know-it-all and you made a few assumptions that you shouldn't have.

    Sorry, but you're wrong. Mastodon Linux (whatever that is) would not exist at all if it wasn't for GNU. And yet again, I did not say that every GPLd thing was a part of the GNU project.

  • by andkaha ( 79865 ) on Wednesday November 01, 2000 @04:27AM (#658800) Homepage

    You wrote: HURD would have been a really cool system, if Richard Stallman had got his priorities right in the first place and written the GNU kernel before he wrote all the system tools! No, GNU did it the other way around, and now they've spent what, 5 years? writing a kernel?!

    Writing the kernel with what? Using a non-free editor, a non-free compiler, a non-free libc and other non-free tools? Would that be regarded as "free" by anyone?

    I don't know if RMS is involved personally at all in the Hurd effort today (I would be happy if he was). I don't see why people attack him... The GNU Hurd is a perfectly "legal" and quite interesting free Unix project. I find it hard to believe that it's supposed to be a form of "revenge" on the success of the Linux kernel. I think it's rather a separate effort to supply a highly modular and portable Unix kernel. I don't know if the Hurd is supposed to have the same kind of impact as the Linux kernel did (is doing).

    Another thing: I don't see how the GNU project can "steal" glory from the success of Linux since the success of Linux is dependent on the GNU project which provides the tools and the license. Calling the running Linux system (kernel and user applications etc.) "GNU/Linux" is the least people could do.

  • It *may* invalidate it?! LOL!

    Perhaps you should find out a little more about the history of the HURD. It was always (and still is) a fundamentally more ambitious project than Linux - perhaps too much so for the time (time when it was started ;). IIRC waiting for Mach4 wasted a lot of time, and I suspect there were many other problems.

    To suggest that "Stallmann and his crazy comrades have produced little more than hype" shows just how ignorant you are (gosh, I wonder why you're an AC ;). RMS wrote GCC and EMACS himself. I doubt you have any comprehension of how hard he, and his "crazy comrades" have worked, although you almost certainly reap the rewards of their labour.

    best wishes,
    Mike.
  • by Mike Connell ( 81274 ) on Wednesday November 01, 2000 @04:43AM (#658802) Homepage
    > Starting the HURD just seems more like sour grapes on the part of Stallman to be honest. When he realised how popular Linux and Linus were becoming, he decided to try and steal their thunder.

    Ah, there's nothing like the smell of revisionist bullshit in the afternoon.

    http://www.gnu.org/gnu/initial-announcement.html
    Announced in 1983.

    How exactly does that square with your little theory when Linus didn't post the first Linux sources until around 1991 (v0.02)?

    Mike.
  • I don't understand why the FSF is throwing so much energy into working on HURD. The Linux kernel is already GPL and therefore complies with the FSF goals. Why is the FSF working on this when they already have that component to help make a free operating system?
  • Hurd daemons are run in user space.
    Kernel modules are run in kernel space.

    This makes kernel modules faster than Hurd daemons.

    What are the good points?

  • by bug1 ( 96678 ) on Wednesday November 01, 2000 @04:01AM (#658808)
    When/how did Microsoft show that microkernels don't work?

    Have you ever heard of QNX? It's a microkernel, and it works.

    I read that the drawback to microkernels is that they are a bit slower due to less code operating in privileged mode. But microkernels are more scalable, I think.

    (i am not an expert)
  • Kind of like the Linux-kernel was in the beginning then? :-)
  • by alangmead ( 109702 ) on Wednesday November 01, 2000 @06:24AM (#658817)
    There are two major differences between OS X and the Hurd.

    The first difference is that OS X is a single server and the Hurd is a multi-server. That is, on OS X all the Mach layer communicates with is one large FreeBSD kernel with its hardware-dependent parts ripped out. The Hurd, on the other hand, has each system call handled by a separate thread of execution.

    The second difference is that Apple moved the BSD kernel into the same kernel space as the Mach microkernel. This means that they don't have the context switching overhead that traditional Mach based systems have.
  • by MonkeyMagic ( 118319 ) on Wednesday November 01, 2000 @03:59AM (#658821) Homepage
    Linux demonstrated years ago that monoliths work. MS showed microkernels don't.

    Care to elaborate? Actually there was nothing wrong with the NT microkernel design originally. 3.51 was fast and I believe fairly stable. As far as I can tell the instability came from the UI which has been allowed to contaminate the architecture.

    Linux has indeed shown that monolithic kernels can work well, but there are too many who believe that Linux is the peak of OS design and that it can't be beat, so what's the point in even trying?

    Well, in n number of years, we'll all be talking on kuro5hin about the days when Linux was a really good OS for the hardware of its day, and wasn't Slashdot great before IBM bought VA. In other words, don't stop looking just because what you've got works.


    Never attribute to malice that which can be adequately explained by stupidity.
  • Very nice. Sadly, it was implemented on one-of-a-kind hardware, so nobody can run it.

    Some of the ideas from Alpha were reused in Spring, and some of the ideas from Spring were reused in Java, and some of the ideas from Java were reused in .NET, but along the way, the protection stuff got lost.

  • by Animats ( 122034 ) on Wednesday November 01, 2000 @08:31AM (#658824) Homepage
    It's a performance hack /on a microkernel/, but it's an automatic feature when you write a monolithic kernel - one area where microkernels don't work too well

    That's a valuable quote. It's a real objection to the way microkernels are usually done. It's not really a problem with microkernels, though. It's a problem with processes as a primitive.

    Most operating systems today have the "process" as a primitive. A "process" is a collection of address spaces, some kernel state, and one or more "threads". Interprocess communication typically looks like an I/O operation. When process A "calls" process B, process B sees something that looks like an I/O completion and has to find a thread to service it. One thread in B is enough to make it work; many are required to make it work fast. Allocating and managing those threads adds another level of scheduling complexity to service tasks.

    The problem comes from the fact that what you want is a subroutine call between processes, but all you usually get is an I/O operation. If the kernel offered a safe way for a thread in one process to call into another process, the problem Linus points out would go away.

    A better way to think about this is to think of "objects" rather than "processes". Think of the CORBA object model, but with all the objects in different address spaces on one machine. All that's needed is a way for a thread in one address space to call through some kind of call gate into another address space. The operation required looks like a system call, or a Multics ring crossing, except that it's between peers. The L4 crowd has been getting close to this, and if they ever do L5, they'll probably do it.

    x86 hardware almost supports this; if you abuse the segmentation and "task gate" hardware enough, you can get it to support inter-address-space calls without going through the kernel. Maybe. Sort of. It's not quite the right tool for the job. The latest SPARC CPUs are supposed to have hardware support for this, put in for the Spring project.

    That's the direction microkernels ought to go. The end result would be a system that works like CORBA/DCOM/etc objects, but much faster and safer. Current implementations of "big object" systems either turn off protection or run slow.
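    To make that concrete, here's a rough sketch of the "find a thread to service it" dance on the receiving side, using a POSIX message queue as a stand-in for a real microkernel port (the queue name and message format are made up for illustration, and a real Mach or Hurd server would use its own port primitives; build with something like cc b_server.c -lpthread -lrt):

      /* Sketch of process B's receive loop: the incoming "call" arrives
         looking like I/O, and B has to hand it to a worker thread before
         any real work happens -- the extra scheduling step described above. */
      #include <fcntl.h>
      #include <mqueue.h>
      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      static void *service_request(void *arg)
      {
          char *req = arg;                     /* the "subroutine" the client wanted to call */
          printf("servicing: %s\n", req);
          free(req);
          return NULL;
      }

      int main(void)
      {
          struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8, .mq_msgsize = 128 };
          mqd_t port = mq_open("/demo_port", O_CREAT | O_RDONLY, 0600, &attr);
          if (port == (mqd_t)-1) { perror("mq_open"); return 1; }

          for (;;) {
              char buf[128];
              ssize_t n = mq_receive(port, buf, sizeof buf, NULL);  /* looks like an I/O completion */
              if (n < 0) { perror("mq_receive"); break; }

              /* B is not simply "called": it has to spawn (or find) a thread
                 to service the message before it can reply. */
              pthread_t worker;
              pthread_create(&worker, NULL, service_request, strndup(buf, (size_t)n));
              pthread_detach(worker);
          }
          mq_close(port);
          return 0;
      }

    A thread that could call straight through a protected call gate into B's address space would skip all of that bookkeeping, which is the whole point of the object/call-gate model.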

  • Bottom line, how accurate is this article?

    Accurate enough. In the Hurd, things like the TCP/IP stack (is this implemented in the Hurd yet?) are userspace programs. In Linux, it's kernel space. You theoretically could rewrite Linux like that, but it would take a massive amount of coding and you'd wind up with... a microkernel!
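    (For what it's worth, the Hurd's userspace TCP/IP stack is the pfinet translator, and hooking it up is a settrans one-liner along the lines of the following. The interface name and addresses are just placeholders, and the exact options are from memory.)

      settrans -fgap /servers/socket/2 /hurd/pfinet -i eth0 -a 192.168.1.2 -g 192.168.1.1 -m 255.255.255.0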
    ---

  • Compare:

    Siqnal 11
    User ID: 210012

    to

    Signal 11
    User ID: 7608

    What we have here is a troll pretending to be a different troll. Let us hope the recursion stops here.
  • The problem for the code factories is that the core apps are truly becoming commodities.

    Text editors are already there. EVERYBODY gives them away for free.

    Word processors are there in theory, but the proprietary wars continue, and so file formats keep them from going commodity long past when they should. This can't be maintained forever, as soon every bloody feature that everyone wants will be in every bloody word processor.

    Spreadsheets are a dime a dozen. Again, only proprietary file formats and scripting languages maintain the proprietary houses at the moment.

    StarOffice, for all its warts, already comes damn close on these two; others are coming up from behind.

    One more generational click and XML/HTML based apps will be as viable as lower level coded apps are now.

    Whole new ball game then.

    Sure, MS and its ilk aren't just going to roll over and play dead, but as their core products become mature beyond extending, and this WILL happen, they'll be more and more up against it when it comes to fighting open source.

    If nothing else, people will still buy and use Office, but it'll be hard for MS to get more than the price of a game for it, because the value will no longer be there; and if MS really thinks that people will 'subscribe' to a word processor when they can get one for free that resides in one tenth of one percent of their HD, they're smoking something.

    And just think about what speeds quadruple the current ones will do for the emulation crowd!

    My money is on the horse that says the speed of boxes 3 years down the road is going to pretty much change everything.

    Sure, the uber geeks are still going to go for elegance over speed and user functionality, and the proprietary houses are still going to push 'functions' of no more value than slowing your computer down to 286 speed, but the raw power of the system will even out the bumps in the road and prevail.

    The only real fly in the ointment would be a new killer app, but for the first time in PC history I don't see one down the pike.

    The spreadsheet was a killer app, but we were all waiting for that one ahead of time. Same for the WYSIWYG word processor and 3D games. All killer apps when they came out, but all apps that we foresaw and eagerly waited for.

    I listen to music on my PC. I watch TV, movies, and videotape on my PC. I play games in multiple dimensions, I can run my house from my PC. My PC is networked to my other PCs, and to the world.

    All we're eagerly awaiting NOW is bandwidth.
  • by kfg ( 145172 ) on Wednesday November 01, 2000 @06:42AM (#658837)
    Which isn't the average user. It's an interest piece for geeks. Period. It doesn't pretend to be anything else. The article ITSELF claims that Linux is fully functional now and the HURD isn't.

    So, everyone out there saying "HURD sucks," why yes, you are right, it does, and the article even says so.

    "Linux can do anything the HURD can do and do it faster," why yes, you are right, *and the article even says so.*

    But the article doesn't concern people who are interested in the current functionality of the OSes; it concerns people who are interested in *thinking about* how OSes work. More specifically, it mainly addresses people who actually write and/or maintain kernel code, because that's what they LIKE to do.

    If you don't think about kernel architecture and kernel code and just want to boot the system and see what sort of fps you can get in Quake, this article wasn't intended for you.

    The article brings up some good and intriguing POINTS about kernels, which is the POINT of the article.

    I have some of my own ideas about the issues involved which the article prodded me into thinking about some more, so it is also a *successful* article.

    The modularity of microkernel code certainly has some advantages, but from a kernel coder's point of view a monolithic kernel's code can be made just as modular, with a clear interface between modules. If it is not done that way at present, that is the fault of the coders, not the architecture.

    OK, the Linux kernel code grows and grows, but not all aspects of the code are COMPILED into the user's actual kernel. In this sense the kernel already has modularity of sorts for the end user. If you don't have a USB port you don't compile USB support into the kernel in the first place.

    The key advantage of the microkernel brand of modularity is that you can add or remove USB support *without recompiling.* This will be of primary advantage to the system *maintainer.*

    When does a Linux box go down? Usually when you have to modify some core function of the kernel that requires a complete recompile. With a microkernel you upgrade from USB to USB2 by pulling one module and plugging in another (see the rough example at the end of this comment). Done. No reboot.

    The downside, as we all know, is system performance takes a hit from the resulting extra overhead to support communications between modules.

    But look, Moore's law remains in effect. A few months ago IBM announced they had enough stuff in development in the lab *right now* to keep Moore's law in effect for at LEAST the next ten years. Think about that, and that doesn't take any new, currently unknown and undeveloped technology into account.

    *What is known technology, right now, will keep Moore's law in effect for the next decade.*

    In a year and a half that 1.2 gig AMD is going to be a 2.5 gig something or other, and it will be CHEAPER than current boxes. In 3 years it will be 5 gig and cheaper yet.

    So what are we going to use all that added power for? To get 400 fps in Quake instead of 200?

    No, I'll TELL you what we're going to use it for (because that's the kind of swell guy I am): we're going to use it to run abstraction layers over the entire software architecture to make end use and system maintainability easier, THAT'S what we're going to use it for.

    Will benchmarks for a microkernel be slower than benchmarks for a monolithic kernel? Yep, sure will, BUT...

    They'll BOTH be faster than human perception, just as vi appears to run equally fast to the human user on a 486 and a PIII 900.

    And APPARENT speed is all that matters, people.

    Is the day of the micro kernel here? No, and the article itself says so.

    Is it coming? You bet it is.

    Maybe not today. Maybe not tomorrow, but soon, and for the rest of your life.
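    To make the "no reboot" point concrete, swapping a translator on a running Hurd box looks roughly like this (the mount point, device, and translator here are only examples, and the syntax is from memory):

      settrans -g /mnt                          # ask the old translator to go away
      settrans -a /mnt /hurd/ext2fs /dev/hd0s1  # attach a replacement, live, no recompile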

  • This is still more of a good idea than a usable system; only advanced users need apply.
  • >Isn't Apple's OS X Server based on Mach 2.5?
    >I haven't heard of any major problems with it.

    That is because it is Apple, and any kind of kernel is an improvement!!!

  • by SigVn ( 166099 ) on Wednesday November 01, 2000 @03:58AM (#658847)
    If so, why would I use the Hurd? If I stick with Linux
    (a product that works in the here and now) I will eventually get all the advantages that they are talking about anyway.

    I am not a kernel hacker... so I admit I do not understand all the issues here, but I do not see any reason to jump.

  • You killed Dr. Dobbs

    Anyone got a mirror of the article? I'd love to read it, but for some strange reason [slashdot.org] the site seems to be down.

  • by onion2k ( 203094 ) on Wednesday November 01, 2000 @03:47AM (#658862) Homepage
    Apologies for the appalling pun. But it's relevant. The thing that leapt out of the article for me was 'Hey, this would be great for portable devices'. Write a translator for Bluetooth gear: instant wireless networking. Write another for the GSM WAP cellular stack: instant wireless productivity connections. Don't have a filesystem? Chuck it. Added an IrDA port to your device? Plug in a new service and off you go. This would rock.

    Obviously this depends on the general overhead of the microkernel itself, and the resources available, but it's a nice idea. A fully tailored OS is a necessity on things that are 'thin clients'. Sounds like an option.
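    As a taste of how cheap "plug in a new service" already is on the Hurd, attaching the ftpfs translator is (roughly, and from memory) a single command, after which remote FTP sites show up as ordinary directories under /ftp: without touching the kernel at all:

      settrans -c /ftp: /hurd/hostmux /hurd/ftpfs /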
  • Just another example of this forum's hurd mentality.

    --

  • by BlowCat ( 216402 ) on Wednesday November 01, 2000 @04:31AM (#658877)
    • Date: Sunday, November 5, 2000
    • Time: 1-4 pm
    • Place: MIT campus, Cambridge, Massachusetts, USA
    • Room: 2-105, Building 2

    Building 2 is part of a complex at the intersection of Massachusetts Avenue (Route 2A) and Memorial Drive. As you cross the Charles River from Boston on Mass Ave, it is the first set of buildings on your right. A map is at http://whereis.mit.edu/bin/map?state=0&pri.x=325&pri.y=137. I don't know where the best parking is, since I walk...

  • Have there been any studies on how a microkernel might perform on RISC processors like the StrongARM, SH3, and MIPS? It's possible that HURD's shortcomings are only in the CISC world. Anyone have any info?
  • Wasn't the same sort of thing applicable to Linux?

    But given time, and work, it has progressed to be a very usable kernel. If the same sort of time and effort is put into the Hurd, will it not also succeed?
  • I haven't really followed this debate since '95, so I may be completely off-base here. I recall at the time, there were a couple big problems with micro-kernel rhetoric:

    1) The performance generally stunk. This may have changed, but I recall seeing user-level network protocol stacks get completely clobbered by monolithic kernels. There were a whole bunch of ways of getting around this (including moving entire protocol stacks into the applications!), but most seemed to fall into the category of "making microkernels run faster by making them not really microkernels anymore".

    2) The assumption that complexity just vanishes if you break everything up into a whole bunch of little servers. If you compare a microkernel to a modern, multi-threaded OS written in an OO language, then the thing you get from having a microkernel is protection from memory-smashing bugs in one service taking down other services. That's good, but not revolutionary.

    A monolithic OS can die because the file system code runs amok. A microkernel-based system with the same class of bug might still be "up", only you can't get to any files, or run any programs. A user won't really care either way.

    A lot of the advantages of microkernels seemed to be predicated on OS development and debugging staying exactly as godawful as it was back in the late 80s and early 90s. I hacked on an Ultrix kernel, so I have vivid memories of how gruesome the whole process is. I am assured by OS guys that things have come a long way since then.

    Also, I should point out that Mach is pretty huge for a microkernel. It's got everything in it but the kitchen sink. I think the Plan 9 monolithic kernel was smaller than just the Mach microkernel (not even counting all of the stuff you have to pile on top of it to make it actually work).
  • I think some other responses missed the point of your question.

    The Hurd:
    - Very small kernel with minimal functionality
    - Lots of small "servers" (I prefer the term modules)
    - The kernel can be easily replaced, without affecting anything else
    - The servers can be very specialized, so only those that are needed take up resources
    - If there's a problem with a low-level component, like the file system server, only that one component needs to be replaced
    - The kernel has to do a lot of marshalling between the servers, as well as for all of the objects and threads created by apps
    - The modularized, organized architecture makes it inherently slower

    Linux:
    - Larger, far more robust kernel
    - Low-level needs, like file-system and some networking support, are built into the kernel
    - All modularity starts at a higher level, like web support services, etc.
    - A kernel change means a recompile of all low-level services; example: you enhance a certain part of the kernel for your own purposes, someone else updates core functionality in the kernel, and you have to get that other changed code and compile it yourself, hoping they didn't break your stuff (avoiding this is the main benefit of being object-oriented from the ground up)
    - Compiling all core services together and not creating interfaces between them, letting them directly affect each other, is inherently faster (MUCH faster)

    Overall, each of course has its practical uses, each with trade-offs. Personally, on a workstation I like to have the fastest, most efficient kernel possible, like Linux. But if I was working on something more specialized or experimental, like a robot or mobile computer, I'd prefer the modularity.
  • I understand that Apple's new open source operating system Darwin is also based on the Mach microkernel. They have probably made the necessary changes to make it usable in a variety of practical applications. Why not make Darwin the base for the new-age open source OS?

    It seems that Darwin will have a pretty secure future due to Apple's deep commitment, and there will also be a large number of commercial applications for Mac OS X available in the future, which could be ported to the open source Darwin. In addition, it is pretty compatible with Linux/Unix systems, so Linux users would feel at home quickly.

    As OS X is practically Darwin plus Apple's proprietary components, why not make open source versions of those components and, voila, have a Mac OS X-compatible free OS, with a superb UI, easy software installation, etc.? It would probably be easier than developing Wine for Linux!

    Any insights on these ideas?
