GNU is Not Unix

Hurd: H2 CD Images 312

An anonymous submitter sends in: "The Debian GNU/Hurd team released a new Hurd CD image. Snapshot images are produced at four- to eight-week intervals, and the H2 images are the tenth in the series. The Hurd has grown from one CD image in August 2000 (A1) to four images in December 2001 (H2). These images are snapshots of a developing operating system, so suitable precautions must be taken when making an installation. As on other architectures, the most important programs reside on CD 1, while the others contain less important packages. For the moment, Hurd doesn't support card sound and partition size is still limited to 1 GB. The Hurd uses the Debian packaging system (dpkg and apt, as for Debian Linux), so it is simple to install and update packages."
  • first hurd post! (Score:1, Insightful)

    by Anonymous Coward
    yay for hurd! Now we have the choice between 20 window managers, 10 editors, and two kernels!
  • he said in the post that Hurd was not far away. This was ten years ago, and we're still waiting.

    maybe if slashdot talks a little bit more about it more ppl will join and code for it... maybe...
    • by Anonymous Coward

      It probably wasn't this far away... until most of the potential HURD hackers were working on Linux instead.

      It's just as well, though--there was more urgent need for a Free system (BSD wasn't yet) that's readily hackable than for one with an elegant but expensive design.

  • by whirred ( 182193 ) on Sunday December 30, 2001 @03:20PM (#2765359)
    Until Hurd is closer to Linux or BSD in partition size and overall capabilities, it isn't going to pick up much in the way of popularity.

    What they have now is a rather "chicken and the egg" syndrome - it won't achieve popularity until more people start developing for it, and people won't care enough to develop for it until it's more popular.

    However, the biggest drawback to Hurd is probably the fact that the people it might most appeal to (unix purists who don't like Linux or BSD) are less likely to use it, because they won't want to put up with the Hurd philosophy when BSD is already there.

    Who is going to use it? Linux has all the bells and whistles for people who love the GPL, and the BSD people who like pure unix and freedom (I know, what is pure unix anyway) are going to stick with *BSD.
    • The HURD is completely different from Linux - people will want to use it for the new features it brings. It's as different from BSD as it is from Linux - Linux & BSD are a lot closer to each other than either is to the HURD.

      Besides, if it looks and acts (for the most part) like your Debian GNU/Linux system, the entry bar is very low and people are more likely to try it.
      • The HURD is completely different from Linux [...] Besides, if it looks and acts (for the most part) like your Debian GNU/Linux system [...]

        Yes, this seems like a very good idea. I'm very much looking forwards to a completely different the same thing.

        • Um... read it again, Einstein. The subject is 'Hurd vs. Linux.... ' - i.e., the _kernels_.

          Having different kernels doesn't prevent Debian from being vastly the same system on multiple architectures, for example. Different implementations of a kernel aren't going to make huge differences either. The only differences a user will see are some userland tools being different, hardware support being different, etc. - actually using the system will be vastly similar.

          BTW - OT - WTF is up with slashdot and signatures? Someone, please, fix it....
          • This isn't quite true -- HURD has significant innovation in the kernel that, with a few userland changes, allows for significant changes in the way the system is used. Check out l/ hurd_toc.html
      • "Besides, if it looks and acts (for the most part) like your Debian GNU/Linux system, the entry bar is very low and people are more likely to try it."

        but if it looks and acts just like Debian, what's the point in using it over Debian?
        • Because the HURD doesn't use the same kernel design as linux. It isn't built to be a monolithic kernel. It is supposed to be a usable microkernel. We'll see how fast that catches on...
        • It looks and acts just like Debian because it is Debian.

          There seems to be much confusion in the philosophical, technical, and intangible Free Software world, I wonder why?

          Debian is an operating system, that is, an interface between the user and underlying hardware. If you will pretend with me that MS Wordpad is a reasonable word processor, then there is no reason to have to buy any other software for a basically functional system. In this context, you can understand why MS had a valid reason to consider Internet Explorer an integral part of Windows... because the Web became significant. Earlier, MacOS included MacWrite, making WYSIWYG word-processing a significant part of the "operating system"... you get it now.

          The kernel isn't important to the OS, other than that there is something interfacing the higher level systems to the low level hardware - consider Windows NT 4.0 and Windows 95. They have the same basic userland minus a few changes, even though the underlying systems are completely different, i.e. a 32 bit microkernel in NT versus a 16/32 bit monolithic kernel in 95. Both have 16 bit Windows components integrated.

          Likewise, Debian is a consistent operating system... namely, an incarnation of the GNU operating system. This GNU operating system is the realization of the GNU project, an OS that you can share freely with your friends without worrying about any corporations taking away that right.

          But Debian is Linux! Right? No, Debian uses Linux. Debian GNU/Linux and Debian GNU/Hurd are both GNU, with Debian policy and integrated management utilities. There are slight differences in the userland, major changes in the underlying system... but you don't directly use "Linux".

          Yes, that also means that RedHat is GNU.
          While Redhat uses Linux, that doesn't mean that RedHat "is" Linux. But for the sake of Redhat's marketshare, they are willing to ignore that fact. Wall Street doesn't care that "Guh-new" wants "hackers" to be "Free", or even that it is a clever little recursive acronym. They care that "Linux" is recognizable to investors in software developers, like "dot-com" and "information superhighway". Linux sounds cool, money is cool, that's why we're here, put it on the label.

          If you are using Windows NT, you aren't using the underlying hardware... the HAL, the microkernel components... at the lowest level most users see, your interface is the command interpreter. Usually you use the Explorer.exe interface. With Win 95, without the underlying components, the lowest you'll see is still the text interface, or the Explorer GUI.

          If you are using a GNU (GNU/Linux) distribution, you aren't using your computer's internal hardware, or the Linux kernel. It would be very uncomfortable to try using Linux without at least a BASH interface running atop it, to accept input from you, the mere user.

          The issue isn't that of using the Hurd instead of Debian; Debian will use the Hurd as well as Linux. The issue is that of using the Hurd instead of Linux in your Debian system. Theoretically, the difference can be compared to using Win NT 4.0 instead of Win 95. That is, poorer hardware support and somewhat less simple subsystem configuration, but the promise of a microkernel proving to be more robust than a monolithic kernel like Linux.

    • by Anonymous Coward
      The Hurd is still very much a work in progress. I suspect the developers are not tackling the 1gig partition problem to prevent new users.

      Obviously it isn't a trivial change to fix that (well maybe if it was a 64bit HURD ;)), but it is probably the biggest thing holding people back. If they wanted users they'd fix it.

      Who is going to use it? Developers. In theory anyway, once The Hurd is at a usable level (nothing like the 1gig partition limit remaining), being microkernel based it should experience growth FASTER than Linux. Of course Linux is very mature (ignore the VM trouble!) so it'll take a while to catch up, but hacking the Linux code is a very steep learning curve.

      Hacking The Hurd is something that will be well within reach for CS students.

      Once The Hurd reaches a maturity level close to Linux/BSD only then will people use it. It offers features beyond Linux, or Unix in general.

      But it isn't there yet, so don't download it unless you know you want to.
      • No. What's needed to get people to TRY the Hurd is an obvious way to install it into a working Linux system. At various points it would have been easy (and will be again) for me to leave a 1G partition empty near the start of the disk (say just above /boot). But it wasn't obvious how to use this. (Yes, I know it's possible. But it isn't obvious, and I've never felt like digging when I could, and I can't dig when I'm re-partitioning my disk.)

        This is analogous to the Linux on Windows problem. Linux would get popular a lot faster if it were easier to install a Linux-on-Windows setup. (Well, it may be easy now. I've seen statements to the effect. But now I'm out of disk space on the Win95 machine, and as I don't like the recent licenses... Well, to be truthful, I didn't like the old one either. But I installed it before Congress passed that law making digital signatures binding (and conveniently not defining what a digital signature was). I don't know what I'll do if I ever need to reinstall. I might just drop Windows entirely (what I've heard about the XP license is sufficiently frightening that I don't feel any need to pay money to prove that I don't want it).)
    • by mbrubeck ( 73587 ) on Sunday December 30, 2001 @04:53PM (#2765614) Homepage
      Who is going to use it? Linux has all the bells and whistles for people who love the GPL, and the BSD people who like pure unix and freedom (I know, what is pure unix anyway) are going to stick with *BSD.

      Time to repost the famous announcement again:

      Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all-nighters to get a nifty program working? Then this post might be just for you :-)
      -- Linus Torvalds, October 1991
      • Your answer is a bit off the mark.

        He didn't ask who is going to develop it, but who is going to use it.

        I'm sure there are still some people that prefer building an OS "from scratch", so there will be developers for The Hurd.
        But if we look at the speed of development of The Hurd, there may not be that many developers interested in The Hurd. (not a flame, just an observation)

        From the user POV, the situation is a bit different. When Linux was started:
        - Minix users were frustrated because they couldn't do what they wanted with the OS.
        - BSD's future was uncertain because of the lawsuit (I think that was at the same time).

        So the free OS choice was limited, which is not the case now.
    • Dude, nice attempt to bait the GNU people.

      You imply that people either love freedom or the GPL, but not both. Do we *really* have to have that conversation again here? Unless you're being paid by microsoft, this is just senseless infighting between two groups whose goals are almost totally in alignment.

      It reminds me of a time some friends of mine wouldn't speak to each other. Why? They were both animal rights advocates - but one group thought that it was a good idea to argue that animal testing was ineffective, and the other thought this was a bad idea, because it implied that if testing worked, it would be a good idea. As a result, the movement splintered, while the research advocates ("animal rights opponents") spoke with a unified voice. The internal strategic debate ruined the overall message they were both trying to send.

      The parallel to the BSD vs. GPL debate is striking. It is a fun and important debate to have, but ultimately the harm that comes from ubiquitous closed-source can't-build-on-it software, which satisfies the goals of neither camp, vastly overwhelms the importance of this philosophical discussion. It makes it seem like theologians arguing over how many angels can fit on the head of a pin. If I were Microsoft's head evangelist, I'd be silently funding extremists on both sides, trying to create bad political blood between these groups.

      I'm not saying we shouldn't argue, obviously the issues need to be fleshed out. I'm just saying that these arguments ought to show respect for the other side (no more "we're more freedom-loving than you" namecalling), and that they ought to always be mindful of the context they are operating in - discussing the best way to create a body of free software in a world of proprietary de facto standards.

      So I'm begging with all of you, show respect for your adversaries in this discussion. Acknowledge that the point of view held by the other side is understandable even if you believe that it's in error, but most importantly always make a special effort to identify the context of the discussion: that is, how can we best preserve freedom against those who would prefer all software to be proprietary?
  • by Anonymous Coward
    Forget about GNU for a second - what are the technical reasons why anyone would want to use Hurd?
    • by squiggleslash ( 241428 ) on Sunday December 30, 2001 @03:50PM (#2765449) Homepage Journal
      It's a different architecture to Linux. Linux has a monolithic kernel - the kernel is essentially one block of code, containing all the operating system services, the device drivers, etc. Some of the inflexibility of the approach has been alleviated by the use of modules, but those modules still become "part of the kernel" when loaded, sharing the same memory space, resources, etc, so we still refer to the Linux design as monolithic.

      HURD is a microkernel design, based on Mach, an early experimental microkernel that, commercially, is the basis for some Unixes (Digital Unix, IIRC) and NeXTSTEP. Mach itself is tiny and deals with little more than task switching, IPC, and memory management. Processes run over Mach to provide the major services, so there's a TCP/IP server, and a file system server, etc., together with the device drivers. The whole lot, Mach + services, is HURD. (Actually, I can't remember if HURD uses Mach directly or something based on it. Someone else can address that.)

      Technical advantages? Depends on who you ask. Mach is not known for being a particularly good example of a microkernel, and Torvalds himself has dissed it. It's just a different approach ultimately. I believe, from experience, that it's generally easier to implement things like real time work in a microkernel because a more monolithic structure requires attention be spent in almost every part of a much larger kernel ensuring everything has a finite latency, but that doesn't mean it's impossible.

      Nothing in the above should be taken as meaning support for either architecture, but opinions on this score are so extreme that I'm expecting to be flamed anyway. Oh well, c'est la vie.

    • by Anonymous Coward
      security. security. security. HURD is less usable right now but will catch up soon. It's more secure and less failure-prone (crashing a server/translator/device driver doesn't bring down the whole system, unlike winxx, linux, bsd or any other OS). It's also more secure, with user privs allocated to device drivers and the removal of a single root account.
    • by mindstrm ( 20013 ) on Sunday December 30, 2001 @04:09PM (#2765511)
      Because they want to work with something new.
      Because they have some ideas as to how the hurd could be adapted to their purposes.
      The list goes on.

      A lot of people are saying things like 'this will take years to reach the popularity of linux' or 'until it has all the bells and whistles'. Hello....

      Who ever said the hurd was supposed to be ready yet? I don't recall hearing it. The hurd is there for people to work on because they want to, period.

      There was a time when Linux was just as much of an ugly duckling, you know.. where nobody would use it for anything serious. It was something to be tinkered with, nothing more.
      • by Anonymous Coward
        I remember Stallman saying that hurd was going to be ready for real use in the near future. Of course I'm somewhat fuzzy on the details because this was in a talk he gave in 1985 or so.
    • HURD is better for massively parallel processors and user-customizability. It is massively multithreaded, has a message-passing-based kernel, and the user can load their own O.S. drivers without violating the integrity of the system.

      It may well turn out to be the open-source architecture that powers future machines consisting of hundreds of CPUs.

      Another nice thing about the HURD, even if it doesn't do that, is it's an interesting test-bed of new Operating System ideas. That's one reason it hasn't reached 1.0 - everyone keeps testing new things out. There's no problem with that - it brings new innovations to a lot of other stuff.

      Anyway, HURD shows the true power of free software. There's almost no one working on it (comparatively), but it has just as rich an environment as everything else because of the large mass of free software that can be ported. It shows that with Free Software, innovation can happen easily, because the developers can focus on the new stuff, and just use the existing tools to make a complete environment. Think about the uphill battle the HURD would have to go through if they had to write all of the userland themselves, too!
  • Hurry! (Score:1, Funny)

    by JWhiton ( 215050 )
    Careful! Get the Hurd before the stampede!

  • by Zico ( 14255 )

    Linux really doesn't impress me, but if I was into that whole GPL philosophy, it seems like Linux would be an easy choice over Hurd, which seems pretty far behind. Can a Hurd supporter give a couple of reasons why anyone would choose Hurd over Linux?

  • I think variety is good. Keeps things interesting. But what bothers me about HURD is that they promote it as having all of the new things, but read the following:

    > On the negative side, the support for character devices (like sound cards) and other hardware is mostly missing. Although the POSIX interface is provided, some additional interfaces like POSIX threads, shared memories or semaphores are still under development.

    Ah, folks, that is the heart of HURD: the advantage of handling shared memory, semaphores, clusters, etc. What the HURD developers should have done is focus on the hard stuff; then I think people would whine less.
  • Card Sound (Score:2, Funny)

    by druiid ( 109068 )
    Hmm, I've never heard of this card sound. Is this some sort of new audio technology? I guess that since linux doesn't support it either, it's no wonder Hurd doesn't support it.............

    Okay, I'll shut up now :)
  • Does having a microkernel slow things down at all?
    • Re:speed? (Score:4, Informative)

      by be-fan ( 61476 ) on Sunday December 30, 2001 @04:40PM (#2765584)
      Usually. The problem is that x86 just wasn't designed for microkernels (or operating systems in general, it seems). A system call (which is essentially nothing more than a jmp) takes 40 times longer than a regular function call (on my PII 300 anyway). That's the performance hit for a monolithic kernel like Linux. A context switch (which microkernels do tons of) takes two user/kernel transitions, plus one save of register state (~100 bytes on x86) and one restore of register state. In computer time, a context switch is glacially slow. Now, microkernels circumvent a lot of the slowdown through tricks like buffering commands (batches commands and sends them together in one message), but it still has more overhead than the monolithic kernel method. Of course, given that people think that KDE2 is a usable piece of software (speedwise), it seems that people don't notice speed differences anyway, so the point may be moot.
      • Re:speed? (Score:3, Informative)

        by Breace ( 33955 )
        The problem is that x86 just wasn't designed for microkernels (or operating systems in general, it seems)

        I can smell flamebait when I read it. Sorry, but that statement is plain silly. ia32 has (as you asked earlier in another comment) excellent features to support a microkernel (or any OS), such as multiple privilege levels, an extensive protection mechanism and relatively fast context switching.

        A system call (which is essentially nothing more than a jmp) takes 40 times longer than a regular function call (on my PII 300 anyway).

        A jmp?? Don't you want it to return??? Linux uses a software INTERRUPT to do system calls (bad decision in my opinion, ia32 provides fine call-gates that are a lot faster).

        A context switch (which microkernels do tons of)

        A microkernel does not have to do tons of context switches. I think what you are talking about is message-passing kernels. A microkernel does not have to be based on message passing. It can use calls, and in fact the ia32 architecture lends itself very nicely to switching between privilege levels quickly, thereby providing protection that a monolithic kernel lacks.

        The proof that a well designed microkernel can be VERY fast is QNX.
  • by markj02 ( 544487 ) on Sunday December 30, 2001 @04:21PM (#2765537)
    I think this may be the way out for the current problems with the Linux kernel. The 2.4.17 Linux kernel distribution is almost 30Mbytes of sources, gzipped. Most drivers, installable file systems, and other kernel functionality are either in the kernel source or need to be installed from sources. Getting the right ACPI or APM options requires endless recompiling and rebooting. A bug in any one kernel module will usually take down the whole system. None of the major Linux distributions has placed reconfiguration of the kernel within reach of the average user, and even for the experienced user it's kind of a pain. Imagine where Linux would be today if you needed to recompile the entire command line toolkit or all of KDE every time you install a new version.

    Microkernels attempt to give you a much more "UNIX-like" way of making a kernel: a lot of independent little "servers" that talk to each other and are somewhat isolated from each other. A bug in one kernel module will often not crash the whole system, and there is much less coupling between kernel components. Microkernels are not the most efficient way of achieving that kind of modularity, since the memory protection mechanisms they use are more costly than relying on compiler/language support together with dynamic loading, but given that people are going to continue to write lots of C code for the kernel, a microkernel may be the best compromise for achieving a modular, extensible kernel in the real world.

    Well, it's good to see that both the Hurd and the Darwin projects are coming along. I'll certainly give this a try. It's hard for any new kernel architecture to replace something as mature, functional, and widely-used as Linux. But if something like the Hurd turns out to be significantly easier to extend and hack, it may well catch up quickly. Another path to acceptance is that people find that, despite having fewer drivers and less functionality, the functionality that something like the Hurd offers may be easier to configure and deliver to end users in prepackaged form (i.e., without "make menuconfig" and lots of obscure decisions).

    • by be-fan ( 61476 ) on Sunday December 30, 2001 @04:35PM (#2765568)
      I think this may be the way out for the current problems with the Linux kernel. The 2.4.17 Linux kernel distribution is almost 30Mbytes of sources, gzipped.
      A microkernel-based OS would be just as big. Unlike Microsoft, the Linux developers need to develop (and distribute) all of their own drivers.

      Getting the right ACPI or APM options requires endless recompiling and rebooting.
      That's not the fault of the monolithic design. AtheOS, for example, uses a modular monolithic kernel, and it can dynamically update kernel components.

      A bug in any one kernel module will usually take down the whole system.
      But Linux is generally as stable as any microkernel, and monolithic kernels like FreeBSD are more stable than any existing microkernel (except maybe QNX).

      None of the major Linux distributions has placed reconfiguration of the kernel within reach of the average user, and even for the experienced user it's kind of a pain.
      And it would be the same on a microkernel. The compiling process isn't complicated; the configuring is. That configuring would still be present on a microkernel system, it's just that less compiling would be necessary. Since compiling can be hidden behind a GUI tool, and the kernel only takes a few minutes to compile on modern hardware, this doesn't gain much.

      microkernel may be the best compromise for achieving a modular, extensible kernel in the real world.
      Experience has shown that a modular monolithic kernel seems to be working quite well.
      • .. of this, though that needs to be qualified by a couple of [or three] points:
        • The make-kpkg package already allows for module-only compilation
        • Messing with a Linux kernel config would demand a complete recompilation, from what I can gather about the process
        • Of course, it's specific to a Linux kernel. Porting it to the Hurd may be impossible/undesirable owing to the very same issues that led to the Hurd's development in the first place.

        Oh well.

      • by markj02 ( 544487 ) on Sunday December 30, 2001 @06:03PM (#2765827)
        A microkernel-based OS would be just as big.

        Of course it would. But the different parts of it (drivers, file systems, personalities, distributed shared memory, clustering, video support, VM strategies, etc.) could be developed and tested by different developers, independently. With the Linux architecture, almost everything goes through the bottleneck of the Linux kernel developers, and it just isn't working in practice: important functionality takes years to make it into the kernel. It's not for lack of effort or dedication, it's the architecture and lack of modularity.

        The compiling process isn't complicated; the configuring is. That configuring would still be present on a microkernel system, it's just that less compiling would be necessary.

        Of course, compiling is "complicated". It's a process that involves many steps and is completely alien to many users. It also takes forever. And minor configuration problems result in complete failure. And if the module you want isn't part of the kernel distribution, things only get worse.

        With a microkernel, module installation could be as easy as installing a new command line program. You can still make configuration mistakes, but a lot of the time and effort can be eliminated. And with a really good microkernel architecture, you can also automate the process much more than it currently is.

        Experience has shown that a modular monolithic kernel seems to be working quite well.

        It functions well. It doesn't "work well" in the sense of being easy to install, configure, or extend.

  • I run it. (Score:5, Insightful)

    by pinkpineapple ( 173261 ) on Sunday December 30, 2001 @04:31PM (#2765558) Homepage
    I installed it on my system on its dedicated spare disk, boot it, run it and update the release from time to time.

    It's not great as for device support, but getting there. Drivers have always been and will always be a problem for ANY OS (look at MacOS X and *BSD for living examples.) There are other features in the OS itself that make it worth a try.
    If you guys are curious about it, you should definitely give it a try. Some compatibility layer is also provided for Linux drivers and apps. This needs work, but what doesn't really.

    The good thing is the upper layers, which provide POSIX compatibility for Unix developers to port their work. Pretty straightforward, and the main reason why the distro has grown so much in such a small amount of time.

    I read false assumptions and mistaken comments on this list about what HURD is. It's a kernel like Linux, and it's based on a microkernel architecture. Mach 4.0 happens to be this microkernel, but the architecture is not locked down, so this can evolve if need be.
    I read also people asking why HURD exists at all. The answer is pretty simple: Why not? In the ten years it has existed, it should have died many times, but it's still here. It's not a commercial OS like BeOS, so it doesn't need to generate streams of revenue to survive. It's just a bunch of code with ideas in it that are still amazing enough today to keep developers putting effort into it.

    After all, we are living in a society that should encourage diversity and the growth of new ideas (wasn't the US built by pioneers?). So, I am getting sick and tired of the moronic way of thinking in black & white (binary): only two alternatives (Linux vs. Windoz) and no space for the others. And why is that? Why not let people who enjoy using BSD and developing with HURD just do it, without being hassled by the 2 main opponents?

    Feeling grumpy because of the rain today.

    PPA, the girl next door.
    • I read also people asking why HURD exists at all. The answer is pretty simple: Why not?

      That's about the most rotten explanation and justification of HURD I've ever heard. The reason why HURD exists can be broken down into two categories: technical and political.

      The technical reasons come down to the philosophy of uKernel design. It boils the uKernel down to the bare-bones requirements: memory allocation and time slicing. This allows for greater abstraction of O/S services from system services. One practical benefit is greater ease in O/S porting (just a couple of K of assembler for the memory allocation and time slicing; all other services can be expressed in C code). Another benefit is that one can prototype a radically different design in O/S services without disrupting working implementations of those services (multiple O/S APIs; Timeshare MVS with Linux with Hurd...). It's also been rationalized that uKernels will work more efficiently in SMP hardware environments, because it is easier to distribute all O/S services as threads to different CPUs (better abstraction model in an SMP environment).

      The actual reason HURD still exists (sort of) is that RMS is a raving technoideologue that thinks HURD is a purer, ultimate form of an O/S than Linux.

      Me? I like the ideas in HURD, but I want something that works better than M$ now. Wake me up when HURD officially runs on the L4 uKernel rather than Mach. Until that happens, HURD will always run like a dog.

  • by be-fan ( 61476 ) on Sunday December 30, 2001 @04:47PM (#2765599)
    The only reason microkernels exist is limitations in existing protection mechanisms. There are only two levels, kernel and user, and each must be protected from the other. My question is this: what about something like the x86 segmentation mechanism? x86 segments have the cool property that a piece of code has the privilege level of the segment containing it. The nifty thing about that is that there are 4 privilege levels, so you can have the kernel at the lowest level, less important stuff like the GUI at a higher level, and the app at the highest level. That way nothing can crash a more important component.

    I was wondering why this scheme hasn't been extended to paging. On every memory reference, the processor could check the privilege level of the page containing the currently executing code, and make sure that the target memory has an appropriate privilege level. This makes things even faster than a mono-kernel, since the only thing necessary to do a system call is a simple jump to the appropriate code (which would be dozens of times faster than a standard system call on x86).

    This shouldn't be any slower than the current way of doing things. The privilege of the current code would only have to be read whenever a page boundary was crossed, and would only reference memory during a TLB fault (which would have to reference memory anyway). The processor already does a protection check on the kernel/user bit in the page table entry anyway, so that scheme could be extended to multiple privilege levels without a slowdown. Am I missing something, or does an existing processor already do this?
    • by Improv ( 2467 )
      It's best to avoid processor-specific functionality in large architectural decisions if you want to be portable. Besides, it's nicer for modern systems to have components that are layered better than a cake, so that if I have two very important parts, I know that they can't crash each other.
    • by hughk ( 248126 ) on Sunday December 30, 2001 @05:24PM (#2765697) Journal
      Unfortunately, this has become the original chicken-and-egg situation. Hardware designers provided four distinct privilege rings back in the early days (think, for example, of the VAX architecture). Under VAX/VMS, RMS (the record-oriented file system) and databases ran in an intermediate mode (Executive). The fourth mode wasn't used. This gave them some extra capabilities without letting them crash the OS (OK, they occasionally did, but not very often). The OS was fairly monolithic, but extremely stable.

      Under VAX/VMS, you didn't jump directly to the code, though. The resulting page access fault would take too long to execute. You did a call that triggered a change-mode, which went through a dispatcher. This was still relatively fast, as no process switches were involved. It also meant that the argument list was always passed in a checked form (though not the contents of the list; that was up to the system service).

      Unfortunately, Unix concentrated on only two levels, User and Kernel. Some RISC microprocessor designers then decided that all this extra stuff was superfluous, so they dumped the support from the MMU.

      So if you design for the lowest common denominator, then OK, you have only two levels. The uKernel makes this difficult because you have to context switch to process requests. If this is a heavily used system service, do you really want to do that? On the other hand, modern processors combined with a modern Unix can context switch pretty fast.

    • by Anonymous Coward
      what about something like the x86 segmentation mechanism?

      It's always a trade-off between security/stability and speed. x86 has these cool segmentation features, but no one uses them because they are slow. The more levels of execution you have, the more time you spend switching between levels. The more bulletproof error checking you add, the more time the CPU spends checking for errors.

      And as someone already noted, for non-x86 architectures you won't even have segment support in the hardware, so it will be slower still there.

      The way most *NIX operating systems protect themselves is to rely on the memory manager hardware. If a process tries to touch something it shouldn't, the OS can get a page fault and trap the error.

      I will be interested to see if HURD works out as they hoped: in principle, it ought to be really secure and really crash-proof. But the reality seems to be that it is taking years to get off the ground; perhaps some HURD fan can explain why.
      • The x86 segmentation mechanism is unavoidable. Every single memory access (even on a P4) goes through the GDT. Anyway, I wasn't talking about using the segmentation mechanism as a whole, just the nifty property that the privilege level of a task is decided by the privilege of its segment (or page, if you ditch segmentation). From what I can see, it should incur no more of a performance impact than the user/kernel checking the processor already does.
  • Now understand, I know very little about the approach, but from what I gather, microkernel operating systems run all drivers as services. You can actually kill your keyboard, sound, disk access, etc. driver with a simple kill command. This also allows easy portability and scalability, because all of your drivers are not IN the kernel code itself but are external programs.

    So tell me, what advantages/disadvantages does this have over QNX? QNX may be closed source, but it is free for home use. I really would like to know how this stacks up against QNX, on which I was actually able to play Quake 3, WITH SOUND! Oh, and QNX sets up and configures everything on my system, AND WORKS; you can't get much better than that.

    Something about HURD doesn't make sense to me. One-gigabyte partitions and FOUR distro CDs. Now let's say each CD only uses 512 megs. That is two gigabytes. Does something here strike you as odd?

    Anyway, I am really not an avid Linux person, after attempting to install Debian (video setup, ARGH!!), Mandrake (VNC would stop the system from finishing boot), and Darwin (OK, that was stupid to even try). Things like QNX and Windows 9x/2k just work. Sure, Windows 9x is unstable, but at least I can get it installed with my eyes shut.
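    For the record, the arithmetic above (under the poster's assumption of 512 megs per CD against the 1 GB partition limit):

```python
# Hypothetical numbers from the comment above, not measured CD sizes.
cds = 4
mb_per_cd = 512          # assumed lower bound per CD
partition_limit_mb = 1024  # the reported 1 GB HURD partition limit

total_mb = cds * mb_per_cd
print(total_mb)                       # 2048
print(total_mb > partition_limit_mb)  # True: the distro can't fit one partition
```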
    • IANAKEOAS (I Am Not A Kernel Expert Of Any Sort), but microkernels differ from monolithic kernels in many ways other than hardware drivers running as services. They only pass messages between components, meaning EVERYTHING is a server. Want to run Linux programs on a Mach kernel? Just run a Linux server and the software will run just fine. Good examples of this are MkLinux and Windows NT. Windows NT runs Win32 as a server, but by the same token can run a POSIX server, so POSIX-compliant software also runs on it. MkLinux is a Mach kernel running a Linux server to run Linux software. Microkernels in theory are platform-agnostic, as they only pass messages between various servers. Most people except Linus Torvalds really dig microkernels and have put a lot of work into them. Another advantage is that you can remain platform-agnostic in design yet run servers specific to the hardware you're running on. You can go from a StrongARM system with 16 megs of RAM to a 32-processor x86 system using a majority of the same code. Notable microkernel-based OSes include Windows NT, Mac OS X, and of course HURD.
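    As a sketch only (the Microkernel class and its method names are invented for illustration and bear no resemblance to real Mach ports/IPC), the "everything is a server passing messages" idea looks roughly like this:

```python
# Toy microkernel: the kernel knows nothing about filesystems, POSIX, or
# Win32; it only routes messages to whichever server registered a name.

class Microkernel:
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        """A 'personality' server claims a name."""
        self.servers[name] = handler

    def send(self, server_name, message):
        """Deliver a message; the reply comes back the same way."""
        return self.servers[server_name](message)

kernel = Microkernel()
# The same kernel can host several personalities side by side.
kernel.register("posix", lambda msg: f"posix handled: {msg}")
kernel.register("win32", lambda msg: f"win32 handled: {msg}")

print(kernel.send("posix", "open /etc/passwd"))  # posix handled: open /etc/passwd
```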
  • by jdavidb ( 449077 ) on Sunday December 30, 2001 @05:39PM (#2765753) Homepage Journal

    We should all use Hurd instead of Linux. Linux numbers disk partitions from 1 (/dev/hda1, /dev/hda2, ...), while GRUB, the Hurd bootloader, numbers partitions from 0. As any self-respecting computer scientist knows, it is more proper to index things beginning with 0. Therefore, Hurd is a superior operating system, and we should all immediately switch to Hurd.
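    For what it's worth, the mapping between the two conventions can be sketched like this (classic GRUB's (hdX,Y) syntax for IDE disks; the helper function is hypothetical):

```python
# Classic GRUB counts disks and partitions from 0: (hd0,0) is the first
# partition of the first disk. Linux IDE device names count partitions
# from 1: /dev/hda1 is that very same partition.

def grub_to_linux(disk: int, part: int) -> str:
    """Map a GRUB (hdX,Y) pair to the Linux IDE device name."""
    return f"/dev/hd{chr(ord('a') + disk)}{part + 1}"

assert grub_to_linux(0, 0) == "/dev/hda1"  # (hd0,0) -> disk 0, partition 1
assert grub_to_linux(1, 4) == "/dev/hdb5"  # (hd1,4) -> disk 1, partition 5
```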

    • But grub runs on linux too!
    • Re:Switch to HURD! (Score:2, Informative)

      by V.P. ( 140368 )
      You can use GRUB with many OSs, not just HURD. I'm using it with Linux right now. Much better than LILO since it can actually read ext2 and other filesystems and find the kernel and its configuration file using a path name, without hardcoding block numbers in it.

      Which also means that you don't have to put up with LILO messing up your system every now and then, or having to run 'lilo' every time you install a new kernel or want to change something in the boot configuration.

      • Actually I'm using Grub w/ Linux, too. Works pretty nice. And I'm planning on upgrading my lilo-based system at work to grub the hard way (I've actually recompiled everything from scratch a la Linux From Scratch []. Then I want to install grub and upgrade to ext3.)

  • Most of the people here would have read about the differences between micro and monolithic kernels [almost a holy war in OS design]. In reality, the microkernel design has been very much an academic exercise rather than a commercial one. Though that might be due to various other reasons, it does show that there is some merit in 1) making things work for a particular case, and 2) once working, making them work for others, RATHER than trying to get a simpler solution in one go.

    I _do_ know that microkernels are much more than what I seem to think of them from the above :) Yes, the design and the philosophy are very different and surely interesting, but practical?...

    I have taken advanced OS classes, and I really do feel that Mach, though it had great ideas WAY beyond its time, was horribly complex and interwoven, so much so that anyone cringes on hearing of a system based on Mach :)

    I think the Hurd is in a good position to prove us all wrong :) as it's closely tied to the Debian developers [who have done great work till now] and it has been slowly [very :)] progressing....

    Best of luck to them :)

  • Beos.

    Except free (as in beer, so to speak...), and not quite hitting puberty yet...

    I've scanned all the postings, and I haven't seen any other comparisons, but the descriptions from here and from the web page seem like about the same architecture...minus the extreme multi-threading and the integrated gui...

    At any rate, it sure seems like this would be (yet another) great base to work from for re-building that OS that ain't no more...
    • The most important parts of BeOS (in my opinion) were its multi-threading, its GUI and its multimedia "things". So why exactly does this sound like BeOS? It's not even close, in my opinion. And they're not rebuilding it; as far as I know, the [plans for] the Hurd are a lot older than BeOS. And I think it's fair to say the Hurd is at the end of its puberty, isn't it?
    • BeOS was a multimedia demon, and could support multi-terabyte partitions years before any free *NIX.
      HURD has no sound and a 1GB partition limit.

      Say what?
  • The very principle of a microkernel is stupid, and especially so for a free software OS. See for instance this article against microkernels [].

    The basic premise behind a microkernel is that device drivers will be black box proprietary binary code from untrusted third parties, hence require clumsy run-time protection. This hypothesis has been invalidated in practice for proprietary systems, and doesn't even make sense in theory for free software systems.

    There is no need whatsoever for expensive memory protection between modules at runtime. Modularity is great, but at development time, not runtime. HURD doesn't give you any additional development-time modularity; if anything, it removes it. If you want development-time modularity, drop that stupid C language and use a modular language, such as Modula-3 (SPIN OS), SML (Fox, Express), or Erlang (standalone Erlang).

    Microkernels were the latest hype of the 1980s for OS development. They've only ever been hype, and it's sad that GNU people waste their time on such a stupid concept, when there's so much more to OS design, including lots of proven concepts that are just waiting to be implemented in free software (who's gonna implement the lost features from Genera? From Eumel?)

    • If a module runs with limited privileges, security flaws in it can't be exploited to subvert the rest of the system, and sysadmins can safely allow normal users to install (or even develop) special-purpose modules for themselves without risk to any users who don't want to use those modules.
      • Even more important to me, I don't think they can prevent you running your own modules. The reason that is important is education: I learned amazing things on my school accounts by compiling perl, fetchmail, etc. and installing them in my home directory. Just think what I could have learned if I could have simulated a whole operating system! (Which you can do by running a "sub-hurd".)

  • by Mr_Icon ( 124425 ) on Sunday December 30, 2001 @07:24PM (#2766011) Homepage

    If you write programs for linux today, you shouldn't have too many surprises when you just recompile them for Hurd in the 21st century.

    -- Linus Benedict Torvalds, 31 Jan 92 10:33:23 GMT []

    • That whole article is great, my favorite part:

      Andy Tanenbaum:
      >I still maintain the point that designing a monolithic kernel in 1991 is
      >a fundamental error. Be thankful you are not my student. You would not
      >get a high grade for such a design :-)

      Well, I probably won't get too good grades even without you: I had an
      argument (completely unrelated - not even pertaining to OS's) with the
      person here at the university that teaches OS design. I wonder when
      I'll learn :)

      It's like the guy who passes on that once-in-a-lifetime opportunity... something makes you feel small down the road.
  • great.. (Score:2, Insightful)

    by Suppafly ( 179830 )
    Yeah, Hurd is great... unless you want sound, or partitions large enough to actually install anything. Even MS-DOS could handle sound and greater-than-1-gig partitions.

    I think I'll stick with debian/linux and wait for Hurd to get a little bit more mature
    • Dude ... I couldn't help but notice your sarcasm :-).

      But I think quite a few people agree with you; I have been reading the comments and noticed the same.

      Hurd has potential, but right now there is absolutely no reason to switch from Linux; besides, the only difference between the Hurd OS and Debian Linux is the kernel ... both have apt :-)

  • ...similar to User Mode Linux []?

    I'd be interested in trying HURD out, but I don't want to (a) reboot my machine between HURD and Linux use; (b) buy a new box (my UPS is out of sockets...)

  • It seems like every time I turn around, someone is finding a way to bitch, moan, and complain about something. The GPL guys don't like the BSD license. The BSD guys don't like the GPL. *BSD users don't like Linux and vice versa. Some people call a particular OS "GNU/Linux" while others just call it "Linux." Now we get to have the monolithic vs. microkernel debate... all over again.

    I've pretty much come to the conclusion that most disputes that persist are in fact sources of entertainment or diversion rather than legitimate issues of importance. People get bored and engage in a high-tech version of the dispute from Gulliver's Travels, where two groups were fighting over which end of an egg should be cracked.

    Let me give you all a little piece of advice. Think for yourself, form your own conclusions. It is not necessary that anyone agree with you, or that you agree with anyone else. Everyone is going to do exactly what they damn well please, including you, so quit yer bitching. Or at least find something more productive to discuss.

    Now don't get me wrong, I'm all for open debate of issues. It's just that when those debates drag on forever and nothing gets resolved, they aren't serving any productive purpose. Instead they create division where none need occur.

    Another thing to remember is that people are going to disagree on things. That is normal and not something to pick a fight over. Anytime I see a group of people in perfect, or near perfect, agreement on something it is a sign that people aren't thinking for themselves. Of course on the other hand when there is a group where no one agrees it is often the case that they are all just trying to disagree for its own sake. Neither situation is a good one.

    Think for yourself and expect others to do the same. Sometimes you'll find agreement with another person. Sometimes you won't. Just because the two of you see things differently doesn't mean that only one of you is right, or that either of you is right for that matter. You've got to call 'em like you see 'em. If everyone were to do that the world would be a better place.
