GNU is Not Unix

Debian GNU/Hurd Preinstalled by UK Computer Maker

Anonymous Coward writes "Space-Time Systems in Malvern, England, is now offering computers with GNU/Hurd pre-installed in parallel with the Debian GNU/Linux system. Please see this page for more information." Warning: the Space-Time Systems site loaded slowly when I checked it this morning. You may want to use the (slightly out-of-date) Google cached version for the moment.
  • by Anonymous Coward
    Hi,

    is BeOS a single or multiserver OS?
    The Hurd is multi-server multi-threaded, so the overhead is quite high.
    But then, we did NO profiling and NO optimization so far.
    By introducing new RPCs at strategic places, speed will certainly improve.
    However, don't worry about that too soon.

    brinkmd@debian.org
  • by Anonymous Coward
    Someone ported the Linux kernel to userspace, so you can do this with Linux too. There is more overhead this way, but it's less than you think, since all the running "kernels" share unmodified code pages.

    By the time HURD is ready for primetime, Linux may have already cloned all its features.
  • No point in mining the UNIX vein further

    There's no reason to run over to or even follow the development of the Hurd.

    Those are inane comments. By your philosophy, *BSD advocates using 4.4BSD-Lite derivatives could just as easily claim that since free *BSD derivatives were available before Linux, there was no point in even developing Linux.

    That's not to say that Linux is the ultimate OS, because it isn't. It's a total piece of junk in many ways, but that's what you expect with UNIX, and that's where Linux descended from.

    You're quite correct - in *very* many ways, Linux is horribly coded when compared to other OSes (i.e. *BSDs), using temporary patchwork rather than doing the right thing from the beginning. All the more reason for many *nix derivatives to exist - the users/developers can make their own choice.

    Hurd is too close to the UNIX/Linux style for anyone to care.

    As an ex-NEXTSTEP developer who regrets NeXT's acquisition by Apple, and now a vocal free software advocate and Linux user as well, I am *very* interested in GNU/Hurd and GNU Mach. Its very architecture will give it excellent scalability and security, with all the familiarity of a *nix. Linux is very good for now, but its scalability is finite; GNU/Hurd may be the alternate free OS when Linux tops out.

    Again, having many OSes, a large number which are free or OSD-compliant, is a good thing. Remember, while Linux developers are shaping the present, the Hurd developers are mapping Free Software's future!

    ~AC

  • >The TCP/IP stack is a copy of that of Linux (but
    >the Hurd maintainers are having trouble keeping
    >up with the changes made to the Linux networking
    >code).

    Hmm, does this mean that it is properly called "Linux/Hurd," and that Linus will start interrupting other speakers at conferences? :)

    [duck]
  • --but it hopes to be promoted to major in the near future :)

    hawk, who should really be preparing tomorrow's lectures instead of reading this . . .

  • You have just witnessed a Slash bug. Click on this guy's User Info link for a demonstration. (It's because of the ampersand in his username.)

    --

  • Updated: 23 Jan 1999 matthias

    Says something about progress of that development.

    Why don't they open a start-up, get $1B of IPO money, and actually hire somebody to make the product?

    I am only partially kidding :(


  • Yeah, but at what cost?

    See, this is my problem with Windows. (Really trying to avoid a flamewar. *please* don't think I'm saying that Windows is to Linux as Linux is to the HURD.) Windows just throws more and more things in, eventually turning the thing into Frankenstein's OS. It seems to me that the cleaner approach is better, especially as we get massively faster hardware.

    Besides, wasn't there a problem with Linux's ability to scale to many (>4, say) processors? Wouldn't HURD's modularity allow it to scale much higher? (This is a serious question. I'm curious.)

  • Living is easy with eyes closed Misunderstanding all you see. It's getting to be someone. But it all works out, It doesn't matter much to me. -- The Beatles

    I'm pretty sure the line is "It's getting HARD to be someone", but maybe I'm wrong.
    Great tune either way.

    Finkployd
  • this looks like the lyrics to a kid rock song

    No, it doesn't have the phrase "my name is kid rock" in it. You can always tell the really intelligent lyric writers by their compulsion to announce who they are in every song.

    Finkployd

  • Troll or not, that is kind of funny.

    Finkployd
  • Oddly enough you are right. Ever heard of the transputer....

    It did what you were talking about right in the chip. And it was a parallel processing chip.

    Sadly it died... Broke my heart...
  • Windows NT was originally based on the server concept that Hurd promotes. However, lack of performance forced the designers of NT to move back toward a monolithic design.

    Another problem is the lack of interest in this approach. Sure, it is cool from a computer-science standpoint, but the reality is that it has not proven that useful. For example, the POSIX and OS/2 subsystems were supposed to be expanded, but for political and interest reasons they simply were not.

    Remember the big multi-platform support plan for NT? Well, time has shown there is no interest. The people want Intel.

    So while the Hurd concept is neat, in the long term it will not be useful unless the Hurd does things that other OSes cannot do. Then maybe this approach will win out.
  • Clock speed does matter, but it's useless to compare the clock speed of a PIII with the clock speed of a G4, because they have been designed differently. It only matters when you are talking about chips of the same type.
  • It was called *Q*DOS (not F), for Quick and Dirty OS.
  • The Amiga (may she rest in peace) got around this by having very little in the way of memory protection (just some semaphore locking) and using decentralised message passing by reference. Each process could open MsgPorts, which were just structures to hang linked lists of pointers to messages from. Rather than copy a message from one process's memory space to another, a pointer to a message structure was simply passed from one to the other. While the pointer was being used by the other program, your process wasn't supposed to touch the area of memory containing your message. Messages absolutely, positively had to be replied to, or the system could come crashing down around your ears.

    The system was blindingly fast for the time, and this was one of the reasons the Amiga had such high data throughput for video work and the like: the whole system was built around messages, which could be any size of allocated memory and could be passed from program to program without any "real" memcpys. There was little kernel bottleneck, since the kernel was little more than an interrupt server ordering tasks to switch for preemptive multitasking. In the absence of memory protection, device drivers were just normal programs, and data from devices could take a "short-circuit" path directly to the memory owned by the program using it, since the program just got a pointer protected by a semaphore lock.

    Obviously, this architecture meant that later implementing full memory protection was next to impossible.

    The Amiga system C and 68k macro assembler includes are really fascinating - very well written, but quite different to mainstream systems these days.
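The pass-by-reference scheme described above can be sketched in Python. This is a hypothetical illustration, not Amiga code: the names put_msg/get_msg/reply_msg only loosely echo Exec's PutMsg/GetMsg/ReplyMsg, and Python references stand in for raw pointers.

```python
from collections import deque

class MsgPort:
    """A message port: little more than a queue of references."""
    def __init__(self):
        self.queue = deque()

class Message:
    def __init__(self, reply_port, payload):
        self.reply_port = reply_port  # where the receiver must send it back
        self.payload = payload        # shared between tasks, never copied

def put_msg(port, msg):
    port.queue.append(msg)            # pass the reference; no data copy

def get_msg(port):
    return port.queue.popleft() if port.queue else None

def reply_msg(msg):
    put_msg(msg.reply_port, msg)      # hand the very same message back

# One "task" sends a message; the receiver mutates it in place and replies.
sender_port = MsgPort()
driver_port = MsgPort()
msg = Message(sender_port, {"pixels": bytearray(4)})
put_msg(driver_port, msg)

received = get_msg(driver_port)
assert received is msg                # same structure, zero copies
received.payload["pixels"][0] = 255   # "device" writes straight into it
reply_msg(received)
assert get_msg(sender_port) is msg    # sender gets its own message back
```

The asserts show the point of the design: both sides always hold the same object, so throughput is limited only by how fast references can be queued, which is also why retrofitting memory protection was so hard.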

  • Here's a link for people interested in developing a GPL'd bios:
    www.freiburg.linux.de/openbios/ [linux.de]
  • I can see the Hurd being used on larger servers and workstations, with slower adaptation to new hardware and devices.

    It would seem that part of the microkernel design is to facilitate the exact opposite of what you suggest: it should provide for quicker, easier development of drivers for new hardware, as these are individually testable. From what they claim on their web page, it would be much easier than the current "loadable modules" system that Linux uses. It should still be a win-win, though, as you say. Choice is good and allows competition. I hope they get 0.3 out soon.

  • Well...do you really care about "market penetration"? Hurd sounds cool - lots of interesting new ways of doing things, it sounds as though its design makes it easier to experiment with and to develop for.

    Likewise for "significantly larger market presence", "fragmentation" etc., who cares? It should be encouraged. You may be right about the company trying to market a distro with a new feature, but what's wrong with that? It'll give more people a chance to play with it at the expense of a fraction of their disk-space.

  • Thanks for the informative post. I realize that the following question is probably easy to ask and complicated to answer (a case of the biggest fool etc.), but why is microkernel performance theoretically inferior to monolithic? Is it just that there is too much extra administrivia to be done?
  • But aren't all those things necessary with a monolithic kernel too? Isn't there a lot of IPC, signals, etc going on? And doesn't X run as a separate server already? Thanks for the answer.
  • Indeed, the BeOS is a shining example of what a well designed and implemented MK OS is capable of; excellent response, easy (and fun) to write for. Having seen the QNX demo diskette run on an old 486DX, I'd say that system is more proof that MK's can be used as the basis for a fast and stable system.
    However, the great performance problem with MK OS's becomes most apparent when they are heavily loaded; message-passing (and to a lesser extent context switching) takes a bigger and bigger slice of the total system's running time, which means you get effectively much less work. That's where monolithic systems (e.g. Unix) greatly outpace the MK ones.
    I'd say that MK systems are probably more well-suited to workstations and monolithic systems to servers.
  • This is a wonderful piece of work. I don't think I've seen a post with so many contradictory statements on Slashdot yet :)
  • I seriously doubt their BIOS is open-sourced or GPL'd or whatever...
  • Who knows?

    (I had to make a comment sometime)

  • >I have a dream... that all computers will not be judged by the clock speed of their processors, but by the file system of their hard drive.

    If you still like that dream, then reiserfs may send you directly to heaven...
    And save you an investment into Intel or AMD.
  • >What kind of kernel does bad ol' Windows have?

    Well, it doesn't have a kernel.

    Where other systems have a kernel, it has =something=.
    You really don't want to know what =something= is.

    And forget about your question! Such questions are dangerous!
  • Why did you do this?
    Now the poor guy will have endless nightmares
    (and wake up in the night, covered with cold sweat, mumbling "msd...ooos")!
  • granted, i don't think this is an awe-inspiring post, but you really wasted your moderation point... and it is about the topic...

    what an ass.


    john
  • check out http://prep.ai.mit.edu as well. BTW, this Hurd stuff came out in kt.linuxcare.com's kernel cousins a while back..nothing new here.
  • I am sure you are trying to make a point here, but I'll be damned if I can find it. Basically you are saying that Linux is crap, but that's good since that is the way it should be?

    --Bogey
  • Am I correct in assuming that the [main] difference with the Hurd is this:

    The Hurd itself is a system of servers that run on top of the GNU Mach microkernel to manage such things as file systems, network protocols, file access, and the other features that are usually managed by the monolithic Unix and Linux kernels.

    The Hurd works very differently from Unix/Linux, one of its most important features being translators. These can be modified by the user (as can the whole system), and enable the user to seamlessly access not only files and devices but also networks, so that, for instance, remote file systems can be accessed and files edited with just one command, regardless of whether these files are on a local system, an FTP server or an Internet site.



    If that's all.. would someone explain exactly what's so great about that? Am I to assume that Linux and other OSes require more information to be able to access files not on the local drive(s)/using the local filesystem?

    I've never heard of HURD before.. so, I'm new to this.
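To make the translator idea above concrete, here is a rough Python sketch. The name settrans mirrors the real Hurd command, but this registry and its handlers are purely hypothetical: the point is that one prefix lookup lets a single "open" work identically for local and remote files.

```python
translators = {}  # path prefix -> handler ("translator") function

def settrans(prefix, handler):
    """Attach a translator at a point in the filesystem namespace."""
    translators[prefix] = handler

def open_path(path):
    """Longest-prefix match, then hand the remainder to that translator."""
    for prefix in sorted(translators, key=len, reverse=True):
        if path.startswith(prefix):
            return translators[prefix](path[len(prefix):])
    raise FileNotFoundError(path)

# Hypothetical handlers standing in for the real disk and ftpfs translators.
settrans("/", lambda rest: f"local:{rest}")
settrans("/ftp/", lambda rest: f"ftp:{rest}")

# The caller never knows or cares which translator serves the path.
assert open_path("/etc/passwd") == "local:etc/passwd"
assert open_path("/ftp/ftp.gnu.org/README") == "ftp:ftp.gnu.org/README"
```

In the real system the handlers are separate user-space programs rather than in-process functions, which is why any user can attach one without being root.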
  • GNU/HURD

    GNU's not UNIX/HIRD of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HURD of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons

    GNU's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX's not UNIX/HIRD of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons of Interfaces Representing Depth of UNIX-Replacing Daemons
  • From the STS web-site:
    Why Install and Use Debian GNU/Hurd Now?


    To support free software in general and the development of GNU/Hurd in particular. Even those who are neither developers nor computer programmers
    can help the development of GNU/Hurd by using it, submitting bug-reports and trying out new packages when they are released.

    GNU/Hurd has the potential to be the best computer Operating System in the world - better than GNU/Linux IMHO - and the more people who install
    it, use it and aid its development, the sooner this will happen.
  • I installed the Hurd a few months ago and played around a bit with it. It's a fine network machine. What troubles me, though, is the installation. Although anyone who's into Linux and *NIX in general will be able to install it, it would be nice if we had an installation system in the style of the Linux distributions. Has this been implemented in the Debian version of the Hurd yet? Any other distributions under construction?

    //qrash
  • hmm only if the bios, and all the software for the dsps is GPL'ed



  • That sounds a lot like Bill Gates' dream.
    What would be the difference between his and yours?
    A monopoly is a monopoly..
  • If Linux had no competition, the developers would get lazy, and you know it. And it'd only be a hop, skip, and a jump away from having companies like Red Hat, Caldera, etc. start pushing around the little guy. Their software may be free, but that doesn't prevent them from trying to eliminate the competition. You don't want Linux to be the only OS in existence.

    Nor do you want one of the BSDs to be the only one in existence.

    Nor do you want Solaris, Be, HP-UX, VMS, OS/2, or Windows NT to be the only one in existence.

    Instead of trying to replace each other, try to learn from each other.
  • That was a brilliant posting by AC/Roblimo of using the Google Cache. I wish all other postings would do this to avoid /. effects on web sites.
  • Linux is a good OS: I run it myself, and it's handling what it's supposed to handle just fine.
    However, there are a lot of problems with monolithic systems, which most Unices, including Linux, are: for one thing, they scale badly to SMP, since each processor must run its own OS. Running a microkernel is much lighter, and the server processes can just use whatever processor is available.

    Another thing is that, as modular as Linux tries (and largely succeeds) to be, it is still hard to debug and to replace parts of. This is doubly true on a true multi-user system, where you have to be root to do anything at all, so normal users can't do anything. Just consider: why do I need root to make ftp.gnu.org appear as a local fs to me?

    So Hurd has a lot of potential. And besides, it's always fun to work on an OS which isn't working...

    In the words of someone we all admire:
    ``When men were men, and wrote their own device drivers''.

    (And of course, Linux and Hurd aren't competitors in the same way, say, Netscape and MS are (or were): the Hurd team freely copies code from Linux when it helps, and I do hope that one day Linus will "cut costs" by copying from Hurd. This is the essence of free software, and this is why free software will win)
  • I think it's the fact that the HURD servers are multi-threaded that makes them hard to debug, not the fact that the HURD is a microkernel.
  • I was under the impression that Hurd development was not that far along and that a great deal of functionality was missing. It does not make any sense for people to be shipping the Hurd on a machine.
  • Could this be the first time a computer maker distributes a machine with 100% GPL'd software?

  • My question is this -

    Could I use the Hurd with my current hardware (Intel dual Celeron) and video card (Voodoo 3) without having to write my own drivers? Is it stable? Does the kernel panic constantly? It is a microkernel, so many parts of the core OS can crash and the system will still be running, but will init be dying on me?

    So, in short, to all you 'pure' GNU users out there: how reliable is it? Last time I checked (which, I admit, was a long time ago) there were many unwritten drivers, and massive amounts of work would be needed to bring it to the level of reliability Linux has now. Has this changed? It would be great to have a microkernel OS, and that was one of the things that turned me on to GNU in the first place. Could anyone with experience update us on the present state of the kernel, please?

    Waiting for a cooler kernel
    -mafried

  • woopsie! http://www.slashdot.org/software/hurd/hurd.html ... me thinks not


    ... first post?
  • Actually, the folks at Slashdot have done a very good job of explaining that the more posts are made asking for the code, the more the code will be delayed.

    So, the more you flame them, the more they will simply ignore you. Not only that, but they're delaying the source release for all of us who have enough respect to not pester them. So, please, give them the time they need.

    Unless, of course, you're some sort of anti-Slashdot agent, and simply want the code release pushed back. :)

    meisenst
  • I have tried GNU/Hurd and I think it is more usable than MS-DOS ever was! :)
  • Whatever happened to the Hurd?
    At the very beginning of Linux, everybody was just waiting for the Hurd to be complete, and it was supposed to be available very soon.
    I was looking at a copy of the discussion between Torvalds and Tanenbaum, and everybody was saying that the Hurd was going to come out soon and that Linux was just to tide us over (but they were also saying that DOS and Intel were inferior and not going to take off).

    " The precursor of DOS was FDOS (Fast and Dirty Operating System).
    Microsoft dropped the F and guess what was left? "

  • but it seems the topic has changed very very drastically.

    i don't know why /. complains about these guys, they must make a fortune off of all of our refreshes

  • this looks like the lyrics to a kid rock song
  • trolls should be seen and not HURD

    (im sorry, i couldnt help myself - but at least its kinda on-topic)
  • IIRC, "symmetric" multiprocessing means that the CPUs aren't on a slave-and-master basis but instead run about an equal workload. An early example of S&M MP was certain 68040 Macs. They offloaded Photoshop threads to another CPU in the box, but they couldn't truly multitask and send entire processes to other CPUs. But the Mac is single-user, and IIRC SMP is practical only on multiuser systems.
  • Hahahaha
    No, I know Windows has a kernel: it has had one since 2.x (aka Windows/286 and Windows/386).
    The DOS kernel is io.sys (all interrupts are serviced from there for machine-language programs wanting to modify files, etc.; the usual stuff a small kernel provides).
    Windows has... krnl386.exe, which runs the GDI and provides other services such as basic window manipulation, etc. The main WM stuff is in another file (yes, it seriously is a WM, because I killed the WM using WinTop or something and I didn't have any titlebars).

    I think it's probably a microkernel then, but not with servers... the DLLs don't provide services themselves, I think, but simply allow other programs to call them and use their features as though they were part of the program (Dynamic Link Library == DLL).
    Not sure about DirectX yet.
  • S&M MP: heheheh
    one processor beats the other and says, "Do my work, you CISC bitch! Render that fscking image!"

    i guess this is off topic, but it's funny

  • What kind of kernel does bad ol' Windows have? I know the average /.'r hates windows, but I have it on my machine, and it runs at almost the same speed (actually, in some cases faster) than does linux. I'm wondering if Windows is a microkernel or a monolithic one...

    It does have drivers that are similar to Linux's modules, but I don't know if they're dynamically linked into the kernel or if they run as servers.

    Does anybody have any info on this? How about DirectX? Because dir/s runs as fast on my Windows system as on Linux (on another, almost equally filled Linux partition).


    Versions:
    Windows 98 SE (with no stupid Windows Update patches)
    Debian GNU/Linux (or Debian/GNU Linux if you prefer) potato, latest with apt-get (I do it every day) and kernel 2.2.13
    4 gb for each, roughly 40% full on both sides.
  • Wrong. Windows NT is a microkernel with a full multi-threaded server setup, similar (at least in theory) to the Hurd. The POSIX and OS/2 subsystems (processes that run on the central kernel) were never expanded because no one cared. You can buy a fully POSIX.2-compliant layer for NT (I forgot the name..) but it is not too popular, proving my point. NT has been ported to (and released on) x86, Alpha, MIPS, and PPC, but the latter ports were canceled due to complete lack of public support (although Alpha was mildly popular, that was more political). There are Win64 versions for Alpha (canceled) and Merced. And who knows what else MS researches, but w2k will be x86 only. NT is a fairly robust and powerful system. Robert (sagei)
  • GNU has so far created nothing but quality free alternatives to commercially available software. If the Hurd lives up to its claims, it will only get better with time. This is a perfect alternative for people who insist on using a Mach-microkernel Unix.

    I can see the Hurd being used on larger servers and workstations, with slower adaptation to new hardware and devices, whereas Linux, on the other hand, will continue to thrive on smaller systems and desktops. This should be a win-win for everybody. Cheers!

  • It is a journaling file system. I think you might have taken "clock speed doesn't matter" the wrong way. I think he meant going for an average processor and giving it a good file system. He'd rather not have the top processor with a crappy file system.
  • Has anyone seen someone who purely uses the Hurd as an operating system, or who can say that they find the Hurd more useful than Linux or Solaris for anything? I think installing operating systems just to add another operating system is kind of silly. I like some of the technology the Hurd brings (object-oriented source, distributed computing, etc.), but why fragment the market further with new operating systems when others have significantly larger market presence and real utility?

    I guess it's good that they are shipping it with Linux as well. Kind of makes you wonder why they are even shipping the Hurd anyway. Methinks it's just to be (one of?) the first computer companies out there to try to market it.

  • no, it does not support multi-CPU systems, but it uses standard XFree86, so Voodoo support is there; the device drivers in general are very few..
  • also, i don't think it runs on non-i386 systems....
  • You can't tell volunteers what to do. There's nothing that says the Hurd developers would spend their time getting the next Debian release out the door if there were no Debian Hurd. Besides, the long release cycles for Debian Linux seem to be more a policy/coordination problem than a code-writing problem.
  • the main (only?) disadvantage of a microkernel OS like the Hurd is that, in order to have all drivers be servers, you have a lot more interprocess communication. In general, this is a huge speed hit: things like video card drivers have to cross between process spaces twice just to draw a pixel (or a line, if you are talking accelerated cards). This is actually why, between NT 3.51 and NT 4.0, Microsoft moved the video card drivers into the kernel: they went from an IPC call plus a kernel call to just a kernel call, sacrificing isolation for a big video performance boost.

    Of course, the ideal thing would be to figure out how to make IPC and context switches 'free' (i.e. very low cost as far as CPU time goes), probably at the memory-management and processor level. If you could cache, say, three process structures in backup registers on a chip, you could potentially get something like a microkernel at no added cost CPU-time-wise. That, plus the fast context switching of Sun's MAJC chip, would probably completely change computer OS design.

    Of course, after you do that, you are pretty close to just building memory management and task switching directly into the CPU - you are one step away from not needing an OS at all ;-)
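The extra round trip described above can be made concrete with a small, hypothetical Python sketch (not Hurd or NT code; multiprocessing.Pipe stands in for kernel IPC): the same "driver" operation is performed once as a direct call and once via a message exchange with a separate server process.

```python
import multiprocessing as mp

def draw_pixel(x, y):
    # Stand-in for a driver operation; returns a fake framebuffer offset.
    return y * 1024 + x

def driver_server(conn):
    # A user-space "driver server": every request costs an IPC round trip.
    while True:
        req = conn.recv()
        if req is None:          # shutdown sentinel
            break
        x, y = req
        conn.send(draw_pixel(x, y))
    conn.close()

if __name__ == "__main__":
    parent, child = mp.Pipe()
    proc = mp.Process(target=driver_server, args=(child,))
    proc.start()

    # Monolithic style: direct call, no crossing of process spaces.
    direct = draw_pixel(3, 2)

    # Microkernel style: send request, context-switch to server, get reply.
    parent.send((3, 2))
    via_ipc = parent.recv()

    parent.send(None)
    proc.join()
    assert direct == via_ipc     # same answer, but two boundary crossings
```

Timing the two paths with a large loop count shows the gap the poster describes: the work is identical, so the entire difference is message passing and scheduling overhead.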
  • If you could cache say, three process structures in backup registers on a chip, you could potentially get something like a microkernel for no added cost CPU-time wise.

    Oddly enough, the old Z80 had two sets of general-purpose registers which could be swapped with a single instruction. Something like that carried forward into a modern CPU, and extended to include TLB and segment registers, etc., could be the answer. Of course, that's easy to say, and I'm not a CPU architect.

  • Kind of makes you wonder why they are even shipping Hurd anyway.

    It does seem to be a bit early to do Hurd pre-installs. I suspect they are desperately reaching for anything that will differentiate their product from the many others. If they really want to do that with the Hurd, though, they need to wait, or hire some programmers.

    On the other hand, it looks like it is just a matter of time (as in finding some) before I end up with a Hurd system. It has some intriguing features.

  • While I am excited that Debian has come out with a GNU/Hurd, and would be even more delighted if they came out with a GNU/*BSD distribution as well, I wish they had waited to dilute their efforts like this until they were able to release Linux distributions more often than once a year. The Debian distribution is hands down one of the best, if not the best, in terms of completeness, organization, and stability, but the price tag is using libraries and versions of software that are over a year old (or running the "unstable" version which, trust me, can be very unstable at times).

    It is their distro and they can do as they like, but as a longtime Debian user (and hence "debugger") I wonder if diversifying their distribution like this wasn't putting the cart before the horse. Potato (the 2.2.x based release) will likely come out about the same time as the 2.4.x kernel, and I fear it will be another year before Woody (presumably 2.4.x based) will be released, about the time 2.6.x or even 2.8.x comes out.

    Don't get me wrong, I like the long-term direction of having similar releases of dissimilar Open Source/Free OSes. I just wish the issue of timely releases had been adequately dealt with first, before so much effort was divided in so many directions.
  • Ummmm...isn't the Linux kernel GPL'd also? Maybe this is the first machine to be completely composed of GNU software, but I doubt ALL of it is.

    Finkployd

  • The O'Reilly book "Open Sources" has an essay by
    Linus Torvalds that touches on the monolithic/microkernel issue:

    The Linux Edge - Linus Torvalds [oreilly.com]


    This is also a good place to find a copy of the
    famous argument Linus had on the subject with
    the author of Minix:

    Appendix A: The Tanenbaum-Torvalds Debate [oreilly.com]

  • The big thing is that it is a microkernel design. The difference is that, while a monolithic kernel (e.g. Linux) is one large program that serves up everything from task switching to video (frame buffers), a microkernel (Mach) serves only memory management, task switching and interprocess communication (IPC). Everything else is provided by "servers" that are asked for services through IPC.
    While Linux provides for kernel modules, their implementation is not as nice and general as the implementation of servers.
    Each server may run in its own memory space (is this the case with Hurd?), providing for security (a crashing video driver won't crash the hard drive driver). Linux kernel modules, however, are linked into the kernel, just as any dynamic library is loaded into an ordinary program.
    In addition to this, a microkernel design provides a lot more flexibility for future extensions.
    In a /. article some weeks ago, there was an argument against microkernel design, and against microkernel research as a whole. The article argued that monolithic kernels have proved in practice that they work and are portable and scalable. While this is true, it is also true that you can create anything using only assembly language, and even succeed in creating a stable and extensible program. But you still use C, Ada, LISP, Erlang, Python, Perl and everybody else's pet language (not to forget any of them!).
  • Is there a difference between what you term "multi-cpu" and what "symmetric multiprocessors" means? If not, then this, from the GNU/Hurd page, is one of the reasons why it is useful:

    The Hurd implementation is aggressively multithreaded so that it runs efficiently on both single processors and symmetric multiprocessors. The Hurd interfaces are designed to allow transparent network clusters (collectives), although this feature has not yet been implemented

  • It runs on my dual-Celeron-with-Voodoo-3 BP6.

    It seems stable, albeit slow, and there isn't much hardware support. Or software support. And those translator thingies, hmmmm. But hey, it's cool to play with.
  • I think those articles are a little one-sided. Linus chose to implement a monolithic kernel.

    But I do not think that the Hurd is *just* a microkernel. The ability to modify the kernel without rebooting, and by unprivileged users, is unique to the Hurd. So are translators and a whole host of other features that even other microkernel-based systems cannot do. Other microkernel systems are NT and BeOS, I think. Note that BeOS supposedly has great performance, so it is feasible that the Hurd's performance can be improved as well.
  • I want a distribution that will use the absolutely newest version of some form of software (even newer than unstable for debian)

    If you ever spot an out-of-date package in Debian unstable, please file a wishlist bug against the package. Instructions on how to submit a bug are outlined here [debian.org].

    In fact, please do it ASAP, because potato is freezing in a week.

  • > Unless HURD does things that other OS's cannot do.

    Well, of course, anything that any OS can do, a Turing machine can do. So it's more a question of relative ease.

    If we imagine the Hurd as more or less finished, it can do all the things that Linux does, perhaps slower (though it is hard to predict by how much), especially on single-processor systems. But here is a concrete use that the Hurd might have which would not work on Linux: it would make it possible to sell computing power on a machine (for example to put a web server on it), including root access (something which is often desirable), to several different clients without any interaction between them, and without compromising the system's security, because the Hurd can be virtualized. This is an example of something which could provide motivation to get the Hurd working.

  • Good point. Moreover, as I pointed out, the monolithic kernel vs microkernel debate is exactly a library vs client/server system, at a sufficiently abstract level.

    Clearly there is nothing you can do with a library that you can't do with a client/server system. But in fact, the converse is also mostly true. It would be quite possible, IMHO, to have the Hurd features on a monolithic kernel system: essentially, the monolithic kernel provides the capability-based security model, and the filesystems are integrated in a set of dynamic libraries rather than in a set of servers. This requires the concept of setuid (or rather, ``acquire capability'') libraries, but there is nothing fundamentally impossible about it.

    So, the microkernel issue should not be judged as providing the Hurd functionality but rather on its own grounds (replacing a library-style calling system with a client/server model). I am not competent to judge in this matter, but one thing is certain, namely that the entire issue is not completely clear-cut. The official ``Tunes'' rant [tunes.org] on the microkernel issue might not be completely wrong (despite the author's numerous deliberately provocative statements).

  • I'd say the debugging is easier, except that what you're trying to debug is much more complex, so that the debugging process as a whole is more difficult. Essentially, this is because the Hurd servers are heavily multithreaded programs that pass messages to each other in all sorts of ways. That is always hard to debug, even in user space.

    I don't think this is really the reason why the Hurd has progressed so slowly, though, no matter what Stallman may say. Or at least, there are other factors in play.

  • I have a dream... that all computers of every architecture will come preinstalled with linux.

    I have a dream... that all computers will not be judged by the clock speed of their processors, but by the file system of their hard drive.

    I have a dream today....

    I have a dream... that when a brand new video card comes out, I will have a linux driver and an X-server all ready to install... out of the BOX!

    I have a dream... that people will figure out that it doesn't matter what distribution of linux you are running... Kernel 2.2.14 is Kernel 2.2.14! It doesn't matter what color packaging the box comes in.

    I have a dream today...
  • corrections...

    ...that when a brand new video card comes out, I will have a linux driver and an X-server all ready to install... out of the BOX!

  • Not when a monopoly doesn't cost anything. Linux is FREE. Windows is not. Checkmate.
  • I know...
  • this company makes computers that come installed with linux and hurd... hurd is on a different partition.
  • Well if it's using XFree, then it's not all GPL'ed.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • If that's all.. would someone explain exactly what's so great about that?

    It's conceptually pretty, which makes the Computer Scientist in all of us smile.. :)

    Seriously, there are a lot of advantages to microkernels, because you solve the problems in a more generic way. Example: any good microkernel inherently supports all the real-time stuff you would need, because that is how its device drivers work (Mach did not do this in the past.. I don't know about now), and there are commercial microkernels which work quite well on this philosophy. This means that when your video driver crashes, it does not crash your system. It also makes it much easier to write OS emulation software.

    It is also nice that unprivileged users can do all sorts of cool things like create their own file system or user privilege tracking system without creating a security hole. This security system is one of the more revolutionary parts of the Hurd as I understand it. Example: you can add and remove privileges from a running process under the Hurd.

    There are advantages and disadvantages, and it is generally agreed that the advantages will eventually outweigh the disadvantages AND the cost of porting all the software.. the question is when.. sorta like cleaning up your room.. :)

    Jeff
  • I am sure you are trying to make a point here, but I'll be damned if I can find it. Basically you are saying that Linux is crap, but it's good since that is the way it should be?

    Linux is a lifeline into almost thirty years of UNIX tool development. This is a wonderful thing for developers, because we don't have to keep re-learning a new set of tools every few years. But that doesn't mean that Linux is the ultimate operating system outside of that context. That's something that Linux advocates often forget.
  • I want a distribution that will install completely from floppies if I so desire. I want a distribution that will use the absolutely newest version of some form of software (even newer than unstable for Debian) so that I can at least test stuff out instead of compiling it for myself (ever tried to really live on a 340MB hd as a linux user? It can be painful).
    I have a dream that computer components would be completely 100% backwards compatible and allow me to just take a PIII or something like an Athlon and just stick it into my 486's motherboard and get it to run. I would like to have a working copy of various GNOME applications that will always work fast and be scalable to the processor and memory requirements of a machine. I have a dream that all packages will be allowed for a distribution and that no matter what I install I can upgrade and remove it seamlessly without complaint. I have a dream that all the really interesting developments in computers would not be just for the very elite people in the world, and would be made more approachable. I have a dream that uncertainty would be removed from the field of technology so that I can be guaranteed at least a decent source of employment. I have a dream that there would be more games that would be scalable and easier to run on crappy machines, and not necessitate purchase of expensive stuff that will be gone shortly.
    I have a dream that all kinds of access would be possible for the internet and that truly free internet access would actually work for linux like it already does for windows and the Mac. I dream of all these things without heed for their lack of possibility, and those of the world call me a Quixotic fool for what amounts to impossibility in terms of actually advancing civilization. We all dream of these things but they never come true. We all search for things that are not there and we try to believe. What is it that we really want? I want a computer that has all of my interests at heart. But the bastard machines have betrayed me at every turn! Ahhh, the humanity of it all! What do I actually care if some person doesn't have the grapes to just get a video card that is a little less powerful, or a processor that is in fact a little slower than the absolute best? Does it really affect me? Does the universe cease to turn, and does the sun then burn out? Do I suddenly become fated to have my life
    eclipsed by the shadow of Zeus's wrath? I propose not really.
    Linux does a better job of meeting my needs, but why the bloat to the total operating system in the array of implementations of such things as graphics? Why do simple DOS games that ran quite well on 286s (Wolfenstein) and such still perform sluggishly on higher-end 486 machines? Why the obsession with pretty pictures and the like? Why do we need to observe all that is there without regard to content or use?

    Yes, we dream of things, but will they come true for you or I or any of the people on this spinning blue sphere in any fashion at all? I think not.
  • And why exactly does clock speed not matter compared to the file system? If I have an 8088 processor and have this magical filesystem on a floppy, does it mean that suddenly I can play Quake II or what?
    I think in this weary world we have, clock speed does matter. Quite a great deal, I should think. Why else do we have overclocking in the first place?
  • What the arguments basically sum up to is that, because you have to communicate with each of those processes even for mundane things like the video card or the hd, the speed goes down. The nightmare that I envision is that eventually people will not even care that the machine is just screwing around with its time, because processor power will just keep increasing.
  • Just take a look at the difference between, say, something like Perl, which is interpreted for the most part, and something like assembly.
    Assembly can do things quite quickly; however, it is not very easy (not impossible, mind you, it just takes a lot longer).
    That is basically what is happening here: instead of executing snippets of code directly, you are executing programs that interact with another kernel. Because of this you could, for instance, have a problem playing Quake: when it needs to write temp data to the hd, something else (maybe the keyboard, or maybe the mouse) could be forced to take a less active role, and then boom, someone kills you with the BFG10k.
    Yes, X does this already, but some of that is changing; look at the framebuffer in the 2.1 and 2.2 kernels.
  • I'm a user space developer. I'd be pretty wary of getting into the mad world of kernel development, and one of the things that would put me off is the difficulty of running a debugger against my code. So one of the attractive things about the HURD is that I could do stuff that would normally require me to be a kernel hacker in user space, making the whole development process easier.

    Except that it isn't. Not according to Stallman and the HURD developers, anyway. The received wisdom, confirmed by RMS and others, is that one of the things that allowed Linux to streak ahead of the HURD in completeness and usability was the relative ease of debugging on a monolithic kernel, compared to the microkernel.

    Clearly, I'm not getting something important: can someone lay it out for me? I'd be dead grateful...
    --
  • by David A. Madore ( 30444 ) on Sunday January 09, 2000 @01:12PM (#1388919) Homepage

    This is due to the way the communication works. On a monolithic kernel, when a process needs, say, to perform a filesystem read, it will invoke a processor trap to go in kernel mode (protection level 0), then perform the task as a privileged task (from the processor's point of view). The overhead is just a trap call, which is comparatively low: even the branch prediction mechanisms are not invalidated by this. On a microkernel, on the other hand, the process needs to send a message to the server task. Essentially, things proceed thus: the client task writes the message to a memory page, then goes in kernel mode to ``send'' the message to the server task (all that is just as fast as for a monolithic kernel); then it starts waiting for the reply. But that means it is unscheduled and another task (presumably the server task) is scheduled in its stead. The reschedule may be slow, since it involves replacing all the registers, but that is not the worst part. The worst part is that the paging register is changed (the virtual memory layout is unique to each task, so a task change must change the paging register, CR3 on an Intel), and that means that the TLB (``Translation Lookaside Buffer'', the cache for the paging mechanism) gets flushed. This is why processes are so much more costly than threads (which share the same address space), and this is why RPC calls are so costly. There are two context switches (TLB flushes) for each single message sent from the client to the server (plus, possibly, its reply).
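    The cost of leaving a shared address space can be sketched in user space. Below is a toy Python echo protocol (all names invented; only a loose analogy to kernel-level message passing), run once between two threads, which share one address space, and once between two processes, where every hop between client and server involves the page-table switch described above:

    ```python
    # Toy echo protocol: the same client/server exchange between two
    # threads (shared address space) and two processes (separate address
    # spaces). We only verify that every message made the round trip;
    # timing it on your machine shows the process version is slower.
    import queue
    import threading
    from multiprocessing import Pipe, Process

    N = 500

    def thread_server(inq, outq):
        for _ in range(N):
            outq.put(inq.get())      # echo each message back

    def process_server(conn):
        for _ in range(N):
            conn.send(conn.recv())   # same echo, across address spaces

    if __name__ == "__main__":
        # Threads: client and server live in one address space.
        a, b = queue.Queue(), queue.Queue()
        t = threading.Thread(target=thread_server, args=(a, b))
        t.start()
        thread_echoes = 0
        for i in range(N):
            a.put(i)
            if b.get() == i:
                thread_echoes += 1
        t.join()

        # Processes: the same protocol over a pipe between address spaces.
        parent, child = Pipe()
        p = Process(target=process_server, args=(child,))
        p.start()
        process_echoes = 0
        for i in range(N):
            parent.send(i)
            if parent.recv() == i:
                process_echoes += 1
        p.join()
        print(thread_echoes, process_echoes)  # 500 500
    ```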

    At a sufficiently abstract level, a monolithic kernel is nothing else but an I/O access library which is shared by every process and which enjoys certain specific rights. In a way, every process ``carries its own copy of the kernel'' with it. On the other hand, for a microkernel, the ``library'' approach is replaced by a ``client/server'' approach.

    There are possible ways around the problem. A microkernel architecture like QNX uses a single address space. Since user tasks operate at the same privilege level, they are not protected from one another. It may be possible to put the device drivers at ring 1 (between the kernel at ring 0 and the user processes at ring 3), I think NT does this; but this contradicts the basic principles of symmetry which the Hurd tries to enforce. Segmentation might also help things, but it is hard to work with. Or, simply, more modern processor architectures might not enforce a TLB flush with each change of address space, but simply mark TLB entries with the address space with which they are associated (or some such thing). I think the Alpha does something of the kind.

    On a SMP machine, the cost of an RPC call is not nearly so high: imagine the client task is running on processor A while processor B is idle; when the request is sent, processor B immediately starts executing the server task, and when the client task starts waiting, it is not replaced by another task because no other task is waiting, processor A just waits until processor B finishes execution, so no context switch occurs on either processor if the scheduler is good enough. Essentially, we have not used two processors, but merely the fact that two processors have two TLB's.

    It may also be possible to group requests as much as possible before unscheduling the client. But that is rather hard to do. This is what the Xlib does for the X protocol, but I do not see how the libc (which is the analog of the Xlib for the Hurd protocol) could do the same, considering the semantics it has to obey (notably that each call must return an error code).
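    The Xlib-style batching idea can be sketched with a couple of hypothetical classes; the point is just that queuing requests locally turns many boundary crossings into one (and, as noted, this only works when calls need not return an error code immediately):

    ```python
    # Sketch of request batching (invented names, not the Xlib API):
    # instead of one message per request, the batching client queues
    # requests locally and flushes them to the server in one message.
    class Wire:
        """Counts how many messages cross the client/server boundary."""
        def __init__(self):
            self.messages = 0
        def send(self, batch):
            self.messages += 1     # one context-switch-inducing send

    class NaiveClient:
        def __init__(self, wire):
            self.wire = wire
        def request(self, req):
            self.wire.send([req])  # every request is its own message

    class BatchingClient:
        def __init__(self, wire):
            self.wire = wire
            self.buffer = []
        def request(self, req):
            self.buffer.append(req)  # just queue it locally
        def flush(self):
            if self.buffer:
                self.wire.send(self.buffer)
                self.buffer = []

    naive_wire, batch_wire = Wire(), Wire()
    naive = NaiveClient(naive_wire)
    batched = BatchingClient(batch_wire)
    for i in range(100):
        naive.request(("draw_line", i))
        batched.request(("draw_line", i))
    batched.flush()
    print(naive_wire.messages, batch_wire.messages)  # 100 1
    ```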

  • by nevroe ( 48688 ) on Sunday January 09, 2000 @10:46AM (#1388920)
    When RMS started out fighting for free software, he had a dream that one day we would have a whole operating system based on this idea and on the GNU General Public License. They had already worked to make other free software which ran on UNIX machines, but no kernel (things like emacs, compilers, an X server, window managers). His eventual dream was for the HURD kernel, which would be the foundation for the GNU/HURD operating system. However, development went slowly, and when, in 1991, Linus Torvalds released the Linux kernel, it was quickly paired up with the already available GNU tools to create a complete operating system. Thus, Linux became the kernel used to make the operating system that HURD was meant for. If you go and read the HURD webpage at gnu.org (note correct link: http://www.gnu.org/software/hurd/hurd.html [gnu.org]), then you will notice that they talk about the key advantages of HURD, being that it's object oriented (always a plus for easy modification, though it often means a drop in speed) and several other things that industry techies have criticized the linux kernel for not having. Honestly, for Linus and his cohorts to do something drastic to the linux kernel ("Hey, let's modify it so it does ...") would be a project that would take years to develop.

    However, I have never tried HURD myself, probably will never even do so unless their development kicks into action quickly like Linux has so they can survive, so I cannot verify anything. All that I know is probably just what I've read in the various FAQ's and on /.
  • by Junks Jerzey ( 54586 ) on Sunday January 09, 2000 @02:59PM (#1388921)
    All usual advocacy nonsense aside, there are two reasons to use Linux.

    It is more stable than other easily available options like Windows 98/NT.

    It provides access to, and a healthy environment for, a large pool of standard tools: gcc, Perl, Python, awk, Emacs, etc.

    The first is what we usually hear about, but the second item is just as important. If you were using UNIX on the job or at school in 1988, then you'll be pretty much at home in 2000, because everything is generally the same. I used UNIX on a Sun workstation for software development in 1991, and all of that experience carried over when I started using Linux at home and at work in 1999. Nothing much has changed. It's good to be able to keep that knowledge over a long period of time and not have to relearn it every few years.

    In that light, the line between Linux and the Hurd is pretty irrelevant. We already have a working key that lets us access the tools we want, so it doesn't matter what kernel is beneath them. There's no reason to run over to or even follow the development of the Hurd.

    That's not to say that Linux is the ultimate OS, because it isn't. It's a total piece of junk in many ways, but that's what you expect with UNIX, and that's where Linux descended from. There are great opportunities for operating systems with much different philosophies. Look at the OS in the Palm, for example. It's not an OS in the geeky computer user sense, but it does exactly what it needs to do, is extraordinarily useful, and people like it. Hurd is too close to the UNIX/Linux style for anyone to care.

  • by be-fan ( 61476 ) on Sunday January 09, 2000 @01:18PM (#1388922)
    A monolithic kernel DOES have some inherent advantages over a microkernel, but the advantages of the microkernel outweigh its disadvantages. The whole concept of having servers respond to requests and communicate via IPC has the following advantages. 1. They are much more independent, which leads to more stability and easier coding (you can make changes in one without changing another, as long as the interface remains the same). 2. They are much more easily updated and maintained. 3. They are much more asynchronous, since objects can make requests, then immediately return and do some more work. This is especially good in something like a file system server or graphics server. For example, in the BeOS drawing kit, my application can make a request to draw a line. The line function sends the message and returns immediately. My app can continue its work while the line drawing occurs in the background. It helps even more with hardware-accelerated things, since the server can have the hardware do some rendering while the app continues to do some physics in the foreground or something. All this leads to higher performance.

    It's true that IPC does incur some overhead, but it can be managed. I don't know how Be does IPC in BeOS (and I don't think they are telling), but it obviously works well, since BeOS apps are extremely fast and responsive. If the IPC overhead were really that bad, I don't think BeOS would respond as well as it does in media apps. The other thing that bothers me is that you C programmers seem to think that object-oriented programming incurs a huge amount of overhead. It does incur some, but it is negligible, and vastly outweighed by the fact that by using system objects to represent the API, the system is not only easier to program, but the API can evolve over time without adding a huge amount of weight to the system. I doubt the performance hit is even 3 or 4%. And the time it saves can be put to good use optimizing the algorithms used.

    Finally, object-oriented systems are much easier to make extensively multithreaded, and even on a single-processor machine, multiple threads help, because the processor does not stall on one task so long. Especially in an OS, which is mostly limited by the speed of the hard disk: by putting disk access into a separate thread from the program, it stays MUCH more responsive. So not only CAN an object-oriented, microkernel OS be fast, one already exists in the form of BeOS. Although it does bother me that HURD is slow at this point. Even BeOS developer releases were lightning fast, and if HURD is to be such a new OS, why is it slow at the beginning? Or is there a lot of overhead built in? Or maybe it is much farther from release than this article would have you believe. If all this sounds like some mind-expanding thing, I urge you to go to Be's website and read the whitepaper on the MediaOS. It is heavily marketing-based, but has some really nifty concepts.
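    The send-and-return-immediately pattern described above can be sketched with a worker thread standing in for the graphics server (invented names; this is not the BeOS API): the app hands off draw requests and goes straight back to its own work, only blocking when it actually needs to synchronize.

    ```python
    # Toy asynchronous message passing: the 'app' enqueues draw requests
    # for a 'graphics server' thread and keeps doing its own 'physics'
    # while drawing happens in the background.
    import queue
    import threading

    draw_queue = queue.Queue()
    drawn = []

    def graphics_server():
        while True:
            req = draw_queue.get()
            if req is None:
                break
            drawn.append(req)        # stand-in for actually rasterizing
            draw_queue.task_done()

    server = threading.Thread(target=graphics_server)
    server.start()

    physics_steps = 0
    for i in range(50):
        draw_queue.put(("line", i))  # returns immediately; no waiting
        physics_steps += 1           # app keeps working meanwhile

    draw_queue.join()                # block only when we must synchronize
    draw_queue.put(None)             # tell the server to shut down
    server.join()
    print(physics_steps, len(drawn))  # 50 50
    ```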
  • by David A. Madore ( 30444 ) on Sunday January 09, 2000 @11:42AM (#1388923) Homepage

    Since a few people seem to be interested, I will recapitulate the overall HURD status, from my personal experience and my reading of the debian-hurd and bug-hurd mailing lists.

    First, is it usable? Well, it depends for what. It is still quite unstable. Filesystems are under active development, but there are still some problems with them (the ``native'' Hurd filesystem, if that means anything, is ext2, just like Linux, but the ext2 daemon still has problems, one of them being that sometimes entire file blocks are replaced by zeroes — this will be fixed soon). The TCP/IP stack is a copy of that of Linux (but the Hurd maintainers are having trouble keeping up with the changes made to the Linux networking code). The security mechanisms are extremely flexible, but that sometimes causes problems (for example, the Hurd has one more set of file permissions besides user, group and other permissions, the ``not-logged-in'' permissions, and no tools yet exist to manipulate these permissions). There are also some strange limitations: for example, the Hurd will not work on a partition of more than 1GB, and it crashes rapidly if not given a swap partition.

    X Windows will work with a set of patches. Some other programs cause problems, and sometimes it's the program's fault (because it makes assumptions about the Unix-like nature of the system which are not verified under the Hurd).

    On the other hand, the Hurd is stable enough to bootstrap itself (compiler, microkernel, libraries, daemons) and perform tasks that do not have stringent hardware requirements.

    The Hurd shares the same libc as Linux (the GNU libc, currently version 2.1.2). So eventually it should be binary compatible with Linux (right now it is not, but there is no severe problem with that, it is only a matter of time). This is one of the great hopes of Hurd, the possibility of making the transition completely smooth.

    The slowness of the progress on the Hurd is due to nobody working on it full time. Some very competent programmers are devoting a lot of time to it (Thomas Bushnell, Brent Fulgham, Roland McGrath, Marcus Brinkmann and certainly a few I'm forgetting), but they are overwhelmed by the immensity of the task.

    In theory, though, the Hurd should be easier to develop than Linux, because it is more inherently modular, and because of the fantastic possibilities of gdb under the Hurd. Also, you do not need to reboot to test changes to the ``kernel'', and you can debug a live kernel without problem; plus, you can test some experimental features without endangering the base system. So, there is no reason that the Hurd can't become very solid and stable — quite the contrary in fact. But they just need more volunteers. And the FSF unfortunately cannot afford to hire someone to work on it full time (say, why not write a check to the FSF specifying that you would like to see it used for Hurd development?).

    On the other hand, in the domain of performance, it is probable that a microkernel architecture can never be on par with a monolithic kernel, at least on single-processor machines. For the moment, the Hurd is horribly slow with filesystems (rm -rf is just a nightmare), but this is mostly because it is completely unoptimized. Still, even when it is optimized, it will probably remain noticeably slower than Linux. It has been claimed, however, that the difference may be significantly less than expected; but it is yet too early to tell.

    The main advantage of the Hurd is its flexibility. User-land filesystems are part of that. In fact, you do not even need to be root to write a filesystem. (That is one of the things which angers me the most about Linux, the need to be root to mount a simple loopback file.) The Hurd is completely virtualizable, whereas Unix hardly is (well, there is a ``user-mode Linux'', but it is even more experimental than the Hurd), so any user can set up her own virtual sub-hurd with its own set of users, permissions and so on. The security system is soooo flexible: much better than access control lists, it uses capabilities (à la EROS [eros-os.org]) in the form of Mach ports. If this were made practical, this would be a huge gain on the security side, because you would practically never need to be root for anything, just introduce the ad hoc capabilities and permissions. And the virtualization possibilities let you surround dangerous daemons by ``sanitary cords'', making the system much harder to break into. So, theoretically, the Hurd can be a very secure system. Finally, the whole translator system can be used in yet unthought-of ways to provide wondrous communication mechanisms between programs.
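    A loose sketch of the capability idea (inspired by the description above; the class and method names are invented, and this is not the actual Hurd/Mach interface): an operation succeeds only if the process holds an unforgeable capability object for it, and a capability can be revoked while the process is running.

    ```python
    # Toy model of capability-style security: privileges are unforgeable
    # objects that can be granted to, and revoked from, a live process.
    class Capability:
        def __init__(self, name):
            self.name = name
            self.valid = True

    class Process:
        def __init__(self):
            self.caps = set()
        def grant(self, cap):
            self.caps.add(cap)
        def revoke(self, cap):
            cap.valid = False          # works even on a running process
        def read_disk(self, cap):
            # No ambient root authority: the caller must present a live
            # capability it actually holds.
            if cap in self.caps and cap.valid:
                return "data"
            raise PermissionError("no disk capability")

    disk = Capability("disk-read")
    p = Process()
    p.grant(disk)
    assert p.read_disk(disk) == "data"
    p.revoke(disk)                     # privilege removed mid-run
    try:
        p.read_disk(disk)
    except PermissionError:
        print("revoked")
    ```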

    However, the real question now is whether binary compatibility with Linux, plus the great extra features and flexibility, can be sufficient motivation for people to move to the Hurd when it is more stable, and, in the meantime, for more developers to turn their attention to it. The lack of hardware support, on the other hand, is not a big problem: Roland McGrath has an experimental project to make the Mach microkernel run with the Flux OSKit, so that all the Linux hardware support would immediately benefit the Hurd.
