Debian Hurd Still Coming 126

mirko writes "After some time with almost no news about it, Doctor Dobb's Journal has finally chosen to break the silence and deliver quite a comprehensive article (also available here) about the forthcoming Debian Hurd operating system."
This discussion has been archived. No new comments can be posted.


  • What strikes me from the tone of articles about Hurd lately is that it has changed from "the OS that is going to replace Linux" to "a clean implementation (at least cleaner than Linux/FreeBSD) to dissect in the classroom".

    In other words: where it was first positioned as the successor to Linux, it now seems to be the successor to Minix.
  • Wow... Wrong again. Hurd has two CDs full of dpkgs you can install. Hurd is just another way of running Debian.

    It exists... and it works (for some).

    Dave
  • [...] in 2010, by which time it, and Unix, will be obsolete.

    Yeah, and 640k are enough for everybody, dammit!
  • Tis the nature of open-source stuff...no one owes anyone an answer to a question about a long-term goal of a project.

    True, but not responding, especially to so simple a question, doesn't encourage people to help with your project.

    Ranessin
  • by f5426 ( 144654 ) on Tuesday December 05, 2000 @05:46AM (#581333)
    > The problem with the HURD is that it is still obsessed with the microkernel architecture

    It is the only advantage of the HURD over Linux. We already have plenty of monolithic kernels to choose from.

    > This may have seemed like a good bet ten years ago, but as Linux has shown, Microkernels are no big deal

    This is plain wrong. There are many microkernels out there. They offer no specific advantage over monolithic kernels, but the HURD is a very different beast: a multi-server microkernel.

    > They are inefficient

    Bullshit. You are comparing highly tuned monolithic kernels to development microkernels. This is apples and oranges, or a Mindcraft-like attitude.

    Sure, microkernels are inefficient at the tasks that monolithic kernels have been designed to perform. Big deal. But there are tasks that are impossible to perform with a monolithic kernel. When you base your benchmarks on those, you'll say that monolithic kernels are inefficient.

    A couple of examples:

    Two years ago I had my first CD-RW. Someone showed me DirectCD under Windows, and I said: "Wow. Cool. How does it work? Can I do that under Linux?"

    Dug around. Found the UDF filesystem project and specs. Found the packet-writing specs. Found that there was no packet-writing UDF driver under Linux. Found the reason: "Currently you'll only be able to read CDR/CDRWs due to a kernel limitation". So what? Well, I tried to find out what the limitation was, whether it could be worked around, and whether a packet-written file system was doable. Unfortunately, due to optimisations in the Linux kernel it is not possible. But it may be possible later. Just wait. As you may know if you follow the linux-kernel traffic, Linus specifically makes it hard to become a kernel hacker, so the ones that hack the kernel are only the most motivated ones. Conclusion: no way to mount and modify a UDF packet-written filesystem. Period. The complexity of the kernel architecture and source rendered Linux as closed as a proprietary kernel.

    A UDF driver would be possible under the HURD without having to touch more things than necessary (it would be called a translator; this is like a user-space file system). It would be less efficient, sure. But no one gives a fuck about high efficiency when writing to such a slooow device.

    Example 2: Having drivers as servers outside the kernel in a network-transparent architecture (like Mach's) should give you ideas of what will be possible. (Remote physical devices. Who minds a little efficiency loss when accessing a USB bus from a different point on the network?)

    > and the chances of a code fork with Hurd are even greater than for Linux, due to the easier understandibility of the source code.

    Oh, you were trolling. Guess what, I fell for it.

    Cheers,

    --fred
  • In practice, a well-designed monolithic kernel is no more likely to do that than a well-designed microkernel. I have seen servers running on top of a microkernel cause an avalanche effect and bring down the entire system.


    And Linux can be run on a microkernel... crash... and be reloaded without rebooting. A well-designed microkernel can be just as stable as a well-designed monolithic one. It's just easier to maintain a microkernel due to its small size.

    At least that's the general belief.

    Dave
  • > I just find this horribly ironic.

    I think you mean "hoorribly" :) :)

    --
  • Possible reasons:

    * Because Linux is available in very nice boxed distributions. I.e. convenience, easy to re-install. Kickstart. Easy to re-image onto a bunch of lab PCs.

    * Because Linux is here now, and easily available. (See previous point.)

    * Inertia? People already know it?

    I could probably dream up other reasons.
  • Research on filesystems and UIs takes place under Linux (and others) and can be ported to other systems (like, for example, Hurd).

    Hurd is looking at different ways of implementing the kernel.

  • A well designed microkernel can be just as stable as a well designed monolithic.

    True, true..

    It's just easier to maintain a microkernel due to its small size.

    I'd argue that once you get into the maintenance of the server, and dealing with their interactions, it gets just as complex.

    At least that's the general belief.

    To each their own.

    Ranessin
  • How about a new Cabal? It would consist of a secret society of members sworn to silence and the following act of subterfuge: members would resubmit archived Slashdot articles to the submission queue! But, aarrgh! We must act quickly, for Others have already begun our Task. Alack!
  • whoa...is this related to M$'s .NET thingie? what's the deal with Mcafee .NET "initiative?"

    -----
    # cd /
  • I think if you want to get hung up on calling things "Debian GNU/Linux" then you will be correct; you're only likely to offend others when they expect correctness and you don't give it.

    Maybe the GNU bit goes hand in hand with it being a port of Debian (versions of GNU s/ware packaged the Debian way) with a Hurd kernel.
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • I can tell by the fact that you use the term "freaks" that you're a really open-minded person.

    If you were a programmer, you'd understand why a microkernel architecture just makes sense.

  • Can this be?!!!
    A real exchange of ideas on slashdot?!

    I will agree that a poorly designed server will be much more difficult to maintain than a good one.

    I suppose we agree that design is key!!! :)

    That's good enough for me. I just really want to see a good microkernel implementation doing what the Hurd project proposes. It's really quite a good idea.

    Dave
  • I was referring to a successor in function (being used to teach OS building) rather than a successor in an OS pedigree.
  • Has anybody else noticed that the moderation lately has been more bizarre than usual? It seems like _everything_ is being marked "offtopic".

    The post above, for example, might be a little off-base, but it's definitely on-topic.

    I've noticed other threads that were really offtopic, but instead of just the top message being modded down, every reply (some quite interesting) was modded down as well. What's the point in that?

    Anyway, just my observation. I'm going to start meta-moderating a lot more and see if I can do my bit to fix this.

  • Others would simply say that he has an opinion, just as you have an opinion about it.

    Besides, the points he's trying to make are very defensible.

    Arguing otherwise would be like saying assembly language is better than C because it tends to produce faster code.

    Johan V.
  • Question: Why would you choose to transport 5 people from spot A to spot B with 5 bikes instead of one car? What would you do when you were to transport 6 people?

    This is not a good analogy, nor does it answer your question. To put it briefly: running MkLinux on top of Mach is not that much different from having a monolithic system -- it is just a monolithic system running on top of a microkernel. Given that, you should be able to deduce yourself (from reading the referenced article or other posts in this thread) why Hurd is not a duplication of the MkLinux effort.

  • I think most people here are very aware of the differences between a server as in web server, and a server as in a Hurd server.
  • And has anyone tried to explain "Gimp" to a non-Linux person? Or to a handicapped person?

  • I was using "web hosting" as an example of a "nix box you don't have root on," and "servers" as an example of "things that run at kernel-level or otherwise require superuser permissions to run." Also, Internet appliances that run nix tend not to give users root; otherwise, the user could circumvent the manufacturer's revenue stream. Besides, "kernel module" (Linux equivalent of a Hurd server) and "server" (the kind that handles network requests) overlap somewhat, e.g. in the TUX web server.
  • Fat chance they'll give you sudo permissions either. Often, hosting companies charge mucho dinero to add features to your site that you could add yourself if you had shell and sudo. Once you've paid for all those services, you might as well be colocating your own box.
  • I'm not arguing that source distribution is a bad idea. I think it's great. I'm probably close to RMS on this one. However, the idea that you can build a source driver from one kernel version to the next is just not entirely true. One of the reasons that ReiserFS hasn't made it into the kernel is that it doesn't follow the API as well as it should. However, the reason it doesn't follow the API is that the kernel API has not been stable enough to follow closely. Semantics of certain kernel function calls change even within minor releases of the kernel. In these cases, in order to work 100%, all drivers using these functions have to be updated to the new semantics. Even with such changes, many drivers will still compile, build, and even work for a time, but if they aren't following Linus's changing semantics exactly, there can be a number of race conditions as a result.

    In addition, having a stable API forces you to examine the API in detail beforehand. Thus, you are probably more likely to come up with a reasonable, sensible API.

    From an engineering perspective, stable APIs are always best, because even if you have to work around certain things, you always know what they are. Now, just because you have a stable API doesn't mean that you can't come up with a new one. A kernel could have two separate APIs (I don't know if the HURD allows this, but it is possible) -- one for old drivers and one for new. "But what about bloat?" Well, it's just like LIBC: if you want compatibility, build the old library too; if you don't, leave it out. With LIBC, you have two stable APIs, but you can junk either of them if you want to. The same could be done with a kernel.
  • Hurd would be useful as a research system if any research was being done in it... but I don't see any. If Linux is useful as both a production kernel and a research platform, who needs the Hurd?

    Now if you want to talk about interesting research, let's look at SawMill and EROS.
  • You know it.

    As far as I can tell (I'm not a developer, just an IT dude), .NET is just an API for writing applications that run from the web on a client machine, negating the need to install the client on each local machine. Of course, it's hooked into Office 2000 and a host of other MS products, forcing anyone who uses MS products to use .NET. Such are the ways of Microsoft, I suppose.
  • I've got two conflicting opinions on this.

    1) Different approaches should be followed, expanded, tested, and thrown out in the wild to see how they thrive.

    2) There isn't enough brainpower to give Linux-level support to every new operating system that enters the mindspace.

    This applies on a personal note as well. I'd love to spend a couple of months diving into HURD. I've got a BeOS 4 CD at home that I'd like to give the time it deserves. I'd really like to work with Linux a bit more. I want to install OS X on my fiance's iMac and see what the hype is about. Unfortunately for me, I spend all my time on the computer working in NT...

    On a related note, did anyone else notice the full-page ad for Plan 9? There's another OS I'd like to check out.

    So many OSes, so little time...

  • by becker ( 190314 ) on Tuesday December 05, 2000 @07:30AM (#581356)
    1) There are "servers" instead of "drivers". Why is this important? Servers have their own address space. So, what happens if my network driver decides to go ballistic? Just restart it. If that happens in Linux it's bye-bye system.

    Uhmmm, no. Under Linux, if the device driver dereferences NULL the kernel reports the problem and moves on. It has been this way since 1992.

    Under either system, if the hardware writes a random chunk of physical memory, you are hosed. Putting the device driver code in its own address space doesn't make any difference.

    2) Any user can write kernel code without creating a security risk. This includes user filesystems and the like.

    If there is no security issue, why is that code in the kernel? Oh, because it's part of the file system security model? Hmmmm.

    I offered to contribute to the Hurd in 1988, soon after I left MIT. My offer was rejected, with the claim that the design work was already done and that only people physically near MIT would be able to effectively work on the nearly finished code. That makes sense -- it would be impossible to develop a working OS with developers scattered around the world communicating only with email and FTP.

    Six years later I remember RMS urging us to stop working on Linux, since it was distracting people from the Hurd, the only OS that the FSF would endorse. If anyone had written "GNU/Linux" back then, the FSF would probably have sued for trademark misuse! Hmmm, and why isn't it GNU/Hurd?

    Anyway, by my count the Hurd is well over a dozen years old. It started from cribbed software, and has used software from the many working OSes that have surpassed it, yet it still hasn't produced a usable system. The same old arguments about how monolithic kernels are too painful to develop for are still being repeated. Give it up, the evidence is clear: the Hurd is a dead-end path.

  • The FSF's desire to rebrand Linux to GNU/Linux goes against the idea of free software.

    Linux users are free to incorporate free software in their systems without rebranding the name.

    The FSF is also free to create an operating system, call it GNU, use a Linux kernel, if they want.

    Perhaps I should create an OS, label it Fred, and abuse the snot outa the GPL.

  • by Anonymous Coward
    You don't address my main point: if the Hurd is so much easier to write and extend, why is it still incomplete and unusable after more than a dozen years?

    Critical mass and knowledge transfer.

    The thing about Linux is that there are(were) a lot of potential developers that understood the Linux model (i.e. the monolithic Unix kernel).

    With the Hurd, because it was a relatively unorthodox design, there weren't so many.

    Linux has now reached critical mass. The fifteen-year-old hacker of today will become the kernel hacker of tomorrow.

    Hurd has not yet reached that mass and until it does, it won't attract the same kind of collaboration that Linux enjoys today, nor the braintrust that will allow Linux-to-be to continue to evolve long after today's major contributors are out of the game.

  • Tell us about it? Was it difficult to install? Does it run all debian packages? Seen any weird quirks?
  • Gee, and then they can go through the pain of learning how to port it to different platforms. I thought the article spent too much space telling us why this isn't Linux and how Linux sucks, and too little space telling us useful stuff about Hurd.
  • You're right, those modern OSes like Windows use microkernels, and they are definitely overrated.

    Nephs.
  • Actually, that is the reason for the very existence of Linux, IIRC. Tanenbaum's unwillingness to incorporate changes into Minix's painfully slow message handling made Linus break away and create his own kernel. (Remember, the post that started Linux was in a Minix newsgroup!) (Minix is dog slow on a 486, while a moderate Linux (like Slackware) can cope.)
  • I didn't see any mention of security in the article (other than the authentication "server"). Does anyone know how they handle Data vs Code in memory? Seeing as they are still in the development stage, couldn't they take a big step away from one of the intrinsic security issues in all the *NIXes? I just can't imagine starting the process of weeding out thousands of buffer overflows in a new OS. (Please note, I may be smoking crack here, but I thought that separating Data from Code in memory would help the buffer overflows.) Oh well, it sounds like it _Might_ be interesting.

  • Can this be?!!! A real exchange of ideas on slashdot?!

    Who knew that was possible? :-)

    I just really want to see a good microkernel implementation doing what the Hurd project proposes. It's really quite a good idea.

    And I agree, which is why I have a small harddrive at home setup with the Hurd on it. :-)

    Ranessin
  • It seems that /. rarely if ever corrects a spelling mistake when that mistake is made by the submitter of the story. So it's up to the submitters to learn to spell and use grammar correctly. In which case, we could be here a damn long time...
  • A few points here from one who watched the Linus vs. Andy T. debate in real time.

    It is well known what Linus thinks of microkernels, while QNX (which was quite spiffy on an IBM PC-1!) proves that not all microkernels are created equal.

    The Linux of today is not nearly as peppy, performance-wise, as the Linux of yesteryear. At the same time, the first Linux kernels could only handle 64MB processes. The original Linux was like a Datsun 240Z compared to your grand-dad's DeSoto. Not a lot of features -- but it was FAST.

    Compared to Minix at the time, it was light-years ahead! Minix was REALLY designed to STAY on an 8088.

    Finally: if microkernels were God's gift to the world, why did it take ten years to get to a usable state, while Linux got there in less than two years?
  • by Hacksaw ( 3678 ) on Tuesday December 05, 2000 @05:54AM (#581367) Homepage Journal
    HURD is based on Mach, which is on a whole bunch of platforms, such as... err... IA32 and, ummm, PA-RISC.

    Well, in any case, Mach is made to be highly portable. If anyone cared to do it.

    On the other hand, no one but a few people at the FSF seem to have worked on it since 1995.

    Could someone tell me again why the HURD is a good idea?

    Oh right, it's a system for managing complexity, making it easy for regular users to add services.

    Do regular users have a hard time adding services to Linux? I'm root on my own boxes, and I can load and unload kernel modules all day.

    Do regular users spend a lot of time wishing they could add services to their box?

    Let's face it folks, I think the reason the HURD is there at all is because Thomas Bushnell was/is having fun, along with a few other people.

    I am sort of waiting for a few people to take issue with some of the claims made in that article about Linux, such as the claim that it's not very maintainable. I'm sure Linus (whose primary pet peeve is code maintainability) would have something to say about that, if he didn't have better things to do.
  • I can't say I see how this is offtopic. Maybe it's just me though. . .
  • by johnnyb ( 4816 ) <jonathan@bartlettpublishing.com> on Tuesday December 05, 2000 @06:34AM (#581369) Homepage
    I'm sorry, RMS has little to do with this. "Tell RMS to scrap it". Yeah, why don't we tell one of the current desktop projects to do the same (obvious sarcasm). The fact is, a number of people are working on the HURD, and it's because they want to, not because RMS told them to. The HURD has a number of advantages over Linux. Let's look at a few:

    1) There are "servers" instead of "drivers". Why is this important? Servers have their own address space. So, what happens if my network driver decides to go ballistic? Just restart it. If that happens in Linux it's bye-bye system.

    2) Any user can write kernel code without creating a security risk. This includes user filesystems and the like.

    3) Much more flexible authentication system

    4) The ability to emulate multiple environments (the current HURD has several servers that implement UNIX services (process list, scheduling, etc.), but all of those are pluggable and/or removable). So, you can have as much or as little UNIX as you want, and you can have it on a per-process basis.

    5) The HURD is much more extensively multithreaded, and _should_ scale better to multiple processors (note the _should_ - I'm kind of skeptical here. I think it'll take them a good amount of time to remove the bottlenecks).

    6) Stable Kernel API. This is something that Linus has said he will _not_ do.

    7) I _believe_ (please correct me if I'm wrong) that the HURD is being developed using CVS. Now, this isn't really a technical issue, but a management issue. Linus still maintains the kernel using his INBOX. That has worked well, but as the kernel grows in size, it works less and less well. HURD's development process allows for more distributed maintenance, rather than a single point. Using CVS also allows developers to get patches into the tree for multiple kernel versions. For example, if Linus had a CVS server, he'd open up a 2.5 branch now, so all new stuff can be put into 2.5, and bug fixes into 2.4. This sounds petty, but project management is a real issue, and I think the Linux kernel is reaching its limits in this area.

    So, there you have it. Don't call something you don't understand a bunch of bunk. It really sounds lame.
  • by yerricde ( 125198 ) on Tuesday December 05, 2000 @06:36AM (#581370) Homepage Journal

    Do regular users have a hard time adding services to Linux? I'm root on my own boxes, and I can load and unload kernel modules all day.

    Ever tried to run a server over your dial-up connection? It's almost impossible to (legally) get root on a machine with a fast connection to the Internet. For example, Internet hosting companies tend to run several customers on one box; fat chance they'll give away the root password so you can screw it up for other customers. They may give you shell (even this is rare), but they don't give you root. Short of colocating your own dedicated box (very expen$ive), unprivileged userspace services are the only way to run servers on your site.

  • Well its been great talking to you.

    Have a nice day!

    Dave
  • at least he is not a coward.
  • What are you talking about?
    Every Linux distro has GNU software. It's not the name of a Debian product. It's what Linux distros should be called: [distro_name] GNU/Linux.
  • I think that you are overlooking something rather obvious: that the fundamental design of an operating system (*nix) is so sound and solid that it is still being used 30 years after its invention. I think that it is a tribute to the creators of Unix.

    Are there problems with it? Sure. As there are with any operating system or software product. At least with *nix, you're likely to see problems fixed in a reasonable amount of time.

    Well, Unix is a great OS for hackers (in the old sense), and the fact that a free and fashionable implementation exists has made it popular. It's not the fact that it's Unix that makes it so popular -- it's that it's free.

    And I think that you are dead wrong here. Making Unix free has brought it to thousands upon thousands of people who learned Unix and loved using Unix but otherwise couldn't afford the prohibitive cost of owning a Unix workstation for personal use.
  • Mac OS X isn't microkernel-based at all, so I wouldn't really call it a single-server.

    Likewise, AFAIK only the BeOS network stack is in user space, so I'm not sure what that counts as.
  • 7) I _believe_ (please correct me if I'm wrong) that the HURD is being developed using CVS.

    Yup. It's using CVS. I couldn't tell you where, though. For that, start hunting at the GNU Hurd [gnu.org] page. You may also want to visit the real Debian/Hurd home page [debian.org].

  • You're missing the point. In HURD, the servers work in virtual memory, not physical memory. Therefore a server _can't_ write random chunks of physical memory.

    Also, I think you're missing another point. In HURD, the kernel/user distinction is not very big. Each server is only given certain permissions, and it only has rights within those. Therefore, if you write and load kernel code, it still only operates within your permission set. I think the main difference between kernel and user code is the communication method.
  • I had most of this over 10 years ago with my Amiga, so one wonders what we've all been doing since then. OK, so no memory management (not that important; well-written and tested software won't crash often, whereas badly written, bloated software will, even with MM, e.g. Windows), and unsophisticated IPC (shared memory), but everything else was there: microkernel OOP executive, modular libraries/devices, handlers (translators in HURD-speak). I wrote a mail server in ARexx using the IP handler exposed by AmiTCP in around 12 lines! Oh well, no use living in the past (which appears to have actually been the future), so I remain interested in the HURD, although I'd rather not have all that UNIX junk piled on top. Hmm, wonder if someone could do AmigaOS on top of this :)
  • ...what are all those 'Unable to connect to ad server' messages sprinkled around the text? Must be that damn junkbuster proxy again, messing up the good intentions of Dobb's marketeers...

    /me renices junkbuster to -10
  • Wouldn't Hurd be useful as a research tool? (Not be all end all.)

    Given that you can extend the OS by just building servers, doesn't this lend it to quickly being able to show proof of concept?

    I'm assuming here that building servers for Hurd (even for things that are "in" the OS, such as network stacks, device drivers, new filesystems, etc.) is considerably easier than putting similar things into Linux. If not, then I suppose it makes no better a research platform than Linux.
  • We've got Linux and the BSDs, why do we need anything else?

    Because there can be no GNU System without the hurd. The rationale for calling a particular OS environment "GNU/Linux" is just ridiculous if there has never been a completed GNU System for it to be derived from. But even Debian Hurd won't be the GNU System; it will just be the Debian distribution of an OS never released by its makers.

    No matter how you define "operating system", it still requires a kernel. No kernel, no operating system. And GNU is described by its makers as an operating system.

    Moderators: don't mod this down. I am not bashing your holy GNU. I think GNU is great. But I'm also honest enough to admit that it isn't an operating system, yet. In the meantime it makes a great parts repository for those creating other free operating systems.
  • by linzeal ( 197905 ) on Tuesday December 05, 2000 @05:06AM (#581382) Journal
    I always thought of debian/hurd as a tenacious possum or some other clever and mischievous backwoods rabies-carrying animal. Infectious as rabies -- now that is advertising!

    It should be more vicious than a penguin, and ugly. I think these are important qualities to keep it from becoming cute like Linux.

    A whiskey-drinking pot-bellied raccoon, perhaps named Ralph or something?

  • will my insurance cover any damages caused by such a hurd? I'd hate to see my yard get trampled... I took so long to plan, and I just choosed where to start my new garden. It was bad enough when a hurd of gnus killed my dog, but now a hurd of debians? man, what is this world coming to...

  • There is a running gag in this movie that goes like this:

    "How's father?"

    "Still alive and dying."

    For some reason this story reminds me of it.
  • And BeOS, BSD, Darwin and Solaris also have GNU software. What's your point? Are you demanding that Steve Jobs call his new product "GNU/OS X"?

    Until GNU finally completes its own operating system by integrating a working hurd into the rest of their OS stuff, there is no GNU System. It does not exist.
  • like I've already visited it? maybe because of this [slashdot.org]?
    --
  • People will no more call the GNU/Linux system only as Linux.

    This does not follow at all. None of the Linux distributions out there are the GNU System. RMS says that it should be called "GNU/Linux" because it really is the GNU System with just the kernel swapped out. But there is *NO* GNU System! There is not now any stable OS with a stable hurd to swap out. And there certainly wasn't when the first Linux distributions arose out of the primordial software soup.

    Rather, I think when Debian GNU/Hurd is finally released as stable (and it really ought to be named just "Debian GNU System"), then all of this crap about "LiGnuX" will disappear.
  • Yeah, I know tons of BSD people who understand GIMP perfectly well.
  • by DrProton ( 79239 ) on Tuesday December 05, 2000 @05:10AM (#581389)

    The most disappointing aspect of the Hurd project for me is that it only runs on wintel hardware. The Hurd will only be "linux compatible" when it can run on all the platforms that linux does (sparc, powerpc, arm, alpha, etc). I posted a message on the Hurd mailing list a while back asking if there was any work afoot to port Hurd to powerpc. No one bothered to answer.

  • by Pflipp ( 130638 ) on Tuesday December 05, 2000 @05:25AM (#581390)
    See my comments on Debianplanet:

    Deeehhh.... December issue?

    Here's a link to this article from Slashdot, posted on 1 November:

    http://slashdot.org/articles/00/11/01/1326225.shtml [slashdot.org]

    I thought I knew this article already :-)

    It's... It's...
  • Don't get me wrong, I'm not downing the Hurd, it's a very interesting project with a lot of promise.

    Just a small note though - your stable kernel API is not necessarily an advantage. Linus doesn't want that for good reasons.

    A stable kernel API means a commitment to backward binary compatibility. Which, over time, means an accumulation of all kinds of cruft (look at the latest iteration of Windows9x for plenty of examples.) It can severely limit developer options. Sometimes changing the API is the right thing to do. And the only advantage one would get out of this is binary compatibility - with any Free (or even merely Open) program, a simple recompile against the updated kernel source works. The drawbacks in this case clearly outweigh the rather dubious benefits.

    I'm not a Hurd hacker, just an interested party that follows the information released on it, but I don't remember seeing this claim that the HURD will maintain a stable kernel API anywhere. Lacking any official word to the contrary, I would tend to expect that their position is the same as Linus on this - I would certainly be shocked if the FSF's own kernel project were willing to go make any significant technical sacrifice simply to maintain binary compatibility with some yet-to-be written commercial program that isn't even Open Source.

  • > what are all those 'Unable to connect to ad server' messages sprinkled around the text

I get the same without Junkbuster. I think they fucked something up at Dr. Dobb's.

    Cheers,

    --fred
  • > A monolithic kernel has some drawbacks in that there is a lot more sensitive code that can bring the whole system to a screeching halt.

    In theory, yes, but in practice the server processes in a microkernel OS can also kill the OS. Example: kick out msgsrv32 in Windows.
  • I can already see the "why bother" people starting to come out of the woodwork. We've got Linux and the BSDs, why do we need anything else?

    I don't think we really do at the moment, but it's healthy to have some people doing something completely different and seeing where it will take them. It's called "research" and it's the precursor to the "innovation" that everybody thinks free software is incapable of.

    So there's a good chance that The Hurd will never make it past the "what a neat idea" stage. But it's also possible that, just when Linux starts hitting the scalability wall (it'll be a few years yet), a system like Hurd will be waiting to take over.

  • You can read it from my lips !
  • by commandant ( 208059 ) on Tuesday December 05, 2000 @05:30AM (#581396)

    The article spends a lot of effort talking about maintainability. However, the Linux kernel source is broken into drivers, so you only need to modify the driver that is important. I don't see the Hurd as being any different. You just call them "servers" instead of "drivers", and when you're done modifying, the servers are built separately. But that's not the issue.

    So if that's all the Hurd has, it's a bunch of bunk. The only truly appealing feature of the Hurd is the ability to plug in new kernels without rebooting. But for that one feature, I'm going to wait decades? Please.

    I'd like to ask RMS to either turn out the Hurd very soon, or scrap it. I'm tired of hearing about all its promise and seeing it barely able to walk on its own. I think this has become some ego trip for RMS, and that's not what we need. Developer effort could be better spent on Linux, or whatever the next-generation kernel is.

    I do not belong in the spam.redirect.de domain.

  • When I saw this Journal cover I thought "wonderful" (because Dr. Dobb's is covering it) and "horrible" at the same time. After all of his explanations and propaganda, RMS has not managed to get people to say "GNU/Linux" rather than "Linux". It appears that most people believe that since RMS didn't write the OS's kernel, he doesn't get to name it.
    But what do we see now, in what is probably the first major coverage of the GNU kernel? Not "GNU Hurd", or even "Debian GNU/Hurd" as Debian insists on calling the distro, but simply "Debian Hurd". I don't care much, myself -- I usually try to give credit where credit's due, but this is not a life-or-death issue to me.
    I just find this horribly ironic.
  • by YxorY ( 260621 )
    Hurd will finally make the GNU more visible.
    People will no longer refer to the GNU/Linux system simply as Linux, which will give GNU more prominence.
    By the way, Debian rules!
  • The fact is that *it doesn't even work*

    I guess it must be my imagination that I have a working system at home with GNU/Hurd on it.

    Ranessin
  • A monolithic kernel has some drawbacks in that there is a lot more sensitive code that can bring the whole system to a screeching halt.

    In practice, a well designed monolithic kernel is no more likely to do that than a well designed microkernel. I have seen servers running on top of a microkernel cause an avalanche effect and bring down the entire system.

    Ranessin
  • I'm sure Apple's engineers will be surprised to hear that Mach runs only on IA-32 and PA-RISC, especially since Mach is the core of Mac OS X. Ditto for the MkLinux people.
  • Data and code are separate in just about every current environment. The problems are 1) the stack, and 2) that executable pages are marked as writeable. You can get rid of most buffer overflows by compiling everything with a program I believe is called StackGuard, or something like that.
  • An unconfirmed report by a NASA staffer claims that the computers responsible for the successful extension of the International Space Station solar panels are, in fact, controlled by the Hurd kernel and GNU software in general.

    Not only does this demonstrate the stability and reliability of GNU software in a 'mission critical' context, it also is the first instance of the Hurd, shot around the world.

    (sorry. I had to)

    The REAL jabber has the /. user id: 13196

  • Truly separating data and code on the ia32 platform is a little tricky since the "read" and "execute" page permissions are inseparable :-(
  • Is there any chance that development/popularization of HURD will act as a brain drain from Linux development? Or, is the current popularity of Linux hampering the development of HURD?

    I realize that the two are fundamentally different OSes. However, I am not sure how many people are trained and skilled enough to work on low level programming projects such as HURD. So my question is simple and can be reduced to: how many open source OSes can the planet support?

  • MkLinux uses the Mach microkernel as the core of a Linux-compatible operating system. It was originally developed in cooperation with Apple, so it runs mostly on Macintoshim. Still, I remember either hearing or hallucinating that an IA-32 version existed somewhere.

    In any case, with MkLinux, you have an operating system that's based on Mach and, to applications, looks like Linux. What does Hurd have that MkLinux doesn't? For that matter, what does MkLinux have that Hurd doesn't?

    I suppose phrasing it this way is an invitation to a flame war, but, well, I'm ignorant of the details. Thus, Hurd looks to me like a duplication of the MkLinux effort. If I'm wrong in that conclusion, I'd appreciate it if someone would explain why.
  • I was almost certain I'd already read that article, and couldn't imagine why ... then it occurred to me; Slashdot thinks several month old articles are news!
  • X is a _great_ idea for businesses, even if it doesn't offer a lot of advantages for the home user. It certainly doesn't hurt them any. HURD actually is kind of revolutionary in OS design. The fact that it emulates UNIX is more of a compatibility deal than a design issue. The overall design of HURD is extremely modern. There are many reasons why it would be useful. And of course there are no guarantees about its usefulness (how many projects have usefulness guarantees attached?).

    You mentioned security exploits. The reason that there are "more exploits for UNIX than Windows" is that people don't usually even look for console exploits on Windows, since they assume the console operator will be trusted. UNIX draws more scrutiny because it allows so much to be done through remote logins. If I wanted to make my OS as featureless as Windows, I would make it more secure as well.

    Anyway, don't flame an OS just because it doesn't give _you_ any advantages. That's the beauty of the open source movement - there is such diversity. If something doesn't appeal to you, don't use it! But there are many people (including myself) anxiously awaiting HURD to emerge because it offers us a lot of useful features. Maybe HURD isn't for the desktop, maybe it is. So what! Not every OS is for the desktop. Some are for servers, some are for mainframes, some are for graphic designers, some are for home desktops. If you go to a "one size fits all" mentality, then the OS chosen will suck for everyone.
  • Maybe a drunk squirrel [cheatindex.com]?

    There is also a pretty funny picture of a squirrel lounging with a bottle of beer and a cigarette floating around the web somewhere.
  • Maybe, just maybe, they want to get the damn thing working decently before they concern themselves with porting? Ever think of that?

  • by squarooticus ( 5092 ) on Tuesday December 05, 2000 @05:15AM (#581424) Homepage
    Doctor Dobb's journal finally

    choosed to break the silence

    Me fail English? That's unpossible!
  • by American AC in Paris ( 230456 ) on Tuesday December 05, 2000 @05:35AM (#581427) Homepage
    Slashdot Editors: methinks you may benefit from creating and using a centralized Slashdot link repository, which would contain the URL of every link posted to a Slashdot story. You could then run a simple SQL query against this repository to check each link [ddj.com] in submitted stories for potential repeats [slashdot.org]. Not necessarily a 100% solution, but it would help reduce the number of repeat stories choosed [dictionary.com]...

    $ man reality

  • by Leimy ( 6717 ) on Tuesday December 05, 2000 @05:39AM (#581428)
    Microkernels are a big deal. Try learning how they work before you criticize them.

    A monolithic kernel has some drawbacks in that there is a lot more sensitive code that can bring the whole system to a screeching halt. It also has very low latency to its advantage.

    A microkernel has much less critical code, making it easier to maintain, but in most cases it incurs much greater latency from the context switches during activities such as IPC between servers, user processes, and the kernel proper.

    L4 is a microkernel with low latency. If developments like this continue it could spell the end for monolithic kernels.

    Sorry.... Its just the truth biting ya in the ass again! :)

    When linux finally does swell to the point where it becomes unmaintainable you'll wish you had a microkernel to work on. There is a growing number of people in support of porting Hurd to L4. The main Hurd developers don't care what microkernel it runs on... Just that it runs.

    Hope this clarifies the some of the myths and can cut through some of the bigotry that is unfortunately present in the linux community.

    dave
  • If you don't have a stable API, a simple recompile does _not_ work. Even if it compiles, there is no guarantee without a stable API that it will work. Stable APIs have a number of advantages - the biggest of which is the further parallelization of development.

    Also, remember that recompiling is something hackers, not users, do - even in free software.
  • > Under either system if the hardware writes a random chunk of physical memory, you are hosed. Putting the device driver code in its own address space doesn't make any difference.

    How often is hardware the cause of crashes? Much more often it is flaky OS code. Take BeOS's (soon to be replaced) net_server, for example. When I ran it as a NAT server, it used to crash on me twice a day. However, since the whole thing was in user-space, I had a shell script that would restart it. I think you're missing some of the benefits of microkernels. QNX and BeOS have used them to great effect, and in order to get a balanced view, I suggest you read some of the docs that come with each (www.be.com/developer, qdn.qnx.com)
  • > I'm going to start meta-moderating a lot more and see if I can do my bit to fix this.

    Be prepared to lose karma over it. Rationality is not in vogue on slashdot (sad to say).

    I will say "I bet I lose karma over saying this". The question is, do I mean it, or am I just saying it because I'm a karma whore? ;-)

    Mike.
  • It seems we mean different things by "stable kernel API." Normally when this comes up, the reference is not to changing the external (POSIX) API but the internal API - this has been an issue with proprietary device drivers for years. The specifications for how kernel modules interact has changed, and Linus has made it clear it can change again at any time. A recompile of the modules, linked against the new kernel source, normally works in this case.

    Also, remember that recompiling is something hackers, not users, do - even in free software.

    I won't grant that this is necessarily true - I am hardly a hacker, but I compile things fairly often. With configure and make this is hardly a challenging task, unless there is something wrong with the source distribution.

    It's dangerous to make too much of that dichotomy - hackers are users too, albeit a particular type of users, just like DTP and Graphic Designers are types of users.

    However, it is true that most people install binaries, but the availability of binaries is a direct function of compilation. With a closed source driver, only one organisation has the source and thus only they can do a recompile and make the updated binary available. This is often a major bottleneck. In the world of Free Software, everyone has or can get the source, and therefore many people can and do compile the same program and make packages available - often linked against different libraries, with different compilation options. This is as it should be, and maximises the availability and choice, and minimises the turn-around time when an update must be made, for all users - even the ones that don't compile, for whatever reason.

  • Simple. MkLinux is a kernel fork and therefore responsible for the downfall of civilisation. The Hurd, by comparison, isn't.

  • I find it extremely ironic to find a note complaining about bias on Slashdot.
  • > The biggest myth is the one put out by you microkernel freaks is that the linux kernel is a monolithic kernel. It's not. It's a hybrid.

    Could you back up this assertion, please ?

    Cheers,

    --fred
  • I'm assuming here that building servers for Hurd is considerably easier than putting similar things into Linux.

    I haven't used the Hurd, so I don't know. But if it is considerably easier, why isn't anyone doing it? Why are most research projects using Linux and *BSD instead?
  • I don't think the Hurd (or any other OS) for that matter will be a brain drain for the Linux kernel. Thing is that most (kernel) programmers stick to a system which they are familiar with. New programmers will find some project that they want to work on, and they will maybe come to master that project to great extent (and stick to it).

    Also, being a kernel developer is not necessarily any harder than developing for a user-level project. This is a myth. In fact, it might in many cases be easier to be a kernel developer because you do not have to deal with different operating system APIs. In other words, the key issue is complexity. That is, there are relatively few Linux kernel developers because the complexity of the kernel is too high (and not well enough documented) to overcome for most people. So, if the kernel itself is less complex, more people would be able to make contributions. I think this was the main reason that I chose to ditch Linux in favour of BSD years ago -- I just got too fed up with the ugly undocumented hacks inside the Linux kernel.

    (Note, I'm not saying that BSD is completely free of hacks, but I do find the code easier to understand. In addition, you tend to get manual pages for in-kernel features. My man9 section currently counts 233 pages.)

  • I'd like to ask RMS to either turn out the Hurd very soon, or scrap it. I'm tired of hearing about all its promise and seeing it barely able to walk on its own.

    I agree. All these selfish people wasting time on the Hurd. What kind of world do we live in where irresponsible people can work on the project they chose? Clearly all this Hurd nonsense must be what has held back the 2.4 Linux release. For shame!
  • I always thought that DDJ was a more high-tech publication, but this article appears to be written for a lowest-common denominator audience.
    Linux kernel development is dominated by a hacker ethos, in which external documentation is held in contempt, and even code comments are viewed with suspicion. In such an environment, quick code modification is the top priority...
    I know that the linux kernel is not documented as well as it should be, but for starters, there is a Documentation directory right there in the source code! I know I've seen some HOWTO's about writing device drivers. And looking at various files, I see plenty of comments.

    So, let's take a look at Hurd [gnu.org] and see how it compares. Looking at a few files, I see some comments, but not nearly as many as in the Linux source code tree. And do you see any documentation, other than TODO and CHANGES? I think Linux wins the documentation battle hands-down.

    Full multithreading has only been possible in Linux since the fairly recent widespread use of the glibc C library, while multiprocessing is an ongoing effort that is not yet complete.
    Ummm... Ok.... I think this person was confusing multiprocessing with Multiple Processors. And this is a guy who writes software for medical devices. Sheesh.

    Oh well. I guess it's still nice that The Hurd is getting some publicity.

  • (in the voice of the Comic Book store owner, typing into his laptop) "Choosed", best grammatical error for the purpose of enhancing posts and ad revenue ever!

  • Yeah, my usual response to the "hey, we could make this 2% faster by just totally munging up the code!!" mentality, is that computing and software development these days is no longer about maximizing scarce resources, but in managing complexity. Resources are abundant, we're drowning in disk space, memory, data, etc. We should be writing software that solves *human* problems, not hardware problems. A stick of 128 MB RAM is a much better solution in many cases than 2 man-years of optimizing for hardware which will just be obsolete anyway. I'd rather have a solidly designed, well implemented system, which is easily maintainable and upgradable, than a system optimized to run like an amphetamine addict which will break in a year, once the hardware needs to be upgraded or a feature added.
  • Whereas I drove an '84 300ZX for 11 years - about the same distance between that and a 240Z as there is between a 0.12 kernel (my first) and a 2.4.x kernel ;-)
  • You can't find anyone more convinced than me. I have been on the same development project since 1994, and I have written most parts myself.

    Managing complexity is the key. Complexity in design, complexity in implementation, complexity in debugging, complexity in testing, complexity in documentation, complexity in maintenance, complexity in evolution.

    The scarcest resource is development time. There is no way to make a baby in one month by using nine women. Hardware is cheap. Complexity uses time. Complexity is time.

    When I look back at that huge project, I can see that Donald Knuth (I think the sentence is from him) was right: "Premature optimization is the root of all evil." Writing 1000 lines of code in one day is of no use if you have to dig through it a couple of years later.

    Cheers,

    --fred
