
Torvalds on the Microkernel Debate 607

Posted by ScuttleMonkey
from the never-met-a-pointer-i-could-trust dept.
diegocgteleline.es writes "Linus Torvalds has chimed in on the recently flamed-up (again) micro vs. monolithic kernel debate, but this time with an interesting and unexpected point of view. From the article: 'The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or, put in slightly different terms: can the callee look at and change the caller's state as if it were its own (and the other way around)?'"
This discussion has been archived. No new comments can be posted.


  • Linus Quote (Score:5, Informative)

    by AnalystX (633807) on Wednesday May 10, 2006 @02:37AM (#15299124) Journal
    This my favorite Linus quote from that whole thread:

    "In the UNIX world, we're very used to the notion of having
    many small programs that do one thing, and do it well. And
    then connecting those programs with pipes, and solving
    often quite complicated problems with simple and independent
    building blocks. And this is considered good programming.

    That's the microkernel approach. It's undeniably a really
    good approach, and it makes it easy to do some complex
    things using a few basic building blocks. I'm not arguing
    against it at all."
    • by j-stroy (640921) on Wednesday May 10, 2006 @02:49AM (#15299161)
      Linus FTFA:

      "The fundamental result of address space separation is that you can't share data structures. That means that you can't share locking, it means that you must copy any shared data, and that in turn means that you have a much harder time handling coherency. All your algorithms basically end up being distributed algorithms.

      And anybody who tells you that distributed algorithms are "simpler" is just so full of sh*t that it's not even funny.

      Microkernels are much harder to write and maintain exactly because of this issue. You can do simple things easily - and in particular, you can do things where the information only passes in one direction quite easily, but anything else is much much harder, because there is no "shared state" (by design). And in the absence of shared state, you have a hell of a lot of problems trying to make any decision that spans more than one entity in the system.

      And I'm not just saying that. This is a fact. It's a fact that has been shown in practice over and over again, not just in kernels. But it's been shown in operating systems too - and not just once. The whole "microkernels are simpler" argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. By a huge amount, too.

      The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."

      • by ultranova (717540) on Wednesday May 10, 2006 @03:28AM (#15299274)

        The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."

        Individual pieces aren't really any simpler either. In fact, if you want your kernel to scale, to work well with lots of processes, you are going to run into a simple problem: multitasking.

        Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately. If you have a multiprocessor machine, they could even be executing the calls simultaneously. If the processes have different priorities, those priorities will affect the CPU time they get when processing the call too, just as they should.

        Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call? The calling process blocks, possibly for a long time if there are lots of other requests queued up. This is an especially fun situation if the calling process has a higher priority than some CPU-consuming process, which in turn has a higher priority than the filesystem server. But, even if there are no other queued requests, and the server is ready and waiting, there's no guarantee that it will be scheduled for execution next, so latencies will be higher on average than on a monolithic kernel even in the best case.

        Sure, there are ways around this. The server could be multi-threaded, for example. But how many threads should it spawn? And how many system resources are they going to waste? A monolithic kernel has none of these problems.

        I don't know if a microkernel is better than monolithic kernel, but it sure isn't simpler - not if you want performance or scalability from it, but if you don't, then a monolithic kernel can be made pretty simple too...

        • by Hast (24833) on Wednesday May 10, 2006 @03:44AM (#15299322)
          assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately. If you have a multiprocessor machine, they could even be executing the calls simultaneously.

          That's a pretty big assumption. Or rather, you have basically taken all the hard parts of doing shared code and said "Let's hope someone else already solved this for us".

          The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call? The calling process blocks, possibly for a long time if there are lots of other requests queued up.

          Sooooo, it's easy to have someone else handle the multi-process bits in a monolithic design. But when it comes to writing services for microkernels suddenly everyone is an idiot?

          Besides, as Linus pointed out, when data is going one way microkernels are easy. And in the case of file systems that is really the case. Sure multiple processes can access it at once, but the time scale on handling the incoming signals is extremely fast compared to waiting for data from disk. Only a really, *really* incompetent idiot would write such a server which blocked until the read was finished.
        • In the end it boils down to the old question of centralisation vs. local autonomy. Centralisation is fine for keeping state, it is fine for enforcing a thoroughly uniform approach to everything, and it helps with 'single points of contact'. Local autonomy helps with less administrative effort, with clearly defined information paths and with clear responsibilities, thus with keeping problems local.

          Both approaches have their merits, and in the real world you will never see a purely central organisation or a pure
          • by drgonzo59 (747139) on Wednesday May 10, 2006 @04:55AM (#15299507)
            You seem to completely ignore the main reason for using a microkernel -- the ability to prove (even mathematically) that the kernel is correct. In other words, the main advantage is not to make it "easy" or "fun" for the programmers to program, or make Quake run 25 fps faster, but to enforce a strict and precise security policy. That is why critical real-time OSes are often based on a microkernel which is only about 4000-8000 lines of code. Even at that size it might take years to prove it does what it is supposed to do.

            The analogy of centralisation vs. local autonomy is not totally accurate either. Both the monolithic and the microkernel are centralized, except that in the first case there is a large bureaucratic structure and in the second case it is just a dictator and a couple of "advisors". If the dictator or the king is chosen well, the system will be more predictable and will work much better. In the case of the large bureaucratic system, if some of its members get corrupted [and they will, because there are so many of them] the whole system will fail. It is like saying that a small bug in the mouse driver will freeze and crash the system with a monolithic kernel. Good thing if the system was only running Doom at the time and not controlling a reactor, or administering a drug. If the same happens in the microkernel system, the kernel will reload the driver, raise an alarm, or in general -- be able to take the system to a predictable predetermined state. Going back to the analogy, it is like having the dictator execute a corrupted staff member and replace him immediately.

            • As a fan of Haskell and type theory, I know and love the good points of being able to prove correctness.
              The problem is that it doesn't match the way most people work right now.
              Check out this brilliant paper by Alistair Cockburn (spoken as Co-burn) - Characterizing People as Non-Linear, First-Order Components in Software Development [cockburn.us]. Over and over in this paper he says:
              • Problem 1. The people on the projects were not interested in learning our system.
              • Problem 2. They were successfully able to ignore us, an
        • by putaro (235078) on Wednesday May 10, 2006 @05:00AM (#15299524) Journal
          Individual pieces aren't really any simpler either. In fact, if you want your kernel to scale, to work well with lots of processes, you are going to run into a simple problem: multitasking.
          This is very true.

          Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately.

          OK, here's where things start getting a little tricky. The whole locking setup in a monolithic kernel is pretty tricky. Early multi-processor kernels often took the course of "one big lock" at the top of the call stack - essentially only one process could be executing in the kernel. Why? Because all that "proper locking" is tricky. Took years to get this working right. Of course it's done now in Linux so you can take advantage of it, but it wasn't easy.

          Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer.

          OK, now here, you're kind of running off the rails. What is a "message"? There is no magical processor construct called a "message" - it's something that the OS provides. How messages are implemented can vary quite a bit. What you're thinking of is a messaging system a la sockets - that is, the message would be placed onto a queue, a process switch would happen at some point, and the server on the other end would read messages out of the queue and do something. That's how microkernels are usually presented conceptually, so it tends to get stuck in people's heads.

          However, messages can be implemented in other ways. For example, you could make a message be more like a procedure call - you create a new stack, swap your address table around, and then jump into the function in the "server". No need to instantiate threads in the "server" any more than there is a need to instantiate threads within a monolithic kernel. The server would essentially share the thread of the caller. I've worked on microkernel architectures that were implemented just this way.

          If the number of data structures that you can directly access is smaller, the amount of locking that you have to take into account is smaller. Modularity and protection makes most people's tasks easier.

          Many of the arguments made for monolithic kernels are similar to the arguments you used to hear from Mac programmers who didn't want to admit that protected memory and multi-tasking were good things. Mac programmers liked to (as I used to say) "look in each other's underwear". Programs rummaged about through system data structures, and sometimes other apps' data structures, changing things where they felt like it. This can be pretty fun sometimes and you can do some really spiffy things. However, set one byte the wrong way and the whole system comes crashing down.
        • by PhotoGuy (189467) on Wednesday May 10, 2006 @06:58AM (#15299790) Homepage
          Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call? The calling process blocks, possibly for a long time if there are lots of other requests queued up.

          Well maybe that's how *you* would design *your* Microkernel. And yes, it would suck.

          The way I would design the filesystem driver would be to accept a request and add it to a queue of pending requests to serve. If there are no initiated requests, find the request that can most efficiently be served based upon your preferred policy (closest seek time, for example, or first come first served - your choice), and initiate that request. Add some smarts for multiple devices, so multiple requests can be initiated at the same time to different devices. When data comes back, answer the requesting process with their data. Rather than sitting around blocking on a request, go grab more requests from other processes and queue them up. No need to block. When an initiated request comes back, send back the data to the requesting process, and everyone's happy. Just because things are separated out into different processes doesn't mean that they can't do some asynchronous juggling to be efficient. Add multi-threading, and the coding becomes a bit easier; but multi-threading isn't necessary to rely upon to have this work well.

          I'm pretty sure the monolithic kernels do things somewhat similarly; build a request queue, service that queue. They could also block until they're done other requests, but that would be bad design. Don't assume a Microkernel Filesystem server has to suffer from similarly bad design.

      • by Inoshiro (71693) on Wednesday May 10, 2006 @03:29AM (#15299283) Homepage
        "You can do simple things easily - and in particular, you can do things where the information only passes in one direction quite easily, but anything else is much much harder, because there is no "shared state" (by design). And in the absence of shared state, you have a hell of a lot of problems trying to make any decision that spans more than one entity in the system."

        I think you're looking at this the wrong way around.

        There has been a lot of research into this over the past 40 years, ever since Dijkstra first talked about coordination on a really big scale in the THE operating system. Any decent CS program has a class on distributed programming. Any decent SW architect can break down these different parts of the OS into weakly-connected pieces that communicate via a message passing interface (check out this comment [slashdot.org] by a guy talking about how Dragonfly BSD does this).

        It's obvious that breaking something like your process dispatcher into a set of processes or threads is silly, but that can be easily separated from the core context switcher. Most device driver bottom halves live fine as a userland process (each with a message-passing interface to their top-halves).

        If you're compiling for an embedded system, I'm sure you could even entirely remove the interface via some #define magic; only debug designs could actually have things in separate address spaces.

        The point I'm trying to make is: yes, you can access these fancy data structures inside the same address space, but you still have to serialize the access, otherwise your kernel could get into a strange state. If you mapped out the state diagram of your kernel, you'd want the transitions to be explicit and synchronized.

        Once you introduce the abstraction that does this, how much harder is it to make that work between processes as well as between threads in the kernel? How much of a benefit do you gain by not having random poorly-written chunks pissing over memory?

        How about security benefits from state-machine breakdowns being controlled and sectioned off from the rest of the machine? A buffer overflow is just a clever way of breaking a state diagram and adding your own state where you have control over the IP; by being in a separate address space, that poorly written module can't interact with the rest of the system to give elevated privileges for the attacker (unless, of course, they find flaws in more of the state machines and can chain them all together, which is highly unlikely!).

        Clearly there is a security benefit as much as there is a consistency benefit. Provably correct systems will always be better.
        • by Gorshkov (932507) <admgorshkov AT yahoo DOT com> on Wednesday May 10, 2006 @03:51AM (#15299344)
          Provably correct systems will always be better.

          Well, I could certainly argue THAT one.

          Years ago, I was a lead analyst on an IV&V for the shutdown system for a nuclear reactor - specifically, Darlington II in Ontario, Canada.

          This was the first time Ontario Hydro wanted to use a computer system for shutdown, instead of the old sensor-relay thingie. This made AECB (Atomic Energy Control Board) rather nervous, as you can understand, so they mandated the IV&V.

          I forget his first name - but Parnas from Queen's University in Kingston had developed a calculus to prove the correctness of a programme. It was succinct, it was precise, it was elegant, and it worked wonderfully.

          ummmmm ..... well, kind of. About 3/4 of the way through the process, I asked a question that nobody else had thought of.

          OK, so we prove that the programme is correct, and it'll do what it's supposed to do .... but how long will it take?

          You see, everybody had kinda/sorta forgot that this particular programme not only had to be correct, but it had to tell you that the reactor was gonna melt down BEFORE it did, not a week afterwards.

          The point is that there is often much more involved in whether or not a programme (or operating system) is useful than its "correctness"
          • The point is that there is often much more involved in whether or not a programme (or operating system) is useful than its "correctness"

            Sort of. In the scenario you describe, the program was, in fact, not proved to be correct, because the people who did the proof failed to take into account the real requirements for the system.

            If you don't even know your requirements, no methodology to implement those requirements is going to work reliably.

            • by Gorshkov (932507) <admgorshkov AT yahoo DOT com> on Wednesday May 10, 2006 @04:26AM (#15299429)
              Sort of. In the scenario you describe, the program was, in fact, not proved to be correct because the people who did the proof failed to take into account the real requirements for the system.

              If you don't even know your requirements, no methodology to implement those requirements is going to work reliably.


              I agree absolutely.

              The point I was trying to make - and this is where I see the parallel - is that T seems to be trying to say microkernel good, monolithic bad, based only on elegant design and theoretical simplicity. No doubt it appeals to the academic in him (go figure).

              But he is ignoring the "time domain" of an operating system, if you will - its practicality, its ability to do useful work in a reasonable period, and its usability in the real world - just as Parnas' notation did.

              I make no claim as to whether or not Parnas *intended* for his notation to be used for a hard real-time system - I know he was retained as a consultant on the project, but I personally neither saw nor heard of/from him during the entire time. And let me be perfectly clear - his notation was absolutely *gorgeous* and extremely useful. I, on the other hand, having been the idiot who raised the point in the first place, wound up having to do the timing analysis based on best/worst case sensor timings, instruction-by-instruction counting of the clock cycles required for each ISR, etc. I plugged the numbers into a programme I wrote for the purpose, which basically did nothing more than an exhaustive analysis of all possible combinations of timings. Not as elegant by far, but what the hell. Who cares if it takes two days to run, if it means you don't have to worry about glowing in the dark?

          • Tovarisch Gorshkov, to prove that the program is correct ("covert channel analysis" and such) might take up to a year, and that is only if there are fewer than 10k lines of code, but that doesn't mean that the program will _run_ slow. The time and methods used to prove correctness don't necessarily say anything about the speed of the program during runtime.

            So correct systems will always be better, because you know it is correct and you know the limits (want it to run faster -- just buy faster h

            • by Gorshkov (932507) <admgorshkov AT yahoo DOT com> on Wednesday May 10, 2006 @04:40AM (#15299468)
              Tovarisch Gorshkov, to prove that the program is correct ("covert channel analysis" and such) might take up to a year, and that is only if there are fewer than 10k lines of code, but that doesn't mean that the program will _run_ slow. The time and methods used to prove correctness don't necessarily say anything about the speed of the program during runtime.

              You're right, it *doesn't* say anything about the code efficiency, or the runtime per se ..... but then again, neither did I. Just as some algorithms are faster than others - O(n) vs O(log n), etc. - some designs are inherently slower than others. And what is a kernel, if not the expression of an (albeit complex) algorithm to accomplish a task (provide system services)?

              The microkernel may be more elegant, more pristine in the lab ..... but it's slow by design. There is only so much you can do to speed it up - the limitations are inherent in the message passing mechanisms.

              I'm sorry, I'm with Linus on this one.

              Provably correct doesn't mean "good" .... and "I haven't bothered proving the sucker" doesn't mean crash and burn.

              Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel. Even going back to Parnas' notation that we used years ago, and thinking about the structure of the Linux kernel, it would be pretty easy to go through the exercise and prove it correct/incorrect .... and no easier to do so with Minix, or BeOS, or any other microkernel.
              • by drgonzo59 (747139) on Wednesday May 10, 2006 @05:29AM (#15299594)
                The microkernel may be more elegant, more pristine in the lab ..... but it's slow by design

                Today most of the software that is used to fly planes (both fighter jets and passenger) is based on a microkernel architecture. So microkernels are not just lab toys, real and mission critical systems are run by microkernel architectures.

                The speed problem can often be solved just by getting faster hardware. The main reason Linus rejected microkernels back in the day was that the cost of context switches was prohibitive. Today hardware is a lot faster (roughly Moore's law), so context switches will be alright on a 3GHz Pentium IV machine where they would not have been doable on a 33MHz machine.

                Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel.

                Theoretically you are right. But in practice Linux 2.6 is 6 million lines of code and a typical microkernel is less than 10k. It can already take up to a year to check the correctness of an 8k-line microkernel, and the demand for resources grows exponentially as the code size increases. So in reality it will not be possible to check the Linux kernel for correctness.

                • by TobascoKid (82629) on Wednesday May 10, 2006 @06:18AM (#15299685) Homepage
                  But in practice Linux 2.6 is 6 million lines of code and a typical microkernel is less than 10k.

                  Umm, doesn't that mean that while you've proved the 10k microkernel lines correct, you'd still have ~6 million lines of code sitting outside the microkernel waiting to be proved? I can't see how a microkernel can magically do with 10k everything Linux is doing with 6 million lines (especially as, by the definition of a microkernel, there's no way it could).
                  • by drgonzo59 (747139) on Wednesday May 10, 2006 @06:33AM (#15299721)
                    You don't have to prove it, as long as the microkernel is able to put the system into a predetermined state: it could for example unload the driver and try another one or just try to reload it, it could contact you via a pager, and so on. As opposed to the whole system freezing because some idiot wrote if(a=1) instead of if(a==1) in the mouse driver. You can only hope that the system that froze was running Doom and Firefox and wasn't flying planes, or administering drugs.
                • Today most of the software that is used to fly planes (both fighter jets and passenger) is based on a microkernel architecture. So microkernels are not just lab toys, real and mission critical systems are run by microkernel architectures.

                  And where did I say that microkernels were unusable? I've personally used QNX and VRTX myself. For small, simple (for the OS) systems, they're beautiful. As a general purpose computing platform, they tend not to be.

                  The speed problem can often be solved just buy gettin
      • And what is your definition of a "Traditional Kernel?" If you wanna get really technical, start looking back at ENIAC and UNIVAC, from the '50s/60s.
      • Distributed algorithms are of course difficult to implement with a f***ed up language like C.

        Here, it seems, the means justify the ends. Linus basically says "I won't take any challenges".

        Linus tells me that we can never write a proper scalable OS for a NUMA machine, or a modular system that can serve well to parallel I/O systems and the like. I highly disagree.

        Because these things are not pipe dreams, they have been done. IBM guys have made amazingly abstract and modular OS stuff and they've been using the
      • The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."

        I know that as good and faithful /.-ers we should worship Linus and take all of his words as gospel, but in this case I think he is talking out of his arse. Microkernels are "more secure" and "more stable" because only one component needs to work well -- the microkernel; its main job is to enforce sec

    • Wait, wait, you've read TFA? Around here that's called "cheating"!
  • Not unexpected... (Score:5, Informative)

    by Cryptnotic (154382) * on Wednesday May 10, 2006 @02:41AM (#15299133) Homepage
    He basically continues his previous argument that monolithic kernels are more efficient and easier to implement. Microkernels may seem simpler, but they have complexity in implementing all but the simple tasks. Microkernels have a more marketable name. "Microkernel" just sounds more advanced than "monolithic". He finishes off with the observation that the term "hybrid kernel" is a trick to grab marketing buzz from the microkernel side of things.
    • But he fails to discuss the one area where message passing makes a certain amount of sense. Even a single computer is beginning to look more like a distributed system today: SMP is becoming increasingly common. In these situations communication and coordination are often costly. The trouble with a monolithic kernel comes in precisely what it makes easy: sharing memory and data structures -- the very sort of thing that, if not done very carefully, destroys SMP performance, e.g., by cache-line stomping.

      Taking a
  • pfff (Score:4, Funny)

    by Umbral Blot (737704) on Wednesday May 10, 2006 @02:43AM (#15299142) Homepage
    pfff, Linus, what would he know?
  • Code talks (Score:5, Insightful)

    by microbee (682094) on Wednesday May 10, 2006 @02:52AM (#15299168)
    The whole discussion of micro-kernel vs monolithic kernel is totally pointless. All popular OS kernels are monolithic. We can get back to the debate when we have a working fast microkernel in the market that is actually competitive.

    Linus is a pragmatist. He didn't write Linux for academic purpose. He wanted it to work.

    But you can always prove him wrong by showing him the code, and I bet he'd be glad to accept he was wrong.
    • Re:Code talks (Score:3, Informative)

      by microbee (682094)
      A couple more things to mention.

      1. Windows has some essential system services running in user space, such as the Win32 environment (csrss.exe). But if it dies, you are pretty much hosed anyway. It doesn't necessarily make the system more stable in any meaningful way to run stuff in user space. Windows even had GDI in user space before, and later moved it into the kernel for performance reasons; GDI in user space didn't provide more stability.

      2. Linux kernel 2.6 now has support for user space filesyste
      • Re:Code talks (Score:3, Insightful)

        by ray-auch (454705)

        Windows even had GDI in user space before, and later moved into the kernel for performance reasons, and GDI in user space didn't provide more stability


        You have to be joking. It was a massive step backwards in stability. NT 3.51 was rock solid, NT 4 was far more flaky.

        Not only that, but with GDI in kernel all sorts of resource limits came in that just weren't there before. Writing heavy graphics was much more of a pain - no matter how careful you were with GDI, other programs could consume limited kernel
    • Re:Code talks (Score:4, Informative)

      by moosesocks (264553) on Wednesday May 10, 2006 @03:12AM (#15299234) Homepage
      HUH??

      Get your facts straight.

      Every popular Operating System developed in the past 15 years (and then some) apart from Linux has been either a microkernel [wikipedia.org] or a hybrid kernel [wikipedia.org].

      Mach, upon which Darwin and OS X are based, is a microkernel. OS X and Darwin borrow some monolithic-esque features, but not quite enough to make them hybrids, it would seem...

      Windows NT, NetWare, ReactOS and BeOS are all Hybrid kernels. This model seems to be the most popular right now, and seems to be a reasonable compromise...

      The only things that are left are the old big-iron Unices, Solaris, MS-DOS, and Linux. In other words, Linux is the only major player left using a monolithic kernel. I don't know enough about computer science to properly make an argument one way or another, but it would seem that monolithic kernels have heavily fallen out of favor in the past 15 years.

      That said, perhaps a monolithic kernel is better suited to the open-source development process, which would seem counterintuitive at first because it discourages modularization, but who knows.... it could very well be true. I don't know enough to comment.
      • Re:Code talks (Score:5, Interesting)

        by microbee (682094) on Wednesday May 10, 2006 @03:16AM (#15299249)
        "Hybrid" kernel? Sorry, I just don't buy this terminology (as Linus put it, it's purely marketing).

        Windows NT is monolithic. So is OS X. Anyone who claims they are microkernels, please show me proof other than "it is based on Mach".
        • Re:Code talks (Score:5, Insightful)

          by Bacon Bits (926911) on Wednesday May 10, 2006 @04:40AM (#15299465)
          "Hybrid" kernel? Sorry, I just don't buy this terminology (as Linus put it, it's purely marketing).
          It is pointless to argue semantics. You can say a hybrid kernel is a monolithic kernel trying to be a microkernel, or you can say it is a microkernel trying to be monolithic. As long as you understand what is meant by the term, your disagreement about its precise semantics is largely irrelevant, particularly with regard to its relevance to this debate.

          One of the biggest problems I continually have with technical people (whether that's computer techs or engineers) is that they tend to overemphasize the syntax and semantics of what people say. They tend to latch on to a specific phrase and then rip it apart rather than taking the meaning of the whole (which is the important part) and finding problems in the whole. Most particularly, they tend to find it incomprehensible that a single phrase might have multiple meanings.

          Part of this is doubtless due to exposure to highly precise technical jargon, but it is inappropriate to apply the strictness of meaning inherent to, say, Python, to everyday language. Even in a technical debate.

          A hybrid kernel, in simplest terms, is a kernel that combines two discrete other types of kernels. Plain English tells you that. It makes no sense to try to wrestle with whether WinNT is a monolithic kernel or a microkernel. It's a semantic debate that serves only to label the object; it doesn't describe it or aid in understanding it. If you say WinNT is a microkernel, you then have to ignore the non-essential code obviously running in kernel mode, and that doesn't help understanding. If you say WinNT is a monolithic kernel, you have to ignore the userland processes that are really system services. Again, that's no aid to understanding.

          Stop complaining about the language and forcing labels on things. Labeling is not understanding.

      • Hybrid kernels??? (Score:4, Informative)

        by r00t (33219) on Wednesday May 10, 2006 @03:24AM (#15299269) Journal
        That would be monolithic+marketing.

        MacOS X is no microkernel system. It does have Mach, sure. Mach is arguably not a microkernel by today's standards, and in any case MacOS X has a full BSD kernel bolted onto the Mach kernel. Mach and BSD are sharing address space. In other words, it's not a microkernel.

        NT is the same way.

        I don't know all that much about NetWare, but I'd never before heard anyone claim it to be a microkernel. It's not terribly popular anyway. (it was, but back then I'm sure it wasn't a microkernel system) ReactOS isn't much yet. BeOS died for unrelated reasons, so we really can't judge.

        Monolithic kernels can be very modular. Microkernels can get really convoluted as the developers struggle with the stupid restrictions.
        • Re:Hybrid kernels??? (Score:5, Interesting)

          by jackjeff (955699) on Wednesday May 10, 2006 @04:09AM (#15299394)
          Depends on what you mean by Micro Kernel and Monolithic.

          True, the kernel of Mac OS X - Darwin, aka XNU - runs both the Mach and BSD layers in kernel space for performance reasons, to minimize latency.

          Maybe this is what you call a hybrid kernel: http://en.wikipedia.org/wiki/Hybrid_kernel [wikipedia.org]

          You may call XNU whatever you wish but the fact remains:
          - it's not a monolithic kernel by design
          - it has Mach in it and Mach is some sort of microkernel. Maybe it does not reach "today's" standards of being called a microkernel but it was a very popular microkernel before.

          So maybe the things running on top of Mach ( http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/index.html [apple.com] ) are conceptually "different" from what the services of a microkernel should be, and they do indeed share the address space, but this is very, very different from the architecture of a traditional monolithic kernel such as Linux.

          This guy ( http://sekhon.berkeley.edu/macosx/intel.html [berkeley.edu] ) recently tested some stats software on his Mac running OS X and Linux, and found out that indeed MacOS X had performance issues, very likely due to the architecture of the kernel.

          There's even a rumor that, since Avie Tevanian left Apple ( http://www.neoseeker.com/news/story/5553/ [neoseeker.com] ), some guys are now working on removing the Mach microkernel and migrating to a full BSD kernel in the next release of the operating system.

          And now my personal touch. I agree with Linus when he says that having small components doing simple jobs on their own, and putting them together with pipes and so on, is somehow the UNIX way and is attractive (too lazy to find the quote). However, as he demonstrates later, distributed computing is not easy, and there's also the boundary-crossing issue. I guess he has a point when he says this is a problem for performance and for the difficulty of designing the system... So if performance is what you expect from a kernel, then you must stop dreaming of a clean, centralized software architecture like those we have for our highly object-oriented software.

          But the truth is that, although developing a monolithic kernel from scratch is an easier task than developing a microkernel, I guess the entry ticket (learning curve) for a monolithic kernel developer is more expensive. The main reason being: things ARE NOT separated. Anyone, anywhere in the kernel could be modifying the state of some structure, for non-obvious reasons, even if there's a comment that says "please don't do that" or it should not be the case, etc. Microkernels can obviously provide some kind of protection and introspection for these things, but have always hurt performance to do so.

          Now it all depends on what you expect. Linux has many, many developers and obviously can afford a monolithic design that changes every now and then, and you may prefer a kernel that goes fast over one whose code is clean, well organized and easy to read. But the corollary of that observation is that, for the same reasons, grep, cat, cut, find, sort, or whatever UNIX tools you use with pipes and redirection are similarly a cleaner but YET INEFFICIENT design. However, it's been proven (with time) to be a good idea.

          I think things that are "low level" are bound to have a poor spaghetti software architecture, because performance matters and the code is smaller... but the higher level you go, the less performance matters, and the more code maintenance and evolvability matter... Everything is a tradeoff: good design practice depends on the type of problems your software tackles.

          That said, it does not mean no progress can be made in kernel developments. Linux already uses a somewhat different C lang
          • Re:Hybrid kernels??? (Score:4, Informative)

            by diegocgteleline.es (653730) on Wednesday May 10, 2006 @08:39AM (#15300080)
            Let's ask Apple what thinks about all this: "Advanced Synchronization in Mac OS X: Extending Unix to SMP and Real-Time" [usenix.org]:

            "xnu is not a traditional microkernel as its Mach heritage might imply. Over the years various people have tried methods of speeding up microkernels, including collocation (MkLinux), and optimized messaging mechanisms (L4)[microperf]. Since Mac OS X was not intended to work as a multi-server, and a crash of a BSD server was equivalent to a system crash from a user perspective the advantages of protecting Mach from BSD were negligible. Rather than simple collocation, message passing was short circuited by having BSD directly call Mach functions. While the abstractions are maintained within the kernel at source level, the kernel is in fact monolithic. xnu exports both Mach 3.0 and BSD interfaces for userland applications to use. Use of the Mach interface is discouraged except for IPC, and if it is necessary to use a Mach API it should most likely be used indirectly through a system provided wrapper API."
          • by r00t (33219) on Wednesday May 10, 2006 @12:03PM (#15301552) Journal
            I worked on a commercial microkernel OS.

            The learning curve was very steep. New developers took at least half a year to be productive. A number of people never became productive and had to be fired.

            Linux is really clean and tidy compared to that. Even BSD is clean and tidy compared to that microkernel OS.

            Separated components tend to get complex interactions. Sharing data can be very awkward, even if you are co-located.

      • Re:Code talks (Score:5, Interesting)

        by Stephen Williams (23750) on Wednesday May 10, 2006 @03:35AM (#15299299) Journal
        That said, perhaps a monolithic kernel is better suited to the open-source development process, which would seem counterintuitive at first because it discourages modularization

        Not necessarily. Despite being a monolithic design, Linux is pretty modular. Device drivers, filesystems, network add-ons etc. are separate enough from the core of the kernel that they don't even need to be statically linked into it, but can be loaded as modules into a running kernel, as I'm sure you know.

        It's not a microkernel approach because all the modules are loaded into the kernel's address space. They're bits of extra functionality that are dynamically grafted to the monolithic kernel image, so to speak. Nevertheless, it's still a modular approach to kernel design.

        -Stephen
      • HUH??

        Get your facts straight.

        Every popular Operating System developed in the past 15 years (and then some) apart from Linux has been either a microkernel or a hybrid kernel.

        Exactly, Linus doesn't know what he's talking about. He should know better than to be posing as some kind of credible source. Someone should write him an e-mail and let him know that he should have deferred to someone more knowledgeable.

        it could very well be true. I don't know enough to comment.

        Sure enough.
    • Re:Code talks (Score:5, Insightful)

      by Anonymous Coward on Wednesday May 10, 2006 @03:16AM (#15299248)
      Three letters: Q N X.

      Small, fast, real-time. http://en.wikipedia.org/wiki/QNX [wikipedia.org]
    • Re:Code talks (Score:4, Insightful)

      by SanityInAnarchy (655584) <ninja@slaphack.com> on Wednesday May 10, 2006 @03:32AM (#15299287) Journal
      The whole discussion of micro-kernel vs monolithic kernel is totally pointless. All popular OS kernels are monolithic.

      The whole discussion of Windows vs anything else is totally pointless. All popular OSes are Windows.

      Linus is a pragmatist. He didn't write Linux for academic purpose. He wanted it to work.

      That's true, and that's a good point. However, it's much easier to start a project if you already have some good people, even if the code is entirely from scratch. Therefore, making the point in a place like the kernel development lists is a good idea, because that's a good place to recruit people.

      Certainly in the case of OSes, there really isn't much of an opportunity for something like Linux to emerge from one person's efforts. As far as I can tell, Linux originally worked because enough people were interested in helping him early on with a hobby, doing things like sending him an actual copy of the POSIX specs, and he was mostly able to get it to where it actually duplicated Minix's functionality, and exceeded it in some cases.

      In fact, there was such a shortage of good OSes for this machine that really, Linux succeeded because it wasn't Minix and wasn't DOS. In fact, one has to wonder -- could an open Minix have done what Linux did? It's possible that, given all the programmers who eventually decided to work on Linux, the problems with microkernels could've been solved. Similarly, if all the programmers working on Linux suddenly decided to do a microkernel, it would succeed and it would replace Linux.

      But, that isn't going to happen. Not all at once, and probably not ever, unless it can be done incrementally.
    • So then let me ask you this: Would you get on a plane that is flown by Linus' latest 2.6 monolithic kernel? Or are you worried that a bug in one of the hundreds of drivers might lock up the system and you'll crash and die?

      Most of the critical RT operating systems will have some kind of a ukernel architecture. velOSity is one such ukernel, it is used by the INTEGRITY OS made by Green Hills. The next time you fly on the plane it will probably be a ukernel that will make sure you land safely, not a monolithic bl

  • Mirror (Score:4, Informative)

    by Anonymous Coward on Wednesday May 10, 2006 @02:57AM (#15299182)
    Quick slashdot effect there, that forum is already down. Anyhow... mirror: http://www.mirrordot.org/stories/3f6b22ec7a7cffcf2847b92cd5dec7e7/index.html [mirrordot.org]
  • by Bombula (670389) on Wednesday May 10, 2006 @02:59AM (#15299190)
    can the callee look at and change the callers state as if it were its own

    Any chance we could do this with my long distance phone service?

  • by gigel (817544) on Wednesday May 10, 2006 @02:59AM (#15299191)
  • comments i liked (Score:3, Insightful)

    by bariswheel (854806) on Wednesday May 10, 2006 @03:00AM (#15299201) Homepage
    "The whole "microkernels are simpler" argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. By a huge amount, too." He goes on to say, "It's ludicrous how microkernel proponents claim that their system is "simpler" than a traditional kernel. It's not. It's much much more complicated, exactly because of the barriers that it has raised between data structures." He states that the most fundamental issue is the sharing of address spaces. "Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or put in slightly different terms: can the callee look at and change the callers state as if it were its own?"
  • I'm pretty sure it's a shitload of work to port the Linux kernel to become a microkernel. Anyone got the spare time to do that?
    PS: besides porting all parts of the kernel, you first need to redesign the kernel so it can cope with the microkernel idea (and its structural limitations).

    As soon as GNU Hurd is mature we'll have a drop-in replacement (right?).
  • by SigNick (670060) on Wednesday May 10, 2006 @03:22AM (#15299262)
    I think Linus hit the nail on the head by pointing out that the future of home computing is going to focus on parallel processing - it's 2006 and all my computers, including my LAPTOP, are dual-processor systems.

    By 2010 I suspect at least desktops will be 4-CPU systems, and as the number of cores increases, one of the large drawbacks of microkernels rears its ugly head: microkernels turn simple locking algorithms into distributed-computing-style algorithms.

    Every game developer tells us how difficult it is to write multi-threaded code even for our monolithic operating systems (Windows, Linux, OS X). In microkernels you constantly have to worry about how to share data with other threads, as you can't trust them to pass even correct pointers! If you were to trust them explicitly, then a single failure in any driver or module would bring down the whole system - just like in monolithic kernels, but with a performance penalty that scales nicely with the number of cores. What's even worse is that in a multi-core environment you'll have to be very, very careful when designing and implementing the distribution algorithms, or a simple user-space program could easily crash the system or gain superuser privileges.
    • by ingenthr (34535) on Wednesday May 10, 2006 @03:45AM (#15299324) Homepage

      Don't take C's poor support for threading and tools to build/debug threaded code to mean that writing threaded code isn't possible. Other platforms and languages have taken threads to great extremes for many years, and I'm not necessarily referring to anything Unix (or from Sun).

      This reminds me of the story (but I don't know how true it is) that in the early days of Fortran, the quicksort algorithm was widely understood but considered to be too complicated to implement. Now 2nd year computer science students implement it as a homework project. Threads could be considered similar. Anyone who has written a servlet is implicitly writing multithreaded code and you can very easily/quickly write reliable and safe threaded code in a number of modern languages without having to get into the details C forces you into. It's the mix of pass-by-reference and pass-by-value with only a bit of syntactical sugar that creates the problems, not the concepts of parallelism.

      On the other hand, I agree with you that we'll see increased parallelism driving increases in computing capabilities in the coming years. It was mathematically proven some time ago, but Amdahl's law is now officially giving way to Gustafson's law [wikipedia.org] (more on John Gustafson here [sun.com]). Since software is sufficiently complex these days (even the simplest of modern programs can make use of parallelism - just think of anything that touches a network), it's those platforms that exploit this feature which stand to deliver the best benefits to their users.

      • This reminds me of the story..

        The early days of Fortran were before the '70s. Given the extremely tight RAM constraints you'd probably have to implement a non-recursive iterative form, which is FAR more complex. And this is Fortran we're talking about, not known for being the cutest language out there - and if we're referring to pre-Fortran 66 then your only branching construct is the three-way arithmetic IF statement. Now given that, and considering your only method of debugging is taking a heap dump and looki
  • by Anonymous Coward on Wednesday May 10, 2006 @03:31AM (#15299285)
    Hi folks,

            I worked two years for a company that was developing its own micro-kernel system for embedded targets. I was involved in system programming and in adapting the whole compiler toolchain, based on the GCC chain.
            Linus is right: the basic problem is address space sharing, and if you want to implement memory protection, you rapidly fall into the address space fragmentation problem.
            The main advantage of the system I worked on wasn't really its micro-kernel architecture, but the fact that its design allowed suppressing most of the glue code that is needed between a C++ program and a more classic system.
            In my opinion, micro-kernel architecture has the same advantages and drawbacks as the so-called "object-oriented" programming scheme: it is somewhat intellectually seductive in presentations, but it is just a tool.
            It would certainly be interesting for Linux to provide the dynamic link management features of a micro-kernel system, for instance to allow someone to quickly modify the IP stack for their own purposes, but should the whole system be designed that way? I am not sure.
            If you want an idea of the problems encountered when programming for these systems, look at the history of AmigaOS, whose design is very close to a micro-kernel one.
  • by Anonymous Coward

    If the Linux kernel had been coded using Forth.

    Just saying.
  • in other news (Score:4, Insightful)

    by convolvatron (176505) on Wednesday May 10, 2006 @03:39AM (#15299307)
    abstraction and state isolation considered harmful
  • Entire comment (Score:5, Insightful)

    by Futurepower(R) (558542) <MJennings.USA@NOT_any_of_THISgmail.com> on Wednesday May 10, 2006 @03:53AM (#15299351) Homepage


    Name: Linus Torvalds (torvalds AT osdl.org) 5/9/06

    ___________________

    _Arthur (Arthur_ AT sympatico.ca) on 5/9/06 wrote:

    I found that distinction between microkernels and "monolithic" kernels useful: With microkernels, when you call a system service, a "message" is generated to be handled by the kernel *task*, to be dispatched to the proper handler (task). There is likely to be at least 2 levels of task-switching (and ring-level switching) in a microkernel call.

    ___________________


    I don't think you should focus on implementation details.

    For example, the task-switching could be basically hidden by hardware, and a "ukernel task switch" is not necessarily the same as a traditional task switch, because you may have things - hardware or software conventions - that basically might turn it into something that acts more like a normal subroutine call.

    To make a stupid analogy: a function call is certainly "more expensive" than a straight jump (because the function call implies the setup for returning, and the return itself). But you can optimize certain function calls into plain jumps - and it's such a common optimization that it has a name of its own ("tailcall conversion").

    In a similar manner, those task switches for the system call have very specific semantics, so it's possible to do them as less than "real" task-switches.

    So I wouldn't focus on them, since they aren't necessarily even the biggest performance problem of an ukernel.

    The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or put in slightly different terms: can the callee look at and change the callers state as if it were its own (and the other way around)?

    Even for a monolithic kernel, the answer is a very emphatic no when you cross from user space into kernel space. Obviously the user space program cannot change kernel state, but it is equally true that the kernel cannot just consider user space to be equivalent to its own data structures (it might use the exact same physical instructions, but it cannot trust the user pointers, which means that in practice, they are totally different things from kernel pointers).

    That's another example of where "implementation" doesn't much matter, this time in the reverse sense. When a kernel accesses user space, the actual implementation of that - depending on hw concepts and implementation - may be exactly the same as when it accesses its own data structures: a normal "load" or "store". But despite that identical low-level implementation, there are high-level issues that radically differ.

    And that separation of "access space" is a really big deal. I say "access space", because it really is something conceptually different from "address space". The two parts may even "share" the address space (in a monolithic kernel they normally do), and that has huge advantages (no TLB issues etc), but there are issues that means that you end up having protection differences or simply semantic differences between the accesses.

    (Where one common example of "semantic" difference might be that one "access space" might take a page fault, while another one is guaranteed to be pinned down - this has some really huge issues for locking around the access, and for dead-lock avoidance etc etc).

    So in a traditional kernel, you usually would share the address space, but you'd have protection issues and some semantic differences that mean that the kernel and user space can't access each other freely. And that makes for some really big issues, but a traditional kernel very much tries to minimize them. And most importantly, a traditional kernel shares the access space across all the basic system calls, so that user/kernel difference is the only access space boundary.

    Now, the real problem with split acce
  • IANAKH (I am not a kernel hacker)

    I'd love to hear some examples. It would be nice to see an example where something that's easy in a monolithic kernel is difficult in a microkernel. See, I imagine this bunch of distinct services all happily calling each other for what they need; I'd like to know more about how the need for complex distributed algorithms arises.
  • by jsse (254124) on Wednesday May 10, 2006 @04:46AM (#15299484) Homepage Journal
    I don't want to repost this old debate - I believe every geek should have read it already - but since nobody has posted it yet, I'll repost it for anybody who hasn't read this famous debate between Linus and Prof. Tanenbaum on microkernels.

    Linus vs. Tanenbaum - "Linux is obsolete" Jan,1992 [fluidsignal.com]

    (Save your mod points for someone who really needs them, thanks!)
  • by jackjansen (898733) on Wednesday May 10, 2006 @04:55AM (#15299512)
    I think the real point here, which both Andy and Linus hint at but don't state explicitly (as far as I'm aware), is about keeping OS designers and implementers honest. If you need an interface between two parts of the system, you should design that interface, define it rigidly, then implement it.

    Andy likes microkernels because they force you to do that. Time spent on design leads to insight, which may well point to better and cleaner ways to do the task you originally set out to accomplish.

    Linus hates microkernels because they force you to do that. Time spent on design is time lost getting working code out the door, and working code will give you experience that will point to better and cleaner ways to do the task you originally set out to accomplish.

  • by master_p (608214) on Wednesday May 10, 2006 @05:04AM (#15299534)
    The only way the monolithic vs microkernel debate will go away is if CPUs provide a better way of sharing resources between modules.

    One solution to the problem is to use memory maps. Right now each process has its own address space, and that creates lots of problems. It would be much better if each module had its own memory map, a la virtual memory, so that the degree of sharing was defined by the OS. Two modules could then see each other as if they belonged to the same address space, but other modules would be inaccessible. In other words, each module should have its own unique view of memory.

    Of course the above is hard to implement, so there is another solution: the ring protection scheme of 80x86 should move down to paging level. Each page shall have its own ring number for read, write, and execute access. Code in page A could access code/data in page B only if the ring number of A is less than or equal to the ring number of B. That's a very easy to implement solution that would greatly enhance modularity of operating systems.

    A third solution is to provide implicit segmentation. Right now 80x86 has an explicit segmentation model that forces inter-segment addresses to be 48 bits wide on 32-bit machines (32 bits for the target address and 16 bits for the segment id). The implicit segmentation model is to use a 32-bit flat addressing mode but load the segment from a table indexed by the destination address, as it is done with virtual memory. Each segment shall have a base address and a limit, as it is right now. If a 32-bit address falls within the current segment, then the instruction is executed, otherwise a new segment is loaded from the address and a security check is performed. This is also a very easy to implement solution that would provide better modularization of code without the problems associated with monolithic kernels.

    There are various technical solutions that can be supported at CPU level that are not very complex and do not impose a big performance hit. These solutions must be adopted by CPU manufacturers if software is to be improved.
  • What's the problem with monoliths, that they are supposed to be less marketable? Ever since 1968, Monoliths [imdb.com] have been doing great!
  • by penguin-collective (932038) on Wednesday May 10, 2006 @07:12AM (#15299816)
    Microkernels like Mach have been unsuccessful because putting everything into separate address spaces makes a lot of things quite difficult.

    C-based monolithic kernels like Linux and UNIX run into software engineering problems--it gets harder and harder to ensure stability and robustness as the code mushrooms because there is no fault isolation.

    The solution? Simple: get the best of both worlds through language-supported fault isolation (this can even be a pure compile-time mechanism, with no runtime overhead). It's not rocket science, it's been done many times before. You get all the fault isolation of microkernels and still everything can access anything else when it needs to, as long as the programmer just states what he is doing clearly.

    C-based monolithic kernels were a detour caused by the UNIX operating system, an accident of history. UNIX has contributed enormously to information technology, but its choice of C as the programming language has been more a curse than a blessing.
  • The Thing Is (Score:3, Informative)

    by ajs318 (655362) <sd_resp2NO@SPAMearthshod.co.uk> on Wednesday May 10, 2006 @07:52AM (#15299925)
    Whilst microkernels are a lovely idea in theory, they don't deliver in practice. There is already a bottleneck between user space and kernel space and this will impact upon performance. No matter what you are trying to do, the slowest part of the process will always determine the maximum rate at which you can do it.

    Monolithic, Linux/Netware-style modular and so-called hybrid kernels get around this limitation by moving things to the other side of the bottleneck. It makes sense on this basis to put a hardware driver in kernel space. You usually only pass "idealised" data to a driver; the driver generally has to pass a lot more to the device because it isn't ideal. For example, when talking to a filesystem driver, you generally only want to send it the data to stick into some file. The filesystem driver has to do all the donkey work of shunting the heads back and forth and waiting for the right spot of disc to pass under them.

    It might be "beautiful" to have as little code as possible situated on one side of the division, but it's most practical to have as little data as possible having to travel through the division.
  • by Jerk City Troll (661616) on Wednesday May 10, 2006 @08:59AM (#15300200) Homepage

    Whenever this issue comes up, I become convinced that proponents of microkernel architectures coined the term they use for their opponents' designs. The terms used to discuss this are heavily loaded. “Microkernel” sounds lean, quick, and simple, while by subjective contrast “monolithic” sounds bulky, old, and unwieldy. I think that when engaging in this debate, it is best that we at least prefer “unified kernel” in place of “monolithic”, as it is more accurate and contrasts with “microkernel” objectively. The term most people use for kernels like Linux and NT seems to imply that there is no logical separation of components, that all pieces are somehow one gigantic (dare I say monolithic) glob, and that is nonsense.

  • My take (Score:3, Insightful)

    by MrCopilot (871878) on Wednesday May 10, 2006 @11:38AM (#15301328) Homepage Journal
    Linus' kernel works. Has worked, will work, and is a colossal kernel.

    IBM is shipping it. Novell, RedHat, WindRiver, LinuxWorks, Motorola, Sharp, Sony, Hell even I'm shipping it in embedded products. It is easy to "prove it works" as alluded to in another post.

    Microkernels are also shipping from QNX and, uh and, oh I'm sure there are a few more. (Not knocking QNX; I considered it but tossed it due to cost and licensing.)

    Whether one is more secure or stable than the other is really the wrong question.

    The real question is whether a "system" designed around a microkernel is more or less stable, secure, or functional than the alternative.

    I think it has, to my satisfaction, been settled. In Revolution OS, the movie (buy it!), Stallman is asked why HURD is so far behind Linux. His answer (paraphrased, sorry RMS): it turns out a microkernel is very difficult to pull off because of the constant stream of messages required for the simplest of tasks. This forced overhead only makes the kernel more secure, not the system; if the "drivers" keep crashing and restarting, you could go months without noticing critical flaws. "But the kernel is rock solid" doesn't really help if I can't ship the "system", does it? The only evidence you need is the development pace of Hurd, or even QNX.

    I respect the professor and his work, but Minix served as the inspiration for a much more scalable design, one clearly superior for the rapid development a modern OS is expected to sustain.

    As an engineer I see the beauty, but as a Production Engineer I can also see the added complexity a microkernel brings.

    Of course, you could argue theoretically that I'm wrong, or prove it by making a GNU/Minix distribution to compete with Linux in the real world. Almost 15 years and a flood of students haven't helped Professor T produce one yet. Admittedly that's not his goal, but come on: I know students, and CompSci students have a knack for carrying their favorite teachers and classes with them throughout their careers, and it shows up in their projects.
