Microkernel: The Comeback? 722

bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
This discussion has been archived. No new comments can be posted.

  • Re:Eh hem. (Score:3, Informative)

    by Anonymous Coward on Monday May 08, 2006 @10:14AM (#15284927)
    SE Linux provides security models to compartmentalize your programs and applications and such. This is a completely different beast than compartmentalizing your individual kernel parts. Modules were kind of a primitive step in the direction of a microkernel, but still a long way off from a technical standpoint.
  • Oh Dear (Score:1, Informative)

    by segedunum ( 883035 ) on Monday May 08, 2006 @10:14AM (#15284932)
    Not again: vs_Tanenbaum.html []

    We've got Andy Tanenbaum coming up with nothing practical in the fifteen or sixteen years he's been promoting microkernels, and then turning around and telling us he was right all along. Meanwhile, the performance of OS X sucks like a Hoover, as we all knew: []

    I'll just pretend I didn't see this article.
  • by Shazow ( 263582 ) <andrey.petrov@s[ ] ['haz' in gap]> on Monday May 08, 2006 @10:24AM (#15284998) Homepage
    wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size and not necessarily lower the size/complexity.

    Just to clear things up, my understanding is that Tanenbaum is advocating moving the complexity out of kernel space to user space (such as drivers). So you wouldn't be lowering the size/complexity of the kernel altogether, you'd just be moving huge portions of it to a place where it can't do as much damage to the system. Then the kernel just becomes one big manager which tells the OS what it's allowed to do and how.

    - shazow
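The division of labour described above (a small kernel that only routes messages, with drivers living outside it) can be sketched in a few lines of Python. This is purely illustrative, not how any real microkernel is written; all names here are made up:

```python
# Illustrative sketch: a "microkernel" as a small message router, with
# "drivers" living outside it as ordinary handlers. A fault in one
# handler is contained and reported instead of taking everything down.
class TinyKernel:
    def __init__(self):
        self.servers = {}          # service name -> handler ("user space")

    def register(self, name, handler):
        self.servers[name] = handler

    def call(self, name, request):
        # The kernel only routes messages and polices faults.
        try:
            return ("ok", self.servers[name](request))
        except Exception as exc:
            return ("fault", str(exc))

kernel = TinyKernel()
kernel.register("echo", lambda req: req.upper())
kernel.register("disk", lambda req: 1 // 0)     # a buggy "driver"

print(kernel.call("echo", "ping"))       # ('ok', 'PING')
print(kernel.call("disk", "read"))       # fault is isolated
print(kernel.call("echo", "still alive"))  # kernel and other servers survive
```

The point of the sketch is the `try`/`except` boundary: the buggy "disk" driver cannot corrupt the router or the other servers, which is the damage containment shazow describes.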
  • Re:Feh. (Score:1, Informative)

    by Anonymous Coward on Monday May 08, 2006 @10:27AM (#15285026)
    He has. It's called Amoeba. I haven't tried it myself though.
  • Re:NT4 (Score:1, Informative)

    by Anonymous Coward on Monday May 08, 2006 @10:41AM (#15285119)
    NT4 was the release where NT stopped pretending to have a micro-kernel architecture. Microsoft pulled a load of previously user-mode code (e.g. the graphics subsystem) into the kernel to improve performance.

    The "cleanest" NT versions were NT 3.1, 3.5 and 3.51.

  • Re:Feh. (Score:3, Informative)

    by Anonymous Coward on Monday May 08, 2006 @10:48AM (#15285181)
    Performance was fixed partly through a series of hacks, and partly through raw horsepower.

    O RLY []
  • Re:How hard... (Score:3, Informative)

    by mmkkbb ( 816035 ) on Monday May 08, 2006 @10:50AM (#15285190) Homepage Journal
    Linux can be run on top of a Mach microkernel. Apple supported an effort to do this called mkLinux [] which ran on both Intel and PowerPC hardware.
  • by Anonymous Coward on Monday May 08, 2006 @10:53AM (#15285209)
    A minor nitpick, but you'll learn something for the future:

    "The proof is in the pudding" is actually an INCORRECT usage of the actual saying. If you think about it, it makes no sense -- what proof is in the pudding (and what pudding?!)?

    Instead, the actual, proper phrase is: "The proof of the pudding is in the eating."
  • QNX ! (Score:5, Informative)

    by alexhs ( 877055 ) on Monday May 08, 2006 @10:55AM (#15285232) Homepage Journal
    I have yet to see a "proper" non-academic microkernel which lets one part fail while the rest remain.

    QNX [], but it isn't open source.

    VxWorks [] and a few other would also fit.
  • Mklinux (Score:3, Informative)

    by barutanseijin ( 907617 ) on Monday May 08, 2006 @10:57AM (#15285236)
    Mklinux, one of the first versions of linux-for-ppc, had a microkernel architecture. So it's been done, at least for ppc. Mklinux will run on a 10yr old mac. I don't know about newer ppc machines.
  • QNX for teh win :) (Score:4, Informative)

    by WinPimp2K ( 301497 ) on Monday May 08, 2006 @11:02AM (#15285276)
    Not only does the "Q" stand for "Quick", but when Quantum Software Systems Ltd (now known as QNX) first released their "microkernel", "message passing", "real time" OS for the 8086 processor in the early 80's they called it "QUNIX". After a brief discussion with AT&T's legal staff, they determined that the vowels were way too expensive and renamed it to "QNX". The microkernel took up less than 64K.

    Unlike certain other OS's, QNX is used in control applications with life and death implications. (nuclear reactors and medical equipment for example)

    QNX has been through a lot of changes since then. And I have not kept up with most of them. I do know that as of a few years ago they did make a "free for personal use" release that included their development system. And a few years before that, they had a 1.44meg demo disk that had their entire OS, GUI and web browser on it.

    But don't take my word for it; go check out their website.
  • Re:NT4 (Score:3, Informative)

    by Xiaran ( 836924 ) on Monday May 08, 2006 @11:30AM (#15285480)
    Indeed. Microsoft have also been forced time and time again to make compromises to get better performance. An example of this is known to people who write file filter drivers for NT. Basically, they seemingly couldn't make (a reasonably nice) object model run fast enough in some circumstances. So now there are effectively *two* file system interfaces for an NT file system: one that uses the regular IRP-passing semantics, and the other doing direct calls into your driver to speed things up.

    This, as people can imagine, complicates everything (in a particular part of NT that is already complicated enough and not terribly well documented).
  • by ronanbear ( 924575 ) on Monday May 08, 2006 @11:37AM (#15285537)
    The 30 or so million lines of code in XP refers to everything. The XP kernel is only a fraction of XP but is still larger than the linux kernel.
  • Re:Eh hem. (Score:4, Informative)

    by Kjella ( 173770 ) on Monday May 08, 2006 @11:44AM (#15285580) Homepage
    Isn't SELinux kinda like compartmentalization of the OS?

    No, it's compartmentalization of the applications. Besides, the analogy is really bad, because a ship with a blown compartment is still quite useful. A computer with a blown network driver will, for example, break any network connections in progress; in other words, a massive failure. What about a hard disk controller which crashes while data is being written? Drivers should not crash, period. Trying to make a system that could survive driver failure will just lead to kernel bloat with recovery code.
  • by Xiaran ( 836924 ) on Monday May 08, 2006 @11:44AM (#15285582)
    You're still dependent on sane message passing for the system to function. Further, no one has yet to argue that message passing doesn't badly impact system performance. That's because it does.

    I've coded under QNX a lot and would strongly disagree with your view on the message passing overhead. From this QNX [] page.

    QNX interprocess communication consists of sending a message from one process to another and waiting for a reply. This is a single operation, called MsgSend. The message is copied, by the kernel, from the address space of the sending process to that of the receiving process. If the receiving process is waiting for the message, control of the CPU is transferred at the same time, without a pass through the CPU scheduler. Thus, sending a message to another process and waiting for a reply does not result in "losing one's turn" for the CPU. This tight integration between message passing and CPU scheduling is one of the key mechanisms that makes QNX message passing broadly usable. Most UNIX and Linux interprocess communication mechanisms lack this tight integration, although an implementation of QNX-type messaging for Linux does exist. Mishandling of this subtle issue is a primary reason for the disappointing performance of some other microkernel systems.
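The send/receive/reply rendezvous the QNX page describes can be sketched with Python threads. This is a rough simulation of the semantics only (blocking send, single round trip); the names `msg_send` and `msg_receive` are illustrative stand-ins, not the real QNX C API:

```python
# Sketch of QNX-style synchronous messaging: the sender blocks inside a
# single send operation until the server replies, mirroring MsgSend.
import queue
import threading

class Channel:
    def __init__(self):
        self.requests = queue.Queue()

    def msg_send(self, data):
        # One operation: deposit the request and block until the reply
        # arrives. There is no separate "wait for answer" step.
        reply_box = queue.Queue(maxsize=1)
        self.requests.put((data, reply_box))
        return reply_box.get()

    def msg_receive(self):
        # Server side: block until a request arrives.
        return self.requests.get()

def server(chan):
    data, reply_box = chan.msg_receive()
    reply_box.put(data * 2)          # the "MsgReply"

chan = Channel()
threading.Thread(target=server, args=(chan,), daemon=True).start()
print(chan.msg_send(21))             # blocks until the server replies: 42
```

What the sketch cannot show is the part the QNX page stresses: in QNX the kernel hands the CPU directly to the receiver at send time, without a trip through the scheduler, which is where the real performance win comes from.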
  • by r00t ( 33219 ) on Monday May 08, 2006 @11:44AM (#15285586) Journal
    I'm more than a Linux hacker. I actually worked on a commercial microkernel OS.

    Kernels don't often crash for reasons related to lack of memory protection. It's quite silly to imagine that memory protection is some magic bullet. Kernel programmers rarely make beginner mistakes like buffer overflows.

    Kernels crash from race conditions and deadlocks. Microkernels only make these problems worse. The interaction between "simple" microkernel components gets horribly complex. It's truly mind-bending for microkernel designs that are more interesting than a toy OS like Minix.

    Kernels also crash from drivers causing the hardware to do Very Bad Things. The USB driver can DMA a mouse packet right over the scheduler code or page tables, and there isn't a damn thing that memory protection can do about it. CRASH, big time. A driver can put a device into some weird state where it locks up the PCI bus. Say bye-bye to all devices on the bus. A driver can cause a "screaming interrupt", which is where an interrupt is left demanding attention and never gets turned off. That eats 100% CPU. If the motherboard provides a way to stop this, great, but then any device sharing the same interrupt will be dead.

    I note that Tanenbaum is trying to sell books. Hmmm. He knows his audience well too: those who can't do, teach. In academia, cute theories win over the ugly truths of the real world.
  • by C_nemo ( 520601 ) on Monday May 08, 2006 @11:50AM (#15285633)
    May I point out that the Titanic was divided into compartments, but only in the vertical direction. There were no horizontal compartments, and hence, since she sustained damage to critical_number_of_compartments + 1, trimming of the ship plus water ingress allowed the remaining compartments to be progressively filled.
  • I read their "what's new" [] and they're participating in Google's Summer of Code.

    27 April 2006

            The GNU Hurd project will participate in this year's Google Summer of Code, under the aegis of the GNU project.

            The following is a list of items you might want to work on. If you want to modify or extend these tasks or have your own ideas what to work on, please feel invited to contact us on the bug-hurd mailing list or the #hurd IRC channel.

                    * Make GNU Mach use more up to date device drivers.
                    * Work on GNU Mach's IPC / VM system.
                    * Design and implement a sound system.
                    * Transition the Hurd libraries and servers from cthreads to pthreads.
                    * Find and implement a reasonable way to make the Hurd servers use syslog.
                    * Design and implement libchannel, a library for streams.
                    * Rewrite pfinet, our interface to the IPv4 world.
                    * Implement and make the Hurd properly use extended attributes.
                    * Design / implement / enhance support for the...
                                o Andrew File System (AFS);
                                o NFS client and NFSd;
                                o EXT3 file system;
                                o Logical Volume Manager (LVM).

            Please see the page GNU guidelines for Summer of Code projects about how to make an application, and the Summer of Code project ideas list for a list of tasks for various GNU projects and information about how to submit your own ideas for tasks.
  • Re:Feh. (Score:5, Informative)

    by Chris Burke ( 6130 ) on Monday May 08, 2006 @12:00PM (#15285722) Homepage
    I'm very skeptical of this. It would seem to me, at a fundamental level, that a microkernel architecture is simply a heavily reduced kernel with most accepted kernel functions now delegated to external "programs", and a high level of trust is now placed in each and every one of these programs. I can't see how this is good for security.

    Well, trust is placed in those user-land programs to perform the task for which they are responsible. Whereas in a monolithic kernel, trust is placed in each subsystem to not only perform the task it is responsible for, but also not to muck with the workings of every other subsystem in the kernel as they all reside in the same address space. Therefore in a microkernel you can have a bug in your network stack without compromising your file system driver or authentication module, while this isn't necessarily true in a macrokernel. Compartmentalization is very good for security.

    Which is just one of the reasons Mach is so popular as a research OS, despite never seeing any success in the real world. Compartmentalization also makes the OS easier to maintain, easier to understand, and easier to make modifications for. Plus it's very easy to port to new hardware, if that's required.

    In a sense, most OSes are "microkerneled" anyway. Most functionality is implemented by programs running on top of the kernel, which pass messages back and forth between themselves and the kernel. Perhaps my view on this is a little naive, but I don't see too much of a difference between a microkernel module and any other process on the machine.

    I think you underestimate the things that are handled by the kernel. Unix uses many user-land services, but also has many services integrated into the kernel. Take the concept of moving functionality into user space to the limit, and you have a microkernel. Your last observation isn't naive, it's correct: a microkernel module isn't necessarily any different than any other process on your machine.
  • Re:A Good example? (Score:3, Informative)

    by buckhead_buddy ( 186384 ) on Monday May 08, 2006 @12:16PM (#15285853)
    misleb wrote:
    What true microkernel are/were there? ... OS X? Ok, I guess, but it is slow and no more stable or secure than any other Unix. In fact, I've had OS X machines crash far more than any Linux or FreeBSD systems I've used.
    Mac OS X is not a true microkernel architecture. Apple has a kernel plug-in architecture and allows drivers to run in kernel space, which goes against true microkernel design principles.

    How can the average user see this? When "Software Update" runs, almost any update to the system (not updates to an Apple application like iTunes) will require a restart of the whole machine. In a true Microkernel design you might need to relaunch the Finder or restart the communications architecture, but unless something changes kernel space code you wouldn't need to restart the whole computer.

    The uptime command would give Apple proponents much more to brag about if it were a true microkernel, but beyond hardware abstraction I don't think Apple has the same needs for a microkernel architecture as others. Since that's the case, I don't think it's fair to hold it up as an example of the fatal sins of microkernels in general. Nor do I think dragging in your personal valuations of speed and stability is a rigorous indictment of Mac OS X's performance either.

  • Minix (Score:3, Informative)

    by Espectr0 ( 577637 ) on Monday May 08, 2006 @12:22PM (#15285917) Journal
    I have a friend who is getting his doctorate in amsterdam, and he has tanenbaum next door.

    Guess what he told me. A revamped version of minix is coming.
  • by Elastri ( 911062 ) on Monday May 08, 2006 @12:25PM (#15285939)
    I actually saw Tanenbaum giving a talk about Minix a few weeks ago. He brought up the speed issue, saying that on modern (and future) hardware the performance hit from the microkernel will be less and less relevant. Even my current computer probably has more power than I need almost all the time, and I need a lot more computing power than a casual user. Tanenbaum firmly believes that the stability and modularity will be preferred over a slight speed increase. After he showed all of the benefits of the Minix architecture, I was inclined to agree.

    One thing which hasn't really been touched on in this thread yet is exactly what the modularity means for end users. It isn't just that the kernel is simplified. When the drivers and other subsystems are external to the kernel, their failure can become a non-issue. This *does not* just mean that they won't take the whole system down. Minix has a process called the resurrection server. It monitors all of the various drivers and subsystems and, if one of them fails, will attempt to get it running again. Professor Tanenbaum showed us data where the hard disk controller was failing a ridiculous number of times per second, but the system was still maintaining about 90% of the speed that it would have if it wasn't failing at all. The OS design can actually make up for poorly designed drivers.
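The supervise-and-restart pattern behind the resurrection server can be sketched in Python. This is a toy single-process analogy (the real Minix reincarnation server restarts separate OS processes); all class names here are made up:

```python
# Toy "resurrection server": a supervisor watches a flaky driver and,
# when it dies, spins up a fresh instance and retries the request, so
# callers never see the crash.
class FlakyDriver:
    def __init__(self):
        self.calls = 0

    def handle(self, request):
        self.calls += 1
        if self.calls % 3 == 0:          # crash on every third request
            raise RuntimeError("driver crashed")
        return request

class ResurrectionServer:
    def __init__(self, make_driver):
        self.make_driver = make_driver
        self.driver = make_driver()
        self.restarts = 0

    def request(self, data):
        try:
            return self.driver.handle(data)
        except RuntimeError:
            # Driver died: start a fresh instance and retry once.
            self.driver = self.make_driver()
            self.restarts += 1
            return self.driver.handle(data)

rs = ResurrectionServer(FlakyDriver)
print([rs.request(i) for i in range(10)])   # every request still succeeds
print("restarts:", rs.restarts)
```

This mirrors the data point in the comment: the driver keeps dying, yet the system keeps serving requests, paying only a restart overhead.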
  • Re:Feh. (Score:5, Informative)

    by aristotle-dude ( 626586 ) on Monday May 08, 2006 @12:26PM (#15285953)
    Yeah, really. These tests were already pretty much debunked on the net. I'm surprised people keep quoting their "tests" for trolling purposes.

    There were several flaws in their tests:
    1. They used the GCC 3.x compiler instead of the GCC 4.x compiler shipping with Tiger, because the Linux distros they were comparing against had not yet updated to GCC 4.x.
    2. They did not include the OS X specific patches to alter the threading mechanism. This caused a significant performance hit, as MySQL was written for the Linux threading model rather than a Mach or more generic model.
    3. Binary builds with OS X specific patches were available for download via links from the official sites. There was no need to compile a crippled version.
    4. They should have also tested the free/evaluation versions of Oracle, as optimized versions are available for both Linux and OS X. Assuming this was not a test of only OSS but rather of performance as a "server", I do not see why they did not include it.

  • Re:Metaphors eh? (Score:4, Informative)

    by srussell ( 39342 ) on Monday May 08, 2006 @01:03PM (#15286273) Homepage Journal
    Aren't you going to want to reboot it anyway or is the theory that you can restart a component without rebooting?
    Yeah, I think that's the point.

    The goal is to have a system such that you maximize the segregation of the parts. If the SCSI subsystem crashes -- for example -- you flush it and restart it. While it may not be possible to totally isolate every subsystem, with a microkernel subsystems should be more robust than in monolithic kernels.

    For all of Linus' scorn of microkernels, Linux borrows heavily from the concept, if not from the theory. One could almost say that Linux implements a microkernel poorly through the kernel module interface. It fails to be a true microkernel in a number of ways, though, not least of which is the low degree to which it isolates modules.

    In any case, your nervousness about a system where a "fundamental" subsystem craps out is understandable in someone whose main experience is with monolithic kernels, because the corruption of one subsystem often infects other subsystems. For example, IME when the Linux SCSI module starts barfing (which happens with distressing regularity), if you're lucky you can unload and reload the SCSI modules, but eventually you're going to have to reboot, because it never quite works well after a reload. In a microkernel, subsystems are just services that other subsystems may use, but aren't intimate with. A corruption in one subsystem shouldn't lead to corruptions in any other subsystem.

    --- SER

  • Re:Feh. (Score:3, Informative)

    by ( 653730 ) on Monday May 08, 2006 @01:16PM (#15286400)
    NT is not a microkernel. Microkernels are supposed to move things like drivers, filesystems, network stacks, etc., to userspace.

    Where does NT implement all that? In kernel space. A NULL pointer dereference in that code brings the system down.

    Just because it STARTED from a microkernel (like Mac OS X) doesn't mean it's a REAL microkernel. How can you call something that implements the filesystem in kernel space a "microkernel"?

  • Restarting drivers (Score:5, Informative)

    by Sits ( 117492 ) on Monday May 08, 2006 @01:37PM (#15286613) Homepage Journal
    I'm going to weigh in over here, mainly because my quiet slumber in the minix newsgroup has been disturbed by a call to arms from ast [] to correct some of the FUD here on Slashdot.

    Drivers have measurably more bugs in them than other parts of the kernel. This has been shown by many studies (see the third reference in the article). It can also be seen empirically: modern versions of Windows are often fine until a buggy driver gets onto them and destabilises things. Drivers are so bad that XP even warns you about drivers that haven't been through checks. Saying people should be careful just doesn't cut it, and is akin to saying people were more careful in the days of multitasking without protected memory. Maybe they were, but some program errors slipped through anyway, bringing down the whole OS when I used AmigaOS (or Windows 95). These days, if my web browser misbehaves, at least it doesn't take my word processor with it; losing the web browser is pain enough.

    In all probability you would know that a driver had to be restarted, because there's a good chance its previous state had to be wiped away. However, a driver that can be safely restarted is better than a driver that locks up everything that touches it (ever had an unkillable process stuck in the D state? That's probably due to a driver getting stuck). You might even be able to do a safe shutdown and lose less work. From a debugging point of view I prefer not having to reboot the machine to restart the entire kernel when a driver goes south - it makes inspection of the problem easier.

    (Just to prove that I do use Minix though I shall say that killing the network driver results in a kernel panic which is a bit of a shame. Apparently the state is too complex to recover from but perhaps this will be corrected in the future).

    At the end of the day it would be better if people didn't make mistakes but since they do it is wise to take steps to mitigate the damage.
  • Re:How hard... (Score:3, Informative)

    by samkass ( 174571 ) on Monday May 08, 2006 @01:56PM (#15286794) Homepage Journal
    Years before mkLinux, CMU experimented with some software called MacMach. It ran a "real" BSD kernel over Mach, and MacOS 6.0.7 on top of that. (By "real" I mean that you had to have one of those uber-expensive old BSD licenses to look at the code.) You could even "ps" and get MacOS processes in the list, although they weren't full peers of a unix process. I believe the most recent machine MacMach could run on was a Mac ][ci.

    Also, in the early 90's Tenon Intersystems had a MacOS running on Mach that had some UNIXy stuff underneath as well.

    Then, of course, there was mkLinux, known for being almost compatible with Linux, almost compatible with most hardware, and almost as fast as just running Linux.

    All of them ran reasonably well, but none really embraced the kernelized design from UI to drivers to hardware, and all were to varying degrees slower than the mainstream versions of their software.
  • by FireFury03 ( 653718 ) <slashdot@nexusu[ ]rg ['k.o' in gap]> on Monday May 08, 2006 @02:23PM (#15287043) Homepage
    Drivers are so bad that XP even warns you about drivers that haven't been through checks.

    However, the driver certification program is to some extent a waste of time anyway:
    • When MS sign the driver they cannot test all execution paths - there are known cases where the driver manufacturers have put the drivers into a safe (read: slow) mode for the certification and then switched to a completely different (fast) execution path in real life - this makes the driver no more stable than an uncertified driver
    • Many driver writers don't want the time and expense of getting MS to certify each release of their driver, so they release uncertified drivers - a large chunk of drivers are uncertified and so will pop up the warning upon installation
    • Windows gives the users so many pop up messages anyway (made worse by the previous point) that the users just ok the message without reading it - an unread warning is worse than no warning at all since the user is no better off and you just annoyed them by making them click yet another box.

    I think a big part of the problem is that it really isn't worth the driver writer's development costs to make the drivers stable. There is often a rapid turnover of hardware so they need to keep revising the driver and so long as it's stable enough that the average user doesn't realise it's that driver that's to blame before the product is end-of-lifed then what benefit is it to the manufacturer to spend the extra cash to make the driver stable?

    However a driver that can be safely restarted is better than a driver that locks up everything that touches it (ever had an unkillable process stuck in the D state? That's probably due to a driver getting stuck).

    The D state _is_ a bug, and in many cases an example of lazy coding. It's the "oops, something went wrong, but we don't want to complicate our code by catching the error and cleaning up, so let's put the machine into an unrecoverable state" approach.

    For example, if you pull a USB mass storage device while it's mounted (a very silly thing to do, but it really shouldn't break the machine) then all the processes that try to access it will probably drop into the D state. There is no good reason for this - the filesystem driver has asked the USB block device driver to read or write some data, and the USB block device driver _knows_ full well that the device has gone away, so it should return a failure which the filesystem driver can catch and (after cleaning up locks, etc.) return to userland as a failed operation. Unfortunately, rather than catching this error gracefully, either the block driver or the filesystem driver just gives up and goes to sleep waiting for an i/o operation that will never complete.

    In this case, the D state is no better than a user's application bombing out on an ASSERT() failure - something went wrong, we can't be bothered to even save the user's work to a recovery file, let's bomb out losing the lot - if that's not a bug I don't know what is. (Yes, I'm aware that data integrity can't be guaranteed in many cases, but you should at least dump out the (potentially corrupt) data to a recovery file).
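The error path the poster is asking for (driver fails fast, filesystem cleans up and surfaces the error to userland, nothing hangs) can be sketched as follows. The class and exception names are illustrative, not real kernel APIs:

```python
# Sketch: when the device is gone, the block driver returns a failure
# that the filesystem layer propagates to the caller, instead of
# sleeping forever on an I/O that will never complete (the D state).
class DeviceGone(IOError):
    pass

class USBBlockDevice:
    def __init__(self):
        self.present = True

    def read_block(self, n):
        if not self.present:
            # The driver *knows* the device was unplugged: fail fast.
            raise DeviceGone(f"device removed, cannot read block {n}")
        return b"\x00" * 512

class Filesystem:
    def __init__(self, dev):
        self.dev = dev

    def read_file_block(self, n):
        try:
            return self.dev.read_block(n)
        except DeviceGone as exc:
            # Clean up locks/state here, then surface EIO to userland.
            return ("EIO", str(exc))

dev = USBBlockDevice()
fs = Filesystem(dev)
assert fs.read_file_block(0) == b"\x00" * 512
dev.present = False                      # simulate yanking the device
print(fs.read_file_block(1))             # a failed operation, not a hang
```

The alternative the comment complains about would be `read_block` blocking forever, which no amount of cleanup code above it can fix.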

    At the end of the day it would be better if people didn't make mistakes but since they do it is wise to take steps to mitigate the damage.

    I think there is some truth in the "less risk increases laziness" idea, but I do agree that mitigating the damage is more important than scaring coders away from laziness.
  • by galvanash ( 631838 ) on Monday May 08, 2006 @03:22PM (#15287546)

    Do you actually want people to take you seriously when you post utter shit like this?

    Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.

    That is a veiled lie. Mach performed very poorly mostly because of message _validation_, not message passing (although it was pretty slow at that too). I.e., it spent a lot of cycles making sure messages were correct. L3/L4 and K42 simply don't do any validation; they leave it up to the user code. In other words, once you put back in userland the validation that Mach had in kernel space, things are a bit more even. And for the love of god, NT is NOT a microkernel. It never was a microkernel. And stop using the term "hybrid"; all "hybrid" means is that the marketing dept. wanted people to think it was a microkernel...

    Now I will throw a few "facts" at you. It is possible, with a lot of clever trickery, to simulate message passing using zero-copy shared memory (this is what L3/L4/K42/QNX and any other microkernel wanting to do message passing quickly does). And if done correctly it CAN perform in the same league as monolithic code for many things where the paradigm is a good fit. But there are ALWAYS situations where it is going to be desirable for separate parts of an OS to directly touch the same memory in a cooperative manner, and when this is the case a microkernel just gets in your damn way...
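The zero-copy idea can be illustrated in miniature: the "message" that crosses the boundary is just a small descriptor, while the payload stays put in memory both sides can see. This is a single-process Python analogy (real L4/QNX fast paths use mapped pages and careful synchronization); `send` and `receive` are made-up names:

```python
# Sketch of message passing over shared memory: the sender writes into
# a region both sides can see and hands over only an (offset, length)
# descriptor, so the payload bytes are never copied.
shared = bytearray(4096)           # stands in for a shared memory region
view = memoryview(shared)          # zero-copy window onto the region

def send(offset, payload):
    view[offset:offset + len(payload)] = payload
    return (offset, len(payload))  # the "message" is just a descriptor

def receive(descriptor):
    offset, length = descriptor
    return view[offset:offset + length]   # reads the same memory, no copy

desc = send(0, b"hello")
assert bytes(receive(desc)) == b"hello"

# Proof there was no copy: mutating the region changes what the
# receiver sees through its existing descriptor.
shared[0:1] = b"H"
assert bytes(receive(desc)) == b"Hello"
```

The last two lines also hint at the downside the poster raises: once both sides touch the same memory, they must cooperate, and the cheap isolation story gets more complicated.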

    Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.

    Ok... Two things. OpenBSD is pretty much the slowest of all BSD derivatives (which is fine; those guys are more concerned with other aspects of the system, and its users are as well), so using it in this comparison shows an obvious bias on your part... Secondly, and please listen very closely because this bullshit needs to stop already: !!OSX IS NOT A MICROKERNEL!! It is a monolithic kernel. Yes, it is based on Mach, just like mkLinux was (which also was not a microkernel). Let's get something straight here: being based on Mach doesn't make your kernel a microkernel, it just makes it slow. If you compile away the message passing and implement your drivers in kernel space, then you DO NOT have a microkernel anymore.

    So what you actually said in your post could be re-written like this:

    Fact: OSX is sooooo slow that the only thing it is faster than is OpenBSD. And you can't even blame its slowness on it being a microkernel. How pathetic... Wow, that says it all in my book :)

    And no, you dont have to believe me... Please read this [] before bothering to reply.

  • Re:Feh. (Score:1, Informative)

    by Anonymous Coward on Monday May 08, 2006 @04:45PM (#15288177)
    Nope, you'll still need the MMU for 2 reasons.

    The obvious one is to provide paging.

    The other one is so that processes run in their own address spaces.

    One of the other posters mentions the Microsoft research kernel written in C# (plus constraints). Apparently Tanenbaum does that as well. While this is an interesting system, it's also a SASOS (single address space OS), which (in a real environment) has the major drawback that any process can DoS the whole system simply by (over)allocating memory. To prevent that, you need processes to run in their own address spaces. If you do that, you need some mechanism to transfer memory pages between processes (aka messages), and you have to make a context switch when you cross address space boundaries. And if you do that, it doesn't matter that some of your system is in ring 0 and the rest in ring 3; you'll incur the cost anyway.

    That's not to say that there's no merit in writing (most of) the OS in a higher level language and putting the JIT compiler in there as well, thus allowing code to be safely(-ish) uploaded into the kernel (with the associated speed benefits), but a) that's hardly a microkernel approach, b) you'll still run the bulk of the code in user land, and c) it's most definitely not new. Look up exokernel.
  • by renoX ( 11677 ) on Monday May 08, 2006 @05:58PM (#15288729)
    And BeOS is called a "Modular Hybrid Kernel" by Wikipedia: hybrid implies that you don't get the full protection.
  • by mcoon ( 648300 ) on Monday May 08, 2006 @09:57PM (#15289805) Homepage
    For an even better example of what a microkernel can do, have a look at the L4 kernel: [], where the kernel, written in assembler + C++, has some decent performance stats run by IBM. The short version is that a Linux personality on an L4 microkernel costs a 3.8% performance penalty. When you consider that the personality being tested has in no particular way been tuned to fit the L4 kernel, a 3.8 percent penalty for running Linux on another kernel doesn't seem that bad. In fact, one wonders: if it were tuned, how much faster would it be?

"We don't care. We don't have to. We're the Phone Company."