Microkernel: The Comeback? (722 comments)

bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
  • How hard... (Score:3, Interesting)

    by JonJ ( 907502 ) <jon.jahren@gmail.com> on Monday May 08, 2006 @10:12AM (#15284915)
    Would it be to convert Linux to a microkernel? And Apple is using Mach and BSD as the XNU kernel; are they planning to make it a true microkernel? AFAIK it does some things in kernel space that make it not a microkernel.
  • NT4 (Score:3, Interesting)

    by truthsearch ( 249536 ) on Monday May 08, 2006 @10:14AM (#15284925) Homepage Journal
    NT4 had a microkernel whose sole purpose was object brokering. What I think we're missing today is a truly compartmentalized microkernel. The NT4 kernel handled all messages between kernel objects, but all it did was pass them along. One object running in kernel space could still bring down the rest. I assume that's still the basis of the XP kernel today.

    I haven't looked at GNU/Hurd, but I have yet to see a "proper" non-academic microkernel that lets one part fail while the rest keeps running.
  • Trusted Computing (Score:3, Interesting)

    by SavedLinuXgeeK ( 769306 ) on Monday May 08, 2006 @10:14AM (#15284931) Homepage
    Isn't this similar, in idea, to the Trusted Computing movement? It doesn't compartmentalize, but it does ensure integrity at all levels, so if one area is compromised, nothing else is given the ability to run. That might be a better move than the idea of compartmentalizing the kernel, as too many parts are interconnected. If my memory handler fails, or if my disk can't read, I have a serious problem that sinks the ship, no matter what you do.
  • The thing is... (Score:5, Interesting)

    by gowen ( 141411 ) <gwowen@gmail.com> on Monday May 08, 2006 @10:15AM (#15284940) Homepage Journal
    Container ships don't have to move cargo from one part of the ship to another, on a regular basis. You load it up, sail off, and then unload at the other end of the journey. If the stuff in the bow had to be transported to the stern every twelve hours, you'd probably find fewer enormous steel bulkheads between them, and more wide doors.
  • Theory Vs. Practice (Score:4, Interesting)

    by mikeisme77 ( 938209 ) on Monday May 08, 2006 @10:18AM (#15284958) Homepage Journal
    This sounds great in theory, but in reality it would be impractical. 2.5 million lines of code handling everything the Linux kernel handles really isn't that bad. Adding compartmentalization into the mix will only make it more complicated and more likely for a hole to spring somewhere in the "hull"--maybe only one compartment will be flooded then, but the hole may be harder to patch. I wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size, not necessarily lower it. And isn't Windows XP or Vista like 30 million lines of code (or more)? That's a LOT more than double the size of the Linux kernel...
  • Re:Feh. (Score:1, Interesting)

    by Anonymous Coward on Monday May 08, 2006 @10:18AM (#15284959)
    YEAH!

    Why doesn't Tanenbaum write his OWN O/S following his examples, THEN we can talk! Minix DOESN'T COUNT! Frankly, Linux has been amazingly stable through most of its life, as have other UNIX variants/versions. I didn't see that with Minix.

    The industry has better and more important things to worry about.

  • Re:Feh. (Score:5, Interesting)

    It holds no more true in practice today than it did when he started.

    WRONG.

    Tanenbaum's research is correct, in that a Microkernel architecture is more secure, easier to maintain, and just all around better. The problem is that early Microkernel architectures killed the concept back when most of the OSes we use today were being developed.

    What was the key problem with these kernels? Performance. Mach (one of the more popular research OSes) incurred a huge cost in message passing, as every message was checked for validity as it was sent. This wouldn't have been *so* bad, but it ended up worse because of a variety of flaws in the Mach implementation. There was some attempt to address this in Mach 3, but the project eventually tapered off. Oddly, NeXT (and later Apple) picked up the Mach kernel and used it in their products. Performance was fixed partly through a series of hacks, and partly through raw horsepower.

    Beyond that, you might want to read the rest of TFA. Tanenbaum goes over several other concepts that are hot at the moment, including virtual machines, virtualization, and driver protection.
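    A rough illustration of the per-message cost described above: in a message-passing design, every message that crosses a protection boundary gets a validation pass before it is dispatched. This is toy C, not Mach's actual API; the message layout and the specific checks are invented for illustration.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define MAX_PAYLOAD 256

        /* Toy IPC message: every field is checked before dispatch. */
        struct msg {
            uint32_t sender;               /* id of the sending task */
            uint32_t op;                   /* requested operation */
            uint32_t len;                  /* bytes used in payload */
            uint8_t  payload[MAX_PAYLOAD];
        };

        /* The "tax" paid on every single message: bounds and sanity checks. */
        static int msg_validate(const struct msg *m, uint32_t max_op)
        {
            if (m->len > MAX_PAYLOAD) return -1;  /* payload overruns buffer */
            if (m->op >= max_op)      return -1;  /* unknown operation */
            if (m->sender == 0)       return -1;  /* reserved sender id */
            return 0;
        }

        static void dispatch(const struct msg *m)
        {
            printf("task %u -> op %u (%u bytes)\n",
                   (unsigned)m->sender, (unsigned)m->op, (unsigned)m->len);
        }

        int main(void)
        {
            struct msg m = { .sender = 42, .op = 1, .len = 5 };
            memcpy(m.payload, "hello", 5);

            if (msg_validate(&m, 16) == 0)  /* runs on *every* message sent */
                dispatch(&m);
            return 0;
        }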
  • Re:NT4 (Score:4, Interesting)

    by segedunum ( 883035 ) on Monday May 08, 2006 @10:19AM (#15284964)
    NT4 had a microkernel whose sole purpose was object brokering.

    Well, I wouldn't call NT's kernel a microkernel in any way, for the very reason that it was not truly compartmentalised and the whole house could still very much be brought down - quadruply so in the case of NT 4. You could call it a hybrid, but that's like saying someone is a little bit pregnant. You either are or you're not.
  • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Monday May 08, 2006 @10:20AM (#15284969) Journal
    Most drivers don't need to run in kernel mode (read: any USB device driver)... or at least they don't need to run in response to system calls.
    The hardware-manipulating parts of the kernel should stick to providing higher-level APIs for most bus and system protocols and provide async I/O for kernel and user space. If most kernel-mode drivers that power your typical /dev/dsp and /dev/input/mouse and such could be rewritten as kernel threads that dispatch requests to and from other kernel threads servicing the physical hardware in the system, you could provide fault isolation and state reconstruction in the face of crashes without incurring much overhead. Plus, user processes could also drive these interfaces directly, so user-space programs could talk to hardware without needing to load dangerous, untrusted kernel modules (esp. from closed-source hardware vendors).

    Or am I just crazy?

    Yeah, but microkernels seem like taking things to an extreme when the same can be accomplished by other means.
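    For what it's worth, a user process can already drive some of these interfaces directly on Linux. A rough sketch that reads raw input events from an evdev node entirely in user space; /dev/input/event0 is a placeholder for whatever node your device exposes, and you need read permission on it.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <linux/input.h>

        /* Read raw events from an evdev node from user space; no custom
           kernel module involved. /dev/input/event0 is a placeholder. */
        int main(void)
        {
            int fd = open("/dev/input/event0", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct input_event ev;
            while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev)) {
                /* type/code/value identify the event: key press, motion, ... */
                printf("type=%u code=%u value=%d\n",
                       (unsigned)ev.type, (unsigned)ev.code, (int)ev.value);
            }
            close(fd);
            return 0;
        }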
  • by Hacksaw ( 3678 ) on Monday May 08, 2006 @10:21AM (#15284973) Homepage Journal
    I won't claim that Professor T is wrong, but the proof is in the pudding. If he could produce a kernel set up with all the bells and whistles of Linux, which is the same speed and demonstrably more secure, I'd use it.

    But most design is about tradeoffs, and it seems like the tradeoff with microkernels is compartmentalization vs. speed. Frankly, most people would rather have speed, unless the security situation is just untenable. So far it's been acceptable to a lot of people using Linux.

    Notably, if security is of higher import than speed, people don't reach for micro-kernels, they reach for things like OpenBSD, itself a monolithic kernel.

  • Re:Or... (Score:3, Interesting)

    by Zarhan ( 415465 ) on Monday May 08, 2006 @10:21AM (#15284974)
    Considering how much stuff has recently been moved to userland in Linux (udev, hotplug, hal, FUSE (filesystems), etc) I think we're heading in that direction. SELinux is also something that could be considered "compartmentalized".
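    FUSE is a concrete example of that migration: the VFS stays in the kernel, but the filesystem logic runs as an ordinary process that can crash and be restarted without taking the system down. A minimal sketch in the FUSE 2.x style, exposing a single read-only file; names are arbitrary, and it should build with something like gcc hello_fs.c $(pkg-config fuse --cflags --libs).

        #define FUSE_USE_VERSION 26
        #include <fuse.h>
        #include <errno.h>
        #include <string.h>
        #include <sys/stat.h>

        static const char *hello_path = "/hello";
        static const char *hello_str  = "hello from user space\n";

        static int hello_getattr(const char *path, struct stat *st)
        {
            memset(st, 0, sizeof(*st));
            if (strcmp(path, "/") == 0) {
                st->st_mode = S_IFDIR | 0755;
                st->st_nlink = 2;
            } else if (strcmp(path, hello_path) == 0) {
                st->st_mode = S_IFREG | 0444;
                st->st_nlink = 1;
                st->st_size = (off_t)strlen(hello_str);
            } else {
                return -ENOENT;
            }
            return 0;
        }

        static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                                 off_t off, struct fuse_file_info *fi)
        {
            (void)off; (void)fi;
            if (strcmp(path, "/") != 0)
                return -ENOENT;
            filler(buf, ".", NULL, 0);
            filler(buf, "..", NULL, 0);
            filler(buf, hello_path + 1, NULL, 0);  /* "hello" without the slash */
            return 0;
        }

        static int hello_read(const char *path, char *buf, size_t size, off_t off,
                              struct fuse_file_info *fi)
        {
            (void)fi;
            if (strcmp(path, hello_path) != 0)
                return -ENOENT;
            size_t len = strlen(hello_str);
            if (off < 0 || (size_t)off >= len)
                return 0;
            if (size > len - (size_t)off)
                size = len - (size_t)off;
            memcpy(buf, hello_str + off, size);
            return (int)size;
        }

        static struct fuse_operations hello_ops = {
            .getattr = hello_getattr,
            .readdir = hello_readdir,
            .read    = hello_read,
        };

        int main(int argc, char *argv[])
        {
            /* Mount with: ./hello_fs /some/mountpoint ; the kernel VFS then
               forwards calls for that mountpoint to this process. */
            return fuse_main(argc, argv, &hello_ops, NULL);
        }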
  • Minix3 (Score:2, Interesting)

    by wysiwia ( 932559 ) on Monday May 08, 2006 @10:24AM (#15285003) Homepage
    A paper might show the concept, but only a real working sample will provide answers. Just wait until Minix3 (http://www.minix3.org/ [minix3.org]) is finished, and then let's see whether it's slower or not and whether it's safer or not.

    O. Wyss
  • by Bush Pig ( 175019 ) on Monday May 08, 2006 @10:30AM (#15285047)
    The Titanic wasn't actually _properly_ compartmentalised, as each compartment leaked at the top (unlike a number of properly compartmentalised ships built around the same time, which would have survived the iceberg).

  • by mikeisme77 ( 938209 ) on Monday May 08, 2006 @10:32AM (#15285059) Homepage Journal
    But then you'd have issues with performance and such. The reason the current items are in the kernel to begin with has to do with the need for them to easily communicate with one another and their need to have system-override access to all resources. It does make his claim more valid, but it's still not a good idea in practice (unless your primary focus for an OS is security rather than performance). I also still think that this method would make the various "kernel" components harder to manage/patch--I put kernel in quotes because the parts that would be moved to userland would still be part of the kernel to me (even if not physically).
  • by youknowmewell ( 754551 ) on Monday May 08, 2006 @10:39AM (#15285103)
    The arguments for using monolithic kernels vs. microkernels are the same sort of arguments as for using C/C++ over languages like Lisp, Java, Python, Ruby, etc. I think maybe we're at a point where microkernels are now practical, just as those high-level languages are. I'm no kernel designer, but it seems reasonable that a monolithic kernel could be refactored into a microkernel.
  • by Junks Jerzey ( 54586 ) on Monday May 08, 2006 @11:06AM (#15285303)
    Lots of big ideas in programming get pooh-poohed for being too resource intensive (a.k.a. big and slow), but eventually we look back and think how silly we were to be worried about such details, and that of course we should go with the cleaner, more reliable option. Some examples:

    zbuffering - go back to any book from the 1970s, and it sounds like a pipe dream (more memory needed for a high-res zbuffer than in entire computer systems of the time)

    Lisp, Prolog, and other high-level languages on home computers - these are fast and safe options, but were comically bloated on typical hardware of 20 years ago.

    Operating systems not written in assembly language - lots of people never expected to see the day.

  • Re:Metaphors eh? (Score:4, Interesting)

    Methinks a better analogy is: Snap your timing chain on a high performance engine and watch the entire machine tear itself into a piece of junk. Snap a belt or two on a more pedestrian engine and watch it stop until the belt is replaced.
  • Re:Feh. (Score:4, Interesting)

    Look a few posts up at the fellow who mentioned the L4 kernel. While the L4 was really too little, too late (all the OSes we use today were written by that time), it managed to prove that Microkernels *can* be speed demons. What they require, however, is a radically different architecture. If you simply attempt to shoehorn a microkernel into existing Unix systems - precisely what Mach did - you're going to run into trouble.

    On the other hand, if you architect the system so that it is impossible to pass a bad message, you may find that performance can actually be *increased*. My own preference has always been an OS based on a VM like Java where it is literally impossible to write code that can cross memory barriers. The result would be that the hardware protection of an MMU would be unnecessary, as would the firewall between the kernel and usermode. Performance would increase substantially due to a lack of kernel mode (i.e. Ring 0) interrupts or jumps.
  • Re:Feh. (Score:5, Interesting)

    by LWATCDR ( 28044 ) on Monday May 08, 2006 @11:25AM (#15285441) Homepage Journal
    "I see people hitting YEARS of up-time with Linux/BSD/Solaris and hell, even win2k machines. "
    Are they not upgrading the kernel? I know that Win2K has had some critical updates in the last few years that required a reboot.
    Microkernels do have the potential to be easier to secure than monolithic kernels.
    In theory a secure system is a secure system. It is possible to make a monolithic kernel as secure as a microkernel; it will just be harder to do.
    Just like everything else, it is a trade-off.
    Monolithic:
    Easier to make a high-performance kernel.
    Harder to secure and to test for security.

    Microkernel:
    Easier to make secure and to test for security.
    Harder to make high-performance.

    There are secure monolithic systems; OpenBSD, Linux, Solaris, and z/OS jump to mind.
    There are fast microkernels. QNX is a very nice system.

    I really like the idea of a microkernel OS. I will try out the first stable, useful OSS Microkernel OS that I find.

  • by Inoshiro ( 71693 ) on Monday May 08, 2006 @11:28AM (#15285467) Homepage
    Slashdot may be news for nerds, but it has a serious drawback when it comes to things such as this. The drawback is that what is accepted as "fact" by most people is never questioned.

    "Fact": Micorkernel systems perform poorly due to message passing overhead.

    Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.

    "Fact": Micorkernel systems perform poorly in general.

    Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.

    Within the size of L1 cache, your speed is determined by how quickly your cache will fill. Within L2, it's how efficient your algorithm is (do you invalidate too many cache lines?) -- smaller sections of kernel code are a win here, as much as good algorithms are a win here. Outside of L2 (anything over 512k on my Athlon64), throughput of common operations is limited by how fast the RAM is -- not IPC throughput. Most microkernel overhead is a constant value -- if your Linux kernel is O(n) or O(1), then it's possible to tune the microkernel to be O(n+k) or O(1+k) for the equivalent operations. The faster your hardware, the smaller this value of k becomes in absolute terms, since it's a constant. L4Linux was 4-5% slower than "pure" Linux in 1997 (see the L4Linux site for the PDF of the paper [l4linux.org]).

    But none of this is something the average slashdotter will do. No, I see lots of comments such as "microkernels suck!" already at +4 and +5. Just because Mach set back microkernel research by about 20 years doesn't mean that all microkernels suck.
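    The constant-overhead claim is easy to check for yourself. A rough, Linux-specific sketch that times a trivial kernel round-trip (a raw getpid syscall stands in for an IPC hop) and reports the average cost per crossing; the point is that this is a fixed cost, and it shrinks in absolute terms as hardware gets faster.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        /* Time N trivial user->kernel->user round-trips and report the
           average cost per call: the constant "k" being argued about. */
        int main(void)
        {
            const long N = 1000000;
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                syscall(SYS_getpid);   /* bypasses any libc caching */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                      + (t1.tv_nsec - t0.tv_nsec);
            printf("%.1f ns per kernel crossing (avg over %ld calls)\n",
                   ns / N, N);
            return 0;
        }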
  • Re:How hard... (Score:5, Interesting)

    by iabervon ( 1971 ) on Monday May 08, 2006 @11:33AM (#15285498) Homepage Journal
    This is actually sort of happening. Recent work has increased the number of features that can be provided in userspace. Of course, this is done very differently from how a traditional microkernel does it; the kernel is providing virtual features, which can be implemented in user space. For example, the kernel has the "virtual file system", which handles all of the system calls, to the point where a call to the actual filesystem is needed (if the cache, which the VFS handles, is not sufficient). The actual calls may be made to userspace, which is a bit slow, but it doesn't matter, because it's going to wait for disk access anyway.

    The current state is that Linux is essentially coming around to a microkernel view, but not the classic microkernel approach. And the new idea is not one that could easily grow out of a classic microkernel, but one that grows naturally out of having a macrokernel but wanting to push bug-prone code out of it.
  • driver banishment (Score:4, Interesting)

    by bperkins ( 12056 ) on Monday May 08, 2006 @11:35AM (#15285517) Homepage Journal
    What I'd like to see is a compromise.

    There are quite a few drivers out there to support weird hardware (like webcams and such) that are just not fully stable. It would be nice to be able to choose whether a driver runs in kernel mode, at full speed, or in a sort of DMZ with reduced performance. This could also make it easier to reverse engineer non-GPL kernel drivers, as well as facilitate driver development.

  • Re:Oh Dear (Score:3, Interesting)

    by NutscrapeSucks ( 446616 ) on Monday May 08, 2006 @11:45AM (#15285589)
    I've seen OS X kernel panic after plugging in a funky USB mouse, and when a SMB share suddenly disappears. These are both cases which a real microkernel could in theory recover from. So I don't believe there's any particular "reliability" in the OS X design.
  • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Monday May 08, 2006 @11:54AM (#15285676) Journal
    This is true.
    It doesn't necessarily make it less crash prone. But it does make it instrumentable if it proves to be unstable (you could easily trace, debug, intercept, or otherwise validate the requests the blob made if so needed).

    Furthermore, the kernel-mode portion would merely be relaying commands to trusted memory-mapped regions and IO space requested by the process initially (limited by configuration files, perhaps). Most kernel crashes are caused by errors (pointer mistakes, buffer overflows, race conditions, etc.) in complex driver code that "traps" the system in kernel space. The user-space portion would likely instead SIG11 and die... if it left the hardware in a weird state, it could be fixed by simply restarting the driver program, which would, at its outset, send RESET-type commands to the device, putting it in a known state.

    The largest problem I see is that it isn't possible to easily recast a userspace driver program into a device node without a mechanism like FUSE. It only works if the hardware target in question is nearly always accessed behind a userspace library (OpenGL, libalsa/libjack/OpenAL, libusb).
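    A rough sketch of that restart-and-reset idea using today's libusb-1.0 API: when the user-space driver comes back up, it reopens the device and issues a reset so a crashed predecessor can't leave the hardware in an undefined state. The vendor/product IDs are placeholders, and it should build with something like gcc reset.c $(pkg-config --cflags --libs libusb-1.0).

        #include <stdio.h>
        #include <libusb-1.0/libusb.h>

        /* On startup, reopen the device and reset it to a known state.
           0x1234/0x5678 are placeholder vendor/product IDs. */
        int main(void)
        {
            libusb_context *ctx = NULL;
            if (libusb_init(&ctx) != 0) {
                fprintf(stderr, "libusb_init failed\n");
                return 1;
            }

            libusb_device_handle *h =
                libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
            if (!h) {
                fprintf(stderr, "device not found\n");
                libusb_exit(ctx);
                return 1;
            }

            if (libusb_reset_device(h) == 0)
                printf("device reset; driver can proceed from a known state\n");

            /* ...the normal user-space driver loop would go here... */

            libusb_close(h);
            libusb_exit(ctx);
            return 0;
        }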
  • Re:Virtualization (Score:3, Interesting)

    by Znork ( 31774 ) on Monday May 08, 2006 @12:09PM (#15285800)
    "It seems reasonable to think that a tiny microkernel built for virtualization"

    The article (bad me, I don't read linked articles, I know) actually mentions hypervisors as sort-of microkernels, so yes. Xen and VMware ESX fit rather well into such a mapping, as they both have their own 'kernel', and the controller domain is merely another virtual machine.

    "If we then get minimal, very application specific kernels to run on top of it for specific needs"

    You can accomplish a similar setup by just fudging your definitions a bit again. For example, a bare stripped-down Linux kernel capable of running only on top of the Xen hypervisor isn't all that huge. Combine that with simple single-purpose OS installs doing, for example, fileserving, security services, etc., and communicate over the network for everything, and you more or less have a serverized 'microkernel' architecture. The server is the network is the server is the OS. So to speak. Postmodernistic computing.

    "Granting, of course, that OS vendors go along with the idea,"

    Yes, well, they can go along with it or we can run free software. Yet another indication of the advantages of adaptability.
  • by tenchiken ( 22661 ) on Monday May 08, 2006 @12:34PM (#15286024)
    It also should be noted that, for some insane reason, the Titanic crew didn't counterflood. If they had, they might have been able to significantly slow the sinking, and the frame of the ship almost certainly would have remained intact on the water, rather than the stern rising out of the water and the entire ship snapping.
  • Re:Feh. (Score:4, Interesting)

    by homer_ca ( 144738 ) on Monday May 08, 2006 @12:56PM (#15286212)
    You just described the fourth idea in TFA:
    The most radical approach comes from an unexpected source--Microsoft Research. In effect, the Microsoft approach discards the concept of an operating system as a single program running in kernel mode plus some collection of user processes running in user mode, and replaces it with a system written in new type-safe languages that do not have all the pointer and other problems associated with C and C++.
  • Re:Feh. (Score:3, Interesting)

    by moro_666 ( 414422 ) <kulminaator@gmai ... Nom minus author> on Monday May 08, 2006 @12:56PM (#15286215) Homepage
    i have to agree on the rebooting thing.

    a microkernel can indeed be secure enough that the system doesn't have to reboot for years; with a monolithic kernel and over a million lines of code, this is just a wet dream.

    in fact, a microkernel, if written well enough, can take a lot of updates, including updates to disk drivers or graphics drivers and such, without restarting the whole system; so far the monolithic kernels have fallen flat on this feature.

    i would give away a few percentage points of performance if the system would become more stable and need fewer reboots. even bsd and linux boxes need to reboot every once in a while when some kind of new security issue is brought up or a critical fix has been issued; not all drivers can be reloaded even if they are only modules (probably some hooks go way too deep into the kernel).

    however, until we have a nice usable microkernel system around, i'll stick with linux and freebsd; we can't just run our servers and desktops on hurd or minix out of the blue :)
  • by tenchiken ( 22661 ) on Monday May 08, 2006 @02:30PM (#15287105)
    You miss the point. The problem was that the water was overtopping the vertical dividers that were designed to keep areas of the ship isolated in case of an accident. If it were an issue of total failure along the entire ship, then yes, counterflooding would be bad. The idea is to delay the point at which the dividers are overtopped, keeping the ship sinking slowly instead of rapidly.

    Engineering estimates are that it might have added 3-5 hours to the Titanic's lifespan, enough to save lives, even if not enough to keep the ship afloat until help arrived.
  • by Inoshiro ( 71693 ) on Monday May 08, 2006 @08:07PM (#15289346) Homepage
    "Fact: OSX is sooooo slow that the only thing it is faster than is OpenBSD. And you cant even blame its slowness on it being a microkernel. How pathetic... Wow, that says it all in my book :)"

    Actually, OS X was within a few percentage points of Linux on all hardware tested, and it outperformed Linux on memory throughput on PowerPC and in some other tests. It's also faster than NT.

    "But there are ALWAYS situations where it is going to be desirable for seperate parts of an OS to directly touch the same memory in a cooperative manner, and when this is the case a microkernel just gets in your damn way..."

    I fail to see how L4's shared-memory is somehow magically different from Linux's shared-memory. Once you let more than 1 process have access to a page via a TLB mapping, it's all the same.
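    On the shared-memory point, here's a minimal POSIX sketch of what "more than one process has access to a page" looks like from user space: once the mapping exists, loads and stores are ordinary memory accesses no matter what kind of kernel set the mapping up. The object name "/demo_shm" is arbitrary; a second process would run the same code minus O_CREAT to see the same bytes (link with -lrt on older glibc).

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Create a named shared-memory object, map it, and write into it.
           Any other process that shm_open()s "/demo_shm" and mmap()s it
           sees the same page through its own TLB mapping. */
        int main(void)
        {
            const size_t len = 4096;
            int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
            if (fd < 0) { perror("shm_open"); return 1; }
            if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            strcpy(p, "visible to every process that maps this object");
            printf("wrote: %s\n", p);

            munmap(p, len);
            close(fd);
            /* shm_unlink("/demo_shm") would remove the name when finished. */
            return 0;
        }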

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...