Microkernel: The Comeback?
bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
How hard... (Score:3, Interesting)
NT4 (Score:3, Interesting)
I haven't looked at GNU/Hurd, but I have yet to see a "proper" non-academic microkernel that lets one part fail while the rest keeps running.
Trusted Computing (Score:3, Interesting)
The thing is... (Score:5, Interesting)
Theory Vs. Practice (Score:4, Interesting)
Re:Feh. (Score:1, Interesting)
Why doesn't Tanenbaum write his OWN O/S following his examples, THEN we can talk! Minix DOESN'T COUNT! Frankly, Linux has been amazingly stable through most of its life, as have other UNIX variants/versions. I didn't see that with Minix.
The industry has better and more important things to worry about.
Re:Feh. (Score:5, Interesting)
WRONG.
Tanenbaum's research is correct, in that a microkernel architecture is more secure, easier to maintain, and just all around better. The problem is that early microkernel implementations killed the concept back when most of the OSes we use today were being developed.
What was the key problem with these kernels? Performance. Mach (one of the more popular research OSes) incurred a huge cost in message passing, as every message was checked for validity as it was sent. This wouldn't have been *so* bad, but it ended up worse because of a variety of flaws in the Mach implementation. There was some attempt to address this in Mach 3, but the project eventually tapered off. Oddly, NeXT (and later Apple) picked up the Mach kernel and used it in their products. Performance was fixed partly through a series of hacks, and partly through raw horsepower.
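To make that per-message cost concrete, here is a toy C sketch of the kind of validation a Mach-style kernel performs on every single IPC send. The struct layout, port table, and checks are illustrative only, not Mach's actual ABI:

    #include <stdint.h>

    /* Illustrative message header, loosely modeled on Mach's typed
       messages -- NOT the real mach_msg_header_t. */
    struct msg_header {
        uint32_t size;        /* total message size in bytes */
        uint32_t dest_port;   /* receiver's port "name"      */
        uint32_t reply_port;  /* where the answer should go  */
        uint32_t type;        /* operation being requested   */
    };

    #define MAX_MSG_SIZE 4096
    #define NUM_PORTS    256

    static int port_valid[NUM_PORTS]; /* stand-in for the kernel's port-rights table */

    /* The kernel must run checks like these on EVERY message sent;
       that fixed per-message cost is where Mach lost its performance. */
    int msg_send(const struct msg_header *m)
    {
        if (m->size < sizeof(*m) || m->size > MAX_MSG_SIZE)
            return -1;   /* malformed size */
        if (m->dest_port >= NUM_PORTS || !port_valid[m->dest_port])
            return -1;   /* no send right to that port */
        if (m->reply_port >= NUM_PORTS)
            return -1;   /* bogus reply port */
        /* ...copy message into the destination's queue, wake receiver... */
        return 0;
    }

    int main(void)
    {
        struct msg_header m = { sizeof(m), 42, 7, 1 };
        port_valid[42] = 1;
        return msg_send(&m) == 0 ? 0 : 1;
    }

Faster microkernels like L4 win largely by shrinking this path to a few instructions and a register-based handoff.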
Beyond that, you might want to read the rest of TFA. Tanenbaum goes over several other concepts that are hot at the moment, including virtual machines, virtualization, and driver protection.
Re:NT4 (Score:4, Interesting)
Well, I wouldn't call NT's kernel a microkernel in any way for the very reason that it was not truly compartmentalised and the house could still be brought very much down - quadruply so in the case of NT 4. You could call it a hybrid, but that's like saying someone is a little bit pregnant. You either are or you're not.
A compromise needs to be made. (Score:5, Interesting)
The hardware-manipulating parts of the kernel should stick to providing higher-level APIs for most bus and system protocols, and provide async I/O for kernel and user space. If most kernel-mode drivers that power your typical desktop were built on those APIs from user space, a buggy driver couldn't bring the whole system down with it.
Or am I just crazy?
Yeah, but microkernels seem like taking to an extreme something that can be accomplished by other means.
Proof is in the pudding (Score:5, Interesting)
But most design is about trade-offs, and it seems like the trade-off with microkernels is compartmentalization vs. speed. Frankly, most people would rather have speed, unless the security situation is just untenable. So far it's been acceptable to a lot of people using Linux.
Notably, if security is of higher import than speed, people don't reach for micro-kernels, they reach for things like OpenBSD, itself a monolithic kernel.
Re:Or... (Score:3, Interesting)
Minix3 (Score:2, Interesting)
O. Wyss
Re:multicompartment isolation (Score:3, Interesting)
Re:Theory Vs. Practice (Score:4, Interesting)
Interesting correlation (Score:3, Interesting)
Less a comeback than hardware has caught up (Score:4, Interesting)
Z-buffering - go back to any book from the 1970s, and it sounds like a pipe dream (more memory needed for a high-res Z-buffer than in entire computer systems of the time)
Lisp, Prolog, and other high-level languages on home computers - these are fast and safe options, but were comically bloated on typical hardware of 20 years ago.
Operating systems not written in assembly language - lots of people never expected to see the day.
Re:Metaphors eh? (Score:4, Interesting)
Re:Feh. (Score:4, Interesting)
On the other hand, if you architect the system so that it is impossible to pass a bad message, you may find that performance can actually be *increased*. My own preference has always been an OS based on a VM like Java where it is literally impossible to write code that can cross memory barriers. The result would be that the hardware protection of an MMU would be unnecessary, as would the firewall between the kernel and usermode. Performance would increase substantially due to a lack of kernel mode (i.e. Ring 0) interrupts or jumps.
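For a feel of what "impossible to cross memory barriers" means in practice, here is a toy C sketch of the software fault isolation a safe-language VM effectively performs: every load and store is bounds-checked in software instead of relying on the MMU. Entirely illustrative -- a real VM does this in its verifier/JIT, not with helper functions:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SANDBOX_SIZE 4096
    static uint8_t sandbox[SANDBOX_SIZE]; /* the only memory this "process" may touch */

    /* Software bounds checks in place of MMU protection: a safe-language
       VM inserts (or statically proves away) checks like these. */
    static uint8_t load8(size_t off)
    {
        if (off >= SANDBOX_SIZE) { fprintf(stderr, "trap: bad load\n"); exit(1); }
        return sandbox[off];
    }

    static void store8(size_t off, uint8_t v)
    {
        if (off >= SANDBOX_SIZE) { fprintf(stderr, "trap: bad store\n"); exit(1); }
        sandbox[off] = v;
    }

    int main(void)
    {
        store8(10, 123);
        printf("%d\n", load8(10)); /* fine */
        store8(99999, 1);          /* "crosses the barrier": trapped in software */
        return 0;
    }

When the language guarantees every access passes checks like these, the MMU and the user/kernel ring boundary become redundant for protection, which is the performance argument above.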
Re:Feh. (Score:5, Interesting)
Are they not upgrading the kernel? I know that Win2K has had some critical updates in the last few years that required a reboot.
Microkernels do have the potential to be easier to secure than monolithic kernels.
In theory a secure system is a secure system. It is possible to make a monolithic kernel as secure as a microkernel; it is just harder to do.
Just like everything else it is a trade off.
Monolithic:
Easier to make a high-performance kernel.
Harder to secure and to test security.
Microkernel:
Easier to make secure and to test security.
Harder to make high-performance.
There are secure monolithic systems; OpenBSD, Linux, Solaris, and z/OS jump to my mind.
There are fast microkernels. QNX is a very nice system.
I really like the idea of a microkernel OS. I will try out the first stable, useful OSS Microkernel OS that I find.
Cue the peanut gallery. (Score:5, Interesting)
"Fact": Micorkernel systems perform poorly due to message passing overhead.
Fact: Mach performs poorly due to message passing overhead. L3, L4, hybridized kernels (NT executive, XNU), K42, etc, do not.
"Fact": Micorkernel systems perform poorly in general.
Fact: OpenBSD (monolithic kernel) performs worse than MacOS X (microkernel) on comparable hardware! Go download lmbench and do some testing of the VFS layer.
Within the size of L1 cache, your speed is determined by how quickly your cache will fill. Within L2, it's how efficient your algorithm is (do you invalidate too many cache lines?) -- smaller sections of kernel code are a win here, as much as good algorithms are a win here. Outside of L2 (anything over 512k on my Athlon64), throughput of common operations is limited by how fast the RAM is -- not IPC throughput. Most microkernel overhead is a constant value -- if your Linux kernel is O(n) or O(1), then it's possible to tune the microkernel to be O(n+k) or O(1+k) for the equivalent operations. The faster your hardware, the less this constant k costs you. L4Linux was 4-5% slower than "pure" Linux in 1997 (see the L4Linux site for the PDF of the paper [l4linux.org]).
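The constant-k claim is easy to probe yourself with an lmbench-style microbenchmark. A minimal C sketch that times a trivial kernel round trip (on a microkernel the analogous number would include the extra IPC hop or two that make up k):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define ITERS 1000000

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++)
            syscall(SYS_getpid);  /* force a real kernel entry each time */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per kernel round trip\n", ns / ITERS);
        return 0;
    }

On a monolithic kernel this measures the bare trap cost; the microkernel pays that cost once or twice more per operation, which is exactly why faster hardware shrinks the gap in wall-clock terms.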
But none of this is something the average slashdotter will do. No, I see lots of comments such as "microkernels suck!" already at +4 and +5. Just because Mach set back microkernel research by about 20 years doesn't mean that all microkernels suck.
Re:How hard... (Score:5, Interesting)
The current state is that Linux is essentially coming around to a microkernel view, but not the classic microkernel approach. And the new idea is not one that could easily grow out of a classic microkernel, but one that grows naturally out of having a macrokernel but wanting to push bug-prone code out of it.
driver banishment (Score:4, Interesting)
There are quite a few drivers out there to support weird hardware (like webcams and such) that are just not fully stable. It would be nice to be able to choose whether a driver runs in kernel mode, at full speed, or in a sort of DMZ with reduced performance. This could also make it easier to reverse engineer non-GPL kernel drivers, as well as facilitate driver development.
Re:Oh Dear (Score:3, Interesting)
Re:A compromise needs to be made. (Score:4, Interesting)
It doesn't necessarily make it less crash prone. But it does make it instrumentable if it proves to be unstable (you could easily trace, debug, intercept, or otherwise validate the requests the blob made if so needed).
Furthermore, the kernel-mode portion would merely be relaying commands to trusted memory-mapped regions and IO space requested by the process initially (limited by configuration files, perhaps). Most kernel crashes are caused by errors (pointer mistakes, buffer overflows, race conditions, etc.) in complex driver code which "trap" the system in kernel space. The user-space portion would likely instead SIG11 and die... if it left the hardware in a weird state, it could be fixed by simply restarting the driver program, which would, at its outset, send RESET-type commands to the device, putting it in a known state.
The largest problem I see is that it isn't possible to easily recast a userspace driver program into a device node without a mechanism like FUSE. It only works if the hardware target in question is nearly always accessed behind a userspace library (OpenGL, libalsa/libjack/OpenAL, libusb).
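A sketch of what this looks like using Linux's UIO framework, assuming a small kernel-side stub has already exported the device's registers as /dev/uio0; the register offset and reset value are made up for illustration:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_WINDOW  4096
    #define REG_RESET   0x00  /* hypothetical reset register offset */
    #define RESET_MAGIC 0x1   /* hypothetical "reset" command value */

    int main(void)
    {
        /* The tiny kernel-mode stub (UIO) only exposes the registers;
           all the complex, bug-prone driver logic lives out here. */
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, REG_WINDOW,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* First thing on every (re)start: put the hardware back into a
           known state, exactly as described above. */
        regs[REG_RESET / 4] = RESET_MAGIC;

        for (;;) {
            uint32_t irq_count;
            /* blocks until the device interrupts; a crash here takes
               down this process (SIG11), not the kernel */
            if (read(fd, &irq_count, sizeof(irq_count)) != sizeof(irq_count))
                break;
            /* ...service the device through regs[]... */
        }
        munmap((void *)regs, REG_WINDOW);
        close(fd);
        return 0;
    }

This also shows the parent's point about device nodes: this program can only serve clients that reach it through a library or a FUSE-like relay, not through open("/dev/...") directly.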
Re:Virtualization (Score:3, Interesting)
The article (bad me, I know, I don't read linked articles) actually mentions hypervisors as sort-of microkernels, so yes. Xen and VMware ESX fit rather well into such a mapping, as they both have their own 'kernel', with the controller domain merely another virtual machine.
"If we then get minimal, very application specific kernels to run on top of it for specific needs"
You can accomplish a similar setup by just fudging your definitions a bit again. For example, a bare stripped Linux kernel capable of running only on top of the Xen hypervisor isn't all that huge. Combine that with simple single-purpose OS installs doing, for example, fileserving, security services, etc., communicating over the network for everything, and you more or less have a serverized 'microkernel' architecture. The server is the network is the server is the OS. So to speak. Postmodernistic computing.
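One of those single-purpose domains could be defined by nothing more than a classic Xen domU config along these lines (the kernel path, volume, and names are made up for illustration):

    # fileserver domain: stripped Xen-only kernel, one disk, one NIC
    kernel = "/boot/vmlinuz-xen-minimal"
    memory = 64
    name   = "fileserver"
    vif    = [ "bridge=xenbr0" ]
    disk   = [ "phy:/dev/vg0/fileserver,xvda,w" ]
    root   = "/dev/xvda ro"

Each domain stays small enough to audit, and the hypervisor plays the microkernel's role of keeping them apart.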
"Granting, of course, that OS vendors go along with the idea,"
Yes, well, they can go along with it or we can run free software. Yet another indication of the advantages of adaptability.
Re:The unsinkable Kernel (Score:3, Interesting)
Re:Feh. (Score:4, Interesting)
Re:Feh. (Score:3, Interesting)
A microkernel can indeed be secure enough that the system doesn't have to reboot for years; with a monolithic kernel and over a million lines of code, this is just a wet dream.
In fact, a microkernel, if written well enough, can take a lot of updates, including updates to disk drivers or graphics drivers, without restarting the whole system; so far the monolithic kernels have fallen flat on this feature.
I would give away a few percentage points of performance if the system became more stable and needed fewer reboots. Even BSD and Linux boxes need to reboot every once in a while when some new security issue is brought up or a critical fix has been issued; not all drivers can be reloaded even if they are built as modules (probably some hooks go way too deep into the kernel).
However, until we have a nice usable microkernel system around, I'll stick with Linux and FreeBSD; we can't just run our servers and desktops on Hurd or Minix out of the blue.
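The restart-without-reboot trick described above is essentially what Minix 3's reincarnation server does for its driver processes. A bare-bones C sketch of the idea, with a hypothetical driver path:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Supervise a user-space driver: if it dies, restart it. The driver
       itself resets its device to a known state at startup, so a crash
       costs one device hiccup instead of a whole-system reboot. */
    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                execl("/sbin/webcam-driver", "webcam-driver", (char *)NULL);
                _exit(127);  /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);
            fprintf(stderr, "driver died (status %d), restarting\n", status);
            sleep(1);  /* simple backoff to avoid a restart storm */
        }
    }

Upgrading the driver is then just swapping the binary and killing the old process; the supervisor starts the new version on the next iteration.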
Re:The unsinkable Kernel-Just add water. (Score:3, Interesting)
Engineering estimates are that it might have added 3-5 hours to the Titanic's lifespan, enough to save lives, even if not enough to preserve the ship until help arrived.
Cue the peanut gallery redux. (Score:3, Interesting)
Actually, OS X was within a few percentage points of Linux on all hardware tested, even outperforming it on memory throughput on PowerPC and in some other tests. It's also faster than NT.
"But there are ALWAYS situations where it is going to be desirable for seperate parts of an OS to directly touch the same memory in a cooperative manner, and when this is the case a microkernel just gets in your damn way..."
I fail to see how L4's shared-memory is somehow magically different from Linux's shared-memory. Once you let more than 1 process have access to a page via a TLB mapping, it's all the same.
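Concretely, once the page is mapped the kernel underneath is irrelevant to the fast path. A POSIX shared-memory sketch in C (the name /my_shared_page is arbitrary; an L4 system sets the mapping up through different calls, but the resulting access is the same plain memory traffic):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create one page visible to any process that maps the same name. */
        int fd = shm_open("/my_shared_page", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* From here on, loads and stores to p are ordinary memory
           accesses; no kernel, micro or monolithic, is on that path. */
        strcpy(p, "hello from a shared TLB mapping");
        printf("%s\n", p);

        munmap(p, 4096);
        close(fd);
        shm_unlink("/my_shared_page");
        return 0;
    }

The kernel only participates in setting up and tearing down the mapping, which is exactly the parent's point.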