Microkernel: The Comeback?

Posted by Hemos
from the time-to-hash-this-out-all-again dept.
bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly, one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."
  • Eh hem. (Score:4, Insightful)

    by suso (153703) * on Monday May 08, 2006 @10:09AM (#15284896) Homepage Journal
    Current operating systems are like ships before compartmentalization was invented

    Isn't SELinux kinda like compartmentalization of the OS?
  • by maynard (3337) <j@maynard@gelinas.gmail@com> on Monday May 08, 2006 @10:14AM (#15284934) Journal
    didn't save the Titanic [wikipedia.org]. Every microkernel system I've seen has been terribly slow due to message passing overhead. While it may make marginal sense from a security standpoint to isolate drivers into userland processes, the upshot is that if a critical driver goes *poof!* the system still goes down.

    Solution: better code management and testing.
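
    For a rough feel for the overhead in question, here is a minimal sketch that times a direct function call against a message round trip, using an ordinary POSIX pipe as a stand-in for microkernel IPC (real microkernel IPC paths such as L4's are far faster than a pipe, so treat this as an upper bound):

        /* ipc_cost.c -- compare a direct call with a message round trip over a pipe.
         * Illustrative only; the pipe stands in for kernel IPC and is much slower
         * than a real trap-based IPC path. */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        static int add_one(int x) { return x + 1; }

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            enum { N = 100000 };
            int to_srv[2], from_srv[2];
            pipe(to_srv);
            pipe(from_srv);

            if (fork() == 0) {                /* "driver" process: echoes x+1 back */
                int x;
                close(to_srv[1]);
                close(from_srv[0]);
                while (read(to_srv[0], &x, sizeof x) == (ssize_t)sizeof x) {
                    x = add_one(x);
                    write(from_srv[1], &x, sizeof x);
                }
                _exit(0);
            }
            close(to_srv[0]);
            close(from_srv[1]);

            double t0 = now_sec();
            volatile int v = 0;
            for (int i = 0; i < N; i++)
                v = add_one(v);               /* monolithic style: direct call */
            double direct = now_sec() - t0;

            t0 = now_sec();
            int x = 0;
            for (int i = 0; i < N; i++) {     /* microkernel style: send, then wait for reply */
                write(to_srv[1], &x, sizeof x);
                read(from_srv[0], &x, sizeof x);
            }
            double ipc = now_sec() - t0;

            printf("direct calls: %.4fs   pipe round trips: %.4fs\n", direct, ipc);
            return 0;
        }

    (Compile with something like cc -std=c99 -O2 ipc_cost.c -lrt; the exact gap obviously depends on the machine.)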
  • by LurkerXXX (667952) on Monday May 08, 2006 @10:23AM (#15284994)
    BeOS didn't seem slow to me. No matter what I threw at it.
  • by WindBourne (631190) on Monday May 08, 2006 @10:25AM (#15285012) Journal
    Back in the '80s and '90s, the argument for monolithic kernels was performance. Considering how limited CPUs were, it made sense. If Linux had been built on a microkernel design, it would have been slower than MS; IOW, it would never have gotten off the ground.
     
      The second approach (paravirtualization) could actually be used with Linux as a means not only of separating user mode from the device drivers, but also of allowing for some nice networking capabilities. After all, the average system does not really need all the capabilities that it has. If a simple server (or several) can be set up for the house, and then multiple driverless desktops set up around it, it simplifies life.
  • by csoto (220540) on Monday May 08, 2006 @10:28AM (#15285031)
    Dearest Andy, please take some University courses on evolutionary biology. Perhaps you will take away a meaningful sense of the differences between "optimal" and "sufficient." I agree 100% with what you say. "Microkernels are better." That being said, this does nothing to diminish the viability of Linux, or any other monolithic system. Evolution only requires that a species retain sufficient qualities to ensure survivability (and therefore reproduction) in a given environment. "Perfection" never enters the equation (not even qualifiers such as "best" or "better" - just "good enough").

    So, let's all agree with Andy, then go on using the best tools for our purposes. If that happens to be Linux (or even Windoze), then so be it...
  • Hindsight is 20/20 (Score:4, Insightful)

    by youknowmewell (754551) on Monday May 08, 2006 @10:32AM (#15285061)
    From the link to the Linus vs. Tanenbaum argument:

    "The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."
  • by WindBourne (631190) on Monday May 08, 2006 @10:33AM (#15285071) Journal
    didn't save the Titanic.

    It actually took hitting something like half the compartments to sink her. If the iceberg had breached just one less compartment, she would have stayed afloat. In contrast, one hole in a non-compartmentalized ship can sink it.

    That is no different from an OS: in just about any monolithic OS, one bug is enough to sink it.

  • A false dichotomy (Score:5, Insightful)

    by The Conductor (758639) on Monday May 08, 2006 @10:34AM (#15285081)
    I seem to find this microkernel vs. monolithic argument a bit of a false dichotomy. Microkernels are just at one end of a modularity vs. $other_goal trade-off. There are a thousand steps in between. So we see implementations (like the Amiga, for example) that are almost microkernels, at which the purists shout objections (the Amiga permits interrupt handlers that bypass the OS-supplied services, for example). We also see utter kludges (Windows, for example) improve their modularity as backwards compatibility and monopolizing marketing tactics permit (not much, but you have to say things have improved since Win3.1).

    When viewed as a Platonic Ideal, a microkernel architecture is a useful way to think about an OS, but most real-world applications will have to make compromises for compatibility, performance, quirky hardware, schedule, marketing glitz, and so on. That's just the way it is.

    In other words, I'd rather have a microkernel than a monolithic kernel, but I would rather have a monolithic kernel that does what I need (runs my software, runs on my hardware, runs fast) than a microkernel that sits in a lab. It is more realistic to ask for a kernel that is more microkernel-like, but still does what I need.

  • by WindBourne (631190) on Monday May 08, 2006 @10:37AM (#15285092) Journal
    OpenBSD's security strength has NOTHING to do with the kernel. It has to do with the fact that multiple trained eyes are looking over the code. The other thing that you will note is that they do not include new code in it. It is almost all older code that has been proven on other systems (read NetBSD, Apple, Linux, etc). IOW, by being back several revs, they are gaining the advantage of everybody else's work as well as their own.
  • Re:Oh Dear (Score:5, Insightful)

    by igb (28052) on Monday May 08, 2006 @10:40AM (#15285118)
    It's tempting for people who work in fields where performance matters to assume it matters for everyone, all the time. Do I need my big-iron Oracle boxes to be quick? Yes, I do, which is why they are Solaris boxes with all mod cons. Do I need the GUI on my desk to be pleasant to use? Yes, which is why it's increasingly a Mac that I turn to first. Sure, a G4 Mac Mini isn't quick. But there's a room full of Niagaras, Galaxies and 16-way Sparc machines to do `quick' for me.

    All I ask is that the GUI is reasonably slick, the screen design doesn't actively give me hives and the mail application is pleasant. Performance? Within reason, I really couldn't care less.

    ian

  • Re:The thing is... (Score:5, Insightful)

    by crawling_chaos (23007) on Monday May 08, 2006 @10:44AM (#15285145) Homepage
    Compartmentalization had very little to do with the advent of the container ship. Titanic was partially compartmented, but the compartments didn't run above the waterline, so the breach of several bow compartments led to overtopping of the remainder and the eventual loss of the ship. Lusitania and Mauretania were built with full compartments and even one longitudinal bulkhead, because the Royal Navy funded them in part for use as auxiliary troopships. Both would have survived the iceberg collision, which really does make one wonder what was in Lusitania's holds when those torpedoes hit her.

    Compartments do interfere with efficient operation, which is why Titanic's designers only went halfway. Full watertight bulkheads and a longitudinal one would have screwed up the vistas of the great dining rooms and first-class cabins. It would also have made communication between parts of the ship more difficult, as watertight bulkheads tend to have a limited number of doors.

    The analogy is actually quite apt: more watertight security leads to decreased usability, but a hybrid system (Titanic's) can only delay the inevitable, not prevent it, and nothing really helps when someone is lobbing high explosives at you from surprise.

  • by audi100quattro (869429) on Monday May 08, 2006 @10:44AM (#15285151) Homepage
    That friendly conversation is hilarious. "Linus: ...linux still beats the pants of minix in almost all areas"

    "Andy: ...I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)"

    The most interesting part: "Linus: The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place."
  • Re:Feh. (Score:3, Insightful)

    by panthro (552708) <mavrinac AT gmail DOT com> on Monday May 08, 2006 @10:50AM (#15285189) Homepage

    The industry has better and more important things to worry about.

    Like what? Reliability and security ought to be paramount. The IT industry (relating to multipurpose computers, anyway) is currently a joke in that area - compare to virtually any other industry.

  • How many times have we all heard that the proper way to develop software is:

    First make it work, then make it fast

    Specifically:

    Write it as simply and cleanly as you can,

    THEN check performance,

    THEN optimize, but ONLY where measurement tells you to.

    Judging by the performance improvements over time, this is what the OS X team has been doing. Their stuff has been getting bigger, with more functionality, AND faster on the same hardware, with each release. If anyone else has been doing that, I haven't heard of it.
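
    In practice the "measure first" step can be as small as putting a timer around the code you suspect before deciding anything. A minimal sketch, where do_suspect_work() is a made-up stand-in for whatever hot path is under suspicion:

        /* profile_first.c -- time a suspect path; only optimize if the number is big. */
        #include <stdio.h>
        #include <time.h>

        static void do_suspect_work(void)      /* hypothetical hot path */
        {
            volatile double acc = 0;
            for (int i = 0; i < 1000000; i++)
                acc += i * 0.5;                /* placeholder workload */
        }

        int main(void)
        {
            struct timespec a, b;
            clock_gettime(CLOCK_MONOTONIC, &a);
            do_suspect_work();
            clock_gettime(CLOCK_MONOTONIC, &b);

            double ms = (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
            printf("do_suspect_work: %.3f ms\n", ms);
            return 0;
        }

    A real profiler (gprof, oprofile, Apple's Shark) tells you the same thing across the whole program, which is where the "only where measurement tells you to" part comes in.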
  • Titanic (Score:4, Insightful)

    by PIPBoy3000 (619296) on Monday May 08, 2006 @10:53AM (#15285217)
    I think the real question is what risks computers are facing these days. The Titanic had multiple compartments (up to a point), but the iceberg tore along the side, ripping off rivets and letting water pour into multiple compartments at once.

    How is kernel compartmentalization going to protect against users installing spyware and doing things they're already authorized to do?
  • by joshv (13017) on Monday May 08, 2006 @10:54AM (#15285221)
    I never really understood why buggy drivers constantly restarting is a desirable state. Say what you will about the monolithic kernel, but the fact that one bad driver can crash the whole works tends to make people work much harder to create solid drivers that don't crash.

    In Andrew Tanenbaum's world, a driver developer can write a driver, and not even realize the thing is being restarted every 5 minutes because of some bug. This sort of thing could even get into a shipping product, with who knows what security and performance implications.
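
    For what it's worth, the restart machinery itself is tiny; the question is whether anyone looks at what it reports. A crude user-space sketch of such a loop (driver_main() is a hypothetical stand-in for the driver body; MINIX 3's reincarnation server does this with proper IPC rather than fork and wait):

        /* restart_loop.c -- keep restarting a "driver" process when it dies,
         * and make each restart visible. driver_main() is hypothetical. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static void driver_main(void)
        {
            sleep(1);     /* pretend to do driver work... */
            abort();      /* ...then hit the bug every time */
        }

        int main(void)
        {
            int restarts = 0;
            for (;;) {
                pid_t pid = fork();
                if (pid == 0) {
                    driver_main();
                    _exit(0);
                }
                int status;
                waitpid(pid, &status, 0);
                restarts++;
                fprintf(stderr, "driver died (status %d), restart #%d\n",
                        status, restarts);
                if (restarts >= 5) {          /* a flapping driver should not restart forever */
                    fprintf(stderr, "driver is flapping; giving up\n");
                    return 1;
                }
            }
        }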
  • by Duncan3 (10537) on Monday May 08, 2006 @11:11AM (#15285334) Homepage
    Actually, it's been proven over and over that microkernel designs don't HAVE to be slow. Read the Liedtke paper on IPC in L3 from 1993 as one example.

    The problem is that the hardware is optimized for something else now. Also, modern programmers who only know Java can't code ASM or understand the hardware worth a damn. I should know, I have to try to teach them.

    And yes, all people care about is speed, because you cannot benchmark security, and benchmarks are all marketing people understand, and gamers need something to brag about.

  • Re:Metaphors eh? (Score:4, Insightful)

    by misleb (129952) on Monday May 08, 2006 @11:15AM (#15285361)
    The problem I have with the compartmentalized ship metaphor is that I question the value of being able to run a system that has one compartment "breached." The system may technically still run, but is it going to be of any use in such a state? Aren't you going to want to reboot it anyway, or is the theory that you can restart a component without rebooting? Is this realistic? Seems to me that a system would get into a pretty funky state depending on which component failed.

    -matthew
  • Virtualization (Score:5, Insightful)

    by jefu (53450) on Monday May 08, 2006 @11:16AM (#15285366) Homepage Journal
    I suspect that virtualization may well signal the rise of the microkernel (exokernel?) again.

    It seems reasonable to think that a tiny microkernel built for virtualization and able to support multiple virtual OS's with minimal overhead is really going to be a very attractive platform. If we then get minimal, very application-specific kernels to run on top of it for specific needs, we could get an environment in which various applications (http servers, databases, network servers of other sorts, browsers) could run in secure environments which could leverage multi-processor architectures, provide for increased user security, make inter-OS communications work nicely and generally be a Good Thing. Certainly that would not prohibit complete unix/MS/??? systems from running as well. (Granting, of course, that OS vendors go along with the idea, which some of the big players may find economically threatening.)

    Could be very fun stuff and make viable setups that are currently difficult or impossible to manage well.

  • by Anonymous Coward on Monday May 08, 2006 @11:19AM (#15285389)
    His research is no more PRACTICAL than it was 20 years ago. Sure, in theory it is great that you can isolate faults and restart servers. In practice, what good does it do when restarting a crashed filesystem server process means that all the real user programs crash because their file descriptors have become invalid?

    To use his metaphor, what good does sealing off the one leaking compartment do if crew can not survive without it? Sure the ship may still float, and you can replace the crew, but they are still just as dead as if the ship had sunk.

  • by voodoo_bluesman (255725) on Monday May 08, 2006 @11:25AM (#15285443) Homepage
    Certainly there would be a logging facility to capture that sort of event. Yeah, it might not blow up the machine, but a bouncing driver *should* make a lot of noise.
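
    Right, and on a Unix-ish system "a lot of noise" can be as simple as a syslog(3) call from whatever supervises the driver; a fragment, with the driver name and restart count assumed to come from the supervisor:

        /* log_bounce.c -- make a bouncing driver noisy via syslog(3). */
        #include <syslog.h>

        void report_driver_restart(const char *driver, int restarts)
        {
            openlog("driver-monitor", LOG_PID, LOG_DAEMON);
            syslog(LOG_WARNING, "driver %s restarted %d times", driver, restarts);
            closelog();
        }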
  • Some hurdles... (Score:2, Insightful)

    by multimediavt (965608) on Monday May 08, 2006 @11:32AM (#15285491)
    After reading the paper and contemplating some issues, I do believe a microkernel-like approach is favorable for what Tanenbaum wants to focus on: reliability and security. I say "microkernel-like" because micro is a relative term when you think about the growing complexity of applications and devices a modern operating system has to deal with. I think his TV analogy falls flat on its face UNLESS you are willing to tailor individual operating system distributions to vertical markets. This is not at all practical in the open-source/FOSS space, and even less so in the commercial OS space. Who wants to write different kernels and services for every possible use a mixed set of hardware and software could be put to? His more reliable and secure consumer product examples are just that: embedded and highly specialized versions of an operating system. It can be done in the consumer electronics space because the cost of doing this is passed on to the customer and figured into the retail price. Since the OS and the hardware are removed from each other in all but a few cases, i.e. Apple and Mac OS, Sun and Solaris, and the now-defunct SGI and IRIX, it is extraordinarily difficult (and costly) to achieve Tanenbaum's goals.

    So, to return to the microkernel-like statement, I think that modern OSes *ARE* trying to achieve the goals that Tanenbaum aspires to. The points he brings up *ARE* being addressed in Linux, Windows, and Mac OS X. The contention is that they will not happen overnight, and they will only happen faster as more negative feedback is fed into the mechanisms of change, i.e., the project development community or corporate entity doing the work. But corporate entities in particular have to balance the equation. They can't just sacrifice performance and compatibility for the sole sake of reliability and security. I don't believe that Tanenbaum would disagree with that point. We are seeing more compartmentalization (modularization if you will) within the structure of most OS kernels. I think his wrapper idea for device drivers has merit. Personally, I'd rather see a common driver framework developed for categories of devices to help minimize the number of driver-specific wrappers that would need to be created manually (455 seems like a tremendously large number). Virtualization is growing in popularity in the IT community at large. I think there are some huge benefits to this that will alter OS development in the future, the near future for some things. His two more holistic approaches are novel. I think the Multiserver approach may have more of a chance than the Language-Based approach, but time will tell which theory takes hold in practice, or if a new or hybrid approach may emerge.

    All in all, a good read and definitely thought- and discussion-provoking.
  • by Red Flayer (890720) on Monday May 08, 2006 @11:43AM (#15285576) Journal
    "So, let's all agree with Andy, then go on using the best tools for our purposes."

    That's looking at it from an end-user standpoint. The problem with that view is that the better method will never become viable.

    To extend your evolution metaphor, you're limiting yourself to a subset of the gene pool. Sure, a species that has already been selected for / adapted to that particular niche would outcompete *now* in that niche; but that does not mean that another species allowed to adapt to that niche wouldn't out-compete the one that's already there -- especially should conditions change in that niche (as constantly happens with technology).

    To give a biological example, look at the large animals of the Americas. They evolved to fill niches in the absence of humans. Once humans came over, they were all killed off or died out most likely as a result of human interference -- they hadn't the traits to survive in the new niche (with the exception of the buffalo). Yet big animals in Africa survived alongside human hunters -- they would have been better suited to the 'new' American niche.

    My point is that just because something has the most developed tools for the job *now* doesn't mean that its lineal successors would be the best tool for the job *later*. Who knows what we're missing if we limit ourselves to the current development lines?
  • by Anonymous Coward on Monday May 08, 2006 @11:45AM (#15285597)
    Both would have survived the iceberg collision, which really does make one wonder what was in Lusitania's holds when those torpedoes hit her.

    A while back, I did a little bit of research into the Lusitania's sinking, and concluded there was ample evidence to lead one to believe she was carrying munitions. The original design of the ship had deck guns, and while I believe they were removed for extra speed, the Lusitania had indeed been used for smuggling munitions in the past using civilians as a cover. However, Google came up with this link [pbs.org] containing some evidence that pointed to a coal dust explosion, and not munitions. But, the Germans probably would have sunk it regardless, since they knew of Lusitania's dual purpose.
  • Re:How hard... (Score:3, Insightful)

    by Sique (173459) on Monday May 08, 2006 @11:47AM (#15285611) Homepage
    Check out MkLinux and L4Linux. The efforts have been made. Running Linux as a userland service on a microkernel is a reality.
  • Has anyone tried? (Score:3, Insightful)

    by Spazmania (174582) on Monday May 08, 2006 @11:53AM (#15285666) Homepage
    Why are TV sets, DVD recorders, MP3 players, cell phones, and other software-laden electronic devices reliable and secure but computers are not?

    Well, the nice thing about software in ROM is that you can't write to it. If you can't inject your own code, and unplugging and replugging the device does a full reset back to the factory code, then there is a very limited amount of damage a hacker can do.

    Then too, sets capable of receiving a sophisticated digital signal (HDTV) have only recently come into widespread use. To what extent has anyone even tried to gain control of a TV set's computer by sending malformed data?

  • by jthill (303417) on Monday May 08, 2006 @11:53AM (#15285670)
    Microkernels are just one way to compartmentalize. Compartmentalization is good, yadda yadda momncherrypie yadda. We've known this for what, 20 years? 30? 40? Nobody suspects it's a fad anymore. The kinds of faults VM isolation guards against aren't the kinds of faults that worry people so much today. Panics and bluescreens aren't solved, but they're down in the background noise. Experience and diligence and increasingly good tools have been enough to put them there and will remain enough to keep them there, because the tools are getting better by leaps and bounds.

    "In the 1980s, performance counted for everything, and reliability and security were not yet on the radar" is remarkable. Not on whose radar? MVS wasn't and z/OS isn't a microkernel either, and the NCSC didn't give out B1 ratings lightly.

    One thing I found interesting is the notion of running a parallel virtual machine solely to sandbox drivers you don't trust.

  • Re:Metaphors eh? (Score:2, Insightful)

    by mikiN (75494) on Monday May 08, 2006 @12:05PM (#15285755)
    Simple. With a compartmentalized ship you can repair or replace the faulty compartment while still at sea. With a single-compartment ship (with damage way below the waterline) you have to take it to the dry dock to do repairs (that is, if you're able to keep it afloat that long anyway...)
  • by kbob88 (951258) on Monday May 08, 2006 @12:12PM (#15285821)
    Performance may be better in the long run with a microkernel. Sure, there is bound to be a performance penalty in message passing (and checking) with a modularized architecture. But since the developers would (hopefully) spend a lot less time tracking down bugs through a massive kernel, perhaps they could devote a lot more time to performance improvement work in the code? I have to imagine that kernels like Linux, OS X, and XP have lots of old, nightmare code that is horribly inefficient; the developers probably just haven't gotten around to improving it because they're too busy tracking down bugs and security breaches, or implementing drivers for the latest gadget.
  • And I'm not talking about a MINIX-style example which is only good for classroom study -- I'm talking about a production-level operating system which can handle real-world task loads.

    Until such a thing exists for mainstream use, his comments are intellectually interesting but not really of much practical use.
  • by pavon (30274) on Monday May 08, 2006 @12:16PM (#15285863)
    You left out the first step:

    FIRST design the system, and make sure the algorithms and architecture are sufficiently straightforward and efficient.

    No amount of optimizing will save you if your system is slow by design, and there is no place where this was more true than in the early microkernels. That is why the microkernel architecture was rejected by Linux and Windows kernel developers.

    The Mach kernel has some fundamental efficiency issues, and while it has been improved since its introduction, there are limits to how fast it can get. As such, Darwin is slower at many things compared to some monolithic kernels, and will continue to be slower unless they do some major revamping (i.e. redesign and rewrite the kernel) under the hood, not just tweaking.

    In the end it is about trade-offs. The "make it right, then make it fast" approach depends on your idea of right. If you really want an example of "doing it right" with regard to security and stability, look to QNX or OpenBSD. None of the desktop OSes compare to those, because they all made compromises. Windows NT and Linux both decided that the loss of efficiency was not worth the gains in stability / security that you get from having a microkernel. And honestly, this particular decision hasn't greatly harmed the stability / security of those systems as a whole.
  • by klaun (236494) on Monday May 08, 2006 @12:24PM (#15285929)
    proven on other systems (read netbsd, apple, linux, etc). IOW, by being back several revs, they are gaining the advantage of

    How can they be including code from Apple and Linux? Wouldn't that produce licensing incompatibilities?

  • Re:Feh. (Score:1, Insightful)

    by Anonymous Coward on Monday May 08, 2006 @12:28PM (#15285971)
    There was some attempt to address this in Mach 3

    Let me guess. They added more blades.

  • by Lord Ender (156273) on Monday May 08, 2006 @12:29PM (#15285975) Homepage
    Wrong. OpenBSD's strength is partially because of their testing and code review policy, but ALSO because of design decisions (like kernel memory management).

    Certain types of security flaws are much harder to exploit when the OS addresses memory in unpredictable ways.

    Other design principles, which encourage access log review, aid the security of the system without having anything to do with code review.
  • by Pfhorrest (545131) on Monday May 08, 2006 @12:37PM (#15286058) Homepage Journal
    Yeah, you got to be careful with analogies.

    When it comes to security, imagine aliens trying to take over your ship...


    This has got to be the best juxtaposition of two sentences ever found on Slashdot.
  • by OwnedByTwoCats (124103) on Monday May 08, 2006 @12:45PM (#15286136)
    Kernels also crash from drivers causing the hardware to do Very Bad Things. The USB driver can DMA a mouse packet right over the scheduler code or page tables, and there isn't a damn thing that memory protection can do about it. CRASH, big time. A driver can put a device into some weird state where it locks up the PCI bus. Say bye-bye
    What if the USB driver *couldn't* DMA a mouse packet over scheduler code (which ought to be read-only at the MMU) or the MMU's page tables?

    That is what Tanenbaum's research is asking. Can such a system be built? Does it perform? What are the trade-offs? Does the end result offer enough benefits (reliability and security) to overcome the costs (performance)?
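
    The shape of the check is simple enough to sketch; the hard part is making the hardware (an IOMMU) or a trusted wrapper actually enforce it. A rough sketch, with all names and numbers made up:

        /* dma_check.c -- the policy an IOMMU or driver wrapper would enforce:
         * a driver may only DMA into the window it was granted, never over the
         * scheduler or the page tables. All names and values are hypothetical. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct dma_window {
            uint64_t base;
            uint64_t len;
        };

        /* Window handed to this driver when it was loaded (made-up values). */
        static const struct dma_window usb_window = { 0x40000000ULL, 1u << 20 };

        static bool dma_allowed(const struct dma_window *w, uint64_t addr, uint64_t len)
        {
            return addr >= w->base &&
                   len  <= w->len  &&
                   addr - w->base <= w->len - len;   /* stays inside the window */
        }

        /* The wrapper refuses to program the device unless this says yes. */
        static bool usb_dma_request(uint64_t target, uint64_t len)
        {
            return dma_allowed(&usb_window, target, len);
        }

        int main(void)
        {
            printf("inside the window: %d\n", usb_dma_request(0x40000100ULL, 512));
            printf("over the kernel:   %d\n", usb_dma_request(0x00100000ULL, 512));
            return 0;
        }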

  • Kernels don't often crash for reasons related to lack of memory protection.

    I do believe that Tanenbaum was addressing security in his article, not kmem protection. His point was that the segregation of the servers prevents a hole in these programs from opening an elevated-privilege attack. Furthermore, he points out that the elevated permissions of the kernel are likely to be far more secure due to the minuscule size of the kernel itself.

    You make an interesting point about the stability of the kernel, but that wasn't his point in the slightest.
  • Re:How hard... (Score:3, Insightful)

    by rg3 (858575) on Monday May 08, 2006 @01:13PM (#15286374) Homepage
    Another example of this approach is libusb. Instead of providing drivers for USB devices inside the kernel, you can do that with libusb. It gives you an interface to the USB system. Many scanner and printer drivers use it, and the drivers are included in the CUPS or SANE packages.
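
    To give a flavour of what a user-space driver looks like through that interface, here is a rough fragment against the (newer) libusb-1.0 API; the vendor/product IDs and the endpoint are placeholders and error handling is mostly omitted:

        /* usb_read.c -- minimal user-space "driver": read one bulk transfer.
         * 0x1234/0x5678 and endpoint 0x81 are placeholder values. */
        #include <stdio.h>
        #include <libusb-1.0/libusb.h>

        int main(void)
        {
            libusb_context *ctx;
            if (libusb_init(&ctx) != 0)
                return 1;

            libusb_device_handle *dev =
                libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
            if (!dev) {
                fprintf(stderr, "device not found\n");
                libusb_exit(ctx);
                return 1;
            }

            libusb_claim_interface(dev, 0);

            unsigned char buf[64];
            int got = 0;
            if (libusb_bulk_transfer(dev, 0x81, buf, sizeof buf, &got, 1000) == 0)
                printf("read %d bytes, entirely from user space\n", got);

            libusb_release_interface(dev, 0);
            libusb_close(dev);
            libusb_exit(ctx);
            return 0;
        }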
  • by antientropic (447787) on Monday May 08, 2006 @01:23PM (#15286454)

    I think it is clear that Linux won that argument.

    This is not at all clear. By what metric do you claim that Linux won that argument? Popularity? Then surely the Windows kernel wins even more.

    Truth is, just because one technology is superior to another (in terms of, say, stability, maintainability, whatever) doesn't mean that it will immediately win in the marketplace. I think that Linux became a success because of other factors, such as that it was easy for people to contribute, and because it conservatively copied a 1970s design which everybody who wanted to contribute already understood.

    The performance argument is of course rapidly declining in importance. I would gladly spend a little bit of performance for increased stability. In the 60s some people claimed that high-level languages would never fly - too slow. Turns out that there are more important things than CPU cycles.

  • by Jon Kay (582672) <.jkay. .at. .pushcache.com.> on Monday May 08, 2006 @02:08PM (#15286889)

    I played a lot with AmigaDOS, a message-passing OS (it was a port of Tripos, an early research message-passing OS), as a teenager. That experience cured me of message-passing interest, because I found myself spending 75% of my time dealing with message-passing coding rather than dealing with the underlying hacks I was trying to perpetrate.

    Not only did one have to write more code to make an MP call (comm overhead code), but the bugs had a way of showing up in that snippet and being harder to debug. The tiniest change in a driver's interface meant an hour of coding, vs. the ten minutes I saw later for BSD Unix. At that, I was lucky. If I'd been dealing much with nontrivial synchronization and threading, I expect I would've seen more like the factor-of-ten coding slowdown I've always seen dealing with threading problems (and to be fair, most ukernel code doesn't have to deal with that either; it's just that there are more threads, more sync points, and thus more potential for trouble).

    The basic problem is that modularization is a largely orthogonal problem from threading, address spaces, or messaging. If you split modules into different threads, then you don't just have to solve modularization; you also have to solve threading, messaging, and address-space problems too. Now, address-space separation seems like it might save some debugging troubles, and successful "monolithic" operating systems in fact deliver the simplest form of that. I've been a little surprised that attempts to push farther on that, like Electric Fence or multi-address-space OSes with traditional system call architectures, have gone nowhere. But they have, so the difficulties must exceed the return somehow.

    Thus, I haven't been surprised to see ukernel project after ukernel project fail. The idea is at least forty years old, and has seen many smart people try to take it on - if a ukernel was going to succeed broadly, it would've happened by now.

    If you like ukernel OSes even after reading this, I say go try one, and try hacking something in. Just watch how much time you spend actually writing code implementing the hack vs. message-passing / threading drudgery.
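
    To put the "comm overhead code" complaint in concrete terms: what would be a one-line driver_read(buf, len) in a monolithic setting turns into something like the sketch below, where msg_send()/msg_wait() and the message layout are entirely made up (stubbed here so it compiles):

        /* mp_call.c -- the boilerplate a message-passing call drags in.
         * The IPC primitives and message format are hypothetical. */
        #include <stdio.h>
        #include <string.h>

        enum { OP_READ = 1 };

        struct message {
            int    op;            /* operation the server should perform */
            int    status;        /* result code on the way back         */
            size_t len;           /* bytes requested / returned          */
            char   payload[256];
        };

        /* Stubs standing in for kernel-provided IPC traps. */
        static int msg_send(int port, const struct message *m) { (void)port; (void)m; return 0; }
        static int msg_wait(int port, struct message *m) { (void)port; m->status = 0; m->len = 0; return 0; }

        static int driver_read_mp(int driver_port, void *buf, size_t len)
        {
            struct message req = { .op = OP_READ, .len = len };
            if (msg_send(driver_port, &req) != 0)
                return -1;                        /* send failed        */

            struct message rep;
            if (msg_wait(driver_port, &rep) != 0)
                return -1;                        /* no reply           */
            if (rep.status != 0 || rep.len > len)
                return -1;                        /* server-side error  */

            memcpy(buf, rep.payload, rep.len);    /* copy out of the message */
            return (int)rep.len;
        }

        int main(void)
        {
            char buf[64];
            printf("driver_read_mp returned %d\n", driver_read_mp(3, buf, sizeof buf));
            return 0;
        }

    Multiply that by every driver interface and the 75% figure above starts to look plausible.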

  • Re:QNX ! (Score:4, Insightful)

    by Kristoph (242780) on Monday May 08, 2006 @03:42PM (#15287697)
    The QNX Neutrino kernel is a very good microkernel implementation (albeit not as purist as, say, the Ka micro-kernel line), but the fact that it is not open makes it unusable.

    The scheduler, for example, is real-time only, so its behavior for non-real-time applications is questionable at best. A simple problem to address in the open-source world but, apparently, "not a high priority" for the manufacturer of this fine technology.

    -rant-

    I fail to understand the point of closed source kernel implementations. The kernel is now a commodity.

    -/rant-

    ]{
  • by Chandon Seldon (43083) on Monday May 08, 2006 @04:01PM (#15287838) Homepage

    Trusted computing merely checks that the code hasn't changed since it was shipped. This verifies that no new bugs have been added and that no old bugs have been fixed.

  • Re:Feh. (Score:3, Insightful)

    by jadavis (473492) on Monday May 08, 2006 @04:21PM (#15287987)
    But I'd even more prefer to see the driver written correctly to start with!

    Microkernels actually may help with that as well. If it is very obvious to the OS -- and to the user -- which drivers are crashing, that will provide incentive for the hardware vendors to write drivers correctly. Right now there is no accountability, so as long as the whole system works most of the time, users will buy it. But with microkernels, if new hardware comes out and you have review sites saying "That hardware driver is crashing left and right", users won't buy it. Nobody can point fingers anymore.

    In particular, nobody will point fingers at MS Windows when the real problem is crappy 3rd party drivers.
  • by JulesLt (909417) on Monday May 08, 2006 @04:42PM (#15288148)
    We've heard it many times, but I do sometimes wonder what the consequences of Knuth's piece of advice are.

    While it stops developers tweaking every possible piece of code into an unreadable high-performance mess, I've also seen it used as an excuse to 'not think about performance now' at design time, even when you show them evidence that they are repeating a known performance problem. And some performance problems require major restructuring: the stuff you can't fix by tuning the code inside a class, because some dimwit designer is working at such an abstract level that they can't see that calling a web service located in China, once per object, for thousands of objects, was NEVER going to work.

    'Premature optimisation' is a usefully vague phrase - you can only know it was premature with hindsight. Rant over.

    (But yes, it's nice that Apple improve performance with each release, although with Tiger it looks like that has been at the cost of needing more memory).
