Tanenbaum-Torvalds Microkernel Debate Continues

twasserman writes "Andy Tanenbaum's recent article in the May 2006 issue of IEEE Computer restarted the longstanding Slashdot discussion about microkernels. He has posted a message on his website that responds to the various comments, describes numerous microkernel operating systems, including Minix3, and addresses his goal of building highly reliable, self-healing operating systems."

Comments Filter:
  • by JPribe ( 946570 ) <.jpribe. .at. .pribe.net.> on Monday May 15, 2006 @01:30PM (#15335820) Homepage
    When did we collectively forget that everything has its place...I doubt I'll ever see anything but a monolithic kernel on my desktops. No different than any given OS having its place. Windows and Ubuntu (until something better) will live on my desktops, not on my server. Why can't we just all get along?
  • Plug central (Score:1, Insightful)

    by Anonymous Coward on Monday May 15, 2006 @01:39PM (#15335899)

    While I generally agree with the points he makes, I did think his MINIX plug was a bit disingenuous:

    Virtually all of these postings have come from people who don't have a clue what a microkernel is or what one can do. I think it would raise the level of discussion if people making such postings would first try a microkernel-based operating system and then make postings like "I tried an OS based on a microkernel and I observed X, Y, and Z first hand." Has a lot more credibility.

    The easiest way to try one is to go download MINIX 3 and try it.

    Well, no. You can't seriously believe that running MINIX is going to magically give you expertise that lets you talk about operating system kernel design. You can't even use it to show that microkernels aren't that slow, because MINIX is going to seem way faster than your average Linux distribution loaded down with KDE or GNOME because you are comparing apples to oranges.

    It just seems like it's a stupid, obvious plug, and I think that's beneath him.

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday May 15, 2006 @01:41PM (#15335907)
    From what I've seen of this "debate", it's all about what each group believes is (are) the most important aspect(s) of the kernel.

    Oblig auto analogy:
    If hauling cargo is your primary objective, then you'll probably view motorcycles as badly designed while seeing vans and trucks as "better".

    Only time (and code) will show which approach will result in all of the benefits of the other approach without any unacceptable deficiencies.
  • Re:Still Debating (Score:5, Insightful)

    by GReaToaK_2000 ( 217386 ) on Monday May 15, 2006 @01:42PM (#15335919)
    yeah, and in that same vein we'd all have Betamax players.

    I am NOT implying that uKernels are better; I am playing devil's advocate.

    Not everything that "wins" is the best... Look at Windows :-D!
  • Re:Still Debating (Score:2, Insightful)

    by TheBishop613 ( 454798 ) on Monday May 15, 2006 @01:43PM (#15335928)
    Using that same logic, shouldn't everyone just abandon Linux as Windows is clearly better? If Linux were superior the vast majority would be using it today.

    Personally, I don't care either way in the micro/macro kernel debate. As long as we have people still interested in both, it's a win-win situation for us computer enthusiasts.
  • by Anonymous Coward on Monday May 15, 2006 @01:44PM (#15335935)
    "TVs don't have reset buttons. Stereos don't have reset buttons. Cars don't have reset buttons."

    They may not be labeled "reset" but they *do* have them. And, no offense, but I like having a reset button.
  • What I'd like to know is whether we can make a functional distro (i.e. Ubuntu) on top of Minix 3. Is it possible? What must be changed?
  • You're on to something...you are very close to the cache. Why are we "debating" this when the answer seems very clear once one takes a step back: they (the kernels) can exist in harmony, each in its own place. Tanenbaum makes a decent showing of examples about where and why micros are used. This isn't a "which is better" argument. This should be a "where is one better utilized than the other in situation X" debate. That flamewar I could tolerate. Bottom line is that neither will replace the other, at least not in a timely enough manner to be worth wasting time over now.
  • You-betcha. I honestly think Mr. Tanenbaum is wasting his time in replying to Slashdot. If the last article proved anything, it's that the majority of responders were stuck on the whole "Linus 'won' this over a decade ago, so STFU!" (No one really 'won' the argument, but that's beside the point.)

    There were a couple of good replies in there, but they all got drowned out in the noise. Soooo, I think it's a better idea to focus on how Minix might be made a viable OS rather than arguing the same nonsense all over again. As several of the posters here have already proven, they're not reading Tanenbaum's arguments anyway. So why should we expect this time to be any different than the last?
  • by Zontar_Thing_From_Ve ( 949321 ) on Monday May 15, 2006 @02:03PM (#15336094)
    He also likes to get into flame wars with Linus Torvalds when he gets bored.

    Really? And what exactly do you base this on? According to the article, which it's clear that you did not read, Tanenbaum simply had a recent article printed in IEEE Computer and someone on Slashdot posted a link to it, which caused Linus to weigh in with his 2 cents about something that was never directed at him. It sounds more to me like Linus is obsessed with proving that macrokernels are the only way to go. Why does he even care? It's not like Minix is a threat to Linux. If he believes so strongly that microkernels are wrong, he should just let Tanenbaum and company waste their time on them instead of endlessly arguing the same points he made years ago.
  • The Question Is (Score:5, Insightful)

    by logicnazi ( 169418 ) <gerdes@iMENCKENnvariant.org minus author> on Monday May 15, 2006 @02:29PM (#15336348) Homepage
    A simple way to put the question is this:

    If you were given the choice between rebooting your machine every 3 months or so for updates/driver installs, or never rebooting your machine but taking a 3-5% performance hit (I think this is what the most efficient uKernels waste on address space switches), which would you choose?

    I know my answer. For embedded systems/media center type stuff I don't care about the 3-5% performance hit. I don't ever want to screw with them.

    For my computer I don't care about rebooting every 3 months or so. I want that extra little bit of speed.
  • by Miniluv ( 165290 ) on Monday May 15, 2006 @02:37PM (#15336412) Homepage
    You would be one of those uninformed pontificators Andy so eloquently railed against.

    "For small embedded environments where speed or device support isn't a main concern. Micro-kernels will excel for their stability but take a look around and that's not reality or what we have today. We have lots of different hardware, lots of different interfaces and to manage that all via objects it'll just be extremely large."

    And none of that has anything to do with monolithic versus microkernel, except perhaps tangentially. Microkernels do not ask each device driver to be a server all its own with zero code reuse; they use generic servers to wrap drivers for specific hardware while still isolating them from kernel space. This means there's no functional difference to the driver programmer from a monolithic to a microkernel architecture: either way you look at the driver interface and write the necessary code.

    "If you think the linux kernel is big the relevant code for this would be numerous times larger. It just pushes the code from the kernel into userspace and you will definitely need more code to manage and access data structure"

    Why do you suddenly need more code to do the same thing? Andy's point is that when you stop sharing data structures, and instead start passing messages from one discrete server to another through well defined interfaces, you reduce the amount of complexity (and therefore code) involved in protecting the coherency of those data structures (there's a rough sketch of the idea at the end of this comment). You will end up with more interfaces, but that's not necessarily a bad thing. I'd gladly trade all of the critical section protection logic for some nice interface logic. Especially since making the latter work reliably is a hell of a lot easier to do, and it gives each subsystem the freedom to rework their internals without requiring me to lift a finger.

    "If you can isolate your facets and only plan on supporting X number of devices/platforms/chipsets/etc and don't expect any blazing performance. Microkernels are great. Beyond that? With the rate that technology moves, it just becomes a management nightmare."

    There's still no credible evidence to suggest that microkernel performance is that horrible, especially with modern clock speeds. Aside from gaming and large scientific compute clusters, very little being done today on a computer uses any significant measure of its speed. We've already covered how you're totally off base on device support (i.e. it's orthogonal to the entire debate), and you throw "management nightmare" out there without bothering to define it, let alone defend it.

    Large unix systems are already complex as hell to manage. A lot of that complexity is "hidden" in the kernel, which while fine for desktop users is a big pain for system administrators, and would be exposed for manageability in a microkernel setup.

    As for OS X and its performance, it's not horribly slow. Especially considering that your complaint almost certainly centers around PPC performance, not x86, where it was hampered by lower clock speeds that were not counterbalanced by better IPC in any significant fashion. OS X's memory hunger has little to do with the kernel and lots to do with their operating environment, and all of the gee-whiz graphical functionality that OS X brings along with it.

    Ultimately though, OS X performance is a success story because on a 700MHz G3 with 256MB of RAM it's actually usable. Have you tried running Windows XP on a similar setup? Tried turning all of the eye candy on? Bet you didn't like the way it performed either.
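    To make the message-passing point above concrete, here is a rough, self-contained C sketch. It illustrates the idea only, and is not MINIX's real driver or IPC API: the "driver" keeps all device state private, and the "client" can only reach it through a fixed request/reply message format.

        /* Illustration only -- not MINIX's actual driver or IPC interfaces. */
        #include <stdio.h>
        #include <string.h>

        enum drv_op { DRV_READ, DRV_WRITE };

        struct drv_request { enum drv_op op; unsigned block; char data[64]; };
        struct drv_reply   { int status;                     char data[64]; };

        /* The "driver server": all device state stays private in here. */
        static char fake_disk[16][64];          /* stands in for real hardware */

        static struct drv_reply driver_handle(const struct drv_request *req)
        {
            struct drv_reply rep = { .status = 0 };
            if (req->block >= 16) { rep.status = -1; return rep; }
            switch (req->op) {
            case DRV_WRITE: memcpy(fake_disk[req->block], req->data, 64); break;
            case DRV_READ:  memcpy(rep.data, fake_disk[req->block], 64);  break;
            }
            return rep;
        }

        int main(void)
        {
            /* The "client" never touches fake_disk directly; it only sends
             * well-formed request messages and reads the replies. */
            struct drv_request w = { .op = DRV_WRITE, .block = 3 };
            strcpy(w.data, "hello from userland");
            driver_handle(&w);

            struct drv_request r = { .op = DRV_READ, .block = 3 };
            struct drv_reply rep = driver_handle(&r);
            printf("status=%d data=\"%s\"\n", rep.status, rep.data);
            return 0;
        }

    In a real microkernel the two sides would sit in separate address spaces and the call would go through kernel IPC rather than a direct function call, but the interface contract is the same, which is why the driver author barely notices the difference.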
  • by Tiro ( 19535 ) on Monday May 15, 2006 @02:40PM (#15336442) Journal
    I doubt I'll ever see anything but a monolithic kernel on my desktops.
    Do you realize that Mac OS X does not have a monolithic kernel?
  • by igb ( 28052 ) on Monday May 15, 2006 @02:43PM (#15336460)
    What do you mean by ``good performance''? Will an arbitrarily chosen microkernel run on a 3GHz Opteron as fast as, say, SunOS 3.0 on a 15MHz 68020 with 4MB of RAM? Clearly it will. And that was pretty fast at the time. What performance hit is acceptable in exchange for reliability is a difficult question, but in a lot of spaces a 90% hit would be acceptable, and I can think of applications where a 99% hit would be acceptable if the microkernel did indeed deliver the reliability that's claimed. After all, running at only 25% of the potential performance (and no-one's claiming that's the hit) is only 3 years on Moore's law (back-of-the-envelope arithmetic below). Vista's how many years late?

    More to the point, ``because it's faster'' has been the bane of Unix. To see that in stark relief, look at the shambles of NFS being in the kernel. Rather than fix the generic problems of providing a user-space nfsd, we saw a race into the kernel for a cheap my-code-only win, plus the horror of system calls that never return. Look at the vogue for in-kernel windowing systems (Suntools, for example) although X mercifully killed that off. Repeatedly we've seen massively complex and invasive kernel subsystems produced, when a generic solution to the problems that going into the kernel allegedly solves would have benefitted everyone for longer.

    You've got a problem. You decide to solve it with a kernel extension. Now you've got two problems.

    ian
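    P.S. The Moore's-law arithmetic above, spelled out (assuming the usual rough figure of one doubling every 18 months; build with -lm):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double fraction  = 0.25;                  /* running at 25% of peak     */
            double doublings = log2(1.0 / fraction);  /* factor of 4 -> 2 doublings */
            double years     = doublings * 1.5;       /* 18 months per doubling     */
            printf("catch-up time: %.1f years\n", years);   /* -> 3.0 years */
            return 0;
        }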

  • by Miniluv ( 165290 ) on Monday May 15, 2006 @02:43PM (#15336462) Homepage
    It's a chicken-and-egg situation. Until the underlying mechanisms needed for self-healing are there, we won't get self-healing systems. Until the user space code for self-healing is there, nobody thinks it's worthwhile to support self-healing mechanisms. Thankfully a few folks realize that if they build it, people will come.

    Also, your API metaphor is a little bad. While you're right about the end result, saying that this invalidates the utility of the API is wrong imho. The advantage of having the API remains, because you can always go FIX the userland code. Take away the good API and you become well and truly screwed.
  • by Miniluv ( 165290 ) on Monday May 15, 2006 @02:46PM (#15336498) Homepage
    Uhm, I'm pretty sure niche doesn't mean exceptionally widely deployed.

    QNX is everywhere, you just don't realize it. ATMs run it, lots of medical equipment runs it, lots of other embedded apps that you don't even think of run it.

    The examples Andy cites prove that in fact the microkernel concept has won in every single field where stability has gone beyond being something people wanted to something they demand. As soon as the general public realizes computers don't HAVE to crash, they'll win there too.
  • Re:Plug central (Score:5, Insightful)

    by podperson ( 592944 ) on Monday May 15, 2006 @02:48PM (#15336518) Homepage
    You can't seriously believe that running MINIX is going to magically give you expertise that lets you talk about operating system kernel design.

    It's apparent from this thread that one needs no expertise whatsoever to talk about operating system kernel design, so running MINIX should if anything overqualify you.
  • by slashdotmsiriv ( 922939 ) on Monday May 15, 2006 @03:03PM (#15336636)
    "His book on computer organization was one of the worst pieces of crap I've ever been forced to drop cash on. He frequently goes off on tangents and spends more time trying to be clever than clearly explaining the material. I fucking hate Tanenbaum, even though I like micro-kernels."

    Allow me to disagree with you: the worst piece of crap was his book on computer networks. Just a bunch of meaningless bundles of sentences that had nothing to do with systems design principles. Good thing Peterson's and Kurose's books are now pushing that crap out of university classes and into oblivion.

    And no, Tanenbaum is not among the elite of systems researchers, not even close. Just check his publication record.
  • Nobody watches the third one.
  • by Miniluv ( 165290 ) on Monday May 15, 2006 @03:29PM (#15336847) Homepage
    "It certainly can do. A bolt which is used on buses around the world is in a niche compared to, say, windscreen glass."

    While I see your point, and agree to an extent, it's a poor metaphor (windscreen glass is a pretty niche application of glass, wouldn't you say?).

    My point was to refute the implied "QNX isn't anywhere important" statement rather than the exact meaning of niche.

    "But MY computer never crashes (Linux); so what else has it to offer? Security? Got that too."
    That's wonderful, but my data center full of Linux boxes does crash. Usually because of bad device drivers. As for security, while Linux is certainly secure in many respects, it lacks the top-to-bottom security-centric design that is much easier with a microkernel.

    "I was under the impression that QNX's real killer feature was its real-time abilities. Isn't that a niche feature? How many people would notice the effect of going from current generation Windows and Linux to a hard-real-time version?"

    That is, but this isn't about QNX vs Win/Lin, but instead micro versus monolithic. In this respect people wouldn't notice the difference either, until they sat back and thought about how often they reboot now compared to before. It's the best sort of upgrade though, the sort you never notice.
  • by microbee ( 682094 ) on Monday May 15, 2006 @04:14PM (#15337289)
    Before we get into arguments or understanding arguments, two most important things to note:
    - AST is a professor. His interest is in doing research and building the best systems for the *future* that he believes in.
    - Linus is an engineer. His interest is in building a system that works best *today*.

    We simply need both. Without the pioneering work done before in other OSes (including the failures), Linux wouldn't be what it is today. The greatest reason for its success is not that it's doing something cool, but that it's doing things that are proven to work.

    So who is right? I'd say both. Linus said this back in 1992: "Linux wins heavily on points of being available now."

    Linus admits microkernels are "cooler", but he didn't (and doesn't) believe in it *today* because none of the available microkernels could compete with Linux as a *general purpose* OS. It's funny how AST listed "Hurd" as one of the microkernels - it totally defeats his own arguments. The fact is that Hurd is still not available today despite being started before Linux.

    Many people talk about QNX. Sure, in many cases (especially mission-critical RTOS work, where reliability is so much more important than performance and usability) microkernels are better, but we really shouldn't compare a general-purpose OS with a real-time or special-purpose OS.

    So we go back to the old way: code talks. So far microkernel proponents keep saying "it's possible to do microkernel fast, etc" but the fact is they have never had an OS that could replace Linux and other popular OS that everybody could run on their desktop with enough functionality. There are two possible reasons:
    1. Lack of developers. But why? Do people tend to contribute to Linux because Linus is more handsome (than Richard Stallman, that is)? There gotta be some reasons behind it other than opportunities, right?
    2. Monolithic kernels are actually more engineerable than microkernels, at least for today.

    Maybe 2 is actually the real reason?

    Think about it.
  • by LWATCDR ( 28044 ) on Monday May 15, 2006 @04:17PM (#15337331) Homepage Journal
    "Linus 'won' this over a decade ago, so STFU!"
    Hey Linus did win this. He was right and NOTHING has changed in the last ten years!
    Computers are not that much faster than they were back then, and the need for security is no different than it was then!
    Yes I am so kidding. Linus won this because at the time his goal was to get out a Unix clone that ran on the 386 as quickly as possible. Doctor Tanenbaum on the other hand was interested in a Unix clone that would run on cheap hardware and that made a very good learning tool. For his goal Minix was the better system.
    Now we live in world of Gigs. It is common to have many gigs of hard drive space, at least a gigabyte of ram, and multigigahertz multi-core cpus. Not to mention that even the cheap built in graphics chip sets would blow the doors off of any video card you could get in 1995.
    For all but the biggest FPS gaming freak our computers are fast enough. What we want now is reliability, security, and ease of use. I use Linux every day. I depend on Linux. What I will not do is give up hope on something better than what we are using today. New ideas should be explored.

    I am also a little bit disappointed by how little respect Doctor Tanenbaum has gotten on Slashdot. Linus compiled the first versions of Linux using GCC running under Minix. I am pretty sure that Linus read Doctor Tanenbaum's book and probably learned a lot about how to write an OS from it. When it comes to computer science, Tanenbaum's name is right up there with Wirth and Knuth. Of course the odds that any of the people who use STFU in a post have ever read Knuth, Wirth, or Tanenbaum are probably not worth measuring.
    Even if you are not convinced that Tanenbaum's methods are correct, his goals of a super reliable, self-healing, and secure OS are correct.
  • by monkeyGrease ( 806424 ) on Monday May 15, 2006 @04:21PM (#15337376)
    > Why can't we just all get along?

    Have you read the article? Tanenbaum basically starts out by saying this is not a 'fight', but a technical discussion. Communication and debate are an important part of research and development. That's what is being attempted here, at least on its face, by Tanenbaum. There may be antagonism behind the scenes, or bias in presentation, but that is just human. The primary intent is to advance the state of the art, not fight.

    All this 'what's the point' or 'we have this now' type of talk really bugs me. Everything can always be improved, or at least that is the attitude I'd like to stick with.

    > When did we collectively forget that everything has its place

    Another key component of research and development is to question everything. Not throw everything away and always start over, but to at least question it. Just because monolithic kernels rule the desktop now does not prove that monolithic kernels are inherently the best desktop solution.

    In effect it is sometimes good to not even recognize a notion of 'everything has its place'.
  • by metamatic ( 202216 ) on Monday May 15, 2006 @04:57PM (#15337738) Homepage Journal
    Okay, I spent 2 years working as an engineer in the OSF's Research Institute developing Mach 3.0 from 1991. Let me answer Linus's question in a simple fashion. What Mach 3.0 bought you over Mach 2.5 or Mach 2.0 was a 12% performance hit as every call to the OS had to make a User Space -> Kernel -> User Space hit. This was true on x86, Moto and any other processor architecture available to us at the time. Not one of our customers found this an acceptable price to pay and I very much doubt they would today.

    Really? I think that since typical desktop CPUs these days are 100x faster, and your performance penalty is therefore 100x smaller, the situation might be a bit different now.

    I mean, people run Firefox, even though it's easily 15% slower than Opera. They run OpenOffice on Windows, even though it's slower than Microsoft Office. They run ext3, even though it's 15% slower than ReiserFS.

    Basically, a 15% performance hit is nothing on a modern system if it gains you stability, security and functionality.
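    If you want to put a number on what one User Space -> Kernel -> User Space crossing costs today, a crude Linux-only microbenchmark along these lines gives a ballpark. Exact figures vary a lot with CPU, kernel version and mitigations, so treat it as an order-of-magnitude check rather than a claim about any particular microkernel:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void)
        {
            const long iters = 1000000;
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < iters; i++)
                syscall(SYS_getpid);            /* force a genuine kernel entry */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                      + (double)(t1.tv_nsec - t0.tv_nsec);
            printf("~%.0f ns per syscall round trip\n", ns / iters);
            return 0;
        }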

  • by kv9 ( 697238 ) on Monday May 15, 2006 @06:07PM (#15338403) Homepage
    ...until someone makes a microkernel unix system that's more than just a proof of concept.

    you mean like Tanenbaum [minix3.org] (slashdotted, try later) did?

    FTFA [cs.vu.nl]:

    So **PLEASE** no more comments like "If Tanenbaum thinks microkernels are so great, why doesn't he write an OS based on one?" He did.

    i don't really know what you mean by proof of concept.

    again, FTFA:

    It is definitely not as complete or mature as Linux or BSD yet, but it clearly demonstrates that implementing a reliable, self-healing, multiserver UNIX clone in user space based on a small, easy-to-understand microkernel is doable. Don't confuse lack of maturity (we've only been at it for a bit over a year with three people) with issues relating to microkernels.

    i know this is slashdot, and RTFA is some kind of mortal sin, but please at least try.

  • by m874t232 ( 973431 ) on Monday May 15, 2006 @07:07PM (#15338762)
    -- Whether your commute is long or short is largely unrelated to whether you choose to drive a gas guzzler or fuel miser.
    -- European settlement is anything but uniform; I suggest you have a look at a map, or at least a night-time satellite photo.
    -- Except for maybe Iceland, individual European nations can't change to alternative fuels by themselves--Europe is far more integrated than you seem to think.
    -- You're confusing cause and effect; it's not that US settlement patterns require cheap gasoline, it's that cheap gasoline and bad public policy caused current US settlement patterns some time in the 1950's and 1960's. This process can be reversed.

    The US simply chooses to waste gasoline for a variety of political and ideological reasons. If the US wanted to, it could move back to an energy-efficient transportation infrastructure comparable to, or better than, Europe within a few decades.
  • Let's see... (Score:3, Insightful)

    by argent ( 18001 ) <peter@slashdot.2 ... m ['.ta' in gap]> on Monday May 15, 2006 @07:18PM (#15338832) Homepage Journal
    On the microkernel side we have Minix 3, 15 years after the first not-really-open-source-but-code-available microkernel UNIX systems.

    On the monolithic kernel side we have ... what? 15 years after the first not-really-open-source-but-code-available monolithic kernel UNIX systems we had... hmmm... things like MiNT, and bits of BSD, but even Net/1 was a few years in the future and Minix wasn't even out.

    I think, after you allow for the 20 year head start, microkernels aren't doing that badly.
  • by Pseudonym ( 62607 ) on Tuesday May 16, 2006 @01:11AM (#15340220)
    Linux has built a system, it works and it's used everywhere. Microkernels are all niche [...]

    One of Tanenbaum's central points is that Linux is not used everywhere. In particular, it's not used anywhere that hard-real-timeness, seriously paranoid robustness (e.g. in those applications where a hardware failure should not result in a reboot) etc are important.

    The word "niche" is, much like "legacy", often used in places where a more overt dismissal would rightly be seen as unfair. The fact that Linux can do everything that you want it to do doesn't mean that you are the whole world, or that any site that doesn't look like yours is "niche", as opposed to "everywhere".

  • by ajs318 ( 655362 ) <sd_resp2@earthsh ... .co.uk minus bsd> on Tuesday May 16, 2006 @08:51AM (#15341491)
    There is a simple reason why microkernels do not work in practice: the abstraction layer is in the wrong place.

    <simplification>A hardware driver doing output has to take raw bytes from a process, which is treating the device as though it were an ideal device; and pass them, usually together with a lot more information, to the actual device. A driver doing input has to supply instructions to and read raw data from the device, distil down the data and output it as though it came from an ideal device.</simplification>

    In general, the data pathway between the driver and the process {which we'll call the software-side} is less heavily used than the data pathway between the driver and the device {which we'll call the hardware-side}.

    <simplification>In a conventional monolithic kernel {classic BSD}, a hybrid kernel {Windows NT} or a modular kernel {Linux or Netware}, device drivers exist entirely in kernel space. The device driver process communicates with the userland process which wants to talk to the device and with the device itself. All the required munging is done within the kernel process. In a microkernel architecture, device drivers exist mainly in user space {though there is necessarily a kernel component, since userland processes are not allowed to talk to devices directly}. The device driver process communicates with the ordinary userland process which wants to talk to the device, and a much simpler kernel space process which just puts raw data and commands, fed to it by the user space driver, on the appropriate bus.</simplification>

    Ignore for a moment the fact that under a microkernel, some process pretending to be a user space device driver could effectively access hardware almost directly, as though it were a kernel space process. What's more relevant is that in a microkernel architecture, the heavily-used hardware-side path crosses the boundary between user space and kernel space.

    And it gets worse.

    <simplification>In a modular kernel, a device driver module has to be loaded the first time some process wants to talk to the device {anyone remember the way Betamax VCRs used to leave the tape in the cassette till the first time the user pressed PLAY? If not, forget the analogy}, which obviously takes some time. The software-side communications channel is established, which takes some time. Then communication takes place. The driver stays loaded until the user wants it removed. Then the communication channel is filled in and the memory used by the module is freed, which obviously takes some time.

    In a microkernel architecture, a user space device driver has to be loaded every time some process wants to talk to the device. The software and hardware side communications channels have to be established, which take some time. Then communication begins in earnest. When that particular process has finished with the device, both channels are filled, and the memory used by the driver is freed; which takes time. Between this hardware access and the next, another process may have taken over the space freed up by the driver, which means that reloading the user space driver will take time.</simplification>

    It makes good practical sense to put fences in the place where the smallest amount of data passes through them, because the overheads involved in talking over a fence do add up {a toy tally of the crossings follows at the end of this post}. That, however, may not necessarily be the most "beautiful" arrangement, if your idea of beauty is to keep as little as possible on one side of the fence. It also makes sense for device drivers which are going to be used several times to stay in memory, not be continuously loaded and unloaded. {Admittedly, that's really a memory management issue, but no known memory manager can predict the future.}

    Ultimately it's just a question of high heels vs. hiking boots.
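    {As promised above, here is the toy tally of user/kernel boundary crossings for one logical read made up of many hardware-side transfers. The numbers are invented purely for illustration, and real microkernels blunt the hardware-side cost with tricks such as mapping device memory straight into the driver; only the shape of the comparison matters.}

        #include <stdio.h>

        int main(void)
        {
            long hw_transfers = 1000;   /* hardware-side traffic for one request  */
            long sw_messages  = 2;      /* software-side traffic: request + reply */

            /* Monolithic/modular kernel: the driver lives in kernel space, so
             * only the lightly-used software-side path crosses the boundary. */
            long monolithic = sw_messages;

            /* Naive microkernel model: the driver lives in user space, so every
             * hardware-side transfer also crosses into the kernel-space stub. */
            long microkernel = sw_messages + hw_transfers;

            printf("monolithic : %ld crossings\n", monolithic);
            printf("microkernel: %ld crossings\n", microkernel);
            return 0;
        }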
