Torvalds on the Microkernel Debate 607
diegocgteleline.es writes "Linus Torvalds has chimed in on the recently flamed-up (again) micro vs. monolithic kernel debate, but this time with an interesting and unexpected point of view. From the article: 'The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or, put in slightly different terms: can the callee look at and change the caller's state as if it were its own (and the other way around)?'"
Code talks (Score:5, Insightful)
Linus is a pragmatist. He didn't write Linux for academic purpose. He wanted it to work.
But you can always prove him wrong by showing him the code, and I bet he'd be glad to accept he was wrong.
comments i liked (Score:3, Insightful)
Re:Code talks (Score:5, Insightful)
Small, fast, real-time. http://en.wikipedia.org/wiki/QNX [wikipedia.org]
Microkernels and the future of hardware (Score:4, Insightful)
By 2010 I suspect at least desktops will be 4-CPU systems, and as the number of cores increases, one of the large drawbacks of microkernels rears its ugly head: microkernels turn simple locking algorithms into distributed-computing-style algorithms.
Every game developer tells us how difficult it is to write multi-threaded code even for our monolithic operating systems (Windows, Linux, OS X). In a microkernel you constantly have to worry about how to share data with other threads, since you can't trust them to hand you even correct pointers! If you did trust them explicitly, then a single failure in any driver or module would bring down the whole system - just like in monolithic kernels, but with a performance penalty that scales nicely with the number of cores. What's even worse is that in a multi-core environment you have to be very, very careful when designing and implementing the distribution algorithms, or a simple user-space program could easily crash the system or gain superuser privileges.
It's not really a microkernel. (Score:4, Insightful)
Mac OS X shares its address space with the monolithic BSD kernel. So a semi-microkernel and a monolithic kernel are firmly bolted together. That's only a microkernel if your degree is in marketing.
Re:Obvious (Score:5, Insightful)
You are forgiven for being wrong, but not for spouting off nonsense despite knowing that you don't know what you're talking about, apparently applying the principle "if my argument involves M$ doing the wrong thing, it must be right".
While neither NT nor Mac OS X is a true microkernel, the architecture of both is strongly inspired by microkernel ideas. Like Linus, the developers of these kernels recognized the practical difficulties involved in making full-on microkernels work, but unlike Linus, instead of throwing in the towel completely and doing full-on monolithic kernels, they created cleanly separated layers interacting via well-defined interfaces wherever they practically could.
If you talk to kernel programmers, most will express a high degree of respect for the NT kernel, which is based on the DEC VMS kernel. It is mostly the poor design of the systems that sit on top of the kernel that has earned Windows its reputation.
Re:Code talks (Score:4, Insightful)
The whole discussion of Windows vs anything else is totally pointless. All popular OSes are Windows.
Linus is a pragmatist. He didn't write Linux for academic purpose. He wanted it to work.
That's true, and that's a good point. However, it's much easier to start a project if you already have some good people, even if the code is entirely from scratch. Therefore, making the point in a place like the kernel development lists is a good idea, because that's a good place to recruit people.
Certainly in the case of OSes, there really isn't much of an opportunity for something like Linux to emerge from one person's efforts. As far as I can tell, Linux originally worked because enough people were interested in helping him early on in a hobby, doing things like sending him an actual copy of the POSIX specs, and he was mostly able to get it to where it actually duplicated Minix's functionality, and exceeded it in some cases.
In fact, there was such a shortage of good OSes for this machine that really, Linux succeeded because it wasn't Minix and wasn't DOS. In fact, one has to wonder -- could an open Minix have done what Linux did? It's possible that, given all the programmers who eventually decided to work on Linux, the problems with microkernels could've been solved. Similarly, if all the programmers working on Linux suddenly decided to do a microkernel, it would succeed and it would replace Linux.
But, that isn't going to happen. Not all at once, and probably not ever, unless it can be done incrementally.
in other news (Score:4, Insightful)
Entire comment (Score:5, Insightful)
Name: Linus Torvalds (torvalds AT osdl.org) 5/9/06
___________________
_Arthur (Arthur_ AT sympatico.ca) on 5/9/06 wrote:
I found that distinction between microkernels and "monolithic" kernels useful: with microkernels, when you call a system service, a "message" is generated to be handled by the kernel *task*, to be dispatched to the proper handler (task). There are likely to be at least two levels of task-switching (and ring-level switching) in a microkernel call.
___________________
I don't think you should focus on implementation details.
For example, the task-switching could be basically hidden by hardware, and a "ukernel task switch" is not necessarily the same as a traditional task switch, because you may have things - hardware or software conventions - that basically might turn it into something that acts more like a normal subroutine call.
To make a stupid analogy: a function call is certainly "more expensive" than a straight jump (because the function call implies the setup for returning, and the return itself). But you can optimize certain function calls into plain jumps - and it's such a common optimization that it has a name of its own ("tailcall conversion").
In a similar manner, those task switches for the system call have very specific semantics, so it's possible to do them as less than "real" task-switches.
So I wouldn't focus on them, since they aren't necessarily even the biggest performance problem of an ukernel.
The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or, put in slightly different terms: can the callee look at and change the caller's state as if it were its own (and the other way around)?
Even for a monolithic kernel, the answer is a very emphatic no when you cross from user space into kernel space. Obviously the user space program cannot change kernel state, but it is equally true that the kernel cannot just consider user space to be equivalent to its own data structures (it might use the exact same physical instructions, but it cannot trust the user pointers, which means that in practice, they are totally different things from kernel pointers).
That's another example of where "implementation" doesn't much matter, this time in the reverse sense. When a kernel accesses user space, the actual implementation of that - depending on hw concepts and implementation - may be exactly the same as when it accesses its own data structures: a normal "load" or "store". But despite that identical low-level implementation, there are high-level issues that radically differ.
And that separation of "access space" is a really big deal. I say "access space", because it really is something conceptually different from "address space". The two parts may even "share" the address space (in a monolithic kernel they normally do), and that has huge advantages (no TLB issues etc), but there are issues that means that you end up having protection differences or simply semantic differences between the accesses.
(Where one common example of "semantic" difference might be that one "access space" might take a page fault, while another one is guaranteed to be pinned down - this has some really huge issues for locking around the access, and for dead-lock avoidance etc etc).
So in a traditional kernel, you usually would share the address space, but you'd have protection issues and some semantic differences that mean that the kernel and user space can't access each other freely. And that makes for some really big issues, but a traditional kernel very much tries to minimize them. And most importantly, a traditional kernel shares the access space across all the basic system calls, so that user/kernel difference is the only access space boundary.
Now, the real problem with split access spaces...
Re:Distributed not that hard. (Score:3, Insightful)
Sort of. In the scenario you describe, the program was, in fact, not proved to be correct, because the people who did the proof failed to take into account the real requirements for the system.
If you don't even know your requirements, no methodology to implement those requirements is going to work reliably.
Re:Linus Quote - "not arguing against it at all" (Score:3, Insightful)
Here, it seems, the means justify the ends. Linus basically says "I won't take any challenges".
Linus tells me that we can never write a proper scalable OS for a NUMA machine, or a modular system that can serve well to parallel I/O systems and the like. I highly disagree.
Because these things are not pipe dreams, they have been done. IBM guys have made amazingly abstract and modular OS stuff and they've been using them for years, so I think it's rather pathetic to say that there is only one true path to OS implementation. Why not admit that it is the only path that you have any experience in?
Re:Code talks (Score:5, Insightful)
One of the biggest problems I continually have with technical people (whether that's computer techs or engineers) is that they tend to overemphasize the syntax and semantics of what people say. They tend to latch on to a specific phrase and then rip it apart rather than taking the meaning of the whole (which is the important part) and finding problems in the whole. Most particularly, they tend to find it incomprehensible that a single phrase might have multiple meanings.
Part of this is doubtless due to exposure to highly precise technical jargon, but it is inappropriate to apply the strictness of meaning inherent to, say, Python, to everyday language. Even in a technical debate.
A hybrid kernel, in the simplest terms, is a combination of two discrete other kernel types. Plain English tells you that. It makes no sense to try to wrestle with whether WinNT is a monolithic kernel or a microkernel. It's a semantic debate that serves only to label the object; it doesn't describe it or aid in understanding it. If you say WinNT is a microkernel, you have to ignore the non-essential code obviously running in kernel mode, and that doesn't help understanding. If you say WinNT is a monolithic kernel, you have to ignore the userland processes that are really system services. Again, that's no aid to understanding.
Stop complaining about the language and forcing labels on things. Labeling is not understanding.
Re:Distributed not that hard. (Score:4, Insightful)
You're right, it *doesn't* say anything about the code efficiency, or the runtime per se
The microkernel may be more elegant, more pristine in the lab
I'm sorry, I'm with Linus on this one.
Provably correct doesn't mean "good"
Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel. Even going back to Parnas' notation that we used years ago, and thinking about the structure of the Linux kernel, it would be pretty easy to go through the exercise and prove it correct/incorrect
Re:Linus Quote - "not arguing against it at all" (Score:5, Insightful)
The analogy of centralisation vs. local autonomy is not totally accurate either. Both the monolithic and the microkernel are centralized, except that in the first case there is a large bureaucratic structure and in the second case it is just a dictator and a couple of "advisors". If the dictator or the king is chosen well, the system will be more predictable and will work much better. In the case of the large bureaucratic system, if some of its members get corrupted [and they will, because there are so many of them] the whole system will fail. It is like saying that a small bug in the mouse driver will freeze and crash the system with a monolithic kernel. Good thing if the system was only running Doom at the time and not controlling a reactor, or administering a drug. If the same happens in a microkernel system, the kernel will reload the driver, raise an alarm, or in general be able to take the system to a predictable, predetermined state. Going back to the analogy, it is like having the dictator execute a corrupted staff member and replace him immediately.
The real point: keep OS designers honest (Score:5, Insightful)
Andy likes microkernels because they force you to do that. Time spent on design leads to insight, which may well point to better and cleaner ways to do the task you originally set out to accomplish.
Linus hates microkernels because they force you to do that. Time spent on design is time lost getting working code out the door, and working code will give you experience that will point to better and cleaner ways to do the task you originally set out to accomplish.
Re:Microkernels and the future of hardware (Score:3, Insightful)
The early days of Fortran were before the '70s. Given the extremely tight RAM constraints, you'd probably have to implement a non-recursive iterative form, which is FAR more complex. And this is Fortran we're talking about, not known for being the cutest language out there - and if we're referring to pre-Fortran 66, then your only branching construct is the three-way arithmetic IF statement. Now given that, and considering your only method of debugging is taking a heap dump and looking through punch cards... I'd say yeah, it probably was too hard to implement.
It's pretty easy to say that your typical "oops, I forgot a semicolon, so I'll recompile" CS student, with pretty much no RAM constraints for the problem, can do it as a homework project. Have him do the iterative form with nothing but IF and GOTO, with each mistake meaning a trip through a core dump and a wait not just until the problem is solved, but until he or she can schedule a time to run the program again? I think that's beyond most CS students.
Re:Linus Quote - "not arguing against it at all" (Score:4, Insightful)
This is very true.
Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately.
OK, here's where things start getting a little tricky. The whole locking setup in a monolithic kernel is pretty tricky. Early multi-processor kernels often took the course of "one big lock" at the top of the call stack - essentially, only one process could be executing in the kernel. Why? Because all that "proper locking" is tricky. It took years to get this working right. Of course it's done now in Linux, so you can take advantage of it, but it wasn't easy.
Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer.
OK, now here you're kind of running off the rails. What is a "message"? There is no magical processor construct called a "message" - it's something that the OS provides. How messages are implemented can vary quite a bit. What you're thinking of is a messaging system a la sockets - that is, the message would be placed onto a queue, a process switch would happen at some point, and the server on the other end would read messages out of the queue and do something. That's how microkernels are usually presented conceptually, so it tends to get stuck in people's heads.
However, messages can be implemented in other ways. For example, you could make a message be more like a procedure call - you create a new stack, swap your address table around, and then jump into the function in the "server". No need to instantiate threads in the "server" anymore than there is a need to instantiate threads within a monolithic kernel. The server would essentially share the thread of the caller. I've worked on microkernel architectures that were implemented just this way.
If the number of data structures that you can directly access is smaller, the amount of locking that you have to take into account is smaller. Modularity and protection makes most people's tasks easier.
Many of the arguments made for monolithic kernels are similar to the arguments you used to hear from Mac programmers who didn't want to admit that protected memory and multi-tasking were good things. Mac programmers liked to (as I used to say) "look in each other's underwear". Programs rummaged about through system data structures and, sometimes, other apps' data structures, changing things where they felt like it. This can be pretty fun sometimes and you can do some really spiffy things. However, set one byte the wrong way and the whole system comes crashing down.
Re:Code talks (Score:3, Insightful)
Windows even had GDI in user space before, and later moved into the kernel for performance reasons, and GDI in user space didn't provide more stability
You have to be joking. It was a massive step backwards in stability. NT 3.51 was rock solid, NT 4 was far more flaky.
Not only that, but with GDI in kernel all sorts of resource limits came in that just weren't there before. Writing heavy graphics was much more of a pain - no matter how careful you were with GDI, other programs could consume limited kernel resources and your graphics calls would start to randomly fail.
Image sizes (bitmaps) that you could allocate were also constrained by kernel limits. Not in a way you could handle nicely in the app either - CreateCompatibleBitmap() simply fails once the kernel memory is maxed out, something you can't predict. On NT 3.51 you just added more memory to your system.
We had people still on 3.51 even in (I think) 2000, because NT4 wouldn't let them edit images as large as they needed - eventually the whole application graphics engine had to be re-architected to work with the gdi kernel limits.
Re:Distributed not that hard. (Score:5, Insightful)
Umm, doesn't that mean that while you've proved the 10k microkernel lines correct, you'd still have ~6 million lines of code sitting outside the microkernel waiting to be proved? I can't see how a microkernel can magically do with 10k lines everything Linux is doing with 6 million lines (especially since, by the definition of a microkernel, there's no way it could).
Re:Distributed not that hard. (Score:5, Insightful)
Java and microkernels (Score:4, Insightful)
That's interesting because those are exactly my thoughts every time I hear the arguments people use to defend microkernels: Java is to microkernels as C/C++ is to monolithic kernels.
Linus Torvalds summed it well when he mentioned that microkernels are simpler only when data flow goes in one direction only. It's very hard to get a function to fill a complicated data structure for you if you cannot work with pointers. Passing a reference will do only for simple structures, it will not work if there are structures within structures, it is very hard to do if the called function must itself pass some subset of that structure to another function. And for operating systems where one must contend with multiple access and locking, it's almost impossible to do without a performance penalty.
Let's face it: pointer manipulation is necessary because there are real life problems that are more complex than textbook examples. If there weren't, inventing the C language wouldn't have been necessary, we could have stuck with Fortran all along.
Re:Linus Quote - "not arguing against it at all" (Score:4, Insightful)
Well maybe that's how *you* would design *your* Microkernel. And yes, it would suck.
The way I would design the filesystem driver, would be to accept a request, add it to a queue of pending requests to serve. If there are no initiated requests, find the request that can most efficiently be served based upon your preferred policy (closest seek time, for example, or first come first serve, your choice), and initiate that request. Add some smarts for multiple devices, so multiple requests can be initiated at the same time to different devices. When data comes back, answer the requesting process with their data. Rather than sitting around blocking on a request, go grab more requests from other processes and queue them up. No need to block. When an initiated request comes back, send back the data to the requesting process, and everyone's happy. Just because things are separated out into different processes, doesn't mean that they can't do some asynchronous juggling to be efficient. Add multi-threading, and the coding becomes a bit easier; but multi-threading isn't necessary to rely upon to have this work well.
I'm pretty sure the monolithic kernels do things somewhat similarly: build a request queue, service that queue. They could also block until other requests are done, but that would be bad design. Don't assume a microkernel filesystem server has to suffer from similarly bad design.
Re:Code talks (Score:1, Insightful)
Humpty Dumpty smiled contemptuously. `Of course you don't -- till I tell you. I meant "there's a nice knock-down argument for you!"'
`But "glory" doesn't mean "a nice knock-down argument,"' Alice objected.
`When I use a word,' Humpty Dumpty said in rather a scornful tone, `it means just what I choose it to mean -- neither more nor less.'
`The question is,' said Alice, `whether you CAN make words mean so many different things.'
`The question is,' said Humpty Dumpty, `which is to be master -- that's all.'
Re:Linus Quote - "not arguing against it at all" (Score:1, Insightful)
When there are hundreds of device drivers, each interacting with the others, and a lot of the hardware out there has its own set of flaws and sometimes produces unpredictable behavior, the best bet is to write code that is "as correct as you know it", and then test, test, and test.
Of course, on niche systems where the hardware can be controlled, proving it may be possible, but if we're talking about mainstream OSes like Linux, the approach just wouldn't work.
Re:Microkernels and the future of hardware (Score:1, Insightful)
Proving correctness & why it doesn't work (Score:3, Insightful)
The problem is that it doesn't match the way most people work right now.
Check out this brilliant paper by Alistair Cockburn (spoken as Co-burn) - Characterizing People as Non-Linear, First-Order Components in Software Development [cockburn.us]. He makes the same point over and over in this paper.
If you've ever worked on a software project with more than four people, didn't the personality and skills of the people involved make more of a difference than any methodology, abstraction, or even the language used? That's always been true in my experience.
Microkernels can still be optimized (Score:3, Insightful)
Agreed, but these are the same people who don't really care if they're at the bleeding edge of technology and have to deal with super-l33t video driver 0.8.2b crashing every so often.
It is just a false premise: just because some monolithic (or hybrid) kernels are unreliable does not mean that it is necessary, or better, to use microkernels to get reliability.
I'm afraid the false assumption has fallen upon you here. It's proven, time and again, that microkernels make it simpler to guarantee security and reliability. Yes, microkernels hurt performance a bit. However, it's throwing the baby out with the bathwater to discard the microkernel architecture because of performance. By discarding the microkernel architecture, you also discard the architectural segregation of system services, which makes for a very simple way to segregate effort when you've got, say, an open-source project with developers located around the globe. Microkernels also ease maintenance, again because the kernel remains small.
A HUGE misstep that the monolithic kernel camp has made by pointing fingers at microkernels' performance is this: smart coders write good, clean, testable, reliable, secure programs first, and optimize later. One should never, EVER attempt to optimize code on the first pass. Write it, test it, fix it, THEN optimize it. Leave the clearly non-optimal code in during the first few passes, just for the sake of maintenance. Then, you'll have a good, clean codebase from which to begin profiling and optimizing.
I can personally testify to spending exorbitant amounts of time debugging architecturally unsound code, written that way intentionally because a sound architectural decision would've cost some performance. Out of curiosity, I usually run profilers over this code. Most often, these architectural decisions yield such a negligible performance increase that it pales in comparison to the maintenance nightmare that ensues.
There's nothing magical about microkernels that prevents them from being optimized, just like any other program. The benefits of the microkernel architecture certainly outweigh the performance hit in my book.
I recommend sodium chloride (5mg) if you disagree with any of the above.
Multiple-processes: micro vs monolithic (Score:3, Insightful)
Re:The real point: keep OS designers honest (Score:3, Insightful)
And working code will, in most cases, prevent you from ever revisiting the design.
Continuation Passing Style (Score:3, Insightful)
There are some ways [haskell.org] to convert normal function calls to CPS.
And there is something called monads [wikipedia.org] used to convert imperative algorithms to functional style.
And yes, continuations [mit.edu] can be a very powerful technique.
However, CPS functional code is still coding an algorithm. Any way to compute something is an algorithm. Maybe you should rename your critique "I dislike imperative algorithms, and I like CPS functional algorithms."
My take (Score:3, Insightful)
IBM is shipping it. Novell, RedHat, WindRiver, LinuxWorks, Motorola, Sharp, Sony, Hell even I'm shipping it in embedded products. It is easy to "prove it works" as alluded to in another post.
Microkernels are also shipping from QNX and, uh, and... oh, I'm sure there are a few more. (Not knocking QNX; I considered it but tossed it due to cost and licensing.)
Whether one is more secure or stable than another is really the wrong question.
The question is really whether the "system" designed with a microkernel is more or less stable, secure, or functional than the alternative.
I think it has, to my satisfaction, been settled. In Revolution OS, the movie (buy it!), Stallman is asked why HURD is so far behind Linux. His answer (paraphrased, sorry RMS): it turns out a microkernel is very difficult to pull off because of the constant stream of messages required for the simplest of tasks. This forced overhead only makes the kernel more secure, not the system; if the "drivers" keep crashing out and restarting, you could go months without noticing critical flaws. "But the kernel is rock solid" doesn't really help if I can't ship the "system", does it? The only evidence you need is the development pace of Hurd, or even QNX.
I respect the professor and his work, but it served as inspiration for a much more scalable design that is clearly superior for the rapid development a modern OS is expected to sustain.
As an engineer I see the beauty, but as a Production Engineer I can also see the added complexity a microkernel brings.
Of course, you could argue theoretically that I'm wrong, or prove it by making a GNU/Minix distribution to compete in the real world with Linux. Almost 15 years and a flood of students haven't helped Professor T produce it yet. Admittedly that's not his goal, but come on - I know students, and CompSci students have a knack for carrying their favorite teachers and classes with them throughout their careers, and it shows up in their projects.
Re:Obvious (Score:3, Insightful)
So, did VMS have a graphics subsystem in the kernel as well?
No. In NT 3.1, the graphics subsystem ran in user space. In NT 4.0, it was moved into kernel mode to avoid the performance hit of the context switch. As this history suggests, the actual architectures of the executive and the graphics subsystem are not tightly coupled. They share an address space for performance reasons, not in order to share state. (To be clear, I don't think this is a great thing, and it is a violation of microkernel principles. But the programmers made the smallest possible departure from microkernel principles to achieve their performance requirement.)
I find it highly amusing that many of the same people who defend the monolithic Linux kernel architecture that tightly couples so many subsystems to the kernel also attack Windows for running the graphics subsystem in kernel mode.
Re:Microkernels can still be optimized (Score:3, Insightful)
It is not proven at all, and the top kernel programmers in the world agree - if they didn't, this discussion wouldn't be occurring. If you mean that the code is simpler to audit because there is less of it, you are mistaken. A microkernel architecture only involves less code in the kernel; if you consider all the code needed to provide the functionality of the macrokernel it is replacing, then a microkernel involves MORE code and greater complexity. Therefore it is MORE DIFFICULT to audit.
"By discarding microkernel architecture, you also discard the architectural segregation of system services"
Of course you want to segregate services logically for maintenance. That is already done with macrokernels. There is no reason to run those services in a separate memory space just to get a logical separation. A logical separation is all that is needed for distributing maintenance.
"A HUGE misstep that the monolithic kernel camp has made by pointing fingers at microkernels' performance is this: smart coders write good, clean, testable, reliable, secure programs first, and optimize later."
False. You pick an overall design that satisfies your primary design concerns first. The rest of your statement is true about IMPLEMENTING that design. Not designing a memory manager for performance in the first place is a preposterous concept: you pick the fastest-performing design, implement that design in a stable and reliable manner, and then go back and optimize individual parts of the code. You do not pick a slow-as-shit but ultra-reliable design, implement it in the most reliable way possible, and then go back and try to optimize the poor design.
"Most often, these architectural decisions result in such a negligible performance increase that it pales in comparison to the maintenance nightmare that ensues from it."
We are talking about the kernel, not a text editor. Performance is critical and there is no such thing as a negligible performance gain. Although I would hardly call either macro or micro kernels unsound designs.
"There's nothing magical about microkernels that prevents them from being optimized, just like any other program. The benefits of the microkernel architecture certainly outweigh the performance hit in my book."
The benefits of a microkernel architecture remain theoretical. The performance hit is not. Andy himself has previously estimated a 'negligible' overall performance loss of about 30% for an optimized microkernel versus an optimized macrokernel. I am hardly prepared to trade 30% performance for theoretically increased reliability and stability - especially when the practical macrokernel implementations don't have stability or reliability problems. You can't get more reliable than 'never goes down' (except for maintenance and hardware failures, of course), and my Linux DESKTOPS meet that standard already, let alone servers and mission-critical systems.