
Torvalds on the Microkernel Debate

Posted by ScuttleMonkey
from the never-met-a-pointer-i-could-trust dept.
diegocgteleline.es writes "Linus Torvalds has chimed in on the recently flamed-up (again) micro vs. monolithic kernel debate, but this time with an interesting and unexpected point of view. From the article: 'The real issue, and it's really fundamental, is the issue of sharing address spaces. Nothing else really matters. Everything else ends up flowing from that fundamental question: do you share the address space with the caller, or, put in slightly different terms: can the callee look at and change the caller's state as if it were its own (and the other way around)?'"
This discussion has been archived. No new comments can be posted.

  • Re:Code talks (Score:5, Interesting)

    by microbee (682094) on Wednesday May 10, 2006 @03:16AM (#15299249)
    "Hybrid" kernel? Sorry, I just don't buy this terminology (as Linus put it, it's purely marketing).

    Windows NT is monolithic. So is OS X. If anyone claims they are microkernels, please show me proof other than "it is based on Mach".
  • by ultranova (717540) on Wednesday May 10, 2006 @03:28AM (#15299274)

    "The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."

    Individual pieces aren't really any simpler either. In fact, if you want your kernel to scale, to work well with lots of processes, you are going to run into a simple problem: multitasking.

    Consider a filesystem driver in a monolithic kernel. If a dozen or so processes are all doing filesystem calls, then, assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately. If you have a multiprocessor machine, they could even be executing the calls simultaneously. If the processes have different priorities, those priorities will affect the CPU time they get when processing the call too, just as they should.

    Now consider a microkernel. The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call? The calling process blocks, possibly for a long time if there are lots of other requests queued up. This is an especially fun situation if the calling process has a higher priority than some CPU-consuming process, which in turn has a higher priority than the filesystem server. But even if there are no other queued requests, and the server is ready and waiting, there's no guarantee that it will be scheduled for execution next, so latencies will be higher on average than on a monolithic kernel even in the best case.

    Sure, there are ways around this. The server could be multi-threaded, for example. But how many threads should it spawn? And how many system resources are they going to waste? A monolithic kernel has none of these problems.

    I don't know if a microkernel is better than a monolithic kernel, but it sure isn't simpler - not if you want performance or scalability from it. And if you don't, then a monolithic kernel can be made pretty simple too...

  • by Inoshiro (71693) on Wednesday May 10, 2006 @03:29AM (#15299283) Homepage
    "You can do simple things easily - and in particular, you can do things where the information only passes in one direction quite easily, but anything else is much much harder, because there is no "shared state" (by design). And in the absence of shared state, you have a hell of a lot of problems trying to make any decision that spans more than one entity in the system."

    I think you're looking at this the wrong way around.

    There has been a lot of research into this over the past 40 years, ever since Dijkstra first talked about coordination on a really big scale in the THE operating system. Any decent CS program has a class on distributed programming. Any decent SW architect can break down these different parts of the OS into weakly-connected pieces that communicate via a message passing interface (check out this comment [slashdot.org] by a guy talking about how Dragonfly BSD does this).

    It's obvious that breaking something like your process dispatcher into a set of processes or threads is silly, but that can be easily separated from the core context switcher. Most device driver bottom halves live fine as a userland process (each with a message-passing interface to their top-halves).

    If you're compiling for an embedded system, I'm sure you could even entirely remove the interface via some #define magic; only debug designs could actually have things in separate address spaces.

    The point I'm trying to make is: yes, you can access these fancy data structures inside the same address space, but you still have to serialize the access, otherwise your kernel could get into a strange state. If you mapped out the state diagram of your kernel, you'd want the transitions to be explicit and synchronized.

    Once you introduce the abstraction that does this, how much harder is it to make that work between processes as well as between threads in the kernel? How much of a benefit do you gain by not having random poorly-written chunks pissing over memory?

    How about security benefits from state-machine breakdowns being controlled and sectioned off from the rest of the machine? A buffer overflow is just a clever way of breaking a state diagram and adding your own state where you have control over the IP; by being in a separate address space, that poorly written module can't interact with the rest of the system to give elevated privileges for the attacker (unless, of course, they find flaws in more of the state machines and can chain them all together, which is highly unlikely!).

    Clearly there is a security benefit as much as there is a consistency benefit. Provably correct systems will always be better.
  • by Anonymous Coward on Wednesday May 10, 2006 @03:31AM (#15299285)
    Hi folks,

            I worked for two years at a company that was developing its own micro-kernel system for embedded targets. I was involved in system programming and in adapting the whole compiler toolchain, based on the GCC chain.
            Linus is right: the basic problem is address space sharing, and if you want to implement memory protection, you rapidly fall into the address space fragmentation problem.
            The main advantage of the system I worked on wasn't really its micro-kernel architecture, but the fact that its design made it possible to eliminate most of the glue code that is needed between a C++ program and a more classic system.
            In my opinion, micro-kernel architecture has the same advantages and drawbacks as the so-called "object-oriented" programming scheme: it is somewhat intellectually seductive in presentations, but it is just a tool.
            It would certainly be interesting for Linux to provide the dynamic link management features of a micro-kernel system, for instance to allow someone to quickly modify the IP stack for their own purposes, but should the whole system be designed that way? I am not sure.
            If you want an idea of the problems encountered when programming for these systems, look at the history of AmigaOS, which has a design very close to a micro-kernel one.
  • Re:Code talks (Score:5, Interesting)

    by Stephen Williams (23750) on Wednesday May 10, 2006 @03:35AM (#15299299) Journal
    That said, perhaps a monolithic kernel is better suited to the open-source development process, which would seem counterintuitive at first because it discourages modularization

    Not necessarily. Despite being a monolithic design, Linux is pretty modular. Device drivers, filesystems, network add-ons etc. are separate enough from the core of the kernel that they don't even need to be statically linked into it, but can be loaded as modules into a running kernel, as I'm sure you know.

    It's not a microkernel approach because all the modules are loaded into the kernel's address space. They're bits of extra functionality that are dynamically grafted to the monolithic kernel image, so to speak. Nevertheless, it's still a modular approach to kernel design.

    -Stephen
  • by Hast (24833) on Wednesday May 10, 2006 @03:44AM (#15299322)
    assuming proper locking and in-kernel pre-emption, there's no problem - each process that executes the call enters kernel mode and starts executing the relevant kernel code immediately. If you have a multiprocessor machine, they could even be executing the calls simultaneously.

    That's a pretty big assumption. Or rather, you have basically taken all the hard parts of doing shared code and said "Let's hope someone else already solved this for us".

    The filesystem driver is a separate server process. Executing a system call means sending a message to that server and waiting for an answer. Now, what happens if the server is already executing another call? The calling process blocks, possibly for a long time if there are lots of other requests queued up.

    Sooooo, it's easy to have someone else handle the multi-process bits in a monolithic design. But when it comes to writing services for microkernels suddenly everyone is an idiot?

    Besides, as Linus pointed out, when data is going one way microkernels are easy. And in the case of file systems that really is the case. Sure, multiple processes can access it at once, but the time scale for handling the incoming messages is extremely fast compared to waiting for data from disk. Only a really, *really* incompetent idiot would write a server that blocked until the read was finished.
  • by ingenthr (34535) on Wednesday May 10, 2006 @03:45AM (#15299324) Homepage

    Don't take C's poor support for threading and tools to build/debug threaded code to mean that writing threaded code isn't possible. Other platforms and languages have taken threads to great extremes for many years, and I'm not necessarily referring to anything Unix (or from Sun).

    This reminds me of the story (though I don't know how true it is) that in the early days of Fortran, the quicksort algorithm was widely understood but considered too complicated to implement. Now second-year computer science students implement it as a homework project. Threads could be considered similar. Anyone who has written a servlet is implicitly writing multithreaded code, and you can very easily and quickly write reliable, safe threaded code in a number of modern languages without having to get into the details C forces you into. It's the mix of pass-by-reference and pass-by-value with only a bit of syntactic sugar that creates the problems, not the concepts of parallelism.

    On the other hand, I agree with you that we'll see increased parallelism driving increases in computing capabilities in the coming years. It was mathematically formulated some time ago, but Amdahl's law is now officially giving way to Gustafson's law [wikipedia.org] (more on John Gustafson here [sun.com]). Since software is sufficiently complex these days (even the simplest of modern programs can make use of parallelism -- just think of anything that touches a network), it's those platforms that exploit this feature which stand to deliver the best benefits to their users.

  • by Sique (173459) on Wednesday May 10, 2006 @03:47AM (#15299331) Homepage
    In the end it boils down to the old question of centralisation vs. local autonomy. Centralisation is fine for keeping state, it is fine for enforcing a thoroughly uniform approach to everything, and it helps with 'single points of contact'. Local autonomy helps with less administrative effort, with clearly defined information paths and with clear responsibilities, and thus with keeping problems local.

    Both approaches have their merits, and in the real world you will never see a purely central organisation or a purely localized organisation. Every organisation is somehow swinging between both extrema, going more central at one point "to leverage synergies and increase efficiency", or is starting outsourcing and reorganizing itself into profit centers, to "overcome bureaucracy, to clearly define responsibilities and to cut down on administrational spending".

    The limits are set by the speed at which information is created, sent and decoded along the different organisational paths. An increase in inter-process communication speed helps a more modularized microkernel approach; an increase in the number and complexity of concurrent requests demands a more centralized kernel.

    In the end it boils down to the fact that transactions have to be atomic operations, either executed completely or rolled back completely if not finished. Centralized systems are inherently transactional, especially if they execute tasks sequentially. The limit is given by the number of transactions that can be executed per time unit. Parallel execution demands operations that are as independent of each other as possible, thus increasing design effort, but once the task is (nearly) interlock free, a modularized approach helps with faster, more maintainable code.
  • by Gorshkov (932507) <admgorshkov@ya[ ].com ['hoo' in gap]> on Wednesday May 10, 2006 @03:51AM (#15299344)
    Provably correct systems will always be better.

    Well, I could certainly argue THAT one.

    Years ago, I was a lead analyst on an IV&V for the shutdown system for a nuclear reactor - specifically, Darlington II in Ontario, Canada.

    This was the first time Ontario Hydro wanted to use a computer system for shutdown, instead of the old sensor-relay thingie. This made AECB (Atomic Energy Control Board) rather nervous, as you can understand, so they mandated the IV&V.

    I forget his first name - but Parnas from Queen's University in Kingston had developed a calculus to prove the correctness of a programme. It was succinct, it was precise, it was elegant, and it worked wonderfully.

    ummmmm ..... well, kind of. About 3/4 of the way through the process, I asked a question that nobody else had thought of.

    OK, so we prove that the programme is correct, and it'll do what it's supposed to do .... but how long will it take?

    You see, everybody had kinda/sorta forgot that this particular programme not only had to be correct, but it had to tell you that the reactor was gonna melt down BEFORE it did, not a week afterwards.

    The point is that there is often much more involved in whether or not a programme (or operating system) is useful than its "correctness".
  • Re:Obvious (Score:1, Interesting)

    by Anonymous Coward on Wednesday May 10, 2006 @04:00AM (#15299371)
    The windows kernel is surprisingly advanced technologically. It's not the bloated patched up QBasic mess that people assume. The problem with windows is not the kernel, but Microsoft's management of the software development cycle and lack of vision.

    Sorry, but I have to ask. You speak as though you worked at MS. I worked at HP in 1994. At that time, the group next to me ported NT to the PA-RISC. While I was not part of that group, I got to look over the code and talk to several of the engineers who did it. And I can tell you that it was total crap. Not a little bit, but honest-to-god total crap. It became one of the bigger jokes to go and look through the code and see what was there. They had obvious buffer overflows, uninitialized pointers, etc. in a number of areas. The press, and people like you, were pushing the coders there as good to great, and it was obvious that they had a number of idiots on their kernel.

    While I did not go through the kernel (just examined a few files), the porters did. They pointed out how archaic it was, as well as how badly coded.

    The truth was that it was fully ported to the PA-RISC. But HP never offered it, because it was so bad that there was no way it was going to be stable. So, what exactly do you (and press members) base your assessment of Windows on? Why do you state that it is advanced tech? What frame of reference do you base it on? Or are you just trolling here?

  • Re:Hybrid kernels??? (Score:5, Interesting)

    by jackjeff (955699) on Wednesday May 10, 2006 @04:09AM (#15299394)
    Depends on what you mean by Micro Kernel and Monolithic.

    True, the kernel of Mac OS X - Darwin, aka XNU - for performance reasons runs the Mach and BSD layers both in supervisor mode to minimize latency.

    Maybe this is what you call a hybrid kernel: http://en.wikipedia.org/wiki/Hybrid_kernel [wikipedia.org]

    You may call XNU whatever you wish but the fact remains:
    - it's not a monolithic kernel by design
    - it has Mach in it and Mach is some sort of microkernel. Maybe it does not reach "today's" standards of being called a microkernel but it was a very popular microkernel before.

    So maybe the things running on top of Mach ( http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/index.html [apple.com] ) are conceptually "different" from what the services of a microkernel should be, and they do indeed share the address space, but this is very, very different from the architecture of a traditional monolithic kernel such as Linux.

    This guy ( http://sekhon.berkeley.edu/macosx/intel.html [berkeley.edu] ) recently tested some stats software on his Mac running OS X and Linux, and found out that indeed MacOS X had performance issues, very likely due to the architecture of the kernel.

    There's even a rumor that since Avie Tevanian left Apple ( http://www.neoseeker.com/news/story/5553/ [neoseeker.com] ), some guys are now working on removing the Mach microkernel and migrating to a full BSD kernel in the next release of the operating system.

    And now my personal touch. I agree with Linus when he says that having small components doing simple parts on their own and putting them together with pipes and so on is somehow the UNIX way and is attractive (too lazy to find the quote). However, as he demonstrates later, distributed computing is not easy, and there's also the boundary-crossing issue. I guess he has a point when he says this is a problem both for performance and for the difficulty of designing the system... So if performance is what you expect from a kernel, then you must stop dreaming of the kind of clean, centralized architecture we have for our highly OO-oriented software.

    But the truth is that, although developing a monolithic kernel is an easier task from scratch than a microkernel, I guess the entry ticket (learning curve) for a monolithic kernel developer is more expensive. The main reason being, "things ARE NOT separated". Anyone, anywhere in the kernel could be modifying the state of that thing, for non-obvious reasons, even if there's a comment that says "please don't do that" or it should not be the case, etc. Microkernels can obviously provide some kind of protection and introspection for these things, but have always hurt performance to do so.

    Now it all depends on what you expect. Linux has many, many developers and obviously can afford a monolithic design that changes every now and then, and you may prefer a kernel that goes fast to one whose code is clean, well organized and easy to read. But the corollary of that observation is that, for the same reasons, grep, cat, cut, find, sort, or whatever UNIX tools you use with pipes and redirection are similarly a cleaner but YET INEFFICIENT design. However, it's been proven (with time) to be a good idea.

    I think things that are "low level" are bound to have a poor spaghetti software architecture, because performance matters and the code is smaller... but the higher level you go, the less performance matters, and the more code maintenance and evolvability matter... Everything is a tradeoff: good design practice depends on the type of problems your software tackles.

    That said, it does not mean no progress can be made in kernel developments. Linux already uses a somewhat different C lang
  • by drgonzo59 (747139) on Wednesday May 10, 2006 @04:25AM (#15299425)
    Tovarisch Gorshkov, proving that a program is correct ("covert channel analysis" and such) might take up to a year, and that is only if there are no more than 10k lines of code, but that doesn't mean the program will _run_ slowly. The time and methods used to prove correctness don't necessarily say anything about the speed of the program at runtime.

    So correct systems will always be better, because you know they are correct and you know the limits (want it to run faster -- just buy faster hardware). On the other hand, if the program hasn't been proved to work correctly, even though it might be blazingly fast, one day it might just stop working and your control rods will end up stuck halfway, all because there is an "off-by-one" error in some stupid serial driver or something like that...

  • Re:Linus Quote (Score:1, Interesting)

    by Anonymous Coward on Wednesday May 10, 2006 @04:30AM (#15299440)
    My favorite Linus quote:
    I also claim that Slashdot people usually are smelly and eat their boogers, and have an IQ slightly lower than my daughter's pet hamster (that's "hamster" without a "p", btw, for any slashdot posters out there. Try to follow me, ok?).
    -- Linus [kerneltrap.org]
  • Re:Obvious (Score:1, Interesting)

    by Anonymous Coward on Wednesday May 10, 2006 @04:33AM (#15299445)
    If you ever actually _coded_ anything in the NT Kernel you'd be eating your words. NT is a mess. A microkernel that turned into a monolith out of necessity. And it's _hardly_ clean in there [[shudder]].
  • by drgonzo59 (747139) on Wednesday May 10, 2006 @04:38AM (#15299458)
    "The whole argument that microkernels are somehow "more secure" or "more stable" is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure."

    I know that as good and faithful /.-ers we should worship Linus and take all of his words as gospel, but in this case I think he is talking out of his arse. Microkernels are "more secure" and "more stable" because only one component needs to work well -- the microkernel. Its main job is to enforce security policies, and that is it. If it works correctly, it will be able to bring the system to a certain state during the failure of any of the other components.

    Microkernels are used and have been used for a long time in "real" and "serious" operating systems, not just toy examples. Every time /.-ers fly over the Atlantic, it is in all probability a microkernel OS that makes sure they don't crash and burn. The size of those microkernels is kept at no more than 10k lines -- and even so it can take years to prove their correctness. It would be impossible to do with Linus's kernel. So if Linus and others are so against microkernel architectures, I would want to see them trust their lives to Linux 2.6 -- put their lives where their mouth (or code) is, so to speak.

  • by master_p (608214) on Wednesday May 10, 2006 @05:04AM (#15299534)
    The only way the monolithic vs microkernel debate will go away is if CPUs provide a better way of sharing resources between modules.

    One solution to the problem is to use memory maps. Right now each process has its own address space, and that creates lots of problems. It would be much better if each module had its own memory map, à la virtual memory, so that the degree of sharing was defined by the O/S. Two modules could then see each other as if they belonged to the same address space, but other modules would be inaccessible. In other words, each module should have its own unique view of memory.

    Of course the above is hard to implement, so there is another solution: the ring protection scheme of 80x86 should move down to paging level. Each page shall have its own ring number for read, write, and execute access. Code in page A could access code/data in page B only if the ring number of A is less than or equal to the ring number of B. That's a very easy to implement solution that would greatly enhance modularity of operating systems.

    A third solution is to provide implicit segmentation. Right now 80x86 has an explicit segmentation model that forces inter-segment addresses to be 48 bits wide on 32-bit machines (32 bits for the target address and 16 bits for the segment id). The implicit segmentation model is to use a 32-bit flat addressing mode but load the segment from a table indexed by the destination address, as it is done with virtual memory. Each segment shall have a base address and a limit, as it is right now. If a 32-bit address falls within the current segment, then the instruction is executed, otherwise a new segment is loaded from the address and a security check is performed. This is also a very easy to implement solution that would provide better modularization of code without the problems associated with monolithic kernels.

    There are various technical solutions that can be supported at CPU level that are not very complex and do not impose a big performance hit. These solutions must be adopted by CPU manufacturers if software is to be improved.
  • by Paul68 (262479) on Wednesday May 10, 2006 @05:06AM (#15299537)
    My career started in operating system research, circa 1993. Even in those days there were many people addressing the shared memory issue and coming up with good ways to share memory and tackle the context switch issue. However, this took some overhead and did not make it to the mainstream because of that.

    Today the CPUs are much faster and even sacrificing 10%-20% of CPU power is not considered too much if it results in a system that is (more) stable and easier to maintain. e.g. a device driver can no longer bring down the entire system and a spyware program can no longer sniff all keys pressed...

    I must admit to having lost contact with that field of research, but even the old results are promising with today's CPU speeds.
  • by drgonzo59 (747139) on Wednesday May 10, 2006 @05:29AM (#15299594)
    The microkernel may be more elegant, more pristine in the lab ..... but it's slow by design

    Today most of the software that is used to fly planes (both fighter jets and passenger) is based on a microkernel architecture. So microkernels are not just lab toys, real and mission critical systems are run by microkernel architectures.

    The speed problem can often be solved just by getting faster hardware. The main reason Linus rejected microkernels back in the day was that the cost of context switches was prohibitive. Today hardware is a lot faster (roughly Moore's law), so context switches will be alright on a 3GHz Pentium IV machine where they would not have been doable on a 33MHz machine.

    Also, there is nothing about a microkernel that makes it more inherently provably correct than a monolithic kernel.

    Theoretically you are right. But in practice Linux 2.6 is 6 million lines of code and a typical microkernel is less than 10k. It can already take up to a year to check the correctness of an 8k-line microkernel, and the demand for resources grows exponentially with code size. So in reality it will not be possible to check the Linux kernel for correctness.

  • by penguin-collective (932038) on Wednesday May 10, 2006 @07:12AM (#15299816)
    Microkernels like Mach have been unsuccessful because putting everything into separate address spaces makes a lot of things quite difficult.

    C-based monolithic kernels like Linux and UNIX run into software engineering problems--it gets harder and harder to ensure stability and robustness as the code mushrooms because there is no fault isolation.

    The solution? Simple: get the best of both worlds through language-supported fault isolation (this can even be a pure compile-time mechanism, with no runtime overhead). It's not rocket science, it's been done many times before. You get all the fault isolation of microkernels and still everything can access anything else when it needs to, as long as the programmer just states what he is doing clearly.

    C-based monolithic kernels were a detour caused by the UNIX operating system, an accident of history. UNIX has contributed enormously to information technology, but its choice of C as the programming language has been more a curse than a blessing.
  • Re:Entire comment (Score:5, Interesting)

    by putaro (235078) on Wednesday May 10, 2006 @07:31AM (#15299858) Journal
    Well, as someone who has been involved in the development of both monolithic and micro kernels, I suspect that I do know something about the subject.

    Linux, despite being monolithic, has nice layers inside the kernel and clean interfaces too.

    I think you missed Linus' point which I agreed with as well. The real thing you want out of a micro-kernel is memory protection between components of the kernel. The rest is just window dressing.

    Linux does *not* have that.

    Don't confuse run-time separation with interface separation. The latter is a language feature, not a system feature - you could still have a wild pointer and modify private members of any classes directly.

    Let's take a look at OOP and *what* your address spaces are doing for you. Now, in a language like C++, the internal structures of an object are only partially protected. As you say, you can go ahead and cast a pointer to an object to a char * and do anything you feel like to it. The memory protection between objects is not enforced fully.

    Now, if you look at Java or C#, the runtime is a virtual processor and it keeps you from violating the rules that an object defines on its data structure. The memory protection is *very* fine grained as it is on the field level rather than on the page level. You cannot (repeat cannot) go modifying the internal structures of objects if they are not marked as being accessible to you.

    Having spent 10 years as a kernel developer on a day-in, day-out basis, my frame of mind when I stopped doing OS development for a living was very C based. Since then I've spent a lot of time doing OO development and I think that I've broadened my horizons a bit.

    When you look at the way micro-kernels are usually conceptually designed, it's from a C/Unix mindset. Separation is done on a "server" basis and the servers export APIs. As you try to add more functionality to a server, its API starts getting bigger and bigger and uglier and uglier. For example, you might have a file system server. Locking a file would mean adding a call to the API to lock a file. If you try to make something like a "buffer cache server" which all of the file systems could share, it's going to have a nasty API and be slow to boot, or it won't be able to enforce memory protection well, because the conceptual memory protection is being done on a process level.

    When you look at this from an OO perspective, what you see is that the objects being dealt with are "servers" and they are too large. They need to be decomposed into their functional pieces and additional objects exposed. A "buffer cache server" would hand out "buffer objects" which had a memory protection level, locks, etc. built in.

    Building a kernel that runs inside a protected runtime environment similar to the JVM would enable you to do this. If it were popular enough, the features needed to make it really fast would get moved down into the hardware. As it is, I think that the speed of the kernel is kind of a red herring. In general the kernel needs to do fast I/O and fast switching between user tasks, and for any other functions the speed probably doesn't matter much. When I was doing kernel development on supercomputers, most supercomputer kernels were single-threaded, even though the machines were multi-processor, and things still ran pretty quickly. That's because most supercomputer apps spent very little time in the kernel. I believe this is true for most desktop apps as well. Business and "server" apps tend to spend more time in the kernel, but mostly because they are doing lots of small I/Os.

    Unfortunately there's not a lot of room for innovation in the OS arena so we may never see what could be done. That's one of the reasons why I got out of OS development.
  • by Entrope (68843) on Wednesday May 10, 2006 @07:47AM (#15299909) Homepage
    Today most of the software that is used to fly planes (both fighter jets and passenger) is based on a microkernel architecture.
    Sort of -- in the same way that OS X and similar systems are "based on a microkernel architecture". ARINC-653, which drives that software architecture, specifies a partitioning kernel that separates safety-critical tasks from non-safety-critical tasks (or other safety-critical tasks). Most DO-178B compliant software vendors then run a monolithic kernel in each partition. The partitioning kernel is usually more like an extremely rigorously verified virtualization service than a traditional microkernel.
  • kick out an ABI (Score:2, Interesting)

    by mennucc1 (568756) <d3@tonelli.sns.it> on Wednesday May 10, 2006 @08:02AM (#15299938) Homepage Journal
    Truth is, the Linux kernel is getting too big: it takes ~400MB of hard disk to compile a standard 2.6 kernel for a distribution. Moreover, a kernel this size forces difficult decisions on distributions, which either exclude some parts from their binaries or ship huge packages, most of which is useless to most users. Just consider that the Debian kernel packages went from ~10MB for 2.4 to ~15MB for 2.6.
    The kernel is becoming too big, and some parts of it (think: hamradio, USB gadgets ...) could well live outside of it. If SANE can manage scanners from userspace, why do webcams live in kernel space?
    I am still craving for the day when Linus will define a "kernel ABI" for driver modules, and some parts of the kernel source will get kicked out of the .tar.bz2
    I believe this day will come, and there are good reasons to believe it:
    • suppose the FOSS dream comes true, Linux becomes the mainstream desktop OS, and every vendor supplies FOSS drivers for their hw... it will not make sense to ship every single gadget/protocol driver in the same .tar.bz2
    • even today, it does not make sense to ship drivers forever for hw that can no longer be bought; but at the same time it would be unfair to just drop the code on people who still own that hw.
    • having 220MB of source code without a published and enforced ABI for modules means that any change to some parts of the kernel, such as memory management, forces almost everybody to rewrite their code; this, in the long term, may hinder innovation. It would be much better if there were some stable ABIs for less demanding drivers (such as webcams).
    Summarizing, IMHO the current monolithic situation cannot scale up forever.
  • by Jerk City Troll (661616) on Wednesday May 10, 2006 @08:59AM (#15300200) Homepage

    Whenever this issue comes up, I become convinced that proponents of microkernel architectures coined the very term used to label their opponents. The terms in this debate are heavily loaded: “microkernel” sounds lean, quick, and simple, while by subjective contrast “monolithic” sounds bulky, old, and unwieldy. When engaging in this debate, we would do better to at least use “unified kernel” in place of “monolithic”, since it is more accurate and contrasts with “microkernel” objectively. The term most people use for kernels like Linux and NT seems to imply that there is no logical separation of components, that all the pieces are one gigantic (dare I say monolithic) glob, and that is nonsense.

  • by Anonymous Coward on Wednesday May 10, 2006 @09:12AM (#15300296)
    >> Only a really, *really* incompetent idiot would write such a server which blocked until the read was finished.

    > This sounds like a veiled reference to something; would you care to name it?

    Minix
  • by Gorshkov (932507) <admgorshkov@ya[ ].com ['hoo' in gap]> on Wednesday May 10, 2006 @11:34AM (#15301290)
    The point I'm trying to make is that if the spec is wrong (or if you don't even have a spec) then your likelihood of producing a reliable and secure -- but complex -- system is practically nil. At least with a "provably correct" system, you know that if your spec is right, then your results will be right. If your system isn't provably correct, then your system will probably be still broken if your spec is wrong, but even if it's right, your implementation still might be broken.

    Case in point - again, from the same IV&V.

    The boys at Hydro had no idea wtf they were doing. They weren't incompetent by a long shot - they were very good, very bright, and very conscientious. But their background was basically analog design (all electrical engineers), and they weren't overly familiar with software - and it showed.

    When I was going over the spec and their timing measurements (the requirements were stated in the form of "maximum time from A to B shall be XXX milliseconds max" etc) on my initial perusal, I came across the statement that one particular sensor was required to react "in a reasonable period of time".

    I nearly shit my pants. We're talking about a reactor shutdown system here... There was a lot of debate within the company I was working for as to how I would document my reaction to that statement and its appearance in the spec. We finally settled on "I am unaware of any quantitative definition of 'reasonable'."
  • Re:in other news (Score:3, Interesting)

    by adamy (78406) on Wednesday May 10, 2006 @02:21PM (#15302740) Homepage Journal
    Ah, but what you are missing is that your program only spends 4% in kernel space because of how well tuned the kernel is.

    We are currently doing some work on the sys_open function in the Linux kernel. When we screw it up, it takes so long for a machine to boot that it looks frozen. This is because the loader often tries to find the correct location for a library via brute force: "OK, I'll try /lib/libmything.so. Nope, OK, how about /lib/tls/libmything.so? Nope, OK..." If sys_open does not fail fast, the system does not find libraries in a timely manner, human sacrifice, dogs and cats living together, mass hysteria.

    It should only spend 4% in kernel space, because the real work is done in user space; kernel space is administrative overhead as far as your program is concerned. But the same is true of everyone's program. If there are 500 processes on the system, and each spends 4% in kernel space, a much greater speedup comes from optimizing the kernel code than any one particular program. A 1% slowdown per program times 500 programs becomes a 500% slowdown in... wait, somewhere my math went weird. Anyway...

    The same goes for profiling and optimizing your own apps: profile to find out where you are spending the most time, and optimize those functions first. We know the kernel code is going to get exercised heavily, so that is where we need to optimize.

How many QA engineers does it take to screw in a lightbulb? 3: 1 to screw it in and 2 to say "I told you so" when it doesn't work.

Working...