Linux Kernel Goes Real-Time

Several readers wrote to alert us to the inclusion of real-time features in the mainline Linux kernel starting with version 2.6.18. (Linus Torvalds had announced 2.6.18 on September 19.) Basic real-time support is now mainline. This will ease the job of developers of embedded Linux applications, who for years have been maintaining real-time patch sets outside of the mainline kernel. The announcement was made by TimeSys Corp., a provider of developer services. Much of the work was done by Thomas Gleixner at TimeSys and Ingo Molnar at Red Hat.
  • What about media? (Score:4, Interesting)

    by stonedyak ( 267348 ) on Saturday October 14, 2006 @05:53PM (#16439265)
    The article only talks about these new patches from the embedded devices point of view. So can anyone tell me if these new features would be useful for improving the responsiveness of media applications in Linux? I'm talking about video/audio playback, as well as authoring and recording.

    What other benefits would the desktop see from this?
    • Re:What about media? (Score:3, Informative)

      by norkakn ( 102380 ) on Saturday October 14, 2006 @06:12PM (#16439379)
      No. RT has a pretty big speed penalty.
      • by deanj ( 519759 ) on Saturday October 14, 2006 @09:03PM (#16440379)
        Not if the process you're using is a realtime process, which is what you'd want anyway.
        • by nebosuke ( 1012041 ) on Saturday October 14, 2006 @11:10PM (#16440973)
          Embedded real-time operating systems and what you're referring to (a process priority level) are two entirely unrelated things. The process priority setting we call 'realtime' simply tells the scheduler that the designated process always preempts other processes whenever it asks for processing time (e.g., it will consume all of your CPU unless/until it sleeps). A real-time OS, on the other hand, is an OS designed such that the length of time any given system call takes to complete is deterministic. This is necessary for embedded systems so that you can make guarantees about the performance and state of the system. Being able to do this is a requirement for many types of mission-critical software, such as avionics, medical device control, heavy equipment control, etc.
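          (For reference: the 'realtime' priority class described above maps to the POSIX SCHED_FIFO/SCHED_RR policies on Linux. Here is a minimal, purely illustrative sketch of my own - not from TFA - of putting a process into that class:)

          /* Illustrative sketch: put the calling process into the SCHED_FIFO
           * "realtime" class. Such a task preempts all normal (SCHED_OTHER)
           * tasks and runs until it blocks or yields - which is exactly why a
           * buggy RT task can eat the whole CPU. Needs root or CAP_SYS_NICE. */
          #include <sched.h>
          #include <stdio.h>

          int main(void)
          {
              struct sched_param sp;

              sp.sched_priority = 50;   /* mid-range RT priority, 1..99 on Linux */
              if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
                  perror("sched_setscheduler");
                  return 1;
              }
              printf("now running as a SCHED_FIFO task, priority %d\n",
                     sp.sched_priority);
              /* ... time-critical work here; sleep/block to give the CPU back ... */
              return 0;
          }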
          • by deanj ( 519759 ) on Sunday October 15, 2006 @12:28AM (#16441369)
            Yes, I understand what a realtime OS is. I worked on kernel teams (for non-embedded UNIX systems) for several years in the 90s. I didn't mention anything about embedded real-time OSes, because it's irrelevant to this discussion.

            Anyway, that was exactly my point.

            The realtime process in a realtime UNIX kernel can help an audio application by being able to field interrupts in a deterministic fashion.

            Furthermore, a fully preemptive kernel (i.e., all the kernel data structures protected) with multiple processes being able to be in kernel context at the same time can bring benefits even to regular (non-realtime) computer tasks.

            If you think Linux is zippy now, wait until they do the really hard work to make that kernel fully preemptive. The systems will REALLY fly then.
      • by Ingo Molnar ( 206899 ) on Sunday October 15, 2006 @06:41AM (#16442685) Homepage
        RT has a pretty big speed penalty.

        I can definitely say that unlike some other approaches, the -rt Linux kernel does not introduce a "big speed penalty".

        Under normal desktop loads the overhead is very low. You can try it out yourself, grab a Knoppix-based PREEMPT_RT-kernel live-CD from here: http://debian.tu-bs.de/project/tb10alj/osadl-knoppix.iso [tu-bs.de].

        From early on, one of our major goals with the -rt patchset (which includes the CONFIG_PREEMPT_RT kernel feature that makes the Linux kernel "completely preemptible") was to make the cost to non-RT tasks as small as possible.

        One year ago, a competing real-time kernel project (RTAI/ipipe - which adds a real-time microkernel 'above' Linux) did a number of performance tests to compare PREEMPT_RT (which has a different, "integrated" real-time design that makes the Linux kernel itself hard-real-time capable) to the RTAI kernel and to the vanilla kernel - to figure out the kind of overhead various real-time kernel design approaches introduce.

        (Please keep in mind that these tests were done by a "competing" project, with the goal of uncovering the worst-case overhead of real-time kernels like PREEMPT_RT. So it included highly kernel-intensive workloads that run lmbench while the box is also flood-pinged, has heavy block-IO interrupt traffic, etc. It did not include "easy" workloads like mostly userspace processing, which would have shown no runtime overhead at all. Other than the choice of the "battle terrain", the tests were conducted in a completely fair manner, with review and feedback from me and other -rt developers.)

        The results were:

        LMbench running times:

        | Kernel             | plain | IRQ   | ping-f | IRQ+p | IRQ+hd |
        | Vanilla-2.6.12-rc6 | 175 s | 176 s | 185 s  | 187 s | 246 s  |
        | with RT-V0.7.48-25 | 182 s | 184 s | 201 s  | 202 s | 278 s  |

        (Smaller is better. The full test results can be found in this lkml posting [theaimsgroup.com].)

        I.e. the overhead of PREEMPT_RT, for highly kernel-intensive lmbench workloads, was 4.0%. [this was a year ago; we have further reduced the overhead since then.] In fact, for some lmbench sub-benchmarks such as mmap() and fork(), PREEMPT_RT was faster.

        (Note that the comparison of PREEMPT_RT vs. I-pipe/RTAI is apples to oranges in terms of design approach and feature-set: PREEMPT_RT is Linux extended with hard-realtime capabilities (i.e. normal Linux tasks get real-time capabilities and guarantees, so it's an "integrated" approach), while ipipe is a 'layered' design with a completely separate real-time-OS domain "on top" of Linux - and that special, isolated domain has to be programmed via special non-Linux APIs. The "integrated" real-time design approach that we took with -rt is a lot more complex and a lot harder to achieve.)

        See more about the technology behind the -rt patchset [redhat.com] in Paul McKenney's article on LWN.net [lwn.net], and on Kernel.org's RT Wiki [kernel.org].

          • Thanks - i forgot about these follow-up numbers that show even lower overhead for PREEMPT_RT: 0% lmbench overhead when compared to the vanilla kernel.

            As a summary, i'm not all that worried about the performance impact. It is real, it will hit certain workloads more than others, but it's a lot less than what was feared.

            I'd still advise a generic distro against enabling it unconditionally, but a more specialized one can enable it no problem. These are the preemption kernel config options offered by the -rt kernel:

            ( ) No Forced Preemption (Server)
            ( ) Voluntary Kernel Preemption (Desktop)
            ( ) Preemptible Kernel (Low-Latency Desktop)
            (X) Complete Preemption (Real-Time)

            The first 3 config options are upstream already, and - as Thomas' article points out - much of the "foundation" of PREEMPT_RT is upstream already too: generic interrupt code, generic time of day subsystem, hrtimers subsystem, the lock validator, the generic spinlock code, priority inheritance enabled mutexes, robust and PI-futexes, etc. Roughly half of what used to be in -rt is in the 2.6.18/19 kernels already.

            Generic distros that want to scale from small boxes to really large multi-CPU boxes should use "Server" or "Desktop" preemption. Desktop-oriented distros could use "Low-Latency Desktop" too. Carrier-grade and embedded ones could use "Real-Time" preemption.

      • by Julian Morrison ( 5575 ) on Sunday October 15, 2006 @09:17AM (#16443337)
        Realtime can be divided into hard realtime, which gives deterministic guarantees on latency, and soft realtime, which gives no guarantees but can be expected to deliver low latency more often than not. Hard RT is the one that's slow, because it does things like fixed-timeslice round robin scheduling. Linux, when not built as a layer above a specialist microkernel, is soft RT.
    • Re:What about media? (Score:3, Interesting)

      by iotaborg ( 167569 ) <exa@soft h o m e . net> on Saturday October 14, 2006 @06:16PM (#16439403) Homepage
      The desktop probably won't see much from having a realtime kernel. A RTOS is great for integrating certain types of hardware, such as data acquisition systems, which can be time dependent. We use RTAI Linux at the lab for time-accurate data acquisition and control systems in robotic applications. Most 'desktop' processes do not really depend on time; video and audio playback do, though without a RTOS video/audio play just fine. I do not believe you need kHz-MHz resolution for any desktop application for timing.
      • by Henk Poley ( 308046 ) on Sunday October 15, 2006 @04:14PM (#16445575) Homepage
        The desktop probably won't see much from having a realtime kernel.

        Well, the changes to the kernel to support some sort of usable realtime system have improved the responsiveness of the kernel greatly. Probably none of the programs on my desktop will ever need hard realtime. But boy, do I like it when my programs stay responsive under heavy I/O from other programs.

        I do not believe you need kHz-MHz resolution for any desktop application for timing.

        Well, to each his own. Like I said, probably none of the programs I run will use realtime 'contracts' with the kernel. But I expect this change to be as important as using a "server grade" operating system on the desktop. Just like everybody likes stable computer programs (and OSes), they also like responsive programs. OSes will need to support that too, and to support it they'll need to change, for the better.
    • Re:What about media? (Score:4, Informative)

      by SmileR.se ( 673283 ) <`smiler' `at' `lanil.mine.nu'> on Saturday October 14, 2006 @06:17PM (#16439409) Homepage
      No. Real-time is about being able to guarantee that tasks complete within a certain deadline (and of course being able to predict this deadline), not about doing things fast.
      • Re:What about media? (Score:5, Interesting)

        by fossa ( 212602 ) <pat7@gmx. n e t> on Saturday October 14, 2006 @06:39PM (#16439547) Journal

        But desktop users, or at least I, don't care about doing things "fast". I care about things feeling fast. I care about latency. So, do these patches help the audio on my computer not skip when I move a window? I've tried the preemptive kernel patches in the past with no noticeable difference. How are these patches similar, different, or complementary to Ingo Molnar's (whose patches are mentioned in the article)? Thanks.

        • by jedimark ( 794802 ) on Saturday October 14, 2006 @08:54PM (#16440311) Homepage
          Realtime patches make it a lot more practical for stuff like audio processing - sound comes in line-in, gets worked over and sent straight out to line-out - all in less than 5ms. I've found these 2.6 patches better than the realtime-lsm stuff.
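          (For a rough feel of where a figure like 5ms comes from, here's a small illustrative calculation of my own - the buffer sizes are hypothetical, not from this post: latency is simply the period size divided by the sample rate.)

          /* Illustrative only: how JACK-style period settings translate into
           * latency. 128 frames at 48 kHz is ~2.7 ms per period; with the usual
           * two periods of buffering the round trip is ~5.3 ms. */
          #include <stdio.h>

          int main(void)
          {
              const double rate = 48000.0;        /* sample rate in Hz */
              const int frames_per_period = 128;  /* hypothetical buffer size */
              const int periods = 2;              /* typical double buffering */

              double one_way = frames_per_period / rate * 1000.0;  /* ms */
              printf("one period : %.2f ms\n", one_way);
              printf("round trip : %.2f ms (%d periods)\n", one_way * periods, periods);
              return 0;
          }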

          As for your sound playback skipping...

          You can up the PCI latency on your audio card (using setpci), but that won't solve everything.. for instance sound still has to load from the hard disk, so the latency for the controller has to be upped too.

          Something could be sharing an IRQ with your soundcard, or something else could be making things clag up.. (For instance, a PCI network card can interfere with an onboard sound card when plugged into a certain PCI slot.)

          Of course, your computer could just be too slow, and some chipsets just plain suck (e.g. older VIA stuff, which unfortunately is in the majority of my computers).

          My Soundblaster Audigy card cacks out at 5ms, so I took it out, tried a whole bunch of s/h crud from my soundcard box, and now use an old SB Vibra128 (CT4810) and get near zero latency for music recording. So I guess the type of soundcard can have an impact on stuff like that.. I found CMI soundchips to be the absolute crummiest in terms of latency (and have had plenty of skips with them in ordinary desktop machines)
        • by kimvette ( 919543 ) on Saturday October 14, 2006 @08:57PM (#16440341) Homepage Journal
          If you're doing transcoding or 3D rendering, you want it to be fast, and you certainly do not want non-reentrant libraries (like some shared Gnome or KDE GUI libraries) hogging processor time. Hopefully a realtime kernel will help to ensure that the background threads receive the processor allocation they should be getting.
        • by deanj ( 519759 ) on Saturday October 14, 2006 @09:10PM (#16440417)
          You're not going to see anything really great until they finally bite the bullet and make the kernel fully pre-emptive. The hacks in there now to have multiple processes in kernel context aren't nearly as effective as fully semaphoring the data structures in the kernel but 1) It's hard, and 2) the kernel jocks doing the kernel these days haven't been too happy about it.

          But sooner or later, they'll have to. Multi-core/multi-processor single box systems will greatly benefit from it.
        • Re:What about media? (Score:1, Interesting)

          by Anonymous Coward on Saturday October 14, 2006 @10:23PM (#16440747)
          Yeah, like the way Amarok "locked up" my computer for several hours yesterday. All I did was tell it to rescan the collection, and first the KDE desktop stopped working (clock wouldn't update the time, can't switch desktops, and stuff like that) then everything running from X stopped listening to the mouse and keyboard events. It took 30 minutes to ssh in from the LAN here, and when I accomplished that feat I couldn't do anything. Later on even ssh stopped working: it would accept a connection then immediately close it.

          Eventually my computer came back to me, but only after trying to kill the amarok collection scanner a few times.

          As far as I can see, the problems are several - when amarok starts scanning it does something nasty to KDE and doesn't stop it until the scanning is finished. Secondly, when scanning an AAC file, it seems to loop reading the same data from disk forever. Thirdly when scanning files, the disk drives are somehow monopolised by the scanner, so it's impossible for anything else to read the disk in a timely manner. That causes processes to run slowly, and loadavg builds up (due to cron, incoming server connections, and stuff). When loadavg gets high it's possible to SSH in but it takes many minutes and anything which requires disk I/O appears to hang. When loadavg gets very high, sshd detects this and drops connections (or maybe it's tcp_wrappers doing it).

          Applications should not be able to monopolise the computer to the extent that everything else just freezes. If it's a problem with the disk I/O I'd like to see what reconfiguration of the hardware will fix it - this is just two SATA drives on the inbuilt SATA adapter, in a RAID-1 array.

        • by code4fun ( 739014 ) on Saturday October 14, 2006 @11:51PM (#16441185)
          You probably need a faster machine or faster graphics card.
        • by Znork ( 31774 ) on Sunday October 15, 2006 @06:41AM (#16442683)
          As far as I can tell, it looks pretty much like the same patches that CCRMA uses (but I could be wrong), which should in theory help, in the event that your problem is related to latency issues.

          However, in my experience, under any normal circumstances (i.e., machine produced in the last decade, not doing massive multitrack music composition and playback), you're vastly more likely to run into driver problems causing skips. Any non-prehistoric machine (500MHz+) should easily be able to play back an MP3 and not skip while moving windows, even without realtime audio.

        • by Breakfast Pants ( 323698 ) on Sunday October 15, 2006 @02:20PM (#16444823) Journal
          If you are running a window manager that makes dragging occur in outline mode on X, and you have audio playing in Firefox in a flash applet (e.g. with youtube or pandora), no matter what your kernel is you will get skipping in your audio. This is because when dragging around in outline mode in X nothing is allowed to draw to the screen, and either firefox or flash basically just locks up.
      • Re:What about media? (Score:5, Interesting)

        by novus ordo ( 843883 ) on Saturday October 14, 2006 @06:54PM (#16439655) Journal
        You are right, to a certain extent, but take audio/video work as an example. If I am using my computer as a synthesizer, I want to be able to hear the keys as they are being pressed, not 0.5-2 sec later. Even if the mixing part takes longer than the time before I press my next key, the kernel should process that key anyway. Music is a weird thing.. a couple of milliseconds difference is enough to change the perception of a note. So in this case, it will 'seem' faster, even though it will have less throughput (context switches the mixing process out even though it has the mixing data in the cache, which conveniently gets invalidated, swapped, etc.) I guess it gives certain applications the super-duper dependent-on-time factor that would otherwise be overlooked in the way the kernel measures "fast."
        • Re:What about media? (Score:3, Interesting)

          by glarbl_blarbl ( 810253 ) <glarblblarbl@gm[ ].com ['ail' in gap]> on Saturday October 14, 2006 @07:12PM (#16439743) Homepage Journal
          Music is a weird thing.. a couple of milliseconds difference is enough to change the perception of a note.
          Actually, the human ear is capable of detecting two discrete notes if they are separated by at least 30 milliseconds. If you set your digital delay to anything shorter than that, it won't sound like an echo - it will sound like the delayed signal is part of the original signal. Your example of a synth producing a note half a second after you hit a key is spot-on though, you'll notice that and be pretty annoyed if you're trying to do an overdub.

          http://ccrma.stanford.edu/planetccrma/software/ [stanford.edu] has offered a real-time kernel for as long as the project has been around, for the above reasons - and if you're trying to record 8 tracks of audio at 24b/96kHz you'll quickly realize that the kernel has to be able to handle all of that data precisely, or your drum kit recording is going to sound awful.

          • by tuba_dude ( 584287 ) <tuba.terry@gmail.com> on Saturday October 14, 2006 @07:33PM (#16439851) Homepage Journal
            Interesting project page, I'll have to check it out later. I agree about the pains of delays in digital recording, because even when they're unnoticeable live, they're a pain in the ass to account for. (Why doesn't somebody add a routine to track and account for that delay in a multitrack recording program?) I've always had an issue with people saying "the human X can't detect Y" where X is a sense organ and Y is a related phenomenon of relatively minute proportions. A lot of people tell me they can't hear the difference between 128kbps MP3s (I'm unsure of the details of the format, but even at 320kbps, isn't there a change in the waveform? That could technically be detected too.) and the original CDs and that sort of thing, but for different people, they'll be able to discern different things. I can hear the hum from almost any CRT in a relatively quiet room, for instance. Sure, most people won't notice, just like most people won't notice the 3 or 4 viruses they probably have on their computers right now. I don't know, I'm just complaining. It's just frustrating when someone tells me that I can't detect something that I know I can. Just a tip for any current or would-be audio software programmers out there: Don't tell a musician what he can or can't hear. Either adjust your program based on the feedback or explain why the discrepancy is there. Sorry, enough ranting for me.
              A lot of people tell me they can't hear the difference between 128kbps MP3s (I'm unsure of the details of the format, but even at 320kbps, isn't there a change in the waveform? That could technically be detected too.) and the original CDs and that sort of thing, but for different people, they'll be able to discern different things.


              If you're really interested, I have some spectrograms of what OGG and MP3 do to a waveform. The data rate is undoubtedly important, but you also have to take into consideration the sample rate (which controls how many samples per second the decoder has to interpolate a waveform and side effects thereof) and the number of quantization bits available, which controls the resolution of the individual samples. Plus, MP3 has a lowpass filter which is somewhere between 16kHz and 19kHz, which for certain musical passages is really noticeable, though not so much so for just voice.

              AFAIK, MP3 is by nature always a lossy codec, with no option for lossless encoding. If that's true, no matter how much information you want to make available to the encoder, you'll never be able to match the original waveform.
            • Re:What about media? (Score:2, Interesting)

              by glarbl_blarbl ( 810253 ) <glarblblarbl@gm[ ].com ['ail' in gap]> on Saturday October 14, 2006 @08:08PM (#16440013) Homepage Journal
              I know what you mean, and IMO many times sensory limitations are generalized to the entire population while extraordinary indviduals who spend time training their senses will be able to exceed them. I had the benefit of learning from professionals in audio production in a collegiate setting, and the I bet most people can't hear the difference between DVD quality and CD quality, and most of the consumer gear doesn't help there... I guess it's splitting hairs, but audio professionals who truly respect their craft will always strive to present the musicians' intent in the most transparent manner possible. I've always thought that breaking rules didn't mean much until I knew what they were.
              • by glarbl_blarbl ( 810253 ) <glarblblarbl@gm[ ].com ['ail' in gap]> on Saturday October 14, 2006 @08:17PM (#16440065) Homepage Journal
                Weird. /. seems to have edited my post, it's plain old text from now on, dammit.. That's what I get for skipping preview, I guess. Here's what I meant to say:

                I know what you mean, and IMO many times sensory limitations are generalized to the entire population while extraordinary individuals who spend time training their senses will be able to exceed them. I had the benefit of learning from professionals in audio production in a collegiate setting. The ~30ms rule was one I heard from many of my instructors and verified for myself a number of times (on analog and digital gear in state of the industry studios). Since I graduated 9 years ago I have been actively training my ears, I have also been a multi-instrumentalist for nearly twenty years. So take from that what you will.

                I bet most people can't hear the difference between DVD quality and CD quality, and most of the consumer gear doesn't help there... I guess it's splitting hairs, but audio professionals who truly respect their craft will always strive to present the musicians' intent in the most transparent manner possible. I've always thought that breaking rules didn't mean much until I knew what they were.
          • by u38cg ( 607297 ) <calum@callingthetune.co.uk> on Monday October 16, 2006 @11:13AM (#16452977) Homepage
            I'm not an expert on audio perception, but I do know that I can play and hear up to 24 distinct notes per beat at 120bpm, which by my calculations works out at just under 21ms. That's playing a monophonic (wind) instrument, though.
      • Re:What about media? (Score:3, Interesting)

        by bazald ( 886779 ) <bazald@z[ ]pex.com ['eni' in gap]> on Saturday October 14, 2006 @06:58PM (#16439665) Homepage
        While you are right, you seem to have missed his point entirely. One of the biggest applications of real-time scheduling for normal users (rather than pilots and nuclear plant controllers) *is* media. Pick up a text book on operating systems, and scheduling of media applications (audio, video, ...) is practically the first thing they get to.

        I for one believe that it could be immensely useful for Linux to provide the option of switching to a real time scheduler when starting a video playing - and for it to be able to tell you that there is no chance it will be able to run certain videos (HD for example) on older hardware.
        • by wish bot ( 265150 ) on Saturday October 14, 2006 @09:34PM (#16440515)
          This is a neat idea and a good point. However, you may find that you get an OS X situation, where the scheduling works brilliantly for media but is pretty hopeless for tasks like the infamous mySQL database test. I have heard of people who recompile their own scheduling into OS X for use with databases that increases the performance dramatically. Maybe there could be different flavours of Linux distributions - media, or development - that are more suited to each task.
      • Re:What about media? (Score:5, Informative)

        by flithm ( 756019 ) on Saturday October 14, 2006 @06:59PM (#16439667) Homepage
        I get what you're saying, but if I didn't know better what you said could be a little misleading. To be totally correct a real time operating system has nothing to do with task completions, but rather with providing the ability (often mathematically proven) to respond to interrupt and thread context switches within a pre-determined (and known) time constant. This is generally referred to as the interrupt or thread switch latency.

        And actually the grandparent is right to wonder about whether or not this will allow for a more responsive-feeling system, because in general, it will. Much more so than the pre-emptive kernel routines already in <= 2.6.17.*. If you've ever used a real time OS you'll know what I mean -- nothing feels so nice to use. The processor can be pegged, and have a ridiculous load average of say... 100 or something (100 tasks are trying to use 100% of the CPU), and you won't really notice any sluggishness... but, of course, tasks will just take a long time to complete.

        It's not without its downfalls though. Obviously being able to guarantee low latency interrupt responses is going to require some overhead. You'll definitely slow your computer down by using a real time OS, although unless you're a gamer you might not even notice.

        All that said and done, unless you're using your computer for some very specific things like embedded devices, critical applications (medical, power station management, etc), and audio / video stuff, you'll probably never notice the difference.

        I know a lot of audio people are happy about the real time patches because the delay between turning a dial, or moving the mouse, and the noises that come out of the speakers is quite noticeable even with really small delays.
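        (To make "interrupt or thread switch latency" concrete, here is a minimal sketch in the spirit of the cyclictest tool - my own illustrative example, not from the -rt patchset: sleep until an absolute deadline and measure how late the kernel actually wakes you up.)

        /* Minimal scheduling-latency probe: ask to be woken at an absolute time
         * and record how late the wakeup was. On a stock kernel the worst case
         * can grow large under load; a PREEMPT_RT kernel keeps it bounded.
         * Build with: gcc -O2 -std=gnu99 probe.c -lrt */
        #include <stdio.h>
        #include <time.h>

        #define INTERVAL_NS 1000000L   /* wake up every 1 ms */
        #define LOOPS       1000

        int main(void)
        {
            struct timespec next, now;
            long worst = 0;

            clock_gettime(CLOCK_MONOTONIC, &next);
            for (int i = 0; i < LOOPS; i++) {
                next.tv_nsec += INTERVAL_NS;
                if (next.tv_nsec >= 1000000000L) {
                    next.tv_nsec -= 1000000000L;
                    next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);

                long late = (now.tv_sec - next.tv_sec) * 1000000000L
                          + (now.tv_nsec - next.tv_nsec);   /* wakeup latency, ns */
                if (late > worst)
                    worst = late;
            }
            printf("worst wakeup latency over %d loops: %ld us\n", LOOPS, worst / 1000);
            return 0;
        }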
        • Re:What about media? (Score:3, Interesting)

          by Pseudonym ( 62607 ) on Saturday October 14, 2006 @08:23PM (#16440101)

          Real-time is all this and more.

          Getting tasks to complete on time is actually not something that an operating system can guarantee: you can simply lie about how long your job will take. (But, like your compiler, if you lie to your operating system, it will get its revenge.) Or you can design your hardware so that it swamps the system with so many interrupts that it can't service any user tasks. This kind of problem is a property of the system as a whole, and the best that the kernel can do is guarantee that if you play by the rules, it will too.

          Latency (be it interrupt delivery, signal delivery, context switch or whatever) is something that the kernel can guarantee for the most part, but even then, in an OS like Linux, where hardware drivers are part of the kernel, you also rely on those drivers to play nice. This means, for example, that it must not disable interrupt delivery around a loop.

          The other main things you need are things like the ability to fix data in memory (which Linux has had for a while), fixed-priority tasks and the ability for a task not to be preempted by a task of the same priority.
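          (A minimal illustrative sketch of the "fix data in memory" step mentioned above - my own example: lock the address space so a page fault can never stall the task at a bad moment. Typically combined with a fixed RT priority.)

          /* Illustrative only: an RT task locks its pages into RAM so paging
           * can never add unbounded latency. Needs privileges or a suitable
           * memlock rlimit. */
          #include <stdio.h>
          #include <sys/mman.h>

          int main(void)
          {
              if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                  perror("mlockall");
                  return 1;
              }
              /* From here on, current and future pages stay resident. */
              printf("address space locked in RAM\n");
              return 0;
          }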

    • For Audio Work! (Score:2, Informative)

      by reaktor ( 949798 ) on Saturday October 14, 2006 @06:25PM (#16439469)
      This is good for anyone who does audio work in Linux. Until now, you had to patch the kernel to get a low-latency kernel. This is big news for Linux audio users!
    • Re:What about media? (Score:5, Informative)

      by Bacon Bits ( 926911 ) on Saturday October 14, 2006 @06:48PM (#16439619)

      Desktops and most servers do not get any benefit from a RTOS. RT makes it so that the system purposefully downgrades less-useful things like user input for maximum priority things like, say, polling a fetal heart monitor every n milliseconds or responding to an automobile collision to deploy an airbag.

      RTOS in Linux is primarily useful for Linux-based routers. However, seeing that QNX has been in the industry for 25 years, has an extremely good reputation (it's the de facto standard in the auto industry), and is already POSIX compliant, Linux still has a long way to go. The price for QNX might be USD $10,000+, but if you actually have a need for a RTOS, licensing cost is not a major obstacle.

      • by eggstasy ( 458692 ) on Saturday October 14, 2006 @09:52PM (#16440615) Journal
        Desktops do benefit from a RTOS. RT makes it so that the system purposefully downgrades less-useful things like buggy apps that go into infinite loops for maximum priority things like, say, user input.
        • by Bacon Bits ( 926911 ) on Sunday October 15, 2006 @03:07AM (#16441981)
          Yes, but adding in RT checks adds a lot of extra processing. Essentially, the system will have to stop and check to see if it's time for an RT operation when it would otherwise be doing work. And if the RTOS is doing multiple things in real time, well, it just gets that much more complicated.

          If a normal OS would *really* benefit from RT operations, don't you think that Linux, AIX, or Windows would have already implemented it? RT processing is only necessary in very specific applications. Almost all of them are for industrial equipment, safety equipment, and medical equipment where a few milliseconds of real time is actually important. A normal OS is concerned with *how quickly* something will occur. A RTOS is concerned only with *when* something will occur. It sacrifices performance for predictability.
    • Re:What about media? (Score:4, Interesting)

      by javilon ( 99157 ) on Saturday October 14, 2006 @07:00PM (#16439677) Homepage
      For a long time, people using JACK [jackaudio.org] (a low-latency audio server, written for POSIX conformant operating systems such as GNU/Linux and Apple's OS X) have been using real time Linux patches to achieve better results. So for this kind of pro audio user, running Linux workstations (not embedded Linux), it is quite nice to have this included in the official kernel.
    • by echusarcana ( 832151 ) on Saturday October 14, 2006 @08:58PM (#16440349)

      Contrary to the reply above, the answer is yes - multimedia is one application that makes use of real time signals (e.g. a timer signal). Basically, what it boils down to is the POSIX.4 standard. In MS Windows, the multimedia timers are an alternative example (I believe). On most hardware, a clock resolution of about 970us is available. So you can get scheduling with an average error of half this (so your error is half a millisecond on average).

      Real time has several aspects besides just scheduling:
      -real time signaling queues (e.g. timers and other interprocess signals) and other inter-process communications
      -the ability to lock pages into memory
      -the ability to set real time priority (pri 33-43) above the kernel (20-32) and users (0-19). To say that another way - a real time process is the king of the system.
      -the ability to choose a kernel scheduling method (e.g. FIFO, round robin)
      -the granularity with which the kernel can be interrupted
      -the latency in switching tasks
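      (As a small illustration of the first item in that list - purely my own sketch, with hypothetical values: a POSIX per-process timer delivering a queued real-time signal. Compile with -lrt on glibc.)

      /* Illustrative sketch: timer_create() plus a queued real-time signal
       * (SIGRTMIN), one of the POSIX.4 facilities listed above. */
      #include <signal.h>
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      static volatile sig_atomic_t ticks;

      static void on_tick(int sig, siginfo_t *si, void *ctx)
      {
          (void)sig; (void)si; (void)ctx;
          ticks++;                    /* async-signal-safe bookkeeping only */
      }

      int main(void)
      {
          struct sigaction sa = { 0 };
          struct sigevent sev = { 0 };
          struct itimerspec its = { 0 };
          timer_t timerid;

          sa.sa_flags = SA_SIGINFO;
          sa.sa_sigaction = on_tick;
          sigaction(SIGRTMIN, &sa, NULL);

          sev.sigev_notify = SIGEV_SIGNAL;      /* deliver as a queued RT signal */
          sev.sigev_signo = SIGRTMIN;
          timer_create(CLOCK_MONOTONIC, &sev, &timerid);

          its.it_value.tv_nsec = 10 * 1000 * 1000;      /* first tick after 10 ms */
          its.it_interval.tv_nsec = 10 * 1000 * 1000;   /* then every 10 ms */
          timer_settime(timerid, 0, &its, NULL);

          while (ticks < 100)
              pause();                                  /* wait for timer signals */
          printf("received %d timer signals\n", (int)ticks);
          timer_delete(timerid);
          return 0;
      }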

      Real time has been traditionally a concern in control system applications where predictable scheduling is important or response to a hardware interrupt must be rapid. Multimedia and many gaming applications also fit this requirement.

      Real time is undermined where things like underlying hardware can disrupt predictable execution (e.g. I've seen this caused by some file system drivers). Ideally, real time applications must be designed to assume they are in complete control of the system and should not rely on the timing of any I/O activity during their execution. A real time process must sleep for anything else on the system to run. Bugs in real time applications can be very subtle to detect and may occur at low frequency.

      Adding full real time support for Linux is pretty important for both usefulness and credibility.

    • So can anyone tell me if these new features would be useful for improving the responsiveness of media applications in Linux?

      Well, Stanford University's Media Lab offers the Planet CCRMA Linux distro (based on Fedora Core). One of its features has been custom kernels (two "speeds," even) compiled with Mr. Molnar's real-time patch (which I think is much the same as what is being discussed in this article). Planet CCRMA terms it a "low latency" fix and it's important in certain types of audio production where you want to record a "live" track in sync with a track that's already been laid down. Too much lag in the system (just a few milliseconds) can make the whole deal unworkable when trying to do "sound with sound" recording (musicians do this a bunch). I can tell you from personal experience that Audacity's usefulness goes down a couple of notches without it. :)

      You can read more about the patched kernel here, down the page:

      http://ccrma.stanford.edu/planetccrma/software/installtwosix.html [stanford.edu]

      Hopefully, this will mean that the Media Lab folks at Stanford won't have to bother to maintain the custom kernels in the near future.


  • by ztransform ( 929641 ) on Saturday October 14, 2006 @05:53PM (#16439267)
    Many arguments are out there for and against Linux as a real-time embedded operating system. But with embedded processors getting more powerful and the Linux kernel growing in popularity, anything that makes the job of an embedded developer easier (and more standardised) is a good thing.
  • by j1m+5n0w ( 749199 ) on Saturday October 14, 2006 @06:01PM (#16439303) Homepage Journal
    There has been a substantial ongoing effort to provide real-time support in Linux for quite a few years now; what do these particular patches do that hasn't been done before?
    • by Nutria ( 679911 ) on Saturday October 14, 2006 @06:08PM (#16439355)
      what do these particular patches do that hasn't been done before?

      Make them an official part of the kernel.

      I'd ask if you RTFA, but of course it's pointless.

      • by j1m+5n0w ( 749199 ) on Saturday October 14, 2006 @06:42PM (#16439567) Homepage Journal

        Gleixner was the main author of Linux's hrtimer (high-resolution timer) subsystem, and has been a major contributor to Ingo Molnar's real-time preemption patch. The changelog for the 2.6.18 kernel reflects the addition of 136 patches authored by Gleixner, along with 143 from Molnar, who works for Red Hat.

        The 2.6.18 release includes real-time technology that will save individual kernel developers from having to maintain separate real-time kernel trees, according to TimeSys. Additionally, embedded Linux developers or normal desktop users wishing to build kernels capable of achieving millisecond-level real-time responsiveness will no longer have to apply patches.

        The article does not say what hrtimer and the real-time preemption patch actually do, nor does it say that those are the patches that were added to the kernel, merely that Gleixner worked on those as well as whatever those 136 patches are that made it into the kernel. So, what actually changed? Were internal APIs rearranged? Were long-held spinlocks replaced with shorter-duration spinlocks? Can the kernel preempt things it couldn't preempt before? Does the kernel export any new APIs to user space? What sort of improvements can we expect with these patches?

        The article was quite vague in its technical details, so I was hoping someone could fill them in.

        • by Nutria ( 679911 ) on Saturday October 14, 2006 @08:06PM (#16439989)
          What sort of improvements can we expect with these patches?

          We, those who surf the web, play music, write email, host web servers and run databases, will see no improvements.

          Those who control tiny systems with ARM and MIPS CPUs will find life more consistent, because now there is The Official Real-Time Kernel, instead of a variety of non-similar patch schemes.

          • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Sunday October 15, 2006 @01:32AM (#16441613) Homepage Journal
            Those who play music will hear less skipping. Those who watch videos will see fewer frame jumps. If it was "hard" real-time, then you could guarantee that these would never occur at all, because you would be able to guarantee that the application always had the necessary timeslice to do what was needed.


            For servers, real-time guarantees each process (or each thread, depending on implementation) a deterministic amount of time, so guarantees that denial-of-service to those who are currently having queries processed is impossible. (You are guaranteed your time on the server, no matter what.) However, because execution time is bounded, it also guarantees that response time can never be below some pre-defined threshold, either. A lightly-loaded machine will provide precisely the same response times as a heavily-loaded one.


            For most servers, this is a pretty useless thing to guarantee - well, unless you're about to be Slashdotted. There are some exceptions. A server providing financial information would not really need to be able to respond faster off-peak, but must not fall over under stress. A networked storage device can't afford massive fluctuations in access time if it is to be efficient - mechanical devices don't have a zero seek-time. The better the device can guarantee an operation will take place at a specific time, the better the device can order events to make use of what time it has. Network routers that are hard real-time should never fail or deteriorate as a result of network congestion; they should merely hit the limit of what they can handle and not allocate resources beyond that point. (A router or switch that is not real-time will actually deliver fewer packets, once it starts getting congested, and can fail completely at any point after that.)


            Real-time is not going to be generally useful to the majority of individuals the majority of the time, although it will be useful to most individuals some of the time (eg: multimedia), and to some individuals a great deal of the time (eg: those setting up corporate-level carrier-grade VoIP servers, unmanned computer-controlled vehicles, DoS-proof information systems, maglev trains, etc). All classes of user will have some use for real-time. Even batch-processing needs some degree of real-time - jobs have a well-defined start-time and must never be given so much time that the next cycle of the same job is incapable of starting on time. It's not "hard" real-time, but it's still real-time in that it is still a very well-defined set of bounds on when a process is executed and how much time it gets.


            "No improvements" is therefore entirely incorrect. No noticeable improvements - maybe, depends on the situation. Some situations will improve significantly, others may actually deteriorate. No overall, on-average improvement would be closer to the truth. As for tiny systems - I run Fedora Core in full desktop mode on a MIPS, RiscOS was designed for the ARM, and I've seen plenty of embedded systems that need real-time on Opterons. For that matter, VxWorks - the "classic" hard real-time OS - is designed around the Motorola 68040 architecture and is often used by military and science labs to handle bloody huge applications (think of something of comparable size and complexity to Windows Vista as it was going to be before all the interesting bits got chopped out).


            Also, when you think of "tiny systems", you generally picture something the size of a matchbox that can be powered by a watch battery. In practice, data collection systems (a heavy use of real-time) will usually use VME crates that might easily have 16 CPUs, 128-bit instrumentation busses, a few tens of gigabytes of RAM, a fan that sounds like it was pulled out of a jet fighter, and power requirements that would seriously strain a domestic power grid. The last time I saw a truly small real-time system was at a micromouse tournament, although undoubtedly there are people who use them on tiny systems. The biggest money and the heaviest demands are to be found in areas needing much bigger iron.

  • Real-time OS (Score:3, Informative)

    by Hemogoblin ( 982564 ) on Saturday October 14, 2006 @06:09PM (#16439359)
    I didn't know what it was either, so here you go:

    http://en.wikipedia.org/wiki/Real-time_operating_system [wikipedia.org]

    "A real-time operating system (RTOS) is a class of operating system intended for real-time applications. Examples include embedded applications (programmable thermostats, household appliance controllers, mobile telephones), industrial robots, industrial control (see SCADA), and scientific research equipment.

    A RTOS is valued more for how quickly and/or predictably it can respond to a particular event than for the given amount of work it can perform over time. Key factors in an RTOS are therefore minimal interrupt and thread switching latency."
    • by mnmn ( 145599 ) on Saturday October 14, 2006 @06:59PM (#16439669) Homepage
      Talk about Karma whoring.

      I'm pretty sure most anyone reading this article on Slashdot knows what an RTOS is.
      The remaining few know where wikipedia is
    • by novus ordo ( 843883 ) on Saturday October 14, 2006 @07:05PM (#16439707) Journal
      Realtime computing [wikipedia.org] is more concerned with latency than throughput.
    • by faragon ( 789704 ) on Saturday October 14, 2006 @07:17PM (#16439767) Homepage
      An RTOS is all about predictability. Hard real-time OSes, such as LynxOS [wikipedia.org], VxWorks [wikipedia.org], or QNX [wikipedia.org], are able, by contract, to guarantee how many microseconds, milliseconds, or seconds you can expect an operation to take to finish (for example, ethernet packet dispatching always below X ms, etc.).

      Anyone with experience on unix-like RTOSes is probably glad to see RT being introduced into the main branch, as the responsiveness of an RTOS is, by far, much more satisfying. The O(1) scheduler was a nice step forward. The incoming Linux RTOS-like additions, such as the preemptible kernel, priority inversion avoidance (not sure if this has been included), etc., will use a few more CPU cycles and give a bit less disk and I/O throughput, but will boost the user experience. This second step of introducing RT characteristics will not make Linux a hard RTOS, just a soft RTOS. In a few words: there will be a few changes, related mainly to kernel preemption, in line with TimeSys and BlueCat Linux (i.e. adapting the RT patching to the main branch). I hope that hard RTOS support will be added in the far future, as it involves rewriting the driver architecture (the 2.4 to 2.6 transition is trivial in comparison). IMO, it's worth the effort (LynxOS POSIX Desktop anyone? ;-).
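      (On the priority-inversion point: here is a minimal userspace sketch of my own showing the priority-inheritance mutexes that the PI-futex work exposes through pthreads. A low-priority task holding such a lock gets boosted to the priority of the highest waiter, which avoids the classic priority-inversion scenario. Build with -pthread.)

      /* Illustrative only: request a priority-inheritance mutex. */
      #include <pthread.h>
      #include <stdio.h>

      int main(void)
      {
          pthread_mutexattr_t attr;
          pthread_mutex_t lock;

          pthread_mutexattr_init(&attr);
          if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
              fprintf(stderr, "PI mutexes not supported here\n");
              return 1;
          }
          pthread_mutex_init(&lock, &attr);

          pthread_mutex_lock(&lock);
          /* ... shared state touched by threads of different RT priorities ... */
          pthread_mutex_unlock(&lock);

          pthread_mutex_destroy(&lock);
          pthread_mutexattr_destroy(&attr);
          return 0;
      }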
  • Video playback (Score:1, Offtopic)

    by teslatug ( 543527 ) on Saturday October 14, 2006 @06:09PM (#16439363)
    I'm glad Linux is getting more realtime capabilities, but what I wonder is when video playback will stop sucking under Linux. Why does video playback get paused when you drag any window? Or how come you can't play two videos at the same time? Maybe these are X.org issues, I don't know, but they're annoying.
    • by henrikba ( 516486 ) on Saturday October 14, 2006 @06:14PM (#16439387)
      I guess you should look into your configuration. I can have several mplayer windows running at once. They don't pause when I move them (although they morph nicely because of xgl/compiz).
    • I use an NVidia card with its binary blob driver and I can play back like 8 videos at once (not very useful, but there it is)

      So it's not linux itself, it's getting good drivers and making sure you use them.
    • by ltmon ( 729486 ) on Saturday October 14, 2006 @06:42PM (#16439569)
      It's probably due to your video drivers that you see such poor performance.

      With properly accelerated video playback you will have no problem. For example "Xv" playback on newer Ati cards uses hardware acceleration. Also, have a look at the XGL demonstration videos where they play videos overlapping, transparent, in the root window and deformed all at once.
    • by BiggerIsBetter ( 682164 ) on Saturday October 14, 2006 @06:49PM (#16439625)
      Odds are you're playing through the X11 interface, when you should be using the Xv interface. Switching depends on your video player application, but most all of them can do it (with a video driver that supports Xv - most popular ones do).
  • by Anonymous Coward on Saturday October 14, 2006 @06:26PM (#16439475)
    OK, this is very significant in many ways, but audio is one thing that will benefit a lot. This marks the day when Linux audio is the most viable choice for recording engineers. Projects like Ardour [slashdot.org] will offer lower audio latencies than any other system out there - including high end hardware solutions. With 1ms latency Linux will be insanely good for any kind of professional audio work.
    • by Overly Critical Guy ( 663429 ) on Saturday October 14, 2006 @06:43PM (#16439571)
      Ha, yeah right. Linux audio software is horrible compared to the professional alternatives, and most people already have ultra-low latencies through their high-end audio hardware and provided drivers.

      Wishful thinking, my friend.
      • by DrunkenPenguin ( 553473 ) on Saturday October 14, 2006 @07:26PM (#16439807) Homepage
        Hmm.. Really? Do you know what you are talking about? If what you're saying is true then I really have to wonder why high-end hardware mixer manufacturers are funding and using Ardour. These are some of the leading companies in the industry.

        "The Ardour project is happy to be able to announce the involvement and support of Solid State Logic (SSL), one of the most respected and trusted names in the field of audio technology. SSL has chosen to support Ardour's development and to promote the idea of its broader adoption within the audio technology industry."

        Solid State Logic [solid-state-logic.com]

        Harrison/GLW [ardour.org]

        • by Overly Critical Guy ( 663429 ) on Saturday October 14, 2006 @09:42PM (#16440557)
          Yes, I do know what I'm talking about. Ardour is a third-rate Pro Tools clone. Literally, every element in the interface is directly cloned from Pro Tools. It doesn't support the top plug-in formats, AU/VST/RTAS, and it requires going through that crappy JACK format, and there aren't alternatives for all the apps that typically interface with Pro Tools, such as Logic, Ableton Live, Reason, Final Cut Pro, etc. Ardour also only runs on Linux, and Linux consumes time like a whore consumes jizz. Pros just want to get up and running, not tinker with Ubuntu installs.

          It's safe to say Apple isn't worried about losing ANY market share in the content creation industry.
          • by joto ( 134244 ) on Sunday October 15, 2006 @01:09PM (#16444403)

            Ardour is a third-rate Pro Tools clone.

            Pro Tools has been the main tool for audio professionals for decades. Sounds like a good choice if you want to clone something. I'm sorry you can't get everything you want, but even though it's third-rate, at least it's improving. And it's not like it's the only thing out there for linux either.

            It doesn't support the top plug-in formats, AU/VST/RTAS

            Ardour supports LADSPA plugins. It doesn't support AU because that's mac only. It doesn't support VST because that's windows only. It doesn't support RTAS because it's not an open standard, and requires special hardware. It may work with VST through wine, and if a windows or mac version is ever made, it will most likely support native plugins there.

            and it requires going through that crappy JACK format

            JACK is not a format. Jack is to linux what ReWire is to windows. Since jack can be used to route audio between applications, it can also be used to route audio between your soundcard and your applications. In general this gives you more flexibility. If you don't want to know this, I can write you a 2-line script so you don't need to be aware of jack. However, it will still be there, giving you more flexibility than you ever had on windows, without any loss of performance. Just because it's different from what you're used to, doesn't mean it sucks. The only thing that sucks about jack is that it doesn't do MIDI!

            and there aren't alternatives for all the apps that typically interface with Pro Tools, such as Logic, Ableton Live, Reason, Final Cut Pro, etc.

            Well, if you can only be satisfied by having exactly the same applications as under windows, you know where to find them. I will not dispute that linux could need more audio applications, but we don't need clones of all the above. ProTools, Logic, and Ableton Live all more or less perform the same function. They allow you to play, record, and arrange audio and midi tracks. Final cut adds video tracks. Reason has a nice selection of synths and effects. What we need under linux is tools to do the above tasks: play, record and arrange midi, audio and video tracks; create audio; modify audio and/or video. Tools should focus on those tasks, not in whether it duplicates the function of Logic or Ableton Live.

            Ardour also only runs on Linux, and Linux consumes time like a whore consumes jizz. Pros just want to get up and running, not tinker with Ubuntu installs.

            That kind of "pro" also has near unlimited funds and can afford to fill their studio with hard- and software for millions of dollars. Surely they can afford someone to help them inserting an ubuntu disk. As a non-pro, and from experience, I know that windows consumes a lot more of my time than linux ever has. An ubuntu install just works, compared to windows that must be tweaked for days before it works well, and must be reinstalled every 6 months. As a hobbyist musician, and tinkerer, I'd be much more interested in a home-studio running linux, than one with windows. You'd be surprised how many computer musicians agree with me.

            It's safe to say Apple isn't worried about losing ANY market share in the content creation industry.

            They should. Everything seems to be going windows!

      • by Anonymous Coward on Saturday October 14, 2006 @07:30PM (#16439837)
        You mean horrible compared to the consumer alternatives. It's right up there with the professional alternatives - it is a viable one. In terms of stability, configurability and ability it trumps both levels of commercial software - the only thing it's missing are the presets. If you're savvy enough, or professionally experienced in audio engineering, you'll find every tool you need, you'll route audio and control signals between them with ease, and you'll set their parameters to ear-pleasing results. If you're not savvy enough, you'll have to wait a few years for people to add a "Pop Hit Vocal" button to the screen that turns the little knobs for you. That's the only advantage commercial software currently has, and, in the professional world, where people are replacing expensive dedicated hardware, rather than Acid and GarageBand, it's nowhere near as important as stability, configurability, and ability.
        • by Overly Critical Guy ( 663429 ) on Saturday October 14, 2006 @09:36PM (#16440521)
          Absolutely not. Ardour tries to be a third-rate Pro Tools clone but without the hardware support, plug-ins, and other features. It comes nowhere close to a Logic Pro or a Cakewalk Sonar. Now, if you want to compare Ardour to limited consumer products like Garageband, then sure. But high-end software that the pros use in the studios? Not a chance, and if you go to a professional studio in L.A., you won't be seeing copies of Ardour running. You'll be seeing Pro Tools on Macs with Logic MIDI front-ends.
          • by joto ( 134244 ) on Sunday October 15, 2006 @01:33PM (#16444549)

            Ardour tries to be a third-rate Pro Tools clone

            I'm sure ardour doesn't try to be a third-rate ProTools clone.

            but without the hardware support, plug-ins, and other features.

            The hardware support isn't needed anymore. Fast multicore processors have pretty much replaced the need for specialized hardware. Of course, dedicated DSP hardware is still better, but it's certainly not the future anymore. Besides ProTools hardware is expensive, and I doubt you can get specifications for it. Besides, it's not obvious what other kinds of hardware you can use. On the other hand, software audio plugins, and support for multichannel low-latency sound-cards is there.

            and if you go to a professional studio in L.A., you won't be seeing copies of Ardour running.

            And if you go to a professional office in L.A., you won't see a Blue Gene, you will see a Dell. And you won't see "La Modernista Diamonds" by Caran d'Ache, but a BIC pen. Does that tell you something? Perhaps that proof by reference to "the pros" doesn't work?

  • by otter42 ( 190544 ) on Saturday October 14, 2006 @06:49PM (#16439623) Homepage Journal
    My reaction is mixed. I use RTAI in my lab, and this doesn't seem to say anything whatsoever about the technologies used in the claimed real-time kernel. In fact, from the article I don't really know if it's real real-time, or "harder but still soft" real-time. Either way, it's great that the kernel is finally seriously integrating real-time, as it's a step in the right direction for getting real-time software to work quickly and painlessly on office distros such as Ubuntu.

    Anyone know if this is just the ADEOS micro-kernel patch being distributed as part of the vanilla kernel? If not, is it compatible with RTAI, Xenomai, Fusion, and RTLinux?
  • by Animats ( 122034 ) on Saturday October 14, 2006 @07:21PM (#16439783) Homepage

    Linux has made major progress in the real-time area. But it still doesn't have everything needed.

    Many drivers are still doing too much work at interrupt level. There are drivers that have been made safe for real time at the millisecond level, but that's not universal. Load a driver with long interrupt lockouts and your system isn't "real time" any more. This is the biggest problem in practice. There are too many drivers still around with long interrupt lockouts.

    The Linux CPU dispatcher is now more real time oriented. (Finally!)

    Interprocess communication in Linux is still kind of lame by real-time OS standards. The tight interaction between CPU dispatching and messaging still isn't there, although it's getting better. Interestingly, it's there in Minix 3, and of course, that's been in QNX for years. For simple real-time apps, though, this may not be an issue. Generally, though, if you find the need to use shared memory between processes, it's because your OS's messaging lacks something.

    While Linux has been getting down to millisecond level response, QNX has been moving towards microsecond level response. The goal moves. However, millisecond response is good enough for most applications. What really matters in real-time, though, is worst-case response, not average. Benchmarks for real time OSs put a load on the system and measure how fast it responds to a few million interrupt requests. The number people look at is the longest response time.
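
    A minimal sketch of that kind of measurement, in the spirit of the usual periodic-wakeup latency tests: sleep to an absolute deadline, record how late the wakeup was, and keep the worst case. The priority, period, and iteration count below are arbitrary, and SCHED_FIFO needs root:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000LL
    #define PERIOD_NS       1000000LL   /* 1 ms period */

    static int64_t ts_diff_ns(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };
        struct timespec next, now;
        int64_t lat, worst = 0;
        int i;

        /* Lock memory and switch to SCHED_FIFO so page faults and ordinary
           tasks don't distort the numbers. */
        mlockall(MCL_CURRENT | MCL_FUTURE);
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
            perror("sched_setscheduler (needs root)");

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (i = 0; i < 100000; i++) {
            next.tv_nsec += PERIOD_NS;
            while (next.tv_nsec >= NSEC_PER_SEC) {
                next.tv_nsec -= NSEC_PER_SEC;
                next.tv_sec++;
            }
            /* Sleep until the absolute deadline, then see how late we woke up. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            lat = ts_diff_ns(now, next);
            if (lat > worst)
                worst = lat;
        }
        printf("worst-case wakeup latency: %lld us\n", (long long)(worst / 1000));
        return 0;
    }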

    Despite this, Linux is probably now good enough for most real-time applications that can tolerate a big kernel.

    • by TheMESMERIC ( 766636 ) on Saturday October 14, 2006 @11:38PM (#16441135)
      QNX's speciality is real-time; development effort is invested in keeping its kernel light-years ahead of the rest.
      OpenBSD's speciality is security. I wonder if every single *NIX then has its own specific niche.
      Linux aims to be generic: jack of all trades and master of none?
      I don't know, you tell me. Maybe Linux wins on portability, openness, ease, popularity? It's nice to see it evolving.
    • by Ingo Molnar ( 206899 ) on Sunday October 15, 2006 @04:31AM (#16442207) Homepage

      Linux has made major progress in the real-time area. But it still doesn't have everything needed.

      Many drivers are still doing too much work at interrupt level. There are drivers that have been made safe for real time at the millisecond level, but that's not universal. Load a driver with long interrupt lockouts and your system isn't "real time" any more. This is the biggest problem in practice. There are too many drivers still around with long interrupt lockouts.

      That's where my -rt patchset (discussed by Thomas in the article), and in particular the CONFIG_PREEMPT_RT kernel feature helps: it makes all such "interrupt lockout" driver code fully preemptible. Fully, totally, completely, 100% preemptible by a higher-priority task. No compromises.

      For example: the IDE driver becomes preemptible in its totality. The -rt kernel can (and does) preempt an interrupt handler that is right in the middle of issuing a complex series of IO commands to the IDE chipset, and which under the vanilla kernel would result in an "interrupt lockout" for several hundreds of microseconds (or even for milliseconds).

      Another example: the -rt kernel will preempt the keyboard driver right in the middle of sending a set of IO commands to the keyboard controller - at an arbitrary instruction boundary - instead of waiting for the critical section to finish. The kernel will also preempt any network driver (and the TCP/IP stack itself, including softirqs and system-calls), any SCSI or USB driver - no matter how long of an "interrupt lockout" section the vanilla kernel uses.
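
      As an illustration of what threaded interrupt handling looks like from a driver's point of view: the request_threaded_irq() API sketched below was only merged into mainline later, as an outgrowth of this IRQ-threading work, so treat it as an illustration of the idea rather than the 2.6.18 interface. The device structure and names are hypothetical; the hard handler just quiesces the hardware and wakes a kernel thread, which does the real work and can be preempted like any other task.

      #include <linux/interrupt.h>
      #include <linux/module.h>

      /* Hypothetical per-device structure - just enough for the example. */
      struct mydev {
          int irq;
      };

      /* Hard-IRQ context: do the bare minimum (ack/mask the device), then
         hand the rest off to the handler thread. */
      static irqreturn_t mydev_hardirq(int irq, void *dev_id)
      {
          return IRQ_WAKE_THREAD;
      }

      /* Runs in a schedulable kernel thread, so a higher-priority real-time
         task can preempt the heavy lifting at any point. */
      static irqreturn_t mydev_thread_fn(int irq, void *dev_id)
      {
          struct mydev *dev = dev_id;

          /* long, preemptible I/O processing would go here */
          (void)dev;
          return IRQ_HANDLED;
      }

      static int mydev_setup_irq(struct mydev *dev)
      {
          /* IRQF_ONESHOT keeps the line masked until the thread finishes. */
          return request_threaded_irq(dev->irq, mydev_hardirq, mydev_thread_fn,
                                      IRQF_ONESHOT, "mydev", dev);
      }

      MODULE_LICENSE("GPL");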

      Is this hard technologically? Yes, it was very hard to pull this off on a general purpose OS like Linux (the -rt kernel still boots a large distro like Fedora without the user noticing anything) - it's the most complex kernel feature i ever worked on. I think the diffstat of patch-2.6.18-rt5 speaks for itself:

      613 files changed, 22401 insertions(+), 7903 deletions(-)

      How did we achieve it?

      The short answer: it's done via dozens of new kernel features which are integrated into the ~1.4MB -rt patchset [redhat.com] :-)

      A longer technical answer can be found in Paul McKenney's excellent article on LWN.net [lwn.net].

      An even longer answer can be found on Kernel.org's RT Wiki [kernel.org], which is a Wiki created around the -rt patchset.

      • It's painful reading how that works. It's an achievement. "613 files changed." "It's the most complex kernel feature i ever worked on." But it's one of those things that, for legacy reasons, is much more complex and ugly than it should have been.

        It's sad how much effort today goes into working around bad early design decisions. After writing drivers for QNX, where drivers are just user programs with a few extra privileges, it's painful to see the contortions people go through to do them in the more primitive operating systems. The basic problem is that the original UNIX driver model was a "top part", called from read and write calls, and a "bottom part", called from interrupt level. Today, Linux drivers tend to have some threads of their own, they're loadable, and the locking primitives are better, but the headaches of the legacy architecture remain.

        The price of this is that drivers remain the biggest source of OS failure. OS developers pile hack on top of hack to try to fix this, the kernel gets bigger and bigger, and it still isn't airtight. Xen and Minix3 are starting to get it right. Drivers, even hardware drivers, are user programs in those operating systems.

        I know, it's not going to get fixed for another generation. QNX is going nowhere because the company that now owns it is in the car stereo business. Minix 3 is still a toy. The Hurd people totally blew it. Mach gave microkernels a bad name. But this actually can be done right, and it takes less code to do it right than the mountain of patches needed to do it the hard way.

        • by Ingo Molnar ( 206899 ) on Sunday October 15, 2006 @02:06PM (#16444745) Homepage

          It's painful reading how that works. It's an achievement. "613 files changed." "It's the most complex kernel feature i ever worked on." But it's one of those things that, for legacy reasons, is much more complex and ugly than it should have been.

          I think you misunderstood my point. The reason why our patchset is so complex and so large is because we want to do it right. The quick-and-ugly shortcut is a lot smaller (and it has been done before), and it brings problems similar to the ones you outlined - but that's not the path we chose.

          Here is an (incomplete) list of kernel features/enhancements split out of -rt and merged upstream so far:

          - the generic interrupt code (genirq)
          - the generic time of day subsystem (GTOD)
          - the hrtimers subsystem
          - the lock validator (lockdep)
          - the generic spinlock code
          - priority inheritance enabled mutexes
          - robust and PI-futexes
          - SRCU
          - the mutex subsystem
          - irq handler prototype simplification (removal of pt_regs)
          - spinlock init cleanups
          - spinlock debugging improvements
          - voluntary preemption feature
          - latency-breaker enhancements

          Note that all those features originated from the -rt effort, but they have justification and use independent of real-time considerations. In other words: they make sense in a general purpose OS anyway. And better yet: our current judgement is that much of the rest falls into this category too. So what we did, and what we are doing, is make dozens of seemingly unrelated enhancements to the Linux kernel which, in the end, enable hard real-time.

          Is such an approach more complex than a quick-and-dirty hard-real-time kernel feature? Sure it is - but in my opinion it's the only way this can stay maintainable in the long run. And as a happy side-effect we'll get a hard-real-time-capable kernel that will run on virtually every piece of hardware on this planet. And since we've got all the time we need and no deadlines to meet, it can and will be done =B-)
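
          The priority-inheritance mutexes and PI-futexes in the list above are visible from user space through glibc: a thread holding a PTHREAD_PRIO_INHERIT mutex is boosted to the priority of the highest-priority waiter, which avoids priority inversion. A minimal sketch (the worker body is just a placeholder; build with -pthread):

          #define _GNU_SOURCE
          #include <pthread.h>
          #include <stdio.h>

          static pthread_mutex_t lock;

          static void *worker(void *arg)
          {
              (void)arg;
              pthread_mutex_lock(&lock);
              /* ... critical section ... */
              pthread_mutex_unlock(&lock);
              return NULL;
          }

          int main(void)
          {
              pthread_mutexattr_t attr;
              pthread_t t;

              /* A low-priority holder of 'lock' gets boosted to the priority of
                 the highest-priority thread blocked on it (a PI-futex under the hood). */
              pthread_mutexattr_init(&attr);
              pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
              pthread_mutex_init(&lock, &attr);
              pthread_mutexattr_destroy(&attr);

              pthread_create(&t, NULL, worker, NULL);
              pthread_join(t, NULL);
              pthread_mutex_destroy(&lock);
              return 0;
          }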

        • I have the same dream you do, which is why I'm currently building on top of L4 with the intent of writing an OS that does it right. L4 is a microkernel that got it right, and makes it possible to build a non-braindead system.

          I can't guarantee I can pull it off, but I'm putting a hell of a lot of time into it, and I want the results really badly. Hopefully you'll hear from me in a year or two with an exciting announcement.
      • by gillbates ( 106458 ) on Sunday October 15, 2006 @08:25PM (#16447747) Homepage Journal

        Did you ever get it fixed? If so, how?

        (Disclaimer: I work with Linux in embedded multimedia players.) One of the real problems I've had with the IDE driver was that it would occasionally hold off interrupts for 130 milliseconds! When you're playing video at 30 fps, 130 milliseconds is several frames, which makes for a very difficult situation to work around.

        I found a solution, but given the time pressure, was never able to formally verify it. I am kind of curious as to what you did, and if it was similar to my approach.

  • by Anonymous Coward on Saturday October 14, 2006 @07:40PM (#16439867)

    I was at MEDC 2006 [medc2006.com], and one of the presentations was about how Linux compares to Windows Embedded.

    We got a bunch of benchmarks showing how much more consistent the kernel response times were for Windows CE (custom kernel) versus the Linux kernel, and how, on average, the Windows kernel had shorter response times. Some people in the audience asked (more than once) if they had benchmarks of the Linux kernel with the real-time patches applied ... and the answer was that they were benchmarking a 'for desktop' Linux kernel versus a 'for real-time' Windows kernel. I was expecting M$ to put out biased numbers, but I couldn't stand it and left the room after 10 minutes.

    So the great thing about this announcement is that next year they won't be able to pull this off. Honestly, I would not be surprised to see them use a 2.4 kernel, or show us how a stock desktop kernel with a crap-load of unnecessary modules takes so many more resources than the optimized Windows kernel.

    (sorry to post Anonymously, I guess I'm a Coward)
  • by illuminatedwax ( 537131 ) <stdrange@nOsPAm.alumni.uchicago.edu> on Sunday October 15, 2006 @09:10AM (#16443309) Journal
    My name is Ingo Molnar. You SIGKILLed my parent process. Prepare to be made real-time.
