Time for a Linux Bug-Fixing Cycle 236

AlanS2002 writes "As reported here on Slashdot last week, there are some people who are concerned that the Linux kernel is slowly getting buggier under the new development cycle. Now, according to Linux.com (also owned by VA), Linus Torvalds has thrown his two cents in, saying that while there are some concerns, it is not as bad as some might have thought from the various reporting. However, he says that the 2.6 kernel could probably do with a breather to get people to calm down a bit."
  • by WillerZ ( 814133 ) on Tuesday May 09, 2006 @07:36AM (#15291952) Homepage
    As a user, I preferred the old odd/even unstable/stable code split; I'd run .even at work and .odd at home.

    I suppose if you buy your linux off the shelf you can complain to your vendor, but for home users looking to do some DIY kernel building the new way is a bit worse. However, I suspect we're a dying breed...
  • by lostlogic ( 831646 ) on Tuesday May 09, 2006 @07:39AM (#15291963) Homepage
    The current system facilitates this as well -- I run 2.6.anything.somethinghigh on my servers and 2.6.anything at home, and it works quite well. The -stable team are really providing an excellent service with their work beyond the third dot, and they let the mainline kernel move at a quicker pace than the alternating odd/even system did.
  • Re:question (Score:3, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Tuesday May 09, 2006 @07:47AM (#15292004) Homepage
    Part of the problem is experience. Projects like GCC and the kernel suffer from two problems.

    1. The underlying technology is non-trivial

    2. The implementation often is dirty, quick and without consistent method.

    In the case of #1, the technology simply isn't trivial. How many people understand paging?

    In the case of #2, the code in many cases lacks comments, uses cryptic variable names, and the documentation [even doxygen-style comments] just isn't there.

    Those two issues fight against anyone willing to throw in a weekend to help out.

    Tom
  • Re:question (Score:5, Insightful)

    by zootm ( 850416 ) on Tuesday May 09, 2006 @07:54AM (#15292028)

    As the previous article pointed out, there's no lack of developers, just a lack of developer interest in fixing the bugs. Many of the larger contributors are paid by companies to ensure that specific features are put into (or at least developed for) the kernel. And let's face it: bug-fixing is not fun. Regardless of how hard-working the people are on average, bug-fixing is generally the sort of thing that people shy away from unless the bugs directly affect them, especially when working voluntarily.

    All large systems have a danger of bugs creeping in over time, and it can be easy to let their numbers get out of control as time goes on. The fact that the people in charge are pointing it out now is basically an example of good management — attempting to address a concern before it becomes more serious.

  • As a software developer whose experience goes back more than 40 years, to the Stanford Time-Sharing System on the DEC PDP-1, I can assure you that the only way to keep the kernel API from changing is to kill the project. Just as you wouldn't expect a driver written for Microsoft's MS-DOS to be effective on a modern NUMA machine, you shouldn't expect any driver interface standardized today to be effective 10 or 20 years from now. An attempt to freeze the driver API would hamstring the kernel developers, making the kernel less interesting to work on. Somebody would fork it, to lift the compatibility restriction, and the new kernel would work much better with modern computers, causing everyone to migrate to it.

    The only way to keep Linux relevant is to let it evolve. Yes, that creates a burden on driver writers. Linux has a partial solution: keep your drivers in the kernel source tree, and test each kernel to be sure your driver still works. When it breaks, the cause should be obvious and easily fixed. If you are lucky, the person who changed the API will also update your driver, but you can't count on that, which is why you must test.

  • by Skuto ( 171945 ) on Tuesday May 09, 2006 @08:13AM (#15292103) Homepage
    2.6.16 fixed a critical vulnerability over 2.6.15. It also breaks several network drivers.

    There was a time when you could grab the next stable kernel, for example when there was an exploit and you really had to, and you'd know you'd only get *more* stability. Now it's exactly the opposite. If you have to upgrade, you're just screwed.

    This started around the time they added reiserfs to the stable series although it was far from stable yet. It's not new in the 2.6 series, really. It's a wrong philosophy.

    Compare this to FreeBSD release engineering with RELENG, STABLE and CURRENT. FreeLinux anyone? :-P
  • by TerminaMorte ( 729622 ) on Tuesday May 09, 2006 @08:14AM (#15292109) Homepage
    While I'm not sure if he meant it this way, it sounds to me like he's saying that it's not considered terribly odd for Windows to crash; not that Windows constantly crashes.

    If a desktop user sees a blue screen of death (device driver, bad hardware, what have you) it's nothing incredibly shocking; we've grown used to it over the years.
     
    Linux has certainly crashed on me (mostly when trying out drivers that aren't exactly stable), and when it happens it is a much rarer (and stranger ;)) occurrence.

    Certainly you agree that Windows (he didn't specify XP/2003, remember, just Windows in general) is known for problems like that more than Linux is?
  • by diegocgteleline.es ( 653730 ) on Tuesday May 09, 2006 @08:27AM (#15292165)
    The "stable/unstable" development model does not work so well with huge projects like the linux kernel.

    With the old model, the linux kernel would start an unstable branch and people would start adding stuff without the care you'd put into merging something into a stable tree; it wasn't tested a lot, etc...

    Now keep this up for one or two years. When you decide to release the unstable tree as the next stable version, you realize that your unstable tree is full of crap, and you need to waste months or years (Vista) trying to stabilize it. Even when you release the .0 version it's still unstable, so people have to wait even more months to start using it.

    The "new" development model fixed that. In the current linux development model, people are allowed to put new features in the kernel even if they're invasive. But programmers are not allowed to put crap in the kernel: features need to be VERY WELL tested (in the -mm tree) and reviewed, you have to show numbers that back your words if necessary, document things, etc. Of course no code is free of bugs, so the released version will not be 100% stable the way current 2.4 is, but it's QUITE stable.

    Because the features are merged progressively, it's MUCH easier to find and fix bugs. Even if there are new features in every release, there are not a LOT of new features, so it's much easier to find out which feature broke something between two releases. Compare that with a stable/unstable development model: people keep adding things for years, and when the user switches from 2.4.x to 2.6.0 his kernel doesn't boot. How do you find out what broke it with so many changes?

    IMO, from a Q/A POV, the new development model makes more sense than a pure stable/unstable development model. It's "progressive" vs "disruptive", and for projects with several million lines of code and so many contributors the progressive approach may make sense. Of course, because new things keep getting added there are always some bugs, which is what people are bitching about today. Maybe this could be fixed by leaving the current tree as "stable" and starting a new tree - but instead of an "unstable" 2.7 tree, a 2.8 "stable" tree. A purely unstable branch doesn't work that well with huge projects like the linux kernel. Remember the hell that FreeBSD 5.x was and how much it hurt the FreeBSD project; remember Windows Vista. Maybe it works for some people, but I don't think it's the best development model for such projects. Solaris is also using this model to some extent - they release things into opensolaris, but what you see in opensolaris is not the "official stable release"; it only becomes "stable" after a while.
  • by tadmas ( 770287 ) <david AT tadmas DOT com> on Tuesday May 09, 2006 @08:31AM (#15292185) Homepage
    Any kernel with upwards of 2.5 million lines of code is going to be incredibly buggy, perhaps it's time to rethink and go back to the microkernel

    Splitting any software into external pieces is exactly the same as splitting the software into internal pieces. Microkernel is not the answer -- encapsulation is the answer.

    Besides, converting the kernel will not get rid of the bugs; it will just make different ones. 2.5 million lines is a lot to rewrite, and any rewrite will lose all the bugfixes already in place [joelonsoftware.com].

  • by xtracto ( 837672 ) on Tuesday May 09, 2006 @08:36AM (#15292204) Journal
    So, if you have a Linux kernel driver that is not in the main kernel
    tree, what are you, a developer, supposed to do? Releasing a binary
    driver for every different kernel version for every distribution is a
    nightmare, and trying to keep up with an ever changing kernel interface
    is also a rough job.

    Simple, get your kernel driver into the main kernel tree (remember we
    are talking about GPL released drivers here, if your code doesn't fall
    under this category, good luck, you are on your own here, you leech

    No, this sucks, I respect the GPL and other open source licenses (BSD) as well as closed source licenses. If nVidia or ATI or any other hardware manufacturer does not want to license their software as GPL, it is their decision. The operating system MUST provide a standarized API.

    Whoever agrees with this does not have the right to whine that X or Y company does not provide drivers and support for Linux. It is a design flaw IMNSHO.
  • I can assure you that the only way to keep the kernel API from changing is to kill the project.

    You don't have to stop the API changing, you just have to stop it changing all of the time. Doing that also gives you the added benefit that third-party vendors don't keep pulling their hair out because the kernel API keeps changing, so they may be more inclined to actually release drivers in the first place.
  • by Rattencremesuppe ( 784075 ) on Tuesday May 09, 2006 @08:48AM (#15292260)
    2.6.16 fixed a critical vulnerability over 2.6.15. It also breaks several network drivers.

    Stable driver APIs anyone?

    Oh wait ... stable driver APIs promote binary drivers ... EVIL EVIL EVIL

  • by mlwmohawk ( 801821 ) on Tuesday May 09, 2006 @08:48AM (#15292263)
    I have read this piece before, and while I think it is very good, it and I both agree that a "binary interface" is a bad idea. I am not suggesting that at all. I am suggesting that, as part of the kernel, define a stable API.

    Look at the current APIs, augment or "bless them."
    Don't access structures, use macros.
    Bless tried and true interfaces, and make damn sure no one changes them without keeping backward compatibility.
    Assign temporary status to "experimental" interfaces.

    Maybe create a synthetic API layer analogous to Windows' NDIS, where common peripherals can just code to that and be done. That way, the vast majority of simple devices will just come along for the ride.
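
    As a concrete (and entirely hypothetical) sketch of the "don't access structures, use macros" idea - the names below are made up, not an existing kernel interface:

        /* The struct layout stays private to the kernel. Drivers only go
         * through the accessors, so fields can be added, removed or
         * reordered between releases without breaking driver source. */
        struct widget_stats {
                unsigned long rx_packets;
                unsigned long tx_packets;
                /* new fields can be appended here later */
        };

        /* These accessors are the "blessed", stable part of the API. */
        static inline unsigned long widget_stats_rx(const struct widget_stats *s)
        {
                return s->rx_packets;
        }

        static inline void widget_stats_count_rx(struct widget_stats *s)
        {
                s->rx_packets++;
        }

    A driver written against the accessors keeps compiling across layout changes; only the inline helpers have to be touched when the struct changes.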

    There are lots of steps that can be taken. At issue is a fact of life people ignore: the strategies and skills used to attain success are not the same as those needed to maintain it and continue succeeding.

    To use a marathon metaphor, Linux is no longer sprinting to catch up; we are in the game. As such, we need to recognize and understand that we can't sprint forever. We need to settle down and pace ourselves; this is a long race, and the winner will be the one that plans ahead.

    When the Linux kernel was small, changes could be made to the whole source tree easily. As it gets larger and larger, one obscure change in one section of the kernel may not generate an error or even a warning, but may break a driver you didn't even know about. That is exactly what we are seeing.

    Linux is no longer a small and simple kernel.

  • by diegocgteleline.es ( 653730 ) on Tuesday May 09, 2006 @08:53AM (#15292280)
    Any kernel with upwards of 2.5 million lines of code is going to be incredibly buggy

    You mean that a microkernel is magically going to implement the same functionality as linux, with all the thousands of drivers, with its support for dozens of hardware platforms, in less than 2.5 million lines of code?

    Sure, a "microkernel" itself doesn't take a lot of code. But BECAUSE it's a microkernel, drivers, filesystems, network stacks etc. need to be implemented as servers. Implementing servers with the same functionality linux has today would take more than 2.5 million lines, for sure. And those servers can have bugs, you know. And hardware bugs exist - it's completely possible (too easy, in fact) to hang your machine by touching the wrong registers, no matter whether you're using a microkernel or not.

    Also, I don't understand why a microkernel would be magically more maintainable than a monolithic kernel. As far as I know, software design is something that doesn't depend on whether you pass messages or not. Sure, a server running in userspace can't take the system down. But that's completely unrelated to modularity and maintainability. Microkernels were in fact invented because people thought that hardware complexity wouldn't let monolithic kernels keep up, ignoring the fact that it's perfectly possible to write a maintainable monolithic kernel with a modular design - which is what Linux, Solaris internals etc. are today - just as it's completely possible to write an unmaintainable, non-modular microkernel. It all boils down to software design. And guess what: current general-purpose monolithic kernels (linux, *BSD, Solaris, NT, Mac OS X - no, an operating system that implements drivers, filesystems and network stacks in kernel space is not a microkernel) have had a lot of time and resources ($$$) to become maintainable, modular, extensible, etc.

    It's funny how, when a monolithic kernel has a bug, it supposedly means microkernels are better, as if the microkernel model magically made coders bug-free, or as if it weren't possible to write a microkernel server with a bad API that forces all driver developers to patch their drivers to fix a security bug. I'd love to hear what development model the Hurd/QNX/whatever guys would use to maintain six million lines of code, be it drivers for a monolithic kernel or drivers implemented as microkernel servers.
  • No, this sucks, I respect the GPL and other open source licenses (BSD) as well as closed source licenses.

    Agreed. Open source is a choice, and not choosing to open-source a driver code package does not automatically make a company evil. Most people want Linux to start playing in the same space as Windows (well, at least OSX) in terms of user numbers. This will never happen unless hardware vendors are allowed to create binary drivers for their products.

    Look at the video card space - drivers can sometimes mean a 20% boost in performance. Allowing the competition to get a look at these drivers means that you don't have an awful lot of IP to keep the business profitable.

    If anyone ever wants Linux to be more than a hobbyist desktop OS, it will have to allow for the use of binary drivers. It's too late to put it into a hardware lock-in cycle like OSX (which does allow binary drivers) - Linux on the desktop will have to run on commodity hardware, and so for anyone to ever consider it seriously, it will have to be allowed to play with whatever hardware I want to purchase - and in order to do that, it will have to play with binary drivers nicely.

    My two cents (and parent poster's)... but pretty rooted in logic.

  • by diegocgteleline.es ( 653730 ) on Tuesday May 09, 2006 @08:59AM (#15292312)
    With FreeBSD 5.x, if you had a working system

    I'm not saying you couldn't choose a stable FreeBSD version - you can run a 2.4 kernel if you don't like 2.6, as well.

    I was talking about development models. 5.x was a disaster, and this is something that even the core FreeBSD developers have accepted (they have changed their development model a bit to avoid another 5.x disaster, you know): too much time, too unstable, too long to stabilize. 6.1 (which was released today, BTW) is great, sure. That doesn't mean the development model is the best.
  • Re:question (Score:1, Insightful)

    by Anonymous Coward on Tuesday May 09, 2006 @09:00AM (#15292315)
    floats in the kernel.....you must be mad!!!
  • by MROD ( 101561 ) on Tuesday May 09, 2006 @09:01AM (#15292321) Homepage
    I disagree... mostly.

    There needs to be a stable API for drivers PER MAJOR RELEASE so that the driver maintainers can keep stable, well tested and debugged drivers.

    The API should be allowed to change with every major kernel revision but any change should be made with a great deal of thought and, unless it's very difficult to do, the old API should be supported for backward compatibility.

    Not only this, but I would argue that it would be good hygiene to separate the core kernel from the drivers. Doing this would make developers think hard about the boundaries between the two and not have one polluting the other. It would also make the developers think long and hard about whether changing the API for something is such a good idea just because it would be useful for the "ACME USB SLi Graphics card programming port widget" interface.

    The kernel is the kernel; the drivers are merely plug-ins to virtualise the hardware. The two should be as separate and distinct in practice as they are logically.
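
    A rough sketch of what a per-major-release driver interface could look like - the names here are entirely hypothetical, nothing like this exists in the mainline kernel:

        /* Hypothetical per-major-release driver API: the v1 entry points
         * are kept alive alongside the newer v2 set for one major release,
         * giving driver maintainers time to migrate. */
        struct foo_device;                            /* opaque to drivers */

        struct foo_driver_ops_v1 {
                int  (*probe)(struct foo_device *dev);
                void (*remove)(struct foo_device *dev);
        };

        struct foo_driver_ops_v2 {
                int  (*probe)(struct foo_device *dev);
                void (*remove)(struct foo_device *dev);
                int  (*suspend)(struct foo_device *dev);  /* new in v2 */
                int  (*resume)(struct foo_device *dev);   /* new in v2 */
        };

        /* One registration entry point per supported API version; the v1
         * variant becomes a deprecated shim in the next major release. */
        int foo_register_driver_v1(const struct foo_driver_ops_v1 *ops);
        int foo_register_driver_v2(const struct foo_driver_ops_v2 *ops);

    Old drivers keep registering through v1 until the shim is retired; new drivers target v2.
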
  • by just_another_sean ( 919159 ) on Tuesday May 09, 2006 @09:03AM (#15292332) Journal
    The operating system MUST provide a standarized [sic] API.

    People who code free software MUST not do anything unless they feel like it. Sure some of them might get paid by Company X to develop Driver Y or Application Z but they do so on the shoulders of what's already been put in place by free software developers.

    If Linus and the rest of the kernel developers decide at some point to provide an ABI that proprietary companies can use to build their drivers, all the while clinging to their dated business methodologies and obsession with "IP", then great, that's their choice. It might take a Herculean effort to get all those copyright holders to agree and do it but if they can then that's up to them.

    Conversely, if they choose not to, they are under no obligation to provide anything. Nobody on the kernel team, IMHO, ever got together and said "we need to start coding and provide some free software so companies with no interest in participating in the process can take our free software and make some money selling hardware". They do it for themselves, their friends and family, their community. Whether or not ATI and NVIDIA want to be members of that community entitles them to exactly nothing.
  • Re:question (Score:3, Insightful)

    by eraserewind ( 446891 ) on Tuesday May 09, 2006 @09:49AM (#15292660)
    Well, I have to disagree. I need to fix kernel bugs all the time as part of my job (not in Linux), and it's no big deal, even though I am far from being a kernel developer. There is nothing magical about kernels. They are just C with little bits of assembler thrown in here and there. Of course there is easy to read code and hard to read code, but that is largely unrelated to whether something is in a kernel or not.
  • by mapkinase ( 958129 ) on Tuesday May 09, 2006 @10:35AM (#15293004) Homepage Journal
    Seems like the consensus is that there is

    1) lack of motivation to fix bugs (boring and/or difficult)
    2) complexity of the Linux kernel code

    Given this observation, I wonder whether the kernel code needs refactoring to make it more readable, and if it does, how open source projects approach it.

    I would assume that the project leader would have to put together some kind of team to do that. What I have mostly heard about OS projects is that they are started by a single person who codes something workable, and then, depending on the prospects for future use and general interest, more developers join in a very free, relaxed manner.

    Refactoring seems to be a different issue: one needs to redo a whole bunch of functions without getting any intermediate working results. How is the OS community dealing with the problem of refactoring, or how would it deal with it?
  • by 0xABADC0DA ( 867955 ) on Tuesday May 09, 2006 @11:37AM (#15293484)
    The real advantage of a *good* microkernel is that normal people can write drivers and modules to extend it. If you compare the filesystems available for FUSE [sourceforge.net] compared to what's available by compiling into the kernel you get a good idea:

    Kernel: ext2/3, reiser3/4, jfs, xfs, minix, romfs, cdrom, fat, ntfs, proc, sysfs, adfs, ffs, hfs, BeFS, jffs (flash), cramfs, qnx4 fs, smb, cifs, andrew fs, plan 9

    FUSE:

    FunFS: network filesystem,
    KIO Fuse Gateway: mount anything kde can talk to as a filesystem
    Bluetooth FS: bluetooth functions as files
    mcachefs: caches files locally from another filesystem ie nfs/smb
    Fusedav: mounts WebDAV shares as fs
    GmailFS: uses gmail for storage
    CvsFS: view a version as fs
    SshFS: mount sftp as fs
    WikipediaFS: edit wikipedia articles as files
    FuseCompress: transparent compression
    FuseFTP: ftp filesystem (written in perl)
    GnomeVFS2: mount anything nautilus can view
    archivemount: mount tar, cpio, tar.gz archives read/write ...

    Notice any difference? In the kernel, everything is pretty much either some long-standing standard or developed by some large corporation. The user-mode, i.e. microkernel-style, ones are developed by individual people because they saw a need -- and so they actually do something useful like mounting ftp or zip files, or using gmail. These things are useful and different, whereas once I've picked the fs for my drives I couldn't give a crap about any other fs in the kernel. None of them do anything even remotely interesting.

    So that's the real advantage of a microkernel. Somebody wrote a usable filesystem in perl, for heaven's sake. Yes, you can get some of the same benefits by turning a monolithic kernel like linux into basically a big/slow/ugly microkernel in certain areas, like fs for instance. But with a good microkernel or safe kernel you get these same benefits everywhere, with the advantage that your "archive filesystem" is much, much faster.
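
    To make that concrete, this is roughly all it takes to expose a single read-only file through the libfuse 2.x API. It's a minimal sketch modelled on the stock FUSE "hello" example; the file name and contents here are made up:

        #define FUSE_USE_VERSION 26
        #include <fuse.h>
        #include <string.h>
        #include <errno.h>
        #include <sys/stat.h>

        static const char *greeting = "hello from userspace\n";
        static const char *greeting_path = "/greeting";

        /* Describe a root directory containing exactly one read-only file. */
        static int hello_getattr(const char *path, struct stat *st)
        {
                memset(st, 0, sizeof(*st));
                if (strcmp(path, "/") == 0) {
                        st->st_mode = S_IFDIR | 0755;
                        st->st_nlink = 2;
                        return 0;
                }
                if (strcmp(path, greeting_path) == 0) {
                        st->st_mode = S_IFREG | 0444;
                        st->st_nlink = 1;
                        st->st_size = strlen(greeting);
                        return 0;
                }
                return -ENOENT;
        }

        static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                                 off_t offset, struct fuse_file_info *fi)
        {
                if (strcmp(path, "/") != 0)
                        return -ENOENT;
                fill(buf, ".", NULL, 0);
                fill(buf, "..", NULL, 0);
                fill(buf, greeting_path + 1, NULL, 0);   /* skip leading '/' */
                return 0;
        }

        static int hello_read(const char *path, char *buf, size_t size,
                              off_t offset, struct fuse_file_info *fi)
        {
                size_t len = strlen(greeting);
                if (strcmp(path, greeting_path) != 0)
                        return -ENOENT;
                if ((size_t)offset >= len)
                        return 0;
                if (offset + size > len)
                        size = len - offset;
                memcpy(buf, greeting + offset, size);
                return size;
        }

        static struct fuse_operations hello_ops = {
                .getattr = hello_getattr,
                .readdir = hello_readdir,
                .read    = hello_read,
        };

        int main(int argc, char *argv[])
        {
                return fuse_main(argc, argv, &hello_ops, NULL);
        }

    Build it with something like gcc hello.c $(pkg-config fuse --cflags --libs) -o hellofs and mount with ./hellofs /some/mountpoint - no kernel patch, no module, no reboot.
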
  • by diegocgteleline.es ( 653730 ) on Tuesday May 09, 2006 @11:58AM (#15293679)
    Notice any difference? In the kernel, everything is pretty much either some long-standing standard or developed by some large corporation

    Yes, I notice a difference: the filesystems in the kernel tree are general-purpose, performance-critical filesystems, whereas a fuseftp filesystem is quite the opposite.

    Have you noticed how FUSE is a linux thing that allows people to write filesystems in userspace *despite Linux being a monolithic kernel*, giving users all the advantages of a microkernel without any of the disadvantages? Have you noticed how this same approach is already used for some drivers, like the USB drivers implemented in userspace on top of libusb [sf.net], X.org 2D drivers, or CUPS printing drivers?
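
    For the curious, this is roughly what the userspace side of that looks like against the old libusb-0.1 API - just enumerating devices, which is where a userspace USB driver starts. A minimal sketch, not taken from any particular driver:

        #include <stdio.h>
        #include <usb.h>        /* libusb-0.1; link with -lusb */

        int main(void)
        {
                struct usb_bus *bus;
                struct usb_device *dev;

                usb_init();
                usb_find_busses();
                usb_find_devices();

                /* Walk every bus and print vendor:product for each device.
                 * A real userspace driver would usb_open() the device it
                 * cares about and drive it with control/bulk transfers. */
                for (bus = usb_get_busses(); bus; bus = bus->next)
                        for (dev = bus->devices; dev; dev = dev->next)
                                printf("bus %s device %s: %04x:%04x\n",
                                       bus->dirname, dev->filename,
                                       dev->descriptor.idVendor,
                                       dev->descriptor.idProduct);
                return 0;
        }

    All of it runs as an ordinary process: a bug here segfaults a program instead of oopsing the kernel.
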
  • by drew ( 2081 ) on Tuesday May 09, 2006 @12:21PM (#15293957) Homepage
    The current situation is ridiculous, though, regardless of whether or not you believe that binary drivers are acceptable.

    First of all, I don't believe that Linus et al refuse to provide a stable kernel API merely so they can snub companies who only release binary drivers (although obviously it is perceived by many as a nice side effect).

    Providing a stable kernel API would provide substantial benefits for open source drivers as well in terms of reduced maintenance. This would especially be true for hardware that was once common and well supported but is now aging and not actively maintained, but there would be many other benefits as well.

    IMHO, the kernel developers would do well to realize that their antics are hurting themselves far more than they are hurting any hardware company that refuses to release GPL drivers.
  • by zogger ( 617870 ) on Tuesday May 09, 2006 @03:13PM (#15295619) Homepage Journal
    In fact, you have strayed into a compounded strawman argument fused with completely false assumptions, and gone from there somehow convinced that it is "fact".

      Google WOULDN'T EXIST without open source. They wouldn't have a single dollar. There would not be a google as you see it now. That they only open some of their stuff means even there they still don't fully "get it", yet, besides that, they still managed to garner huge market share, PRECISELY BECAUSE OPEN SOURCE CODE WAS THERE FOR THEM TO USE. They kicked the shit out of MSN precisely because they came in on open source and that is the primary advantage they had.

    You also make the utterly and arrogantly *lame* assumption (getting to be a habit with you, you might want to get that checked at the shrink's) that the open source community doesn't have MIT grads or EEs or PhDs working in it. OMG OUR COMPANY HAS REAL SMART GUYS!!! NOBODY ELSE DOES THAT MAKES US THE BESTUS AUTOMAGICALLY!!ONE

        Ya, SO WHAT? You lose, you lose BIG TIME right there; it's demonstrably false directly on this board. There are TONS of high-level superior brains working on open source code, you can see that readily.

        Now, back to a video card, PROVE that their hardware wouldn't get better with major contributions from the community, and come in faster than what the "closed source" community would provide for "the other brand". Prove it, that's your claim, so prove with some real world examples, give us the list of other type hardware vendors who lost major sales because their hardware runs on open source code. Give us THE BIG LIST, where is it? Can you point to serious LOST SALES?

    Try again, PROVE some hardware vendor lost a sale - that's your ENTIRE POINT, LOST SALES OR NO SALE BASED ON OPEN SOURCE DRIVERS - because the hardware was running on open source and the competition wasn't, or could run on open source just as well as closed source and that open source was readily available. Give an example of lost sales, go ahead.

        I like nvidia cards; I purchased what to me was an expensive video card. Not top of the line, I can only afford a couple of generations back, but certainly good enough for my purposes now. If nvidia had a truly open source driver, does that mean automatically I would NOT buy a card from them because of that, that I would just flee overnight to ATI because for some reason they just "must" be better now? Any "advantage" that ATI might have would STILL BE IN THE NVIDIA CARD DRIVER, now wouldn't it?

    This is why you don't get it on open source, you flat out refuse to see this, you think by opening the code you are THROWING IT AWAY. You AREN'T, you STILL GET TO KEEP IT, no one has "stolen" anything from you. You are selling VIDEO CARDS, that's where the cash comes from. And you get the code first while you are working on it, so how does that change things with "the competition" again? They don't see it until you dump it on the market, so you have a huge lead time then, and after that, you STILL have lead time because something as important as that WILL get serious major effort donated to it, because it is in the USERS' - your customers' - best interest now, even MORE SO than previously. They now have a stake in your business to make sure it stays working correctly and makes good stuff, so not only do you still get the cash for the cards, you get free help to make the cards better.

      I am just an average typical user, but we'll let some others chime in here, maybe someone in charge of purchasing a lot of desktops for their business, now would anyone "you" reading this NOT BUY A VIDEO CARD for those machines if there was a very good open source driver for it, direct from the devs at the manufacturer, with fully open contributions from the open source community? Or would you insist on the closed competition card?

    This is a yes/no proposition; all else considered a near-level playing field, the hypothetical machines need video cards of some good quality.

    If yes, OK - that is understood. If the answer is NO, you WOULD NOT PURCHASE THOSE CARDS, why wouldn't you?
  • by cronius ( 813431 ) on Tuesday May 09, 2006 @03:24PM (#15295725)
    Perhaps, but I think the linux kernel (like all open source projects) is continuously moving and evolving, and the engineer in charge just wants to make good code, and not get caught up/delayed in bureaucracy and restrictions.

    It's a typical windows/linux debate argument: Windows has a stable ABI, vendors are happy, but the OS is crippled by legacy code that can't be thrown away because of ancient decisions (can't change the ABI).

    Linux developers on the other hand are free to do whatever they want with few restrictions to create the best kernel, but vendors pay by having to keep up with the changes (which is time consuming and hard).

    Local kernel code shouldn't break itself, of course, but as the article behind the previous slashdot story pointed out: some drivers broke because of an API cleanup; it happens.
  • Re:question (Score:2, Insightful)

    by jnelson4765 ( 845296 ) on Tuesday May 09, 2006 @08:25PM (#15297808) Journal
    On the other hand, I enjoyed my time doing kernel programming. It does take familiarization - any codebase that big requires time to learn how the software works, even ones less complex than modern kernels.

    I just think it's great that there's an opportunity for mere mortals to play in one of the biggest games on earth for OS geeks - and of all the OSS kernels out there, the Linux kernel has the fastest pace. And there's a lot of room to grow, especially in the microcontroller area.
