Time for a Linux Bug-Fixing Cycle

AlanS2002 writes "As reported here on Slashdot last week, some people are concerned that the Linux kernel is slowly getting buggier under the new development cycle. Now, according to Linux.com (also owned by VA), Linus Torvalds has thrown in his two cents, saying that while there are some legitimate concerns, the situation is not as bad as the various reports might have suggested. However, he says the 2.6 kernel could probably do with a breather to let people calm down a bit."
  • by Anonymous Coward on Tuesday May 09, 2006 @07:40AM (#15291965)
    As a user, I preferred the old odd/even unstable/stable code split; I'd run .even at work and .odd at home.

    As someone who doesn't really keep up to date with Linux politics, I was wondering if someone could explain to me why this (IMHO) good development model was abandoned in favour of continuous feature-adding in the 2.6 kernel? Was it something Linus wanted to do, or was he pressured into it?
  • by mlwmohawk ( 801821 ) on Tuesday May 09, 2006 @07:44AM (#15291982)
    I have been using Linux since the early 1990s and I've been a software developer for almost 30 years. One thing in particular concerns me, and I think this recent indictment is just a symptom of a larger problem.

    The problem is that the drivers have to remain in constant flux because the kernel API is always changing. When there are a limited number of drivers, you can still move quickly on the kernel. But as you add more and more drivers, you add more and more work to keep them updated. Eventually, updating the drivers takes more work than modifying the kernel itself, and the drivers become your sticking point.

    This is where I believe Linux is stuck. Linus and the kernel team have to look at the various kernel APIs and standardize them in the next release.

    Sorry guys, time to grow up. Linux *is* mainstream!
  • by WillerZ ( 814133 ) on Tuesday May 09, 2006 @07:57AM (#15292040) Homepage
    The main difference was that if 2.4.x was good for you there was a very good chance that 2.4.(++x) would be good for you as well. Now, however, nothing is off-limits; so that is less true.

    (Yes, I recall some times in the 2.4.x era when this wasn't true either.)
  • Re:question (Score:5, Interesting)

    by Vo0k ( 760020 ) on Tuesday May 09, 2006 @08:09AM (#15292095) Journal
    yep, the previous poster is right.

    I thought I knew C until I tried to fix a bug in the kernel.

    It was a simple syntax bug. Somebody put xxx[...]->yyy instead of xxx->yyy[...] in one line, and the compiler was protesting about a type mismatch. One single line. But it took me about four hours just to work out what the correct syntax for that piece of code should be, by analysing the types of the variables involved. I have no idea if the fix really corrected the problem; it just made the line lexically correct and let the compiler go on. In the meantime I had to crawl through about four levels of header files for each of the variables/records used in the line, to reach the primitive types the structures, pointers, etc. were derived from, and I was left totally dizzy. And I was doing it code-monkey style: I didn't really understand the workings of the kernel or what the line I edited was meant to do. I was purely checking that, say, a pointer to float wasn't being assigned a plain float value rather than a pointer to one (a reconstruction of that kind of mix-up is sketched below).

    The kernel is too difficult for us average coders. Only the elite can fix these bugs for us.
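    For illustration, here is a minimal reconstruction of that kind of mix-up; the names (struct dev, stats, xxx) are invented, not taken from the kernel code in question:

        /* Correct shape vs. the buggy shape from the comment above. */
        #include <stdio.h>

        struct dev {
            int stats[4];               /* fixed-size member array */
        };

        int main(void)
        {
            struct dev d = { {10, 20, 30, 40} };
            struct dev *xxx = &d;       /* pointer to a struct */

            /* Correct: dereference the pointer, then index the array. */
            printf("%d\n", xxx->stats[2]);  /* prints 30 */

            /* The buggy shape, xxx[2]->stats, indexes first: xxx[2] is
               a struct dev, not a pointer, so applying '->' to it is
               precisely the type mismatch the compiler complains about. */
            return 0;
        }

    The fix itself is one line, but knowing which of the two shapes the author intended still means tracing the surrounding types, which is exactly the four hours of header-crawling described above.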
  • by s31523 ( 926314 ) on Tuesday May 09, 2006 @08:16AM (#15292123)
    I wouldn't so much say we're a dying breed... Rather, I would say that the number of people who do their own kernel building is growing, but the number of people who just buy a distro, install it, and "hope everything just works" is growing much, much faster. That can be viewed as a good thing, since more people using Linux will cause commercial vendors to take note and support Linux more readily. Although, I will miss being that nerdy guy who doesn't run Windows...
  • Re:question (Score:5, Interesting)

    by bhima ( 46039 ) <Bhima.Pandava@DE ... com minus distro> on Tuesday May 09, 2006 @08:33AM (#15292189) Journal
    This is nearly the same as my own experience... which makes me enjoy using, in my case, OpenBSD. I use C professionally, but what I write is an order of magnitude (or two) less complex than the Linux kernel. It's just amazing to me that it all comes together despite how many people are working on it.

    Back to the point: what harm can spending some time on a bug-fixing cycle do? I don't see a downside...
  • it depended on the machine you had.
    my ide/ata interface was broken 3 times in the 2.4.x series ... but at least alan was a good fellow and fixed it quickly with the -ac patches ;)

    i started to use linux quite late, on the 2.2 series ... and the 2.4 seemed rather unstable at times.
    2.6 ... the dev. model has changed so much that there isn't really a possibility for a comparison here

    i miss the -ac series, i miss the stability, and i welcome my new freebsd overlord for now. after all, it's about choosing the tool that lets you do the work, and everyone should pick what they like: if you want rock stable, look at 2.2; if you want to bleed the edge (and the stability) out of it, sit on the latest 2.6; if you are tired of all that mess, you can try freebsd as well.

    ps. Tanenbaum, where is your post about how microkernels would prevent all of this?
  • Good ol' Pat... (Score:3, Interesting)

    by zenmojodaddy ( 754377 ) on Tuesday May 09, 2006 @08:53AM (#15292286)
    Does this mean that everyone who has complained about or criticised Slackware for sticking with the 2.4.* kernel for the sake of stability will apologise?

    Not holding my breath or anything, but it might be nice.
  • by Jessta ( 666101 ) on Tuesday May 09, 2006 @09:04AM (#15292335) Homepage
    That will probably be the future. We are not there yet: the message-passing overhead is still large and makes coding difficult. E.g., HURD is still not finished after 20 years.

    Eventually, with multi-core CPUs running stupid amounts of threads, the microkernel will make its comeback.

  • by ratboy666 ( 104074 ) <<moc.liamtoh> <ta> <legiew_derf>> on Tuesday May 09, 2006 @09:15AM (#15292402) Journal
    Thank you.

    I would like to modify this slightly. I don't think a single DDI (device driver interface) will work, but several DDIs can be defined:

    A low-level SCSI DDI
    A low-level audio DDI
    A low-level network DDI

    and maybe others. Factor the drivers, and extract common parts into the appropriate DDI.

    Now, a vendor would write to that DDI, and the Linux team would have to promise that the defined DDI would have a lifespan of (?, but as long as possible). Any drivers needing a custom kernel interface would be planted into the source tree as is now done.

    All drivers under the DDI can be checked for conformance. The DDIs would not have to be "officially" introduced until they are ready.

    Putting fences like this into the kernel would be (in my opinion) a very good thing.

    Ratboy.
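    To make this concrete, here is a purely hypothetical sketch, in ordinary C, of what a small versioned audio DDI could look like. None of these names exist in the actual kernel, and a real interface would carry many more operations:

        /* Hypothetical versioned audio DDI; all names invented. */
        #include <stddef.h>

        #define AUDIO_DDI_VERSION 1  /* bumped only when the contract changes */

        struct audio_ddi_ops {
            int  version;            /* must match AUDIO_DDI_VERSION */
            int  (*open)(void *dev);
            int  (*set_rate)(void *dev, unsigned int hz);
            long (*write)(void *dev, const void *buf, size_t len);
            void (*close)(void *dev);
        };

        /* Kernel-side registration: accept any driver whose ops table
           matches the advertised contract version. */
        int audio_ddi_register(const struct audio_ddi_ops *ops)
        {
            if (ops == NULL || ops->version != AUDIO_DDI_VERSION)
                return -1;           /* contract mismatch: refuse the driver */
            /* ... add ops to the list of registered audio drivers ... */
            return 0;
        }

    A vendor would fill in the ops table and touch nothing else; as long as the version holds, kernel-internal churn cannot break the driver.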
  • by LordOfTheNoobs ( 949080 ) on Tuesday May 09, 2006 @09:30AM (#15292506) Homepage
    Point 1 - Your post contradicts its own supposed respect for the GPL.
    Point 2 - Linux is FSF free, share and share alike by license. BSD is not. You can't generalize them together on this issue. If you don't get the difference, you don't know what the hell you're talking about.
    Point 3 - The operating system doesn't _have_ to do shit. If the companies want their shit to run in Linux, they should submit GPL'd drivers or suffer their rightful hell for being miserly with their code in a project based on sharing. To hell with them.
    Point 4 - There is a fairly standard API, and when they change it they fix the GPL drivers. There is no ABI (`application binary interface', since you obviously don't know), and one is neither required nor desired, because Linux runs on many different types of hardware (see the sketch after this list). Should we instead suffer to create an ABI for each hardware platform that each driver must uphold? There is more than x86 out there. Hell, even in x86, should we make all drivers conform to a 16-bit driver interface, or create different ABIs for 32-, 64-, and the future 256- and 1024-bit systems?
    Point 5 - Cry more noob.
    Point 6 - If a hardware manufacturer wants to sell their hardware to us, they will either suffer intolerably or they will give in and release some GPL'd code. If coders want it badly enough, someone will reverse engineer it and create free code on their own. It's not like we're going to start using our DVD-Rs to burn off graphics cards. Well, not until my Chinese GFX-RW comes in anyway.
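    A toy illustration of the API/ABI point in point 4; struct request is invented, not a real kernel type. The same source-level API compiles everywhere, but the binary layout, which is what an ABI would have to freeze, differs per platform:

        /* Same C source everywhere (a stable-enough API), but the
           struct's size and field offsets (its ABI) differ between
           32-bit and 64-bit builds. */
        #include <stdio.h>

        struct request {
            char  flag;
            long  offset;   /* 4 bytes on 32-bit Linux, 8 on 64-bit */
            void *buffer;   /* pointer size differs too */
        };

        int main(void)
        {
            /* A recompile absorbs the difference automatically; a
               pre-built binary driver would silently break. */
            printf("sizeof(struct request) = %zu\n", sizeof(struct request));
            return 0;
        }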
  • by sydb ( 176695 ) <[michael] [at] [wd21.co.uk]> on Tuesday May 09, 2006 @11:55AM (#15293656)
    But FreeBSD is a complete operating system, whereas Linux is a kernel. If you run Debian GNU/Linux, then critical (security) fixes will be issued for your current kernel, without backporting bugs. That is the job of the distro maintainers, not of the Linux (*kernel*) developers.

    It's about time people realised that the distinction between Linux the kernel and GNU/Linux the operating system is a real and important one, not just RMS whinging (although I agree with his whinge anyway).
  • by ratboy666 ( 104074 ) <<moc.liamtoh> <ta> <legiew_derf>> on Tuesday May 09, 2006 @02:45PM (#15295391) Journal
    Sure, I can elaborate. The idea is that drivers are constrained in the interfaces they use. The idea is NOT to produce a "universal" DDI, but to formalize the current driver classing. Indeed, formalize it to the point that a source tool can examine the driver source and determine if it is "compliant" with the class it belongs to. All such compliant drivers can be considered "class DDI clean", and a kernel change can then be tested against the interface.

    Drivers which are not completely "class clean" need to be checked (more) carefully against kernel changes. This encourages drivers to be migrated into the "clean" class. If a kernel change occurs which affects the entire class, it is likely that automated tools can handle the change to the drivers (hopefully, the bulk of the work).

    I don't consider the interface to be off limits to kernel developers, but the extra isolation should make things easier from both the driver and the kernel perspective.

    Initially, such "DDI layers" should be imposed on isolated parts, where a great deal of abstraction already exists. Later on, we (Linux) should remain flexible enough to reject such ideas if they don't work, or to extend them.

    Ratboy.
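    A hypothetical sketch of such a conformance check, assuming a symbol-level approach: feed it one referenced symbol name per line (extracted with, say, nm -u driver.o | awk '{print $NF}') and it flags anything outside the class's allowed DDI list. The allowed list here is invented:

        /* Toy "class DDI clean" checker; reads symbol names from stdin. */
        #include <stdio.h>
        #include <string.h>

        static const char *allowed[] = {
            "audio_ddi_register", "audio_ddi_unregister",
            "kmalloc", "kfree",
            NULL
        };

        static int is_allowed(const char *sym)
        {
            for (int i = 0; allowed[i] != NULL; i++)
                if (strcmp(sym, allowed[i]) == 0)
                    return 1;
            return 0;
        }

        int main(void)
        {
            char sym[256];
            int clean = 1;

            while (scanf("%255s", sym) == 1) {
                if (!is_allowed(sym)) {
                    printf("not class clean: %s\n", sym);
                    clean = 0;
                }
            }
            return clean ? 0 : 1;   /* nonzero exit means not clean */
        }

    In a real setup the allowed list would presumably be generated from the class's DDI definition, so the check could run automatically as part of the build.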
  • by killjoe ( 766577 ) on Tuesday May 09, 2006 @04:26PM (#15296364)
    "While the rest of us use Ubuntu to get work done"

    I hear this a lot, but it goes against all my experience. Usually the people I meet who compile their kernels and do other geeky things tend to get way more work done than the people who want everything dropped in their laps.

    Am I hanging out with a different crowd than you? The people I meet who use computers without understanding anything about them tend to be some of the least productive people in any business. It's always the savvy guy/girl who can use the tool properly that gets all the work done.

    By the way, that applies just as much to Windows as to Linux. In any office you always have three or four people who really know how to use a computer and can use Excel and Access to get things done while everybody else just putters along.

    Who gave you the idea that people who compile their kernels don't work as hard as you do and are unable to get as much work done as you do?
  • by zogger ( 617870 ) on Tuesday May 09, 2006 @11:06PM (#15298509) Homepage Journal
    Hell ya, I got angry with your veiled assertion that, because some company had some brains on staff, other brains aren't out there in the open source community. I thought that was a real cheap shot, which is why I went with my name, to make the response easier to follow.

    And you still haven't answered the main capitalist "bottom line" question, which is what this is all about: money. More money one way, or the other?

    Can you point to any other hardware sales LOST because the hardware ran on open source? This is a very simple question and gets directly to the heart of the argument. They have to "stay closed and secret" because they will "lose money". OK, swell, I got that part, I understand the argument; now show me the beef. I keep being shown the bun, but where's the beef?

    ATI and nVidia *think* they might lose sales; they don't know that for a fact. They obviously believe it (I'll grant that), but I think they are scared and still locked into last century's business model, unable to see the big picture cleanly or clearly enough (the **AAs as well, for that matter). I.e., they just "don't get it" with open source and completely fail to see that the advantages outweigh the perceived "dangers" by a huge margin.

    I assert (and this is an opinion, not data) that whichever one of them went pure open source, right now, today, would actually pull far ahead of the other in a relatively short time span. That's easy enough to grok: the combination of good hardware and greatly enhanced enthusiasm from a LOT more coders wanting to help make their cards better on the software side would act as a force multiplier for their own in-house coders' efforts and result in even higher sales. That's my posit.

    OK, we shouldn't confuse theoretical assumptions with hard data, yes? We can agree on that point? OK then, we need some examples to prove their, and your, point. Go ahead, be my guest!

    I need to see some examples of where a hardware vendor, previously using closed source only, went open source and lost significant sales because of that fact, with the decision to go open source being the primary reason for the lost sales. That's the closest we could have to a real-world example. I'll wait for some examples, and I will accept their validity if/when I see some.

    I am not aware of any, though I don't know every single thing about the hardware business out there. I do see a lot of counterexamples. I *have* seen that places like IBM have really gone out of their way to incorporate more and more open source, and it certainly hasn't hurt their hardware sales any, and they are significantly larger than either video card maker and in a competitive and similar market: "computer hardware". I have seen examples like FF pulling completely away from what MS has in the browser arena. And so on. This is an oft-discussed theme here; there are a lot of examples showing that open sourcing is being adopted by more and more companies, and they mostly seem to like it once they start seeing the benefits and improvements. It is not the predominant case yet, I will grant that as well, but it is growing fast. Are all these innovators wrong? Are they lying? I don't know; I can only go on what I read and see and experience.

    Really, show me an example someplace to back up your assertion of a higher probability of "lost sales due to open sourcing the code". If it is so probable, surely there must be a plethora of examples out there, or even just one real good one, to use as a reference to affirm the counter.

    I'll wait. Take your time, no rush.
