Time for a Linux Bug-Fixing Cycle
AlanS2002 writes "As reported here on Slashdot last week, some people are concerned that the Linux kernel is slowly getting buggier under the new development cycle. Now, according to Linux.com (also owned by VA), Linus Torvalds has thrown in his two cents, saying that while there are some concerns, things are not as bad as the various reports might have suggested. However, he says that the 2.6 kernel could probably do with a breather to let people calm down a bit."
Re:I preferred the old odd/even split (Score:1, Interesting)
As someone who doesn't really keep up to date with Linux politics, I was wondering whether someone could explain to me why this (IMHO) good development model was abandoned in favour of continuous feature-adding in the 2.6 kernel? Was it something Linus wanted to do, or was he pressured into it?
Standardize the Kernel API!! (Score:5, Interesting)
The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.
This is where I believe Linux is stuck. Linus and the kernel team have to look at the various kernel APIs and standardize them with the next release.
Sorry guys, time to grow up. Linux *is* mainstream!
Re:I preferred the old odd/even split (Score:3, Interesting)
(Yes, I recall some times in the 2.4.x era when this wasn't true either.)
Re:question (Score:5, Interesting)
I thought I knew C until I tried to fix a bug in the kernel.
It was a simple syntax bug. Somebody put xxx[...]->yyy instead of xxx->yyy[...] in one line, and the compiler was protesting about a type mismatch. One single line. But it took me about 4 hours just to figure out what the correct syntax for that piece of code should be, by analysing the types of the variables involved. I have no idea if the fix really corrected the problem; it just made the line syntactically correct and let the compiler continue. Along the way I had to crawl through about 4 levels of header files for each of the variables and records used in the line, tracing the structures, pointers, and macros down to their primitive types, and I was left totally dizzy. And I was doing it code-monkey style: I didn't really understand the workings of the kernel, or what the line I edited was meant to do. I was purely checking that, say, a pointer to float wasn't directly assigned a float value instead of a pointer to one.
The kernel is too difficult for us average coders. Only the elite can fix these bugs for us.
Re:I preferred the old odd/even split (Score:4, Interesting)
Re:question (Score:5, Interesting)
Back to the point: what harm can spending some time on a bug-fixing cycle do? I don't see a downside...
Re:I preferred the old odd/even split (Score:3, Interesting)
My IDE/ATA interface was broken 3 times in the 2.4.x series.
I started to use Linux quite late, on the 2.2 series.
2.6
I miss the -ac series, I miss the stability, and I welcome my new FreeBSD overlord for now. After all, it's a choice of a tool that lets you do the work; everyone should pick what they like. If you want rock stability, look at 2.2; if you want to bleed the edge (and the stability out of it), sit on the latest 2.6; and if you are tired of all that mess, you can try FreeBSD as well.
P.S. Tanenbaum, where is your post about how microkernels would prevent all of this?
Good ol' Pat... (Score:3, Interesting)
Not holding my breath or anything, but it might be nice.
Re:Typical monolithic kernel problem (Score:2, Interesting)
Eventually, with multi-core CPUs running stupid numbers of threads, the microkernel will make its comeback.
Re:Standardize the Kernel API!! (Score:3, Interesting)
I would like to modify this slightly. I don't think a single DDI (device driver interface) will work, but several DDIs can be defined:
A low level SCSI DDI
A low level audio DDI
A low level network DDI
and maybe others. Factor the drivers, and extract common parts into the appropriate DDI.
Now, a vendor would write to that DDI, and the Linux team would have to promise that the defined DDI would have a lifespan of (?, but as long as possible). Any drivers needing a custom kernel interface would be planted into the source tree, as is done now.
All drivers under the DDI can be checked for conformance. The DDIs would not have to be "officially" introduced until they are ready.
Putting fences like this into the kernel would be (in my opinion) a very good thing.
Ratboy.
Re:Standardize the Kernel API!! (Score:3, Interesting)
Point 2 - Linux is FSF free, share and share alike by license. BSD is not. You can't generalize them together on this issue. If you don't get the difference, you don't know what the hell you're talking about.
Point 3 - The operating system doesn't _have_ to do shit. If the companies want their shit to run in Linux, they should submit GPL'd drivers or suffer their rightful hell for being miserly with their code in a project based on sharing. To hell with them.
Point 4 - There is a fairly standard API. And when they change it, they fix the GPL drivers. There is no ABI (application binary interface, since you obviously don't know), which is neither required nor desired, as Linux runs on many different types of hardware. Should we instead suffer to create an ABI for each hardware platform that every driver must uphold? There is more than x86 out there. Hell, even on x86, should we force all drivers to conform to a 16-bit driver interface, or create different ABIs for 32-bit, 64-bit, and the future 256- and 1024-bit systems?
Point 5 - Cry more noob.
Point 6 - If a hardware manufacturer wants to sell their hardware to us, they will either suffer intolerably or they will give in and release some GPL'd code. If coders want it badly enough, someone will reverse engineer it and create free code on their own. It's not like we're going to start using our DVD-Rs to burn off graphics cards. Well, not until my Chinese GFX-RW comes in, anyway.
Re:I preferred the old odd/even split (Score:3, Interesting)
It's about time people realised that the distinction between Linux the kernel and GNU/Linux the operating system is a real and important one, not just RMS whinging (although I agree with his whinge anyway).
Re:Standardize the Kernel API!! (Score:3, Interesting)
Drivers which are not completely "class clean" need to be checked (more) carefully against kernel changes. This encourages drivers to be migrated into the "clean" class. If a kernel change occurs which affects the entire class, it is likely that automated tools can handle the change to the drivers (hopefully, the bulk of the work).
I don't consider the interface to be off limits to kernel developers, but the extra isolation should make things easier from both the driver and the kernel perspective.
Initially, such "DDI layers" should be imposed on isolated parts, where a great deal of abstraction already exists. Later on, we (Linux) should remain flexible enough to reject such ideas if they don't work, or to extend them.
Ratboy.
Re:I preferred the old odd/even split (Score:3, Interesting)
I hear this a lot, but it goes against all my experience. Usually the people I meet who compile their kernels and do other geeky things tend to get way more work done than the people who want everything dropped in their laps.
Am I hanging out with a different crowd than you? The people I meet who use computers without understanding anything about them tend to be some of the least productive people in any business. It's always the savvy guy/girl who can use the tool properly that gets all the work done.
By the way, that applies just as much to Windows as Linux. In any office you always have three or four people who really know how to use a computer and can use Excel and Access to get things done, while everybody else just putters along.
Who gave you the idea that people who compile their kernels don't work as hard as you do and are unable to get as much work done as you do?
Re:you haven't proven a damn thing (Score:2, Interesting)
And you still haven't answered the main capitalist "bottom line" question, which is what this is all about: money. More money one way, or the other?
Can you point to any other hardware sales LOST because the hardware ran on open source? This is a very simple question and gets directly to the heart of the argument. They have to "stay closed and secret" because they will "lose money". OK, swell, I got that part, I understand the argument, now, show me the beef, I keep being shown the bun, but where's the beef?
ATI and nVidia *think* they might lose sales; they don't know that for a fact. They obviously believe it (I'll grant that), but I just think they are scared and still locked into last century's business model, and can't see the big picture cleanly or clearly enough (the **AA's as well, for that matter). That is, they just "don't get it" with open source, and completely fail to see that the advantages outweigh the perceived "dangers" by a huge margin.
I assert (an opinion, not data) that whichever one of them went to pure open source, right now, today, would actually pull far ahead of the other in a relatively short time span. That's easy enough to grok. The combination of good hardware and greatly enhanced enthusiasm from a LOT more coders wanting to help make their cards better on the software side would act as a force multiplier for their own in-house coders' efforts, and result in even higher sales. That's my posit.
OK, we shouldn't confuse theoretical assumptions with hard data, yes? We can agree on that point? OK then, we need some examples to prove their point, and yours. Go ahead, be my guest!
I need to see some examples of where a hardware vendor, previously using closed source only, went to open source and lost significant sales because of the fact, and that the decision to go open source was the primary reason for the lost sales. That's the closest we could have to a real world example. I'll wait for some examples, and I will accept their validity if/when I can see some.
I am not aware of any, nor do I claim to know every single thing about all hardware businesses out there. I do see a lot of counterexamples, though. I *have* seen that places like IBM have really gone out of their way to incorporate more and more open source, and it certainly hasn't hurt their hardware sales any, and they are significantly larger than either video card maker, in a competitive and similar market: "computer hardware". I have seen examples like Firefox pulling completely away from what MS has in the browser arena. And so on. This is an oft-discussed theme here; there are a lot of examples showing that open source is being adopted by more and more companies, and they all seem to mostly like it, because they start seeing benefits and improvements. It is not the predominant case yet, I will grant that as well, but it is growing fast. Are all these innovators wrong? Are they lying? I don't know; I can only go on what I read and see and experience.
Really, show me an example someplace to back up your assertion of a higher probability of "lost sales due to open sourcing the code". If it is so probable, surely there must be a plethora of examples out there (even just one real good one) to use as references to affirm the counter.
I'll wait. Take your time, no rush.