Missing Kernel Patches 159

BlueEar writes: "There is an interesting, short story posted on the Gentoo Linux site. It talks about kernel patches created by Linux distributors that, while publicly available, never get submitted. It even gives an example of one 'no brainer' patch that has been sitting around for over a year without being incorporated into the 2.4.x distribution. The article ends with an appeal to the Linux community to keep those patches flowing to Marcelo."
  • by Anonymous Coward on Tuesday February 26, 2002 @07:00AM (#3069543)

    What most people don't realize is that someone who puts together these releases, like Linus or Marcelo, is by no means omniscient. The kernel is a huge piece of work, and no one person can know what's happening in every corner of the thing. Most of Marcelo's time goes to merging patches, so he surely cannot go around the net looking for whatever fixes some distributor might have used, or even checking why some fix that was circulating around before was never submitted to him.

    What's nice is that nowadays there seem to be a couple of "patch harvesters" on the lkml who create their own releases (Alan Cox is now one of these people!) and persistently keep submitting forgotten patches.

    • I am wondering... (Score:2, Insightful)

      by hokanomono ( 530164 )

      I am wondering if the distributors themselves don't have too much interest in offering patches upstream, not only with the kernel. Commercial distros have a chance to become "pseudo-proprietary" this way.

      I think this is rather childish behavior, and I use Debian [debian.org] instead.

      • by Anonymous Coward
        You might wonder, and you'd be wrong.

        Read lkml thoroughly for a while and see why.

        Using debian or *bsd doesn't endow anyone with any moral superiority. Deal with it.
      • that, or they have better things to do with their time. why would they want to expend effort making sure their patches get applied to the "official" kernel, when chances are they won't (the rik vm comes to mind)? the kernel source management system is in need of, and has been for quite some time, a serious re-engineering in and of itself. i think talks have begun on implementing semi-automated procedures, which is a step in the right direction. i think it needs to go much further, and faster. i imagine there are quite a few trusted individuals who could have write access to their maintained areas of the kernel source.
      • Re:I am wondering... (Score:5, Informative)

        by bero-rh ( 98815 ) <bero AT redhat DOT com> on Tuesday February 26, 2002 @09:42AM (#3069804) Homepage
        I am wondering if the distributors themselves don't have too much interest in offering patches upstream

        This plain isn't true, and whoever wrote the article on gentoo.org just shows he doesn't have the slightest hint of a clue.

        There are some good reasons not to blindly apply distributor patches into the main kernel (for example, we have quite a few workarounds for bugs, but the right way to fix them in the official kernel is to fix them, not to add workarounds), and there are some other things preventing other patches from getting in (e.g. Linus not having the time to handle them immediately).

        Other stuff is controversial (such as Red Hat Rawhide kernels putting in the Rik VM rather than the AA VM currently in upstream).

        The patches are sent upstream, but at least Red Hat doesn't believe in forcing upstream maintainers to accept all patches we send.
        • by clump ( 60191 ) on Tuesday February 26, 2002 @11:01AM (#3070153)
          I would have to agree with Bero in that the article is a tad misleading. If you listen to the mainstream Linux media as of late, you would likely believe that there is a huge wealth of wonderful patches being dropped by Linus. That simply isn't how kernel development works.

          From Kerneltrap's wonderful interview with Andrew Morton [kerneltrap.org]:
          there has been quite a lot of talk lately about kernel development processes, patches getting dropped, etc. I think it's all terribly overblown. The people who aren't being heard (and who aren't even bothering to comment) are the _users_ of that system - the developers. We're all just rolling our eyes and waiting for it to stop. The current system could be more efficient, but it mostly works OK; it is very unlikely to change and anything like a kernel fork is hugely improbable, even if Linus gets bored of it all and decides to do something else.


          The above article should be required reading for those following/concerned about kernel development.
        • The article was written by Dan Robbins, who is the head honcho for Gentoo Linux.

          And he certainly does have a clue.
          • Oh really? I had never noticed that. He is, in fact, the sole reason I walked away from Gentoo. I actually recommend people not read what he says as anything close to The Truth.
        • Alan Cox works for redhat, and he is using the Rik vm in his kernel series. That is a big part of the rawhide kernel situation.
        • Is there an official kernel bug list (using bugzilla :) )? If there isn't, why not? Either way, I would think that an official bug must exist before a patch can be submitted. This way, the patch could be attached to the bug. This then gives the maintainers a clear list of all the bugs and how they are fixed. The bugs could be prioritized and voted on so that the maintainer can work from the highest priority down when determining how to spend his/her time.
      • Thats absurd (Score:4, Insightful)

        by clump ( 60191 ) on Tuesday February 26, 2002 @11:14AM (#3070223)
        I am wondering if the distributors themselves don't have too much interest in offering patches upstream, not only with the kernel. Commercial distros have a chance to become "pseudo-proprietary" this way.


        I think this is a rather childish behavior and use Debian [debian.org] instead.

        It is extremely difficult to be proprietary when you are bound by the GPL. If you're referring to Red Hat's use of Rik's VM, there would be nothing stopping you from nabbing a .srpm and making a diff.

        I also use Debian and must tell you that they make changes to the kernel. That is good, however. It just isn't practical for a distro to try and update to the latest kernel. Plus, if you're like me, the first thing you do on any distro is nab a tarball from ftp.kernel.org.
  • The quality? (Score:4, Insightful)

    by Anonymous Coward on Tuesday February 26, 2002 @07:02AM (#3069552)
    Many of the distributors' fixes are ad hoc kludges that are designed to quickly make the thing *work*, ignoring elegance and maintainability... even when they do fix things, in the long run we don't want to take all of them into the kernel.
    • Re:The quality? (Score:4, Informative)

      by supine ( 94843 ) on Tuesday February 26, 2002 @07:34AM (#3069597) Homepage
      Many of the distributors' fixes are ad hoc kludges that are designed to quickly make the thing *work*, ignoring elegance and maintainability...

      I don't understand why a distro would bother shipping a kernel (or app, for that matter) with a patch that was "ad hoc". It wouldn't exactly endear them to their customers or encourage repeat business.

      I think you will find that most distros test their patched kernels thoroughly before releasing them to the world. This would include not only checking that the patch fixes the problem, but that it compiles on all supported architectures and does not jeopardise future modifications to the same bit of code, or to adjacent or related pieces of code.

      Why they don't submit all the patches to the kernel maintainer, I don't know. Maybe the patch was submitted and was passed over or missed, and then not resubmitted.

      marty
      • Re:The quality? (Score:2, Informative)

        by shippo ( 166521 )
        I remember when Red Hat attempted to modularise the sound drivers for one of the 4.x releases. They ended up with Soundblaster drivers working as modules, but every other card driver was completely broken.
      • Re:The quality? (Score:3, Informative)

        by bero-rh ( 98815 )
        I don't understand why a distro would bother shipping a kernel (or app for that matter) with a patch that was "ad hoc"

        Easy: because a kludgy workaround is preferable to a bug, and we don't always have the time to fix things the right way.

        I think you will find that most distros test their patched kernels thoroughly before releasing them to the world. This would include not only checking that the patch fixes the problem, but that it compiles on all supported architectures and does not jeopardise future modifications to the same bit of code, or to adjacent or related pieces of code.

        This is true - but it doesn't include checking for stuff that's just a workaround for a bug with a relatively bad code quality.

        Why they don't submit all the patches to the kernel maintainer I don't know?

        Because the guy who wrote the article either didn't check the facts or lied.
        At least as far as Red Hat is concerned, patches do get submitted.
    • I wouldn't be so sure about that... For example, the only 2.4 VMs that work are in distributor kernels (e.g. 2.4.9-RedHat), and Red Hat's "2.96" gcc also contained a whole slew of important fixes.
    • If the developers are giving more priority to dressing up the code and "making it look nice" than to functionality, then I think that something is wrong.

      True, I'm not the one who's gonna work on it in future... but when it's BROKEN, it's BROKEN! Whatever happened to the days of 0-day patches to fix even the smallest typo errors?
  • by Yakman ( 22964 ) on Tuesday February 26, 2002 @07:05AM (#3069554) Homepage Journal
    Based on that sample patch they gave it seems that an unpatched system allocates one more page of memory than it actually uses. Sure it's not nice in terms of resource use but it's hardly going to affect the operation of the kernel.

    Obviously, with the number of people (especially "power" users) who run the "generic" kernel, any critical flaws are going to get uncovered and patched. I think these kinds of issues, which directly affect the stability of the kernel, are more important than the "clean up" type patches this article describes.

    Obviously they're nice to have, but it's hardly a priority when there are bigger fish to fry.
    • Actually the pre-patched code seems to be reserving one LESS page than is actually needed, and forgetting to reserve the last page required.

      Admittedly this can't be having that bad an effect, or it would have been fixed in the main kernel already, but it looks like it could make the system go BOOM!

    • Based on that sample patch they gave it seems that an unpatched system allocates one more page of memory than it actually uses

      I'm no kernel expert, and this is no place for a kernel argument, but doesn't that code actually fail to reserve a page that it's supposed to? It looks like a potentially serious bug, though I'd have to see the reserve loop in context to decide one way or the other. What if eidx is stored somewhere as the highest reserved page, and some other code relies on that page being available for use? We'd have the kernel overwriting a page that may contain vital data.

    • by Alan Cox ( 27532 ) on Tuesday February 26, 2002 @08:40AM (#3069680) Homepage
      Precisely. This patch was in fact submitted, and the consensus was that it's tricky to prove correct, it's 1 page of memory, and it was better to wait for 2.5 before doing that work.
      • So if this patch, which was given as an example of an unsubmitted patch, a no-brainer, was in fact submitted and delayed for very good reasons, how can we get a handle on the magnitude of the problem, or be sure that there is, in fact, a substantive problem? I wouldn't expect that every single commercial distribution patch would be submitted to the kernel maintainers, for reasons that others have given on this thread. So is there really a huge number of kernel patches that the lkml and kernel maintainers have the capacity to deal with, but that are not getting submitted because kernel janitors don't have the numbers or time?

        I don't doubt that the kernel janitors do a great job and that they're a useful resource for kernel maintainers, but the article seems to have given a false impression of the nature, and possibly the scale, of the work to be done.

      • consensus was that it's tricky to prove correct, it's 1 page of memory

        So nobody can actually figure out whether this eidx thing points to the last page or just beyond the last page. I find that quite worrying...

  • by Anonymous Coward on Tuesday February 26, 2002 @07:05AM (#3069556)
    Yeah, that's what Marcelo needs: every clueless dweeb bombing him with endless copies of "this rmap vm is so 1337 why dont j00 include it in 2.4.19!"
  • by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Tuesday February 26, 2002 @07:14AM (#3069564) Homepage Journal

    An example of why a particular patch might not be accepted, even though it seems like a "no-brainer", is because it would be for too specific a purpose. It might optimize the kernel for one particular application, at the expense of others. One of the best things about Linux is that it is general-purpose: suitable for everything from palmtops and embedded systems to servers and enterprise applications.

    A patch to aggressively cache the disk in memory, for instance, might be good for servers but not for embedded systems. So, I could understand how a patch would be rejected in this case.

    As an example, a company I once worked for made many minor changes to the Linux kernel. Since Linux is GPL, I made a webpage publishing these changes, and unlike the company, my webpage is still in existence!

    Splash [krellan.com] Open Source Page

    These changes would be too narrow in focus to apply to the Linux kernel for everybody, so we never submitted them.

    • by Anonymous Coward
      No, that should never happen. If a patch is good, but has narrow application, you #ifdef it and make its inclusion a config option.
      • I'd tend to agree, but it's easy to say "accept everything good" while disregarding who will maintain it and how. I think that bug fixes being dropped are a problem; whether patches not being included is one DEPENDS. If they can't maintain it, Linux stalls and we all lose. So the problem is, as Linus said (and I didn't see it immediately), sending the patch to the people that actually maintain the stuff being patched. Just my 0 cents (I am broke).
    • The article is concerned with patches that big Linux distros apply to their kernels -- the kernels they put in their distributions, not special-purpose kernels. Red Hat (and other Linux distributors too, I suppose) do extensive testing on those kernels before they get included with their distributions. So if they find a bug and patch it, or if they find that a patch has issues in testing (and leave it out), it would benefit the whole Linux community (themselves too, since they would have fewer patches to manage) if that information somehow made it back to the kernel maintainers.
  • Not just linux (Score:3, Insightful)

    by cperciva ( 102828 ) on Tuesday February 26, 2002 @07:30AM (#3069587) Homepage
    The same thing sometimes happens to the BSDs, where a bug will be fixed (usually in Open) and nobody gets around to integrating the same fix into the others.

    It seems to me that much of this could be automated... for each patch which gets added into the xBSD source tree, compare the contexts to the yBSD and zBSD source trees and alert a human if it looks like there's a match.

    But for this to be effective, I think that patches would have to have labels attached, since it's really only bug fixes for which this is necessary.
  • by Fortyseven ( 240736 ) on Tuesday February 26, 2002 @07:33AM (#3069594) Homepage Journal
    Out of all the people involved with this on the planet, not one person could be assigned the task of doing this sort of sweeping up? Lots of busy folk out there, certainly, but those people were found to do the major stuff in the first place... And please, save the "well why don't you do it, smarty man?" responses for someone that sort of backwards logic will work on, thanks, I'm just making an observation, not an accusation.
  • O, Henry? (Score:2, Offtopic)

    by kzinti ( 9651 )
    There is an interesting, short story posted on the Gentoo Linux site.

    No, there's a short article posted on the Gentoo Linux site. A ``short story'' is a form of fiction [randomhouse.com]. (Not that anyone at Slashdot cares, but some of us can't help tilting at windmills.)

    --Jim
    • Actually the term was chosen correctly - "Red Hat does not submit its patches" is certainly nothing more than a piece of fiction. ;)
    • a "short story" MAY be a form of fiction, but what about a short "story"? Dictionary.com defines story as
      1. An account or recital of an event or a series of events, either true or fictitious
  • Issue (Score:5, Informative)

    by mrfiddlehead ( 129279 ) <mrfiddlehead&yahoo,co,uk> on Tuesday February 26, 2002 @08:05AM (#3069635) Homepage
    Since the patch doesn't show how eidx has been calculated, it's not immediately obvious that this patch should even be applied. That is, if the bug was subsequently "fixed" by incrementing eidx where it is calculated, then this patch would make matters worse. So you'd have to go get the 2.4.3 source and verify that the calculation of eidx has not itself changed.

    Careful.

  • by bob@dB.org ( 89920 ) <bob@db.org> on Tuesday February 26, 2002 @08:22AM (#3069658) Homepage
    could be that whoever produced the patch (Mandrake in this case) got tired of having to submit it over and over, only to have it ignored by (for example) Linus. i'm not complaining here, but i think at least part of the solution to this "problem" relates to how the patches are handled by the maintainers.

    • Another possible explanation is that there is a small group of whiny people who are tired of Linus controlling the linux development process just because he happens to have invented it in the first place, and are waging a propaganda campaign to replace him with a committee that will rubber-stamp every ill-conceived patch submitted from anywhere on the internet, with little or no review. Goodness knows, any time any random undergrad or script kiddie changes a few lines of code in the linux kernel, it must be an incredible improvement that we cannot live without. What does Linus know about kernel development, anyway?
      • Absolutely.

        And then, people in black helicopters, paid by the Melissa and William Gates Foundation, continuously harass Andrea, Alan and Linus, so the Kernel work gets disturbed.

        And there's a special commission of the CIA that manipulates the coffee beans used by those people, so they are decaffeinated coffee beans, aggravating the situation.

        Nevertheless, RMS is behind the situation. He is orchestrating an alternative-media based advertisement campaign that will reveal the size of Bill's and Steve's undergarments. This will surely make MS developers stop fixing bugs and go back to developing new features and buffer overflows in Microsoft applications.

        Also, a team of mutant ninja turtles has been seen with spray cans in Redmond, WA, trying to change (deface) all the advertisements from ".Net" to ".Slash".

        Goodness only knows who will win this battle. But rest assured, it will show up on the Season Finale of X-Files.
  • by Otis_INF ( 130595 ) on Tuesday February 26, 2002 @09:23AM (#3069766) Homepage
    No, this is not a troll. Hear me out. This is an example of how this work can be done using a good tool. I use Visual SourceSafe here as an example, but any tool with the same functionality described below will do:

    Visual SourceSafe has the ability to automatically merge changes back into branch B from branch A when they have the same parent.

    Say you have kernel v2.4.10. You branch off another project from it, call it v2.5.0. When you fix a bug in 2.4.11, you can merge it back into 2.5.0 without a hassle; it can be automated, or you can do a visible merge when there are conflicts. The other way around also works. So you can take this even further: branch off a prerelease 2.4.11-pre branch and a 2.4.11 branch from the 2.4.10 branch. Create fixes in 2.4.11-pre, merge them back into 2.4.11 after testing, and when you're done, release 2.4.11 and get rid of 2.4.11-pre.

    This all happens inside a version control system; you don't have to hassle around with a lot of files to merge by hand, which would increase the risk of errors.

    Of course, Visual SourceSafe is just 'a' tool; you could use another which has the same functionality and is perhaps Open Source (I don't know of any, but I'm sure others do). Doing this job by hand TODAY is, erm... not understanding why we have computers in the first place. That's right: to serve mankind.
    • I would stay away from Visual SourceSafe if your repositories grow beyond a normal size. We had database corruption at a former company once a week, as some of our databases were getting huge.

      We pushed and pushed for a change (2/3 of us wanted vanilla cvs over VSS!) but management would never listen. And in fact we could not do any remote development with VSS, as it was not TCP-aware... it only worked across MS networks (NetBIOS). We later found another product that integrates TCP support into VSS for you. But that added another point of failure for our remote developers (across the country). And those of us that preferred a unix workstation were SOL.

      Basically we never used any features that make VSS compelling over cvs. And its lack of support for anything but NetBIOS is inexcusable (especially for java developers who need that cross-platform support). The parent poster has probably never used another version control system, and is just pushing MS products.
      • The parent poster has probably never used another version control system, and is just pushing MS products.

        No, actually, he's just using it as an example:

        I use Visual Sourcesafe here as an example, but any tool with the same functionality described below will do
          • I've used CVS and VSS, plus some home-made tools, but these were never up to par with what other tools could offer. The mention of SourceSafe was indeed as an example. I know VSS isn't made for very large projects -- even Microsoft uses a different system internally, afaik -- but the functionality it has (i.e. the branching/merging) is IMHO what should be used in Linux development/management.
      • This is getting more offtopic but:
        you should check out SOS [sourcegear.com] if you're still using VSS.
    • Where I used to work, we used to call VSS 'Visual SourceUnsafe'. We would always get database corruptions -- and you know why? Because the damn thing is based on NetBIOS. This means that anyone browsing the network drive via Windoze Explorer can accidentally trash the source repository by inadvertently dragging and dropping a folder to the wrong place. (Which is a very easy thing to do with Explorer.)

      If you have a choice to use VSS - just say no.
    • I'll agree with the versioning/branching comment, although I'd say that ClearCASE, cvs, pretty much anything would be more stable than VSS. Also, VSS doesn't make it nearly as easy to branch as ClearCASE - in VSS you seem to have to branch the whole project, in ClearCASE you can branch individual files and directories, so you never have to merge more than you need to.

      Unfortunately, the Linux kernel configuration management paradigm seems to be more of developers maintaining separate trees, and then handing off patches between trees instead of patches that move between branches. I think this is because for a branching scheme like ClearCASE, you need a centralized authoritative repository to say who has branched from where, and when. Linux has no central branch directory like that, and the patch format commonly used doesn't encode this sort of information. So you can't do automatic conflict resolution (or at least you can't do as much as you'd like) without a branch directory under central control.

      Branches make sense to me - I use them every day. But Linux, at the moment, isn't set up to use them very well. And in moving to bitkeeper they're going even farther down the path of handling trees rather than branches.

  • Vendor patches (Score:4, Informative)

    by Captain Zion ( 33522 ) on Tuesday February 26, 2002 @09:24AM (#3069770)
    Marcelo is certainly well aware of the existence of many patches that never get included in the main kernel tree, as he maintains Conectiva's kernel package, which contains a large number of vendor patches. He certainly has his reasons for not including the patches in the official kernel -- it would make his life much easier if he reduced the number of vendor patches in Conectiva's tree by applying some of them to the main tree. Marcelo is being very conservative regarding the 2.4 tree, and I believe that's the way it should be, considering it's a "stable" kernel.
    • Re:Vendor patches (Score:1, Insightful)

      by Anonymous Coward
      What good is a stable tree if all vendors have to apply 500 patches to it for it to be useful?
      • What good is a stable tree if all vendors have to apply 500 patches to it for it to be useful?

        Exactly.

        Mod this A.C. up!
        • Because vendors want features, FEATURES, *FEATURES*! They dump in XFS, low-latency, JFS, ALSA, etc. etc. which is fine if THEY are willing to support it. The mainstream kernel would rather wait to accept patches when they have a clearer idea of how maintainable things will be in the long term. Eventually, many or most of the popular and useful patches do get in. Linus believes very much in the vendor model; it allows him to focus more on future development rather than current featuritis.
  • by hardaker ( 32597 ) on Tuesday February 26, 2002 @10:28AM (#3069981) Homepage
    As a maintainer of a package which is distributed via many linux and *BSD distributions, I'd like to complain on behalf of software authors everywhere. The Linux distributions are notoriously bad about applying patches to their rpms (say) but never submitting them back to the authors of the package themselves. The BSD distributions are just as bad. The infamous FreeBSD ports tree also frequently houses patches that never make their way back to developers.

    I'm not sure how this could ever be considered a good thing, as the project authors must spend time searching through distribution source releases looking for patches. The distributions must continually apply their patch to a changing source tree (and I'm sure it'll eventually break and need reworking), so they lose time as well. This is one case where communication really could be a very positive thing.

    sigh... It's about time I went to search for patches again...
    • Not to deny that your comments are valid, but in some cases, what I've heard is also that sometimes the authors won't apply patches sent by BSD package maintainers. In some cases the authors won't accept patches because they're only interested in Linux. Certainly both sides need to work harder, but it's not just the port maintainers' fault.
      • I have no doubt that some maintainers are only interested in platform X. However, in my case at least, that's not true at all (since we advertise it as being supported on FreeBSD, etc.). Typically the only changes I ever reject are ones which break other architectures due to improper ifdeffing. That's a whole other problem, actually. Most people really aren't experienced in writing portable code.
    • FWIW, I'm sure that varies. I certainly try to upstream generic patches myself for the packages I maintain at Red Hat for Red Hat Linux, and many others try to do the same. When it gets accepted, there's less maintenance for me.
      • The first time I looked into a redhat distribution, I was amazed to see 3 patches I'd never seen before. Since then I've tried to make it a point to check what the maintainers there have done every once in a while.

        On a side note, the redhat bug database really needs a way for me to be able to say "send me mail for any problem from package X". Sure, you can subscribe to a particular bug, but I need to subscribe to an entire package. Last I checked, this isn't possible. It would certainly be an easier way to help keep package authors in sync with the distribution packagers.
  • If vendors have semi-proprietary systems by virtue of applying patches that aren't making into the mainstream ...

    And if one wants to ensure that one is running the most stable, but well-patched system ...

    Then who has it - Redhat, Debian, Mandrake, etc.?

    Or is this even a fair comparison? And should one make this comparison when planning a Linux install?
  • I just wanted to mention that Gentoo is the coolest distribution I've ever tried. It has quite a time-consuming install process because everything is compiled from scratch; however, that's the power behind the distro. _EVERYTHING_ is compiled specifically for your hardware, and you specify global compiler optimisations that you want applied to each and every package. The package manager, portage, is based on the FreeBSD ports system, but it's rewritten in Python with many added features (i.e., better handling of dependencies, fine-grained package management, "fake" (OpenBSD-style) installs, safe unmerging, system profiles, virtual packages, config file management, etc.). It has the ease of Debian's apt mixed with the better performance of custom-compiled binaries, and let me tell you, it flies! It includes custom-patched kernels with the preemptive and scheduler patches, XFS, and many other features already patched in! Running Gentoo and Win2k Pro dual-boot on my machine, I can tell you that Gentoo (w/ KDE2) is noticeably faster and more responsive. I never thought I'd say that about X under Linux, but it's true! If you haven't done so, try Gentoo today!
    • I have been a Gentoo user since rc5.

      In simple terms: Gentoo rocks.

      The package system is amazing. It has apt-get-like features, FreeBSD-ports-like features, it's compiled for YOUR system (there is a binary distro in the works), and most of all, the people in #gentoo aren't the elitist bastards you find in #freebsd and #debian. You ask for help, you get help.

      I also try to support the project in any way I can. I can't code. I don't have bandwidth to give. I donate the only way I can, which is to send money, or if I have spare computer parts which can be used by the Gentoo team, I donate those as well. And I try to tell as many people as I can about Gentoo.

      dvNull
  • It seems like it would be trivial for vendors to maintain their patches in their own BitKeeper repository. If done consistently across vendors, it would allow the kernel maintainers to merge patches into the standard distribution with minimal effort.

    Moreover, this would probably make it easier for anybody to track different sets of patches. Imagine being able to use an SCM tool to help minimize the pain of tracking patches through several kernel revs. Many of us do this on a daily basis anyways and would love to see such tools used properly in the open source community.

  • by pschmied ( 5648 ) on Tuesday February 26, 2002 @04:58PM (#3073020) Homepage
    ...And no, I'm not trolling.

    People talk about the exchange of ideas between the BSDs and Linux, and I think that a core group like FreeBSD's would be a great idea for the Linux world.

    It seems like we are running into more and more scaling issues with the people behind Linux than with Linux itself. This is no fault of theirs. Linux is too big a project for a "the buck stops here" kind of person like Linus.

    Obviously, Linux is Linus's brainchild, and he can do whatever he likes with it (yes, I know the GPL allows forking, but think of how a kernel fork would be received on /.).

    While I don't believe that Linux can attain the kind of consistency of FreeBSD or NetBSD (and that is not the goal anyway), I think they might be able to fix some of the kernel patching and architecting problems if an elected core team could work on this.

    -Peter
