Linux Software

Linus Speaks With c't On Clean Design And ReiserFS 139

Daniel Burckhardt writes: "There is a nice interview from the German c't magazine with Linus Torvalds. It's no longer about getting hit by a bus but instead about dropping into the Atlantic Ocean. He talks about his dislike for keynotes, the importance of "good taste" in OS design, and why nobody wants 2.4 (they are all happy with 2.2). And for some real news: expect ReiserFS to go into 2.4.1." It's a longer interview than online translators like: Babelfish choked on the translation for me and only got about halfway through it; the Systransoft engine, on the other hand, worked like a champ and yielded a smoother translation to boot.
This discussion has been archived. No new comments can be posted.

  • "If the airplane from Frankfurt would fall now to San Francisco, everyone could
    - now well, perhaps everyone, but a quantity of people could not take over mine
    job."
    -Linus Torvalds

  • Well, I can't speak for v2.2.16; I don't use a v2.2 kernel. Have you tried v2.2.18pre17? After all, that's four months' worth of bugfixes, and whether or not it fixes that particular problem, it should fix other things you want fixed. The OOM killer I was talking about is the one in v2.4.0test10pre3, as I wrote. That one is quite simple and fully logical: if a process goes haywire and eats all memory, it gets the bullet. It might take a while, but it'll die. Generally, no other processes get killed in the process. I've tried this with various workloads and it works 100%, on systems with widely varying configurations (from 486s with 16 MB of memory to K6-3s with 256 MB).

    If you can oops your kernel by plugging in a USB scanner, have you supplied a decoded copy of the oops to the LKML? Oh, and after a brief look at v2.2.18pre17, I found tons of USB fixes. Maybe one of those fixes your bug?

    You can't possibly believe that Linux is the only operating system that has buggy device drivers. Device drivers ARE fully capable of crashing your machine. This holds for virtually any operating system (I can't say how things stand for beasts such as z/OS, formerly known as OS/390 and MVS). The only thing that is special about Linux is that it contains the drivers for the hardware. Most other operating systems don't, barring Windows. But for Windows, the vendors write the drivers. And they still are buggy. Go figure.

    What's so broken about the semaphores/spinlocks in the v2.4 kernel? Oh, and consider that Linux has support for more filesystems than any other operating system that I know of...

    How do you plan to support all the features of NTFS, XFS, BeFS, HFS and HPFS on a VFS that respects only Posix?

    Well, if I'm not ALL wrong, both XFS and HPFS only have EAs, not streams, so they won't be that hard to support. As for HFS, NTFS and BeFS, I can't say, and I don't feel ashamed to say so. But IMHO, none of the proposals posted on the LKML during the entire debate was sane either. But face it, NTFS, HFS and BeFS will never be the native filesystems on a Linux machine. They are all proprietary, for one thing. The support for them exists first and foremost to help people migrate files. The reason there exists a super-ugly hack for HFS is that any binary will break without its resource fork. If I'm not all wrong, neither BeFS nor NTFS stores any vital information (just configuration data, extended attributes etc.) in its streams, so if they get lost, at least your computer will survive. I might be off here, though.

    We do submit things. Cox even rejected a patch to provide 64-bit printk output on 32-bit platforms, twice. This, after developers are told they should use printk and not a debugger. Never mind that even printk is lacking.

    I'm pretty confident Alan must have given you an explanation why? Why not include it here? Might it have been because this was a patch against a stable kernel series? Oh, and doesn't printk("%Lu") work for you? It should print long longs without any trouble, at least in the v2.4 series (a short sketch of what I mean appears at the end of this comment).

    No, Linus doesn't have to accept code he doesn't like, but if you try to imagine how much code he has to look at every week, you'd realise it isn't really possible for him to find ALL bugs. He does a DAMN good job, though. And the rest of the people on the LKML certainly do their part to help him.

    There are plenty of first-year CS students who write decent code, but yes, it is social engineering too. People who don't realise that they can patch their own kernel to add the kernel debugger, and thus won't submit a patch, are probably just as well kept from submitting their code.

    And why won't Linus use CVS? Accidental additions could be backed out, the commit logs would provide a start for documentation (since the inner circle of kernel developers themselves don't write any), and it would allow better control of the process at large. Right now, all the major decisions are made off-list between Viro, Cox, Linus, etc., and the rest of us get to hear about them only when the next patch is released. There is never an intent or direction advertised beforehand, and there's no documentation afterwards. If they at least used CVS and let people read the histories, we would have a better idea of what the few design decisions that are made actually mean.

    Well, this is a question I think only Linus can answer. However, I believe he will sooner or later begin using BitKeeper. Maybe already for v2.5. Who knows? Only time will tell, I guess.

    Quite a lot of the "inner circle" of kernel developers document their API's. Alan has written a lot of kdoc lately, Peter Anvin writes very good documentation, Richard Gooch documents everything he does, etc. But you have to realise, that it's impossible to work 48 hours/day.

    A lot of the design discussions are kept on the list and have gone on for several months before actually turning into code. Other things are thought up outside the list. The LKML sometimes simply isn't the right place to design new things. Live with it. Nowadays Linus writes changelogs for every single pre-patch; Alan has done so for ages. I don't think you'd get much more information from CVS commit logs than from these. But you seem to have already decided that Linux isn't your thing, so you'd probably not appreciate the kernel no matter what changes were made. Or am I totally wrong? I mean, your point of view is that almost no changes are actually thought through. That is quite a negative attitude.

    The reason I considered your post trolling wasn't because of your opinions on Linux, but because you implied that people reading Slashdot automatically would flame you, send spam to you, direct ICBM's at your house etc., just because you have a different opinion.
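
    For reference, here is roughly what I mean - a minimal, hypothetical module sketch (the demo_* names are made up, and it assumes a v2.4-style module build environment), not a patch against any real tree:

        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/init.h>

        static int __init demo_init(void)
        {
                unsigned long long big = 1234567890123ULL;

                /* %Lu prints a 64-bit value even on 32-bit platforms, provided
                 * the kernel's printk understands the 'L' qualifier (the v2.4
                 * series does, as far as I know). */
                printk(KERN_INFO "big value: %Lu\n", big);
                return 0;
        }

        static void __exit demo_exit(void)
        {
        }

        module_init(demo_init);
        module_exit(demo_exit);
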

  • Are you aware that your comment has absolutely nothing to do with the comment you're replying to?

    --
  • Given a German to English translation, what the hell is a French word doing in there?
  • I just copied and pasted from the Wumpscut lyrics page (I provided the link). Everything was in caps.
  • Of course, that is only true as long as Linux is limited to servers. When Linux hits the mainstream desktop market, projects like Debian which don't use the latest kernels will become increasingly irrelevant. Ever wonder why Windows 2000 has DirectX 7 support? Because it is meant to be both a desktop and a server OS, and keeping two releases is just too much trouble. While there will still be server people using the previous version, the majority of people will simply go with the leading distro makers (e.g. Red Hat) and install the latest kernel.
  • I'm sure there are top developers at BSD that are even worse (no offense BSD, I'm just trying to make a point).

    That is a fact, actually. That's how OpenBSD got started. And there is still a lot of tension between the OpenBSD and NetBSD camps. Point is, no matter where you go, be it a free software project or a multi-billion dollar company, you always face things like that. It is really important to just get over it and move on. Don't take it too personally. Free software has an advantage here precisely because you can fork it if you don't like the direction it's going. And I think it makes sense to have forks of the kernel specialized for big iron, PDAs, and general-purpose PCs. One size does not fit all.
    ___

  • Kswapd regularly dies on my 2.2.16 + USB machine. 2.2.16 is supposed to be a stable kernel -- release 16 of a stable kernel. I can oops the kernel merely by plugging my scanner into the USB port.

    Stay on topic, man. If you patch your kernel with an unstable patch, it's your problem when it oopses. Have you tried seeing if it oopses without the USB patch? If it does so, yes, you have discovered a bug. Another fun one is 'ls -w 10000000'. Also note that 2.2.17 is out and 2.2.18pre series needs testing. Perhaps you are the man we need to fix these problems?

    True, but interesting that it was stated during the 2.3 series that Ext3 would probably be included in the 2.4 kernel, even though it was less capable, less stable, and in less widespread use than ReiserFS at the time.

    Most of the confusion in these conversations was due to semantics: Reiser thought he'd be left out for the whole 2.4.x period, whereas ext3 would be in 2.4.0. I don't recall if ext3 will be in .0 or in a subsequent release, but all this is cleared up now.

    Streams. The reality is that POSIX plus streams is very complex, and as such it requires a much larger contingent of users who need it before it will be a supported feature. If you could point and say "look, almost a quarter of Linux ppl are patching their kernels to use our NTFS streams" you would probably get further than screaming "I need this very complex feature!"

    Perhaps they just don't like your patches and got sick of your long-winded style like I am.

    Linus won't use CVS because he has little patch control then. He likes the latest proposal for a mail-based revision control system with some basic bug tracking features.

    -l

  • Look at the brokenness in the spinlocks and semaphores. ... One that must make use of the broken spinlocks and semaphores, while avoiding the wrath of bdflush and kupdated, is very hard to stabilize on Linux.

    I'm curious what is broken about Linux spinlocks and semaphores? Can they be fixed? Your comments imply that spinlocks and semaphores have bad interactions with bdflush and kupdated. Do they deadlock or something?


  • Well, because confused programmers outnumber the ones who aren't confused by a huge margin, I think you will discover that letting them play in the game makes the sheer volume of bad code being submitted so large that you will spend all of your time reading and rejecting bad code. The real danger is that bad code becomes the norm rather than the exception, and you wind up with a kernel which works only about as well as Win 9X.

    Linus' position on the subject may not be snobbery - I suspect that he receives so much bad code as it is that the idea of letting all the confused people in would utterly swamp him. Bad code is more likely to be written by confused programmers than people who aren't confused.

    The 'do you allow a debugger or not' question is a difficult one in programming as a whole. Most of the time your position is the accepted one: having the tool available for those who do need to use it on occasion is generally better than not having the tool. However, a kernel may be one place where it is better not to have one, because of the problems I mentioned.

    Wirth noted after he saw Pascal and Modula-2 being used by confused programmers that standards and a model don't seem to keep confused programmers from producing bad code. They follow the form of the standard without comprehending the content of the standard; the letter of the law rather than its spirit is what gets obeyed.

    The beauty of the Free Software world is that it allows constructive competition. If you can do better you have, in my opinion, an obligation to the rest of humanity to make the attempt. Linus has said that he hopes that something better does come along.

    I know my limitations as a programmer and as a manager of software projects; a few thousand lines of code with a team of 10 or so people is about all I am able to lead. I am the programming equivalent of about a 10.5 second 100 meter sprinter: I am way better than most people - but I don't belong in the Olympics.

  • >Of course, not everyone has the space or need for a separate TV...

    Get a Video Projector then, and find a wall.

    Or get a video capture card.
  • HERE I AM WITH the HANDS FULLY to BLOOD AND carry IN ME a BITING RAGE YOU SAID YOU WANTED the body OF ME AND I GAVE YOU ALL STRAIGHT LINES LIKE an ANIMAL

    There are probably some other errors. Oh well.

  • This isn't about narrow-mindedness, it's about sanity and interoperability. It's about not making the same mistakes Microsoft keeps making over and over again. NTFS streams ARE a complete mess. Try to map them sanely into the Unix world and you'll see. Try to use tar to back up an NTFS volume and see how much you'll preserve...

    Why is this considered a problem with the streams concept rather than with tar or the POSIX FS APIs? Strict compliance with POSIX only brings you so far (i.e. about as far as today's Unix FSes). Systems with richer semantics would solve quite a few problems in the computer world.

    I want to see attributed file systems (file encoding using a hierarchical system expanding on MIME, file type, file creator, date of last backup, MD5 checksum, etc.). I'd also like to see multi-stream files, though slightly less. I wish my Python files could contain both source and bytecode (of course with some security features to protect against virus propagation). I want my dynamic optimizer to be able to store its information in the same executable it improves, in a separate stream. (A small sketch of the extended-attribute part of this wish list appears at the end of this comment.)

    Now you will tell me that FSes that provided such features have not been successful. Sure, and guess why: because of the POSIX lowest-common-denominator tyranny. MacOS forked and type-encoded files may be a nightmare in multi-OS configurations, but I put the blame on Windows and Unix FSes rather than on HFS.

    I also realize that most of the multi-stream improvements could come by treating directories as files (e.g. running a python script by exec'ing the directory it is contained in). This is exactly what Apple is doing with Mac OS X application bundles. It doesn't help for scripts and is of limited use until more OSes implement the idea.
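
    Here is that sketch. It assumes the setxattr/getxattr system calls that appeared in later Linux kernels (not something the kernels discussed here offer) and an attribute-capable filesystem; the file name and attribute are made up for illustration:

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/xattr.h>

        int main(void)
        {
                const char *path = "script.py";   /* hypothetical file */
                const char *enc = "utf-8";
                char buf[64];
                ssize_t len;

                /* Attach an "encoding" attribute to the file... */
                if (setxattr(path, "user.encoding", enc, strlen(enc), 0) != 0) {
                        perror("setxattr");
                        return 1;
                }

                /* ...and read it back. */
                len = getxattr(path, "user.encoding", buf, sizeof(buf) - 1);
                if (len < 0) {
                        perror("getxattr");
                        return 1;
                }
                buf[len] = '\0';
                printf("user.encoding = %s\n", buf);
                return 0;
        }
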

  • But face it, NTFS, HFS and BeFS will never be the native filesystems on a Linux machine. They are all proprietary, for one thing. The support for them exists first and foremost to help people migrate files. The reason there exists a super-ugly hack for HFS is that any binary will break without its resource fork. If I'm not all wrong, neither BeFS nor NTFS stores any vital information (just configuration data, extended attributes etc.) in its streams, so if they get lost, at least your computer will survive. I might be off here, though.
    >>>>>>>>>>>>
    1) BFS is quite openly documented in a book written by the guy who designed it. It's called Practical File System Design. A copy of the FS is used in AtheOS.

    2) The extended attributes in BeOS store a lot of vital stuff. (Size, data, permissions, resources) Taking them out will usually hose the system.
  • Of course, Babelfish [altavista.com] uses Systran technology.
  • Aha... a David Byrne/Talking Heads reference. Good for you. My favorite: "We are just flowers in God's garden, that is why he spreads the shit around"
    ----
  • I assume nothing's changed. It was said ReiserFS wasn't going into 2.4.0, but there's nothing against getting a big patch into 2.4.1 if 2.4.0 turns out to be good and stable.
  • by hayz ( 160976 ) on Saturday October 21, 2000 @02:37PM (#686066)


    'What users do is never wrong'

    Linus Torvalds on the history and future of Linux

    The star guest at LinuxWorld in Frankfurt at the beginning of October was
    Linus Torvalds. The success of Linux has made the free operating system and
    its 'inventor' known far beyond the IT world.

    In 1991, out of discontent with the PC operating systems available at the
    time, Torvalds began programming a Unix-like operating system of his own.
    Originally Linux was meant only for the then-21-year-old's own computer; but
    after version 0.01 was published on the Internet, Linux very quickly won
    fans and a constantly growing crowd of developers. Today the open source
    system runs on all common hardware architectures and has captured a firm
    place especially among Internet servers.

    c't: Linux achieved what you originally wanted some years ago. Why did you
    keep going at that point?

    Torvalds: The goals changed. At the start it was above all about doing
    something interesting and having fun. I assumed that I would be the only
    user, and made no concrete plans regarding features. I knew what I expected
    from a Unix; but I wasn't interested in graphics, for example, because all I
    wanted to do was edit and compile source code.

    After I had published Linux on the Internet, however, various users asked
    for features I had never thought of, and more and more new ideas came up.
    Instead of a Unix for my own desktop, Linux was now supposed to become the
    best operating system, period. Through the wishes of other people, and
    later their patches and help, it became much more interesting. By now most
    of the work is done by others.

    These days what matters to me is good operating system design that is
    useful for other people too. My own activity hasn't changed all that much:
    I program and read a lot of email. Measured against my original plans,
    Linux has long been finished; but the many new areas of application for
    Linux motivate me to continue. If I hadn't put Linux on the Internet and
    these other users didn't exist, I would probably have stopped working on
    Linux back in 1992.

    c't: How long do you want to continue with Linux? Do you see a point
    sometime in the future at which you will say: 'Now I've had enough of it'?

    Torvalds: I don't believe there is such a specific point. I have always
    stopped doing individual things, and started doing so very early. At the
    very beginning, for example, I took care of all the applications: in
    addition to the work on the kernel, I had to port the shell, the compiler
    and the libraries. Very early on, however, other people took care of that,
    and I could concentrate on the kernel. Nowadays I still work on the kernel,
    but I now limit myself largely to central functions such as memory and task
    management and the fundamental design.

    Likewise, I have largely stopped giving talks at events like this
    LinuxWorld. I took part in the panel discussion here, but gave no keynote,
    because such things stress me terribly. I assume I will concentrate on ever
    more specialized areas; but I don't believe I will stop working on Linux
    completely - unless perhaps someday someone comes along who is better than
    I am, so that I can withdraw.

    c't: If at some point you no longer feel like working on Linux - how might
    the organization of the Linux developers look then?

    Torvalds: I don't see that happening in the foreseeable future; but there
    are plenty of people who could do my work. These days I hardly program any
    more; instead I exercise 'good taste' - I make decisions about
    architecture. But there are other people who likewise have 'good taste'. I
    am a kind of central clearing house: I process emails, read them, and send
    them on to the right people.

    Conferences are obviously PR work, and there are enough people who could do
    that just as well. At the moment the most important thing is to be a kind
    of figurehead for Linux, purely for psychological reasons. By now people
    know Linux through companies such as SuSE, IBM or Red Hat; but for a long
    time Linux was this radical movement fronted by Linus Torvalds.

    If the plane from Frankfurt to San Francisco were to crash now, anyone
    could - well, perhaps not anyone, but plenty of people could take over my
    job. It certainly wouldn't be just one person. The fact that it is one
    person at the moment has historical reasons. In reality Linux is not just
    one person: I do what I do, and [XFree86 developer] Dirk Hohndel, for
    example, does his job; my technical work on the kernel could be done by
    several people.

    If you look at how Linux is actually developed: I don't touch the 2.2
    kernel, for example; [kernel hacker] Alan Cox does all of that. Soon we
    will have the 2.4 production kernel finished, and then Ted Ts'o [developer
    of the ext2 filesystem] will take care of the 2.4 kernel. I can then
    concentrate on the development kernel, because that interests me most and
    because the developers are happy with the way I do it. But it's not as
    though nobody else could do that. There would certainly be a lot of
    excitement in the media if I fell into the ocean, but for Linux development
    I am no longer that important.

    c't: Can you tell us a little more about how the development of Linux is
    organized?

    Torvalds: Let's take a simple example. Someone has an idea. First he will
    discuss it with acquaintances and on the kernel mailing list: I need this
    feature for these reasons - is anyone already working on it? If nobody
    speaks up, he will program it himself. Then he uses the new feature for a
    while, talks it over with people around him, and posts it to the kernel
    mailing list if he would like his code to be taken into the standard
    kernel. He knows how things work, and doesn't send a mail straight to me.

    If it is perfect code, the reaction on the mailing list may simply be:
    'Yes, we want that.' But that doesn't happen in practice. The reaction
    looks more like this: 'We understand what you want, but as it stands this
    is a considerable mess. I would like to do something similar, but it
    doesn't fit together with your code.' And then the interfaces get modified
    so that both work at the same time, and changes come about that other
    people are interested in.

    That can take a very long time. Large changes in particular can circulate
    as patches on the kernel list for several years while the new code is
    discussed and quite a few people start using it. I follow such patches and
    discussions on the list; and at some point I decide that the code is useful
    enough to become part of the standard kernel. On particularly important
    questions I may join the discussion and say: 'I see what you are after; but
    from the point of view of the kernel architecture this is the wrong way.'
    At some point the code then flows into my standard kernel, or it remains an
    external patch for special purposes.

    c't: What do you think of the possibility of the kernel code forking? The
    head of Linux at IBM, for instance, said recently that one kernel cannot
    cover every requirement from embedded devices up to enterprise-critical
    servers.

    Torvalds: Forking happens constantly. Just because my kernel is considered
    the official one doesn't mean there isn't a whole crowd of 'unofficial'
    kernels. Most distributors, for example, have their own kernel versions
    with special features. SuSE, say, puts a lot of emphasis on ISDN, because
    that is important in Germany; for the rest of the world, ISDN is a
    non-issue. Different distributions address different classes of users; SGI,
    for example, is particularly interested in the market for computers with
    hundreds of CPUs. The SGI kernel will therefore contain features for use on
    large machines.

    I try to maintain a common standard kernel; but that is not the kernel for
    everyone. Naturally supercomputers and embedded devices make completely
    different demands, and the kernels will never be identical. I try to keep
    the differences as small as possible and to add new things in such a way
    that they do not obstruct the extreme applications.

    c't: During the work on the 2.3 development kernel there was a lot of
    discussion about memory management...

    Torvalds: ...there still is.

    c't: If you want to address large amounts of main memory in servers, you
    need a memory management scheme that is not very efficient on systems with
    little RAM.

    Torvalds: That is a classic example. A lot of things seem to be mutually
    incompatible. On one side there is the need to support small devices, and
    on the other side are large machines with 16 nodes, each with its own
    memory and hundreds of gigabytes of RAM in total. The solutions for these
    naturally have to look completely different. The first answer usually
    consists of two different code paths, simply because that is less work -
    the code doesn't have to consider so many possibilities. But maintaining
    the code becomes more difficult, because you need interfaces to both code
    paths.

    But then you arrive at a virtualization of the memory management. That was
    one of the things we worked on during 2.3 development: virtualizing the
    notion of a 'memory node'. A small device is then the same as a large
    machine, with the only difference that it has just one memory node, while
    the large computer has several of them. The small device thus becomes a
    simple special case of the large machine.

    From the same code, different kernels are then built via an appropriate
    configuration option. In the source code there is a loop over the nodes;
    but with one node the loop runs from zero to zero, is optimized away at
    compile time, and no longer appears in the binary. That makes maintaining
    the sources much simpler, and these are the kinds of design questions I
    concern myself with. [A rough sketch of this pattern follows at the end of
    this posting.]

    Naturally it can't always be done that way. Sometimes you simply have
    different devices that need different drivers. In the design you have to
    decide which code is generic and when to write different code for the
    different cases. In the end that is what computer science comes down to.

    c't: The kernel sources have become very extensive by now...

    Torvalds: ...around 55 MB of source; I don't have the exact number handy,
    but it is about three million lines of code. The kernel is enormous, and
    nobody could maintain it if the vast majority of the drivers were not
    completely independent of it. Driver development isn't simple either,
    because you have to iron out all the weaknesses of the hardware.

    c't: Programmers know the problem: they change something in one place in
    the code, and the program then breaks in another place.

    Torvalds: That happens with the kernel too.

    c't: How do you get such difficulties under control?

    Torvalds: There is only one solution: clean interfaces. Ideally there
    should never be surprising bugs or interactions nobody ever thought of. The
    interfaces must be so clear that when you change the code in one place, you
    know which other places you also have to change. I'm not claiming that the
    interfaces in Linux are always that clean, but we are working on it. Many
    of the changes in the 2.4 kernel go in this direction. In most cases it was
    more about designing clean interfaces than about writing genuinely new
    code. Frequently, however, the program code doesn't match what you had in
    mind; that is what makes changing the interfaces so laborious. But it is
    enormously important, even if the user can see no advantage in it at all -
    until he comes across a new machine where the changed interface is
    necessary.

    c't: I don't even want to ask when kernel 2.4 will appear...

    Torvalds: ...this year still, I hope...

    c't: ...but I would be interested in where the problems with the new kernel
    lie.

    Torvalds: One fundamental difficulty is not technical at all, but lies in
    the fact that most people don't want to upgrade to a new kernel in the
    first place. They are happy with the 2.2 kernel and have no major problems
    with it - why should they try out a development kernel? There is a certain
    group of users who test new kernels; and before a new production kernel can
    be published, at least a cross-section of these users must have tested it.
    The developers have their own view of new versions; we need outside users
    for testing. That is not a problem only for Linux; some software producers
    even pay people to try out new beta versions.

    But there are also still a few technical difficulties. We know of some
    genuine bugs, for some of which solutions already exist; however, not all
    developers are convinced that these solutions are really good. In this
    respect there are still some open questions: frequently the developers want
    better solutions that guarantee a particular problem can never occur again.

    Beyond that there are also communication problems. The people who find bugs
    are usually not developers themselves, and they describe the problems quite
    differently than a developer would. That simply costs a lot of time.

    c't: A while ago you mentioned the kernel extensions of the Linux
    distributors. SuSE, for example, ships the 2.2 production kernel with the
    Logical Volume Manager and the journaling filesystem ReiserFS. The kernel
    developers did discuss ReiserFS intensively - and decided not to take it
    into the 'official' kernel yet. What do you think of such unilateral moves?
    After all, you surely had your reasons for deciding against ReiserFS.

    Torvalds: Particularly in the last year, new groups of users have appeared,
    and SuSE in particular - without wanting to speak for SuSE now - has worked
    a lot with large customers who are interested in LVM. To administer several
    hundred disks you need such tools. And even if the system doesn't crash but
    is only rebooted occasionally, e2fsck runs of several hours are not
    acceptable, so people will gladly take ReiserFS. Such applications have
    only come up recently, and it simply takes time to integrate something like
    that. LVM has been in the 2.3 development kernel for half a year, and we
    were still working on it last week. ReiserFS I did not want to take in
    under any circumstances before kernel 2.4, because I always thought we were
    shortly before the code freeze and I didn't want to bring completely new
    questions into the discussion. SuSE and others have tested ReiserFS in the
    meantime, so we will probably take it into version 2.4.1.

    What users do is never wrong. I cannot prescribe to Linux users what they
    have to do, after all. My opinion has always been: whatever people want to
    do is right. All I can do is make decisions about what the architecture
    that enables it should look like, or give hints on how to achieve the same
    result with a different approach. ReiserFS will come, and I cannot simply
    say 'no' to it. For me it is perhaps only about the timing and about some
    changes to integrate ReiserFS better into the kernel.

    [SGI's journaling filesystem] XFS is another matter. It is not yet as far
    along as ReiserFS, and I cannot say whether it will be part of the standard
    kernel in a year. [The successor to the standard Linux filesystem ext2]
    ext3fs is yet another story. The code is already there, and there are users
    who already use it. ext3fs could well be integrated into the 2.4 kernel
    series, or at an early point into kernel 2.5. For me this is about
    flexibility. Open source means that you can do all sorts of things with the
    code.

    That doesn't mean that I will use ReiserFS or ext3fs. What interests me
    about them is something else. ReiserFS, XFS and ext3fs will obviously have
    a lot in common. What does that mean for the Virtual File System [the
    kernel structure that forms the interface to the filesystems]? Perhaps we
    take pieces of code that several of the filesystems contain - even if they
    do different things, in the end they are trying to do the same thing - and
    create a common interface for them. Something like that is real work. It
    will probably take another two or three years before the VFS can deal with
    journaling; but then the filesystems won't have as much work to do any
    more. That is the kind of question I concern myself with.

    c't: What are currently the most interesting technical developments around
    Linux?

    Torvalds: Most of them don't involve the kernel at all. Naturally there are
    exciting developments there, for example scalability - that was technically
    extremely interesting. But the really fascinating things are being done by
    other people. The whole excitement around DVD was very interesting, if
    perhaps somewhat discouraging. And then of course the desktop, and things
    that are actually quite unusual for Unix. When I watch television, for
    example, I do it with a Linux computer whose hard disk serves as a video
    recorder. Once you have used such a device, you never want to touch a
    classic video recorder again. I only use those for films that aren't
    available on DVD.

    c't: And in the IT world in general? After all, you work at a high-tech
    company.

    Torvalds: This whole wireless business. I have, for example, a great mobile
    phone, a laptop and a Palm. When I am on the road I use my laptop to read
    email; and then I want to use the phone as a modem. But that doesn't work;
    this kind of communication simply doesn't function yet. I think that in
    five years all these devices will be able to communicate with each other.
    The technology in itself is interesting, but the translation into
    applications less so.

    c't: Looking back on the long history of Linux development: were there
    things that surprised you?

    Torvalds: Very few. Naturally I would have been very surprised at the start
    if I had known where Linux would end up. But as it all happened, nothing
    about it really surprised me. When I put version 0.01 on the Internet, I
    expected comments. Perhaps somewhat more reactions came at the time than I
    had expected; but I don't believe even that was really the case. After a
    few months there were 50 people instead of the expected five, then a few
    hundred; and that did surprise me a little. But I experienced the growth
    from five through ten and twenty to fifty users as it happened, so there
    was no point at which I said to myself: 'My God, what is going on here?'
    Then the commercial interest, the growing media echo - most people think
    all of that happened in the last two years, but actually it developed
    slowly over the last nine years. Companies began to support Linux -
    sometimes it was surprising to see to what extent; with IBM, for instance,
    nobody expected that IBM would go so far. But here too there was no single
    point at which I would have been really surprised.

    c't: Is there anything that annoyed you?

    Torvalds: Not much. The most unpleasant surprise was that Mindcraft study
    [a study financed by Microsoft in which, in April 1999, Linux came off very
    badly compared with Windows NT]. I still remember how sore I was at the
    time. By now it doesn't annoy me any longer, since in the end it turned out
    well. Perhaps the most surprising thing is the consistently positive
    reaction to Linux. The developer community was very friendly from the
    outset, despite all these discussions around the Linux kernel, which can
    occasionally get very heated.

    c't: There are quite a few ugly discussions there...

    Torvalds: Yes, in discussions about their technical ideas people get very
    heated and unpleasant.

    c't: Is that typical of the open source community? This strong antipathy
    towards Microsoft, for example...

    Torvalds: No, not only there. You find something very similar among Mac
    users. The Internet makes it easy to simply speak your mind, and then
    things quickly turn into flame wars. You don't know the people you are
    arguing with, and then you easily exaggerate. That is definitely not only
    the case with Linux - if you look at all the 'advocacy groups' out there...
    it is amusing. The arguments between Linux and FreeBSD fans, for example,
    are far more violent still, because these groups know each other and know
    exactly where it hurts. People simply like to argue. It is a social
    competition, a way of showing your superiority over others. Many of these
    fundamental debates are completely over by now, for instance the argument
    about vi versus Emacs. (odi)
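
    [The sketch referred to above: a rough, stand-alone illustration of the
    "loop over the memory nodes" pattern. The names are invented for
    illustration and this is not actual kernel code; with the node count set to
    one, a reasonable compiler folds the loop away entirely.]

        #include <stdio.h>

        /* On a big NUMA-style machine this might be 16; on a small device it is 1. */
        #ifndef NR_MEM_NODES
        #define NR_MEM_NODES 1
        #endif

        struct mem_node {
                unsigned long free_pages;
        };

        static struct mem_node nodes[NR_MEM_NODES];

        /* Sum free pages across all nodes.  With NR_MEM_NODES == 1 the loop
         * body runs exactly once and the loop overhead disappears from the
         * binary, so the small machine is just a special case of the large one. */
        static unsigned long total_free_pages(void)
        {
                unsigned long total = 0;
                int i;

                for (i = 0; i < NR_MEM_NODES; i++)
                        total += nodes[i].free_pages;
                return total;
        }

        int main(void)
        {
                nodes[0].free_pages = 1024;
                printf("free pages: %lu\n", total_free_pages());
                return 0;
        }
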
  • by Anonymous Coward
    Ce que des utilisateurs font, n'est mauvais jamais

    Thanks a lot! We can all read French here in the US since Lafayette came and helped us for our independence.

  • Had I known that Linus had said these things I would have quoted him. Rather than parroting him I was explaining what I've learned on the subject.

    Anyone can be a 'skeptic'; that takes absolutely nothing on that person's part. Very few people can produce anything of value.

    I've only led one small free software project - but I'll bet that is one more than you have ever led. I know enough to recognize talent in people - you don't, and therein lies the difference.

  • Yes, BeFS is documented. So is NTFS, and probably HFS too. But they are still proprietary. Be/Microsoft/Apple can make changes without any further notice, and at least when it comes to NTFS, it is not a question of IF it will change, but when...

    As for the EA's of BeFS, they'll be easy to preserve. What I was talking about was any possible streams. As I also wrote, I don't know whether BeOS uses streams or not. But IF there are streams in BeFS, and IF they contain vital information, then obviously, it'll require a lot of thought to make support for it good.

  • by 1010011010 ( 53039 ) on Sunday October 22, 2000 @09:34AM (#686070) Homepage
    You can't possibly believe that Linux is the only operatingsystem that has got buggy device-drivers. Device-drivers ARE fully capable of crashing your machine.

    Oh, I know. QNX and Multics are the only OSes I know of that can survive a driver crash. The scanner would oops the kernel without the scanner driver even being loaded. It was just the usb-uhci code oopsing, before I even had scanner.o compiled. It works fine on my home machine, as long as I don't turn off the scanner while it's in operation (otherwise... oops).

    What's so broken about the semaphores/spinlocks in the v2.4 kernel?

    To be honest, I dropped the 2.4 kernel after test6 and retreated to 2.2.16 and 17, which were also broken (race conditions, oopsing on upping the semaphore). Now we're doing FreeBSD, as I cannot lose more time waiting for things to be fixed.

    But imho, none of the proposals [streams/EAs] posted on the LKML during the entire debate was sane either.

    Ours was posted off-list, by invitation from Viro and Cox. Since they simply told us to shut up and added us to their killfiles, we dropped it. I'll post our proposal to the list if you'd like.

    But face it, NTFS, HFS and BeFS will never be the native filesystems on a Linux-machine.

    I don't think anyone ever suggested that; just that they exist and Linux should be capable of supporting them. Interoperability.

    I'm pretty confident Alan must have given you an explanation why [he rejected the printk patch]?

    He did not.

    It should print long long's without any trouble, at least in the v2.4-series.

    That's our patch, I think. It was accepted for 2.4, but not for 2.2. Because 2.2 is the 'stable' kernel, a working printk is useful now. The 2.4 one will be nice when that kernel stabilizes (in terms of interfaces).

    No, Linus doesn't have to accept code he doesn't like, but if you try to imagine how much code he has to accept every week,

    It's silly for Linus to have to look over every patch. That's what I'm talking about -- a lack of guidelines, coding standards, etc. means that delegation doesn't happen. Viro's nominally in charge of filesystems, but that doesn't stop Linus from tossing in new ones (DevFS, JFFS). There's no clear control. If there were -- if the kernel had clear areas of responsibility, that is, if it were modular -- then Linus would not even have to look at patches for systems he had delegated.

    He does a DAMN good job, though

    So does Alan Greenspan, but central planning still has its limits.

    People who don't realise that they can patch
    their own kernel to add the kernel debugger


    It's more that, because good support for debuggers is kept out of the kernel, applying one of the kdb patches doesn't help as much as it should. We've tried them all. :) Some of them actually make the kernel crash. Others don't provide needed information -- like a stack dump.

    However, I believe he will sooner or later begin using BitKeeper. Maybe already for v2.5

    That would rock!

    But you have to realise, that it's impossible to work 48 hours/day.

    Viro is notorious for making changes without warning and then not documenting them. It's hit or miss with everyone else. And I'm not expecting people to work 48 hours a day; I'm saying, if you don't have time to do it right the first time, do you have time to fix it later?

    you'd probably not appreciate the kernel no matter what changes are made.

    Not true. If I simply hated Linux and didn't care if it improved, I would simply have kept my mouth shut and used FreeBSD, or something. The fact is, I want it to improve. I like it. I'm not blinded to its faults, though.

    The reason I considered your post trolling wasn't because of your opinions on Linux, but because you implied that people reading Slashdot automatically would flame you, send spam to you, direct ICBM's at your house etc., just because you have a different opinion.

    Only because it's happened in the past. The voice of experience and all. I figured that if I pointed out in my message that that type of response isn't constructive, it would prevent that type of response. It has worked so far, for the most part. Lots of people start off posts of controversial opinions with "this isn't a flame; don't flame me."

    Thanks! I'm glad someone is willing to debate the ideas I presented, and not my spelling, shoe size, etc. :)

    ________________________________________
  • by Anonymous Coward

    I have been programming all of 2000 without a debugger because we are bringing up an operating system on IA64. We have a complete printk() equivalent that helps us tremendously. I fixed some problems single-stepping IA64 instructions with our own home-made disassembler.

    Does this make me a superior being ? Nope it only makes me wish for a debugger. We would still keep our invaluable printk()'s as they still allow us to get useful info when no debugger is attached.

    Having a debugger available will not make me program sloppily. We do test our code thoroughly, comment it and have automated quality metrics. But even with those you will always get a nasty bug, and then a debugger can save your day, week or month.

    You are a complete loser and I have no respect for people with your attitude. They do not deserve programming jobs.
    I hope you did not represent accurately the thoughts of the Linux kernel leaders. I fear you did.

  • First, MacOS X doesn't treat directories as files, only the GUI does. Proof: you can cd into any application-"fork" in MacOS X (and NeXTstep for that matter). Perfectly tar-able, perfectly Posix.

    However, Linus' own suggestion was indeed to treat streams both as files and directories. There are a lot of considerations here, however, and that's one reason why we don't have streams support in the kernel yet. The VFS is perfectly ready for such an approach, but we have to think this through.
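
    To make the idea concrete, here is a purely hypothetical sketch of what the "streams as files/directories" approach could look like from userspace - no released kernel exposes this path syntax, and the file and stream names are invented:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                /* If a file's streams showed up as names "inside" it, plain
                 * open() would be enough to reach a named stream: */
                int fd = open("report.doc/icon", O_RDONLY);

                if (fd < 0) {
                        perror("open");  /* fails on today's systems */
                        return 1;
                }
                close(fd);
                return 0;
        }
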

  • dude, the kernel doesn't bug me as much as how all the GNU shit is ruining the unixes i came to love.
    emacs, auto*, info you're practically forced into that garbage.

    but linux is a poor kernel as evidenced by how often it is redesigned.

    and the GNU system drives me nuts because of its restrictive licensing.

    GNU was the worst thing to ever happen to UNIX programmers, with the exception of gcc and gdb.

    oh, and i suppose this is blatant flamebait to anyone who believes that gnu/linux is the best system to ever exist. after all having a different opinion is the basis of all flame-bait, no?

  • First, MacOS X doesn't treat directories as files, only the GUI does. Proof: you can cd into any application-"fork" in MacOS X (and NeXTstep for that matter). Perfectly tar-able, perfectly Posix.

    Thanks for helping me make my point. As soon as you drop into a shell you lose the bundle POV and only see bundles as directories. As a user this is not what I want 95% of the time.

    I want to have Emacs in a bundle, complete with all the share/ files. From the command line I want to be able to cp this "file" around (no -R), I want to have my PATH include the directory containing the bundle, not a bin/ underneath.
    Only when I need to change the content of this bundle should I have to see this bundle as what it is, i.e. a directory.

    But such semantics would not play well with the POSIX view of a file system. Therefore I am not holding my breath and will have to live with a half solution.

    Does tar even respect the bundle bit ?

  • eh? how can you not agree with the BSD licence?
  • Yeah, but it didn't get modded up. http://slashdot.org/comments.pl?sid=00/10/21/199214&cid=103
  • Well, the v2.2.xx USB-support is a backport if I'm not all wrong, and this backport was done simply because a lot of people needed USB. If you can help find the problem, then please, do so. I repeat my question, have you reported the problem together with a decoded oops? And have you tried the latest kernel pre-patch? It contains a LOT of USB-fixes (I've checked it out.)

    The difference between test6 and test10pre3 is quite amazing. test10pre4, to be honest, isn't very good however, because for the purpose of finding bugs, two extraneous BUG() statements were added. They cause the kernel to oops when it shouldn't.

    As for the off-list posting with you being put in Alan's killfile, this sounds a little strange. I can well imagine you being put in Viro's killfile, however; he's quite easy to irritate sometimes. You might well have been blocked by Alan's VERY aggressive spamfilter, however. Oh, and this might well be the reason why you haven't heard anything about the printk thingie; he might have missed your post altogether.

    As for interoperability support for filesystems, it's simply a fact that non-native filesystems ARE less important to support than native ones. And thus we can allow ourselves to leave the planning of that support to the future and focus on the more urgent issues instead. But sooner or later, NTFS, BeFS and HFS/HFS+ will get full support, I'm pretty confident. They all have some form of support already (apart from HFS+, which stems from the fact that HFS+ is totally impossible (it seems) to get documentation for.)

    As for Linus accepting JFFS into the kernel, I see no reason why Alexander Viro should have a say there; he doesn't make the calls on what filesystems go into the kernel, unless they require changes to the VFS. As for DevFS, yes, that was a little rash move by Linus. In the long run, however, I think it'll turn out good; after all, it forced Viro to make his VFS-changes earlier than planned.

    Please submit your proposals for streams/EAs again to the kernel list. But do so as soon as v2.5.0 is released, or at least AFTER v2.4.1 has been released. Now is not the right time for such discussions. We're having enough noise as it is with the ongoing discussion about in-kernel use of C++ (sigh!).

    The kernel does, however, have some quite clear areas of responsibility: the ISDN subsystem, the VFS, NFS, ext2, the network subsystem, the different ports etc. And almost every device driver has its own maintainer. The reason why Linus goes through all the patches anyway is that he's acting as an integrator and supervisor. He doesn't make sure all drivers work properly; what he does (try to) make sure is that no drivers make other drivers fuck up. The same goes for other parts of the kernel, of course. This is why some bigger changes, like ReiserFS, take quite some time to get into the kernel.

    If you think that the 64-bit printk support really is needed, then submit it again. And if needed, again.

    Yes, Viro is quite (in)famous for making changes. Close to all of them have been very clueful and needed, however. And he's done a decent job fixing up the mess after himself. And in the cases he hasn't he has at least informed people about the breakage.

    Not true. If I simply hated Linux and didn't care if it improved, I would simply have kept my mouth shut and used FreeBSD, or something. The fact is, I want it to improve. I like it. I'm not blinded to its faults, though.

    Actually, no. I think you'd complain + use FreeBSD. Quite like what you're doing right now. However, I'm glad to hear this is not the case...

  • WooHoo!!!
    Just a question: there was a lot of discussion about ReiserFS being too late to be included in 2.4, since it was already frozen. What has changed?
    ___
  • It contains a LOT of USB-fixes (I've checked it out.)

    Yes. It doesn't oops anymore, but it doesn't work, either. The driver hangs until a reboot, but doesn't actually oops anymore. I think I did submit one oops report to someone, but I can't remember. When I get a crash I can't attribute to my own stupidity and/or hardware problems, I generally report it, unless I know that it's already being worked on.

    The difference between test6 and test10pre3 is quite amazing. test10pre4, to be honest, isn't very good however,

    I decided to just sit it out until the first production release, and then get involved with the Linux port again, and submit patches for 2.5 (including real Unicode support, VFS enhancements and a new filesystem).

    As for the off-list posting with you being put in Alan's killfile, this sounds a little strange. I can well imagine you being put in Viro's killfile, however; he's quite easy to irritate sometimes. You might well have been blocked out by Alan's VERY aggresive spamfilter,

    Viro is easy to incite. He's like one big button, and people keep pressing it. Alan actually emailed me that people discussing streams earned a place in his killfile. But I later submitted the printk patch for 2.2.18, and was told "nope" when I asked if it could be included. No explanation.

    it's simply a fact that non-native filesystems ARE less important to support than native ones. And thus we can allow ourselves to leave the planning of that support to the future and focus on the more urgent issues instead.

    We were discussing streams in terms of changes to the 2.5 kernel. And just because they're not the default filesystem doesn't mean they're not useful or important. If you want a Linux server to really be able to replace an NT one, you'll need streams and ACLs.

    I think it'll turn out good; after all, it forced Viro to make his VFS-changes earlier than planned.

    That's as may be; but it created more chaos in the short term, and undermined Viro's delegated authority. And it got Linus back into the position of accepting/rejecting patches. That act violated any semblance of organizational structure there was. I understand it's his, but it's easier to get work done with help, and it's easier to get and keep help if you earn their respect.

    Please submit your proposals for streams/EA's again to the kernellist. But do as soon as v2.5.0 is released, or at least AFTER v2.4.1 has been released.

    This is my plan.

    Actually, no. I think you'd complain + use FreeBSD. Quite like what you're doing right now. However, I'm glad to hear this is not the case...

    Well, I've not given up on Linux totally, and still use it for my home and work desktops, and the cvs server, mail server, web server, etc. I've also not given up on the Linux port of the FS we're developing -- just decided that it's not possible in the short term due to internal Linux fuckage (which may be -- hopefully will be -- fixed in 2.4/2.5). Since I still have to complete development of the filesystem, I'm doing it on FreeBSD now, because it's stable and provides the features we need.



    ________________________________________
  • And you know why, don't you? It's simple: Linus and Yoda are relatives!

    It turns out that Linus is Yoda's father's brother's nephew's cousin's mother's third step-child, twice-removed.

    What, you didn't know?
  • I was talking to a vendor at my work (I won't say who they are, but you know them). They told me that they want kernel support for 64 processors. The CEO went to Linus himself, and asked him straight out to please let the official kernel have support for this. Linus replied that he wanted no such thing.

    SGI can maintain the darn patch themselves. How many people would benefit from Linux having 64-CPU support? 5, 10? And it would add bloat for everyone else. I don't want to download an 80 MB kernel. Same with NTFS streams. How many people NEED NTFS streams? You want it, get a patch and make your own kernel. Now you have your own fork, and feel free to call it whatever you like.
    kernel debugger:
    The only thing Linus said is that it will not be distributed with the stock kernel. He is not stopping anyone from using it. If you want to debug Linux, go and download the darn debugger (people affected: 100 to 1000 at most).

  • I've been using 2.4 on a regular basis since the first test1 appeared. Generally it is much more stable on my machine. Besides, the SMP is clearly better and doesn't suffer from some of the weirdness I noticed while using 2.2. I also need 2.4 for my 3D card, as I use XFree86 4, UDMA/66 and a TV card. These things are available for 2.2 too, but mostly as patches, with several bugfixes, and they don't work as well as in 2.4.

    Yes, there are still several bugs and rough spots in 2.4. But that depends on what you are using it for. On a production server it may still carry some risks. On my workstation it fits my tasks perfectly.
  • by Anonymous Coward
    That's a simple typo. Replace read with sniff.
  • Linus:

    One fundamental difficulty is not technical at all, but lies in the fact that most people don't want to upgrade to a new kernel in the first place. They are happy with the 2.2 kernel and have no major problems with it - why should they try out a development kernel? There is a certain group of users who test new kernels; and before a new production kernel can be published, at least a cross-section of these users must have tested it. The developers have their own view of new versions; we need outside users for testing. That is not a problem only for Linux; some software producers even pay people to try out new beta versions.

    But there are also still a few technical difficulties. We know of some genuine bugs, for some of which solutions already exist; however, not all developers are convinced that these solutions are really good. In this respect there are still some open questions: frequently the developers want better solutions that guarantee a particular problem can never occur again.

    Beyond that there are also communication problems. The people who find bugs are usually not developers themselves, and they describe the problems quite differently than a developer would. That simply costs a lot of time.

  • I'm sorry, but are you making a pun on "Linus Torvalds" or on "tar ball"?
  • by pbur ( 88030 ) on Saturday October 21, 2000 @03:06PM (#686087)
    Just a note that the Fish runs on Systran's translation engine. How ironic.
  • We have more of a resentment against the annual invasion of German tourists. They may be good for the economy, but I still don't like to be addressed in German in my home town during the summer holiday.

    As for German being close to Dutch, that's kind of true, but more in the sense that French is close to Spanish. We do get German in school here. It's hard to get a good grade for it though, because of the differences in each language's 'quirks': the similarities make you forget the differences, so before you know it you're speaking German with Dutch bits of grammar interspersed if you don't watch out.

    Just my $.02
  • 2.2.18 will have USB support built-in and AGP/DRI support. They are both being backported from 2.4.
  • Given a German to English translation, what the hell is a French word doing in there?

    I was slightly inaccurate in my original post. "Email" is a French word meaning a kind of enamelling that migrated to English. I can only conclude that English is not the only language the word migrated to.

    --
    "Where, where is the town? Now, it's nothing but flowers!"

  • I've tried to clean up the systrans translation [sympatico.ca]. In the mean time, I think I've come up with some slightly different interpretations of some of the text, most notably:

    Torvalds: One fundamental difficulty is not at all technical, but comes from the fact that most people do not want to beta test a new Kernel. If they abandon Kernel 2.2, it could end up causing new problems. Why should they try a test kernel out?

    was: They are not content with the Kernel 2,2, have their own problems.
    `ø,,ø`ø,,ø!

  • I'm a little surprised to hear that Systran did a better job than Babelfish. The same company wrote both engines. I suppose that Altavista, which seems to have gone into flounder mode since it spun off from DEC/Compaq, hasn't sprung for an update.

    Apropos the email (French for "enamel"? now that's scary) problem: Systran does offer topic-specific dictionaries. But I'd always assumed they were broken, because (a) I haven't seen a lot of difference switching between them and (b) Systran user dictionaries definitely are broken.

    __________

  • There is nothing stopping you from taking the stock kernel, applying patches and releasing it on your own.
    The process is relatively simple, so most people would rather get the kernel from a trusted source (Linus) and apply patches themselves than download it from you.
  • At this point in the 2.0 to 2.2 development process, it seemed like a majority of Slashdot readers were running a 2.1 kernel instead of a 2.0 version and were extremely happy with it.

    Look around here - almost nobody is running a 2.3/"2.4Test" kernel, because it's too damn far from being ready, and considering the original target ship date of Xmas 1999, that means there has been a bunch of fuckups. (Not that Linux 2.4 matters at all in my eternal scheme of things, but reading between the lines on LKML, it seems to be a generally held opinion. It will ship when it's ready, but until then enormous effort will be spent backporting stuff to 2.2.)

    Now it could well be that Slashdot is far less technical than it used to be, but I still think there are plenty of people who would love to beta test a new kernel. They just don't want to alpha test one.
  • From me? No. But if I were buying an IBM, SGI or HP machine and it needed a patch, then yes, I would download it from them.

    If a trusted company, group or consortium forked the kernel for their own use, and I needed something with the same goals, I would download my kernel from them instead of the "stock kernel" from Linus.

    My attitude is not "write your own" but "take what is out there and work on it." If there's an enhancement to be made and Linus won't accept it, then try to find others (be they companies or individuals) who share your views and start a fork. A group is much more trustworthy than a single individual (with the exception of Linus :) (and I'm talking code, not politics!). Samba did this with NT PDC support; why can't Linux do the same across palmtops, desktops, and mainframes?

    Writing your own kernel or forking it by yourself is not an acceptable answer. But going and getting a group of users who share your view and doing the fork together is!

    Steven Rostedt

  • by Anonymous Coward on 02:41 PM October 22nd, 2000 EST (#223)

    I have been programming all of 2000 without a debugger because we are bringing up an operating system on IA64. We have a complete printk() equivalent that helps us tremendously. I fixed some problems single-stepping IA64 instructions with our home-grown disassembler.

    Does this make me a superior being? Nope, it only makes me wish for a debugger. We would still keep our invaluable printk()s, as they allow us to get useful info when no debugger is attached.

    Having a debugger available will not make me program sloppily. We do test our code thoroughly, comment it and have automated quality metrics. But even with those you will always get a nasty bug, and then a debugger can save your day, week or month.

    You are a complete loser and I have no respect for people with your attitude. They do not deserve programming jobs. I hope you did not represent accurately the thoughts of the Linux kernel leaders. I fear you did.


    Just quoting you so you can use my +1, so that someone will read this. What OS, by the way?

    ________________________________________
  • Debian does not make a good desktop OS.

    But Corel and Storm do... I believe Corel is the #2 most downloaded Linux iso at tucows.com.

    New distributions that arise will either be based on .deb or .rpm.

    I don't see this kernel as having an immediate impact for most home users. The major things will be plug and play and USB support. Most desktops only have one CPU, so better SMP is not a factor.

    People say that Debian software is outdated, and this may be true for the stable branch, but unstable has the most current packages of any distro. There is an increasing demand for a compromise between the bugs in unstable and the age of stable, as more people start using Debian on the desktop and as companies realise that there is money to be made from selling specialised versions. I expect to see a new "Debian firm" release that addresses some of these concerns.

    Debian will be relevant for at least another 10 years yet.

  • So if too many people tell you to love it or leave it you are going to leave it and code a better OS under the GPL?

    That doesn't sound like a very scary threat to me...

  • I think there are two main reasons people aren't as anxious to test Linux kernel betas as they are Windows betas or other software packages.

    The first is that the linux kernel *is a kernel*. If Microsoft publicly beta-released the next version of krnl386.exe, I'm convinced the beta usage rates would be very similar.

    The second is that the open development process allows all the potential users to know very well what the new version supplies/fixes so they don't need to try to find out for themselves. When new windows betas come out, people are driven by curiosity to find out what's in the locked box.
  • News for American Nerds. Stuff that matters to Americans. Fair enough. The mass(iv)es have spoken.
  • by Felipe Hoffa ( 141801 ) on Saturday October 21, 2000 @03:07PM (#686101) Homepage Journal

    Now I understand why the geek community loves Linus: He speaks like Yoda!

  • by Sneakums ( 2534 ) on Saturday October 21, 2000 @03:16PM (#686102)

    If it had been written "e-mail", the word would have been translated correctly. The word "email", however, is a French word that refers to a type of enamel. The translation is spot-on.

    --
    "Where, where is the town? Now, it's nothing but flowers!"

  • Thanks a lot! We can all read French here in the US

    Slashdot == US ?
  • Ok, so kernel 2.2 has multicast support. Ok, so kernel 2.2 has channel bonding support. But does it have multicast over channel bonding? Nope. I've tried and begged and sweet-talked the 2.2 kernel to get multicast to work over channel bonding. Finally I vim'ed up /usr/src/linux/drivers/net/bonding.c and found this function:

    /* fake multicast ability */
    static void set_multicast_list(struct device *dev)
    {
    }

    I began to weep because, yes, the function is empty. All I want for Christmas is multicast over channel bonding. Of course, the 2.4-test kernels reliably crash when they bring down bonded interfaces, which sucks because the kernel also likes to panic on fsck. What fun.
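
    For what it's worth, here's a rough, hypothetical sketch of the sort of thing a non-empty implementation might do (this is not the real driver; the bond/slave structures are invented stand-ins): walk the enslaved devices, hand each one the master's multicast list, and let its own set_multicast_list() reprogram the hardware.

    /* Hypothetical sketch only -- not the real 2.2 bonding driver.  To keep
     * it self-contained, "struct device" here is a tiny stand-in for the
     * kernel's net device structure (the real one lives in
     * <linux/netdevice.h>). */
    struct dev_mc_list;                       /* opaque multicast entry */

    struct device {
            void *priv;                       /* driver-private data */
            struct dev_mc_list *mc_list;      /* multicast address list */
            int mc_count;
            void (*set_multicast_list)(struct device *dev);
    };

    struct slave { struct device *dev; struct slave *next; };
    struct bond_private { struct slave *slaves; };

    static void bond_set_multicast_list(struct device *master)
    {
            struct bond_private *bond = (struct bond_private *) master->priv;
            struct slave *s;

            for (s = bond->slaves; s != NULL; s = s->next) {
                    struct device *slave_dev = s->dev;

                    /* a real driver would copy the list (and take locks),
                     * not just alias the master's pointer */
                    slave_dev->mc_list  = master->mc_list;
                    slave_dev->mc_count = master->mc_count;
                    if (slave_dev->set_multicast_list)
                            slave_dev->set_multicast_list(slave_dev);
            }
    }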

    Linus, I hope you are listening. Because I want 2.4 someday! Thank You.

    -JungleBoy
    --
    "You never know when some crazed rodent with cold feet
    might be running loose in your pants."
  • My dual PII won't last more than a few hours with even the latest 2.2.x kernels. I am not sure what the problem is; usually X crashes and will not restart. I didn't have this problem with the earlier 2.2.x kernels, and lately I have been running the 2.4 test kernels without any major problems.

  • There is a certain class of programmers who don't use debuggers very much because mostly their code is so well designed and thought out that they don't put very many bugs into it in the first place. Such people can see that the code produced by the other types of programmers - who heavily require debuggers - is sloppy and the result of confused thinking.

    Those who do use debuggers heavily are incapable of understanding the thought processes of those who don't use debuggers because the thought processes of the 'use a debugger' school are too confused to allow such understanding. Indeed, the people who depend upon debuggers lack the clarity of thought to even understand that another way of doing things might exist.


    I think this is a bit of a generalisation; you are artificially separating programmers into two groups, when really some slippage does exist.

    I use debuggers occasionally, usually when I think I know what the code I wrote does, and something else is happening. This lets me correct the semantics to match my envisioned solution more closely. As far as I am concerned, this is the right way to use them; the "oh, it barfs on 0, add a check for that" school of programming just leads to grief and heartache. :)
  • by flux ( 5274 )
    I've been hearing this stuff a bit lately.

    But what I'm wondering is: who is forcing you to use the GNU utilities? And moreover, what -is- the real problem with them? Pick two Unix utilities, a GNU one and a 'traditional' one, and most probably the GNU one is the one that works and has the most features.

    One argument I've been hearing is that GNU software is 'bloated'. Well, that might be true. I don't care, though. The tools still aren't that large, and they are in fact quite efficient at what they do.

    And auto* is a bit horrible, but I don't see a better alternative lurking around.

    But that stuff about 'info' is right on :-). Or rather, the fact that the GNU people are abandoning manual pages, or just converting the info pages into manual pages, creating really useless manual pages. Fortunately the Debian people are adding proper man pages back to the packages :-).
  • Your definition of proprietary is a little weird.

    A) When was the last time either BFS, HPFS, or NTFS changed?

    B) Ext2 can change too, and the change will break software.

    Under your definition, everything is proprietary to those outside its community.
  • So say that he's naked, and then move on without the useless fools that don't see it already. Or sit here wasting your time like another type of useless fool, while others move on without you.
  • by The Man ( 684 ) on Sunday October 22, 2000 @09:11PM (#686110) Homepage
    Hmmm...a rational debate. Ok, I think I can manage just a bit of that. Simply put: your arguments are correct, and have maximum negative spin. Many of the problems you address (except one, see below) do in fact exist and are very likely slowing the development and release of 2.4 as well as contributing to needless acrimony. I'm not even going to contest that, as a longtime lkml lurker and sometimes kernel port developer. You take round one.

    In round two, everyone realises that these problems have been going on for some time now and aren't new. Like, almost the whole 9 years. The only way in which they're any worse is that there are more developers fighting for their ideas, more ideas (more of which are bad, simply by proportion), and more lines of (largely useless) code. The process, bizarre and apparently disorganised though it is, has worked thus far, and there is every reason that it will continue to do so. Cooler heads concede your argument, but recognise that it's mostly spin, and take round two.

    In round three, we spar over posix compliance. It's no contest; without posix we're all sunk. Posix is an uneasy compromise attempt to reunify unix after 20 years of fragmentation and dirty infighting. It's the only thing that offers you any real assurance of compatibility, the only real standard out there. If we don't follow posix, then porting to and from Linux becomes tens of times more work, and thus requires tens of times more benefit (or sales, in the commercial world) to make it happen. That's a lose; it restricts us to linux-only unportable, unmaintainable garbage software like util-linux -- any porter's or distribution maintainer's nemesis -- instead of clean, fairly uniform software like the GNU set and for that matter 90% of everything that's out there. You can't bitch at microsoft or sun for ignoring or subverting standards and then do it yourself. Posix is the standard, and we're team players. Being big doesn't mean you should ignore the standards. Posix takes round 3 and comes close to a tko.

    Finally, just a light jab against a bit of unnecessary propaganda: I'll get back to switching over to FreeBSD

    Me, too...oops, no, it only runs on i386, an inferior architecture with no future, and for that matter, no present. Call me when you can support my SMP Sun Ultra 2 as well as Linux does. In the meantime I expect we'll release 2.4, steal your admittedly much superior VM, drop the completely useless and divisive NTFS altogether, and in short make everything right with the world. A bit of positive spin concedes round 4 and proposes a draw.

  • Yeah, but SuSE and the others don't set the standard. Once companies start setting their own standards and making their own kernel versions, then I must say the future of Linux is in question, since interoperability issues will come up. I think one place and one place only should control kernel versions, kernel releases and kernel features, not several companies out for marketing ploys. If Linus says no, then let it be; if you want it, patch the darn thing.
  • Well, I don't know about BFS, but HFS (not HPFS, which doesn't contain streams) changed into HFS+, which still isn't documented publicly as far as I know, and NTFS changed quite a lot with Win2K; at least enough for any mounting of a Win2K NTFS on a non-Win2K system to corrupt it... And if I know Microsoft correctly, the next version of NTFS will contain a similar change.

    What I mean, however, is that as long as the filesystem is non-open, we can't be sure that we've managed to implement the filesystem 100% correctly. Of course, if Be says that the existing documentation of BeFS is complete then sure, we can probably trust them. I wouldn't bet on this for NTFS and HFS+, though...

  • I read your post on Slashdot and I followed much of the original discussions via KT (only kernel developers need to read the 1000+ messages a week on LKML).

    I can't comment about clean code because I don't speak Assembly or C but I can make a guess about Enterprise scalability.

    What are the alternatives to having the official kernel scale up to 64 CPUs that would please the people who need 64 CPUs?

    You thought only of a fork in the kernel. However, the other way is to have entirely different kernels running on those systems.

    According to IBM marketing, AIX 5L will do just that: essentially a system that will compile software written for Linux consistently, and will also run Linux binaries built for the same CPU.

    In other words IBM will be able to sell you a Linux system and seamlessly switch to an AIX system when your needs outgrow Linux.

    Thus leaving the main Linux development free to concentrate on the vastly more important (I'll define important later) small server, desktop and embedded areas.

    I define important in terms of the number of users. Most people work in small organizations, the kind that a single Linux system running the 2.0 kernel can adequately serve. Anyone can use an embedded device, and most people, both in small and large enterprises and at home or school, use desktops of some kind. Well, I mean could use, since much of the world doesn't even have electricity.

    By contrast only those with truly vast jobs to accomplish can use enterprise systems. When the Hardware costs $2,000,000 and the application software costs more it doesn't matter whether or not you pay for the OS. That leaves access to the source code as the main advantage of a Linux Kernel on Enterprise systems over other larger OSs.

    But wait. If you are a huge enterprise and are pushing the limits of your enterprise OS you can do two things the rest of us can't. #1. You can tell the vendor when to fix bugs and #2. You can often obtain access to the source code.

    In other words, the Free (gratis and libre) nature of Linux is a huge advantage on small systems but irrelevant on large systems, except as a matter of principle. Despite RMS' efforts, the principle of Free Software for its own sake hasn't taken root yet, especially not among IT managers.

    So yes, I have no response to any of the other concerns, but if enterprise scaling adds even slightly to the problems on the low end, then the choice has to be against it. After all, there are easy ways out. IBM seems to be the first enterprise vendor to have figured out that escape route.

    If FreeBSD, OpenBSD or even Hurd can be coerced into running on enterprise systems and made to seamlessly support Linux software you will have all you ask for in spades.
  • Yeeepeeee!!!
    At laaaaaasssst !
    I was soooo fed up with hacking the latest kernel to adapt a reiserfs patch into it each time !

    URESHIIIIIIIIIIIIIII !!!
  • You forgot:

    "I am a type central start place, process enamels, read her, send her to the correct people."

    --Linus Torvalds

  • by Gendou ( 234091 ) on Saturday October 21, 2000 @03:54PM (#686119) Homepage
    The reason most people will stick with 2.2 is its extreme level of maturity. I am determined not to put a 2.4 kernel on any server boxes I run (www.thesilicondragon.com). The 2.2 series has already been installed, tempered, and well adapted for a variety of applications. In many cases, this makes 2.4 irrelevant. However, once we see a 2.4.(n > 10), 2.2 will likewise become irrelevant. Take a look at the Debian project. Their thinking is to use what is tried and true to the utmost. Only with Potato are they now out of the 2.0 series. It's good thinking, and Linus shouldn't sound so melancholy when he says that 2.4 won't be widely adopted in the immediate future. :-)
  • Funny that this showed up on Slashdot before the LKML, especially considering the huge amount of arguing that has gone on about ReiserFS on that list.

    ________________________________________
  • by 1010011010 ( 53039 ) on Saturday October 21, 2000 @05:09PM (#686121) Homepage
    I'd like to know why people aren't interested in 2.4. Is it that it's been delayed so long it's like vaporware?

    I'd say that the reason people aren't interested in the 2.4 kernel is that they have lost faith in the development process.

    Over the last two years, people have repeatedly posted on the LKML in one way or another that the emperor has no clothes. They've been nice, they've been rude, they've even posted good ideas and patches to provide some clothes. But, universally, the response from the LKML acolytes has been a variant of "the emperor isn't naked; he is in fact wearing a 3-piece suit, and if you don't like it, you can get your own emperor, you idiot."

    It's very sad. Criticism is what keeps any public enterprise honest and productive, and the denizens of the LKML don't have any tolerance for it.

    The linux development process has little direction, no planning, little to no leadership, meaningless feature freezes, and little to no documentation and guidelines. The kernel itself *is* spaghetti code inside, no matter what people say. They try to maintain control over what people use by not exporting some functions from the kernel .o files, but that's a bandaid, and a way to control who gets to work with the kernel more than what can be done with it. That the kernel is spaghetti code is one of the major reasons that 2.4 is so late, and so buggy. Just try to do some kernel programming, and you'll see, if you don't believe me. Take a look at the big, ugly union in the VFS. Figure out all the places that bdflush gets invoked, and the number of different ways to have a pinned buffer flushed by other parts of the kernel anyway. Look at the brokenness in the spinlocks and semaphores. Look at all the VM rewrites and the warring but both broken USB stacks. Check out the tendency of the VM OOM "feature" to kill random programs like X and kswapd. And don't forget all the race conditions.
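
    To make the "big, ugly union" complaint concrete, here is a rough sketch of the construct being criticised (the names are illustrative, not copied from the real headers): every filesystem's private inode data sits in one union inside the generic inode, so adding or changing a filesystem means touching a core header, and every inode pays for the largest member.

    /* Illustrative only -- not the actual struct inode. */
    struct generic_inode {
        unsigned long i_ino;
        unsigned long i_size;
        /* ... generic fields ... */
        union {
            struct { unsigned long block_group; }    ext2_info;
            struct { unsigned short start_cluster; } fat_info;
            struct { void *fs_private; }             other_info;
        } u;   /* one filesystem's private data per inode, all in this slot */
    };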

    It is very difficult to alter some part of Linux because of all the unintended consequences. It's difficult to get needed features and clean-ups into the kernel because of cronyism and a narrow-minded religious devotion to Posix. Go back and read up on the NTFS-streams thread [geocrawler.com] for a good example of that (Alan and Viro actually invited everyone talking about streams to an off-list discussion, and then notified them that they had been added to their killfiles).

    Clean code? Just look at the 2.4.0-test-pre-pooch-screw series of debacles, where the VM is rewritten every few weeks, and new features are tossed in while there are still massive bugs to fix in the code that's already there, and in spite of repeated "feature freezes" [geocrawler.com]. That would all be fine in a 2.3.x series kernel, but judging from the version number, "2.4.0-test" is supposed to be pretty stable except for bug fixes -- not have major features added and subsystems rewritten.

    Linux has terminal featureitis. No one wants to work on the hard things; they just want to add features. Quickly.

    And Linus, to make things worse, claims that a kernel debugger is counter-productive; that debugging with printk puts hair on your chest [geocrawler.com]. Never mind that you can't debug race conditions well, if at all, by adding printk statements everywhere, because they change the timing of the code when it runs. Never mind that essentially every other 'modern' OS includes a kernel debugger, and that many of those OSes are better designed, better implemented, and perform better and run more reliably than Linux (FreeBSD, HPUX, Solaris, and even NT come to mind).

    Linus must be right. In fact, he's declared himself to be infallible -- he will not allow a kernel debugger to be added, and has publicly stated that he thinks people who use debuggers are dummies and that he won't work with them [geocrawler.com]. But never mind that; he's the leader of the movementarians, [yahoo.com] Linux is our official OS, and we'll just get back to work on his lima bean farm and wait for him to wave out the window of his car at us, or splash us with mud as he drives by. And that would actually be fine, if he was actually a leader; that is, if he made decisions and stuck with them. But he doesn't do that. Refer to his "I'm a wimp" [lwn.net] email. He'll occasionally toss in a new filesystem ("accidentally" [insecure.org]); Alan Cox recently suggested merely covering them up with his skip-a-number, backport-and-turn-yourself-around hokey-pokey versioning scheme [lwn.net]. The real solution would be the one that software developers everywhere have always used, which is to:
    • set realistic goals for a release
    • defer any further feature creep until the next release
    • concentrate on fixing bugs in the pre-release cycle
    • aim for modularity, stable interfaces and good documentation to make upgrades and new feature addition easier and the learning curve less steep
    • provide robust methods for troubleshooting the system to make development and debugging easier.
    Linux does none of these things. By design. So continue to kvetch about idiots like me (and IBM, and SGI, and HP, and Reiser, and about 1000 other people and companies) pointing out that Linux is fundamentally screwed.

    The most common response to criticism is a variant of "love it or leave it." [tux.org] Keep suggesting that we go write our own damn OS if we don't like it; your love-it-or-leave-it response will be accepted one day, and we will leave Linux. I actually think it would be a good idea for the major external Linux players to fork the kernel, clean it up, and maintain their own version. I don't doubt that it would shortly become the de facto standard kernel, because it would be cleaner, more stable, more scalable, more extensible, and would probably be released on time and respect feature freezes. SGI, IBM, Reiser and a lot of other people and companies have a lot of good code and ideas to contribute, not to mention full-time developers, years of experience making scalable and robust systems, and a willingness to release all that work under the GPL. And if they fork the kernel, they can do it without having to be named "Ted", "Ingo", "Alan", "Linus" or "Rik".

    One day the question will be, are *you* relevant? Why should we accept *your* code? Is it clean? Is it modular? Is it safe (see the LWN article about C code with undefined behavior being included in the kernel)? Of course, a fork can always be re-merged with the holy penguin pee [geocrawler.com] version. In the meantime, all the people who want to run Linux on enterprise systems rather than PDAs and webpads can have a stable, working kernel with adequate features.

    It would be useful if people would make substantive replies to this message, rather than engage in the usual comments about rioting, sending spam reports, saying "love it or leave it," moderating it as a troll, port-scanning my mail server, attempted hacks and other juvenile/illegal acts, threatening violence, etc. Of course, substantive debate is really hard to come by on either the LKML or Slashdot, so I don't expect it. So, go ahead, get started telling me to sod off. I'll get back to switching over to FreeBSD, although I would prefer it if someone would take up a rational refutation of this message instead. Show me the Emperor's Clothes.

    ________________________________________
  • Guys, BACK OFF. Have you even tried to get the translation? I did - no luck. Tried for twenty minutes, and I kept getting timeouts. I betcha I'm not the only one, the servers must be loaded. I, for one, am grateful for this post. I don't have to keep trying for another hour, nor do I have to wait 'til 4 in the morning for when the load dies down.

    This is +1 Useful, not -1 redundant. There are going to be a LOT of people who won't be able to get this translation.

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • by Zoyd ( 13778 ) on Saturday October 21, 2000 @05:18PM (#686124)
    A flurry of new .sigs is going to result from the publication of this interview.

    "I do not go believe comes out therefrom that I will concentrate on always more special zones."
    --Linus Torvalds

    "Until perhaps sometime someone, am better that than I so that I withdraw."
    --Linus Torvalds

    "Organizations are obvious PR-work, and there are enough people who could that just as well."
    --Linus Torvalds

    "A classic example is that. A quantity of thing seem incompatible together to be."
    --Linus Torvalds

    "At the same time fewer the technology is interesting, but rather the conversion in uses."
    --Linus Torvalds
  • c't really needs to start an English language version of its most excellent magazine.

    A good start might be to stop linking from /. to any c't articles that haven't been translated into English. Then they might miss the traffic.

  • There is a substantial difference between being forced not to use a debugger and not needing one because the code you produce doesn't have very many bugs.

    The sad truth is that there are differences in programming skill levels - just like there are differences between levels of chess players. No one has ever been able to explain what makes a good chess player, and what makes a bad one, except to note that poor players seem to make weaker moves than strong ones. You can't point to a really weak player and say "You're moving your knight wrong". The problem is not that simple. How do you let a low level club chess player see the board like a grand master?

    The problem boils down to this: craftsmanship can be taught; art can't be. The reason that the most advanced technical achievements are called "state of the art" is that there are a huge number of ways of solving a given problem - it is a matter of artistic choice how the problem gets solved. Some choices work better than others.

    To an artist these truths are easily seen; a person who hasn't reached the level of artist yet can't see them. I don't know how to lift someone from the level of craftsman to artist - neither does anyone else. That is something each person must do for themselves. One of the quick rule-of-thumb tests to see if someone has reached the level of artist as a programmer is: "Does he need a debugger?". That is not a fair test - the skill sets do not match exactly - but it works better than nothing.

    You are correct that raw novices do behave in the fashion you state. Please remember that I said there was a contingent of very bad programmers who don't need a debugger; you have correctly identified them.

    Instead of treating your experience as an ordeal you could have treated it as an opportunity for growth - forcing yourself to increase in skill level to the point that correct code flowed effortlessly.

    I've gone through all of the stages, from not needing a debugger, to becoming skilled in their use and thinking them a great tool, to not needing one again. I don't claim to be a great artist - more accurately I'm like those people in the 'starving artists' group who turn out art to eke out a living; I can see that there are people who are a lot better than I am, but that most people are not as good.

    I apologize if I have offended people - but I am not going to apologize for having reached the level of poor artist, I may never get any better, but at least I got that far.

    In chess they have competitions to sort out the skill levels of players - we really don't have any such thing in programming, and it is easy to talk a great fight. It is possible that I am just full of it; I will admit that possibility. Will the rest of you admit that you could be the ones who are full of it? One of the hallmarks of having reached the level of artist is that you have to be honest with yourself - the self-deluded are self-deluded.

  • Well, if you're running NFS, you'd better pay attention to kernel traffic and the kernel-NFS mailing list.

    There are some significant fixes for NFS stuff (client and server) coming out in 2.2.18.

  • At this point in the 2.0 to 2.2 development process, it seemed like a majority of Slashdot readers were running a 2.1 kernel instead of a 2.0 version and were extremely happy with it

    I don't know; anecdotal evidence doesn't prove much. I remember that none of the system administrators I know, including myself, ever touched a 2.1 kernel; we only moved to 2.2 once it was officially released. This included home boxes for non-critical use.

    So you have your anecdote, I have mine.

    ------------------
    Having spent the time trying to clean up the ctrans version (of course, knowing almost no German doesn't help much), I'd say it's a lack of manpower. If you want to do the translations in your spare time and for free (and they're good quality), I figure they'd be happy to do an English version.

    Also: Dutch is pretty close to German. If it weren't for the fact that some Dutch have a residual resentment over the German invasion in World War II, they could probably recognize quite a bit in common with each other. In any case, translating between the two languages is apparently much easier than translating between English and German.

    Something else to consider is that the English magazine market is already near saturation. There is likely to be less competition in the Dutch market.
    `ø,,ø`ø,,ø!

  • Seriously though, if you *do* know better, why don't you get together with the people you mentioned and fork your own version of the kernel? Nobody stops you from doing so.

    I am not a kernel developer, I don't read the kernel mailing list, I make up my mind by reading lwn.net's summaries. Yes, it seems that there's a lot of ugly ego-clashes going on there. No, I don't agree with the decisions made by Linus all the time, from what I read on lwn.net about them.

    Still, to me you sound a lot like a spoiled kid who complains a lot and wonders why he isn't taken seriously.

    ------------------
  • What I find hilarious is that the translation still only really makes sense if you already speak some German. Not much, mind you; I had two semesters of college German 8 years ago, but it is necessary.

    Two funny things: the first is verb order. German puts verbs at the end of phrases and sentences, e.g. "he thinking about other things walked" or "after i working had finished tired i was". Given that this is an incredibly standard sentence pattern in German, you'd think an automatic translation engine that knew something about grammar and parts of speech could move stuff around. (English is one of the most word-order-sensitive languages in the world, due mostly to our lack of other correlational clues such as gender and mood and our relatively weak conjugation of verbs. English speakers *need* good sentence order to understand things in a way that Spanish or Arabic speakers are more flexible about.)

    Second thing: simple words go missing. 'Jetzt' means now. I knew this after a few weeks of German education; this should not be hard to translate. 'Guten taste' is actually 'good taste'. My point here is that the translation engine seems to barf on simple words for no apparent reason. Wired magazine had an excellent article a while ago on machine translation (at http://www.wired.com/wired/archive/8.05/translation.html) that shows how challenging this stuff is.

    I'm not really complaining, because I find the translation a wonderful bridge between two semesters of German and being able to read the whole article. I can actually *read* and *understand* the article in the translation. I don't believe that someone with no German could do that very well yet. But this stuff is getting there.
  • Well, I managed to get the first half of it translated by systrans. I put it on my ISP web page. [sympatico.ca]

    When I get the second half translated, I'll put it up here [sympatico.ca] (there's a [currently broken] link at the end of the first page)
    `ø,,ø`ø,,ø!

  • by Dr. Merkwürdigliebe ( 90125 ) on Saturday October 21, 2000 @02:06PM (#686152)
    c't really needs to start an English language version of its most excellent magazine. There's already a Dutch version, so why not other languages?

    I thought about submitting this, but decided against it because it's a long article in German. There's still a chance an English version is going to show up, maybe if people would mail them? Certain popular articles usually get translated. So start sending in the requests.
  • by IvyMike ( 178408 ) on Saturday October 21, 2000 @02:08PM (#686153)

    To quote the babelfish FAQ: Translation requires significant resources on our servers. To serve as many users as possible, we translate a maximum of 5k of text. If the page exceeds this limit, you see "Translation ends here" in the text.

    Too bad the other site seems frozen -- apparently, it's slashdotted ALREADY.

  • by Anonymous Coward

    I now see why there are no responses criticizing your position. The way your post is written shows that in your mind, there can be no argument. Everything is irreversibly screwed up, anyone who makes major contributions to the kernel magically becomes a drooling, power-mad moron, and problems that are fixed aren't. The VM stuff in particular seems to have been fixed, if the recent KTs are any indication. The USB stuff works fine on this machine.

    As for the debugger, I must say that I agree with you there. It might be good to have one for some things. But your amazing leap of logic, that having a kernel debugger is linked to being better designed, better implemented, and having better performance... Well, I must say that I don't follow. If Solaris is so much better than Linux, then why does Linux perform better on the same hardware?

    Oh, and I've posted this as an AC because I don't particularly care to see whatever flames you come up with for a reply. IMHO, this entire post is basically a flame against the entire Linux kernel, and instead of doing something productive, you're karma whoring on Slashdot.

  • > Many of these basic debates have stopped
    > entirely, e.g. the argument of vi vs. emacs.

    That's hardly surprising, since Emacs is obviously technically and morally superior to vi at every level. No need for an argument there.

  • by psin psycle ( 118560 ) <{moc.oohay} {ta} {elcyspnisp}> on Saturday October 21, 2000 @02:15PM (#686157) Homepage
    My favorite part of the translation: I program and read a quantity of enamel
  • What happened? The 2.4 kernel slipped enough for RFS to catch up. (This isn't meant as a troll, just my reading of the facts.)

    The kernel did slip, because of some awesome improvements in the VM code that were "too good to pass up." Others may argue that this was inappropriate -- personally I find it encouraging that Linus and the developers are flexible enough to go ahead and do something they feel is important, even if it offends the dogma of "how things are to be done" or even just the impatience of the rest of us.

    I think little has changed. Reiserfs will NOT go into 2.4.0 because the developers don't want to cloud the issue while working to get a new release out the door. It will probably go into 2.4.1 if 2.4.0 is stable, which may be due to the "slippage" you refer to. OTOH if the kernel hadn't "slipped" it probably would have gone into 2.4.8 or something.

    In short, ReiserFS would have worked its way into the kernel regardless. At most the minor version number changed due to the schedule slippage, nothing more.
  • by tao ( 10867 ) on Sunday October 22, 2000 @05:57AM (#686162) Homepage

    It is VERY unlikely that people are uninterested in the v2.4test kernels because they've lost their faith in the kernel development process. Why? Simple. Because most people that don't hang around on LKML or Slashdot don't even know about how the development process works. For that matter, most people hanging on /. probably don't.

    And, yes, people have often criticized the development process of Linux. But face it, the development process works, and is rather unique. People who criticize it are generally those who are new to the game; one person who yelled and criticized a LOT initially was the CEO of Timpanogas, Jeff V. Merkey. He wanted the entire VFS rewritten to look more like the VFS of Windows NT, simply because he was used to programming against that VFS and because he considered it superior. This was before he came to know the inner guts of the Linux VFS, before Alexander Viro explained how things were meant to be done. Nowadays he comes up with a lot of constructive ideas, and helps people with legal advice (he's a licensed lawyer, not just a good programmer...)

    And believe it or not, constructive criticism that comes with realistic suggestions for improvements or patches to fix up the problems does get attention. You mention the code with undefined C behaviour in your post. This got fixed in the very next pre-patch. How's that for response?

    Not all criticism gets heeded right away, and not all ideas make it into the kernel right off. For instance, DevFS had to wait quite some time. Not because it was a bad idea, but because the VFS issues weren't as simple as Richard Gooch initially expected them to be when he designed DevFS. With a LOT of work by Alexander Viro, it's now merged. This merge was one of the major points of delay in the development process. But was it worth it? Definitely.

    The reason ReiserFS isn't merged yet is simple: ReiserFS wasn't working properly until early in the v2.4.0test-series. Hans Reiser did not admit this, but Chris Mason, the major programmer in the ReiserFS project, did. Linus thus decided that it was better not to hold back the testing of the kernel to add a new filesystem that required rewriting of other parts of the kernel.

    Yes, quite a lot of the kernel has been spaghetti code (lots of it still is), and the SMP support has been below par. THIS is the reason the v2.4 development has taken so long. Not because of "meaningless" feature freezes and lack of documentation (even though the latter IS a major problem; feel free to do your part!), but because the v2.3 series has been spent rewriting the VFS and the network layer to be clean, neat and scalable under SMP.

    I do quite a lot of kernel programming. It's nowhere near as complicated as you make it out to be. But face it, writing a driver or a filesystem for a multitasking, multiprocessor operating system isn't exactly a piece of cake. If it were, the world might be a better place. But the VFS in Linux is not very hard to write a filesystem for, and making a device driver, especially for PCI or MCA equipment, is almost trivial.

    The VM is broken. Yes, the VM is broken. This is a problem. And it's not one that will be fully taken care of in the v2.4 kernel, because we don't want to open full development again. The VM will be completely rewritten for v2.5, adding memory-pressure support for journaling filesystems. But you ARE making it look far worse than it is. The OOM killer (at least as of test10pre3) does NOT kill kswapd or X11, unless those really ARE the villains. And if they are, they deserve to die. Because if kswapd, a kernel thread, is buggy, your system will die anyway. But it isn't, and it won't get killed. There is still a lot of tuning to do, but that is exactly what belongs in the test process of a kernel, don't you agree?

    Narrow-minded religious devotion to Posix? This isn't about narrow-mindedness, it's about sanity and interoperability. It's about not making the same mistakes Microsoft keeps making over and over again. NTFS streams ARE a complete mess. Try to map them sanely into the Unix world and you'll see. Try to use tar to back up an NTFS volume and see how much you'll preserve...

    And v2.4.0test doesn't mean that nothing will be added. It just means that it's getting ready for the mainstream to start testing it, because it's nearing feature-completeness and most of the problems are known and documented on Ted's TODO list.

    Oh, and about kernel debuggers. Yes, Linus is violently opposed to those. But does that prevent YOU, or anyone else for that matter, from using one? Several different kernel debuggers exist for everyone's pleasure. Use one of those. What Linus is stubbornly opposed to is having one in the stock kernel, because he believes that debuggers are bad. Not because they're unhelpful, not because they are less cool than printk debugging, but simply because he knows that a lot of people will skip reading larger parts of the source to grasp the complexity, and will instead just find out what makes this particular part croak and add an if (blaha == NULL) clause. He's probably right; I've seen far too many assignments in various CS classes where the bugs were simply patched over with if clauses, by people who had traced the code to see where it fails. This is not unique to people using debuggers; people using printf debugging make the same mistakes, just not quite as often.
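
    A small user-space sketch of the distinction being made (hypothetical names; printf/abort merely stand in for printk and an oops): the first version just papers over the symptom a debugger showed, the second makes the broken assumption loud so the real caller bug gets found.

    #include <stdio.h>
    #include <stdlib.h>

    struct buffer { char *data; };

    /* The "patch it over" fix: the crash goes away, but nobody ever
     * learns why the buffer arrived without data. */
    static void write_record_papered(struct buffer *b)
    {
        if (b->data == NULL)    /* added because the debugger showed NULL here */
            return;
        printf("%s\n", b->data);
    }

    /* The printk-style alternative: complain loudly and fail, so the
     * caller that forgot to fill in the buffer gets found and fixed. */
    static void write_record_noisy(struct buffer *b)
    {
        if (b->data == NULL) {
            fprintf(stderr, "write_record: buffer %p has no data -- caller bug?\n",
                    (void *) b);
            abort();
        }
        printf("%s\n", b->data);
    }

    int main(void)
    {
        struct buffer ok = { "hello" };
        write_record_papered(&ok);
        write_record_noisy(&ok);
        return 0;
    }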

    The claim that Linus won't work with people who use debuggers is bullshit. There are a LOT of developers on the kernel list who do, and he cooperates with them just fine.

    Yes, a lot of people complain that things aren't as easy as they'd wish them to be. Among them IBM, HP, SGI, Hans Reiser etc. Yet those very same people contribute stuff nonetheless. IBM ported Linux to the S/390, HP has been involved with the HP PA-RISC port (and the IA-64 port?!) if I'm not all wrong, SGI is in the process of porting XFS to Linux and has done a lot of helpful work on NFSv3 etc., and ReiserFS will probably go into v2.4.1 or so.

    You know, you ARE really trolling. Not with the substance of this post, but with the last few paragraphs. They are nothing but flamebait. Pray tell me, if you don't want the kernel developers to tell you to fork your own kernel, and you don't want to submit changes to the kernel to clean things up because you don't like the development process, why DO you worry? Linux will go to hell anyway, since every other major OS is better designed, better implemented, performs better and runs more reliably than Linux.

  • Cry, cry, cry.

    Bitch, bitch, bitch.

    I was happy to see someone post this; hell, they're effectively mirroring a translation here. If I had moderator points right now, I'd mod it up too.
  • Although you may moderate me down for mentioning this, has anyone noticed that the people replying to this comment are posting as ACs? Afraid to associate themselves with this "radical" viewpoint of Linux development, perhaps. This is a good comment; although I don't agree with everything in it, it really should be moderated up.

    Sometimes you by Force overwhelmed are.
  • by loik ( 95237 ) on Saturday October 21, 2000 @06:30PM (#686169)

    "What users do is never wrong"

    Linus Torvalds about the history and future of Linux

    The star guest of the LinuxWorld in the beginning of October was Linus Torvalds. The success of Linux has made the free OS and its "inventor" well-known to a far bigger public than just the IT world.

    In 1991, Torvalds had started to program his own Unix-like OS out of discontent with the PC operating systems that existed at the time. Originally Linux was only intended for the computer of the then 21-year-old; but after the publication of version 0.01 on the internet, Linux started to gain users and a growing horde of developers very fast. Today the open source system runs on all common hardware architectures; it has attained a strong position above all on internet servers.

    c't: Linux has already achieved what you wanted some years ago. Why did you continue at that point?

    Torvalds: The aims have changed. In the beginning my main objective was to do something interesting and fun. I thought I would be the only user and didn't make any concrete plans concerning features. I knew what I expected from a Unix; but e.g. I wasn't interested in graphics because I only wanted to edit and compile source code. But after I had published Linux on the internet, other users asked for features I had never thought of. Instead of a Unix for my desktop Linux suddenly turned into a project for the very best OS. Because of the wishes of others and - later - their patches and help it became much more interesting. Today the bulk of the work is done by others.

    Now I aim for a good OS design that is also useful for others. What I do hasn't changed that much: I program, and I read a lot of email. Concerning my original plans, Linux has long since been complete; but the many new domains of use motivate me to go on. If I hadn't published Linux on the internet and if there weren't those other users, I probably would have ceased working on Linux in 1992.

    c't: How long do you plan to go on with Linux? Do you see a point somewhere in the future where you will say: "I've had enough of it now"?

    Torvalds: I don't think that there will be a certain point. I have always handed over some things from time to time, and that started very early. In the very beginning, for example, I was concerned with all the applications myself: in addition to the work on the kernel, I had to port the shell, the compiler and the libraries. But very soon other people started to take over so I could concentrate on the kernel.

    Nowadays I still work on the kernel, but only on central features like the memory and process management and the basic design.

    I have also almost stopped speaking at events like this LinuxWorld. I took part in the panel discussion here, but I didn't give a keynote, because these things put me under incredible stress.

    I think that I will concentrate more and more on special areas; but I don't think that I will cease to work on Linux completely - at least not until someone comes along who is better than me, so that I can retire from it.

    c't: If at some point you don't feel like continuing with Linux - how would the developers organise themselves?

    Torvalds: I don't think that this will happen soon; but there would be a lot of people who could take over my position. Today I hardly code myself; I "show good taste" instead - I make decisions concerning the architecture. I am a sort of central coordinator, deal with email, read it, send it to the right people.

    Public events are obviously PR work, and there are enough people who could do that as well as I. At the moment my most important function is that of an identification figure for Linux, purely for psychological reasons. Nowadays people think of companies like SuSE, IBM or Redhat when they think of Linux; but for a long time it used to be this radical movement led by Linus Torvalds.

    If the plane from Frankfurt to San Francisco crashed now, everyone - okay, probably not everyone, but a lot of people could take over my job. It certainly wouldn't be a single person: I do what I do and [XFree86 developer] Dirk Hohndel does his job. My technical work on the kernel for example could be shared among some people.

    If you take a look at how Linux is actually developed: I don't touch the 2.2 kernel, for example; that's all done by Alan Cox. Soon we'll have finished the user kernel 2.4, and then Ted Ts'o [ext2 developer] will take care of that. I will be able to concentrate on the developer kernels, because that's what I'm most interested in and the developers are happy with how I do it. But it could just as well be done by other people. It would certainly cause a lot of agitation in the media if I dropped into the ocean, but I am not that important any more for the development of Linux.

    c't: Can you tell us a bit more about the organisation of Linux development?

    Torvalds: Let's look at a simple example. Someone has an idea. First he will discuss it with people he knows and on the kernel mailing list: I need this feature for these reasons, is someone working on it? If not, he will code it. Then he uses it himself, talks to people in his vicinity, and posts it on the kernel mailing list if he wants it to get into the standard kernel. He knows how it works, and doesn't send mail to me from the beginning. If it's perfect code, maybe the mailing list says: "Yes, we want to have that." But that doesn't happen in practice. The reaction is more like this: "We understand what you want, but like that it's more or less bullshit. I would like to do something similar, but that doesn't work with your code." And then the interfaces are changed so that both things go together, and other modifications are made that get other people interested. That can take a long time. Major changes can live in the kernel list for a few years while they are discussed, and a lot of people may even use them. I take notice of such patches and discussions on the list; and at some point I decide that the code is so useful it should become part of the standard kernel. If important questions are discussed, I join in and say perhaps: "I can see your point, but from the point of view of kernel architecture that's the wrong way to go". At some point the patch becomes part of the standard kernel, or it continues to be an external patch for special purposes.

    c't: What is your position on a possible fork in the kernel? The Linux boss at IBM e.g. said some time ago that a kernel can't fulfill all requirements from the embedded device to the mission-critical server.

    Torvalds: Forking happens all the time. The fact that my kernel is considered the official one doesn't mean that there aren't many "unofficial" ones with their own features. For instance, most distributions have their own kernel versions. For SuSE, ISDN is very important because it plays a big role in Germany; for the rest of the world it's not important. Different distributions are targeted at different classes of users; SGI, for example, is mainly interested in the SGI market: computers with hundreds of CPUs. Therefore the SGI kernel will include features for the large machines.

    I am trying to maintain a common standard kernel, but that's not a kernel for everyone. Of course a supercomputer and an embedded device require different things, and there will never be just one kernel. I am trying to keep the differences as small as possible, and to incorporate new things in a way that doesn't hinder the extreme cases.

    c't: During the work on the 2.3 developer kernel there was a lot of discussion on memory management...

    Torvalds: ... they still go on.

    c't: To address big amounts of memory in servers you need a kind of memory management that's not as efficient in small systems with little RAM.

    Torvalds: That's a classic example. A lot of things seem to be incompatible with each other. On one hand I need support for small devices, and on the other there are big systems with 16 nodes, each with its own memory and a total of hundreds of GB of RAM. The solutions look totally different, of course. Usually the first answer consists of two code branches, simply because that generates the least work - the code doesn't have to take that many possibilities into account. But maintaining the code becomes more difficult, because you have to have interfaces to both branches.

    But in the end it comes down to a virtualisation of memory management. That was one of the things that we worked on during the 2.3 development: virtualising the concept of a "memory node". Thus, a small device is the same as a big machine, with the sole difference that it only has one memory node, whereas the big computer has several of these nodes. So the small device becomes a special case of the big machine.

    Via a configuration option, different kernels can then be compiled from the same code. In the source there is a loop over the nodes; but if there's only one node, the loop goes from 0 to 0, is optimised away during compilation and no longer appears in the binary. That makes maintaining the source much simpler, and that's the kind of question I occupy myself with.
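
    As a minimal illustration of that point (plain user-space C with invented names such as NR_MEM_NODES and setup_node, not the real kernel code): the node count is fixed by a configuration option at compile time, so a single-node build turns the loop into one straight-line call and the loop overhead disappears from the binary.

    #include <stdio.h>

    #ifdef CONFIG_BIG_NUMA_BOX
    #define NR_MEM_NODES 16         /* big machine: several memory nodes */
    #else
    #define NR_MEM_NODES 1          /* small device: the "special case" */
    #endif

    struct mem_node {
        unsigned long start_page;
        unsigned long nr_pages;
    };

    static struct mem_node nodes[NR_MEM_NODES];

    static void setup_node(struct mem_node *node, int index)
    {
        node->start_page = (unsigned long) index * 65536UL;
        node->nr_pages   = 65536UL;
        printf("node %d: pages %lu..%lu\n", index, node->start_page,
               node->start_page + node->nr_pages - 1);
    }

    int main(void)
    {
        int i;

        /* With NR_MEM_NODES == 1 this is the "loop from 0 to 0":
         * the same source serves both configurations. */
        for (i = 0; i < NR_MEM_NODES; i++)
            setup_node(&nodes[i], i);
        return 0;
    }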

    Of course, it doesn't always work that way. Sometimes you simply have different devices that need different drivers. In design you have to decide what code is common and when you write different code for the different cases. That's what computer science is all about in the end.

    c't: The kernel sources have become very comprehensive...

    Torvalds: ... around 55 MByte of source code; I don't know the exact number, but there are about three million lines. The kernel is huge, and nobody could maintain it if it didn't consist mainly of totally independent drivers. But driver development isn't easy either, because you have to iron out all the quirks of the hardware.

    c't: Programmers know the problem when they change something somewhere and the program crashes somewhere else.

    Torvalds: Things like that also happen with the kernel.

    c't: How do you deal with such difficulties?

    Torvalds: There is only one solution: clean interfaces. Ideally, there should never be any surprising bugs or interactions that you would never have thought of. The interfaces have to be so clear that if you change something, you know exactly where you have to adapt the code. I don't claim that the interfaces in Linux are always that clean, but we aim for it. Many of the changes in the 2.4 kernel go in that direction. In most cases the main task was to sketch out new interfaces, not to actually write new code. But often the code doesn't fit what you have thought out; that's what makes changing the interfaces so tedious. It is very important all the same, even if users can't see any advantage in it at first - until they come across a machine that needs the new interface.

    c't: I don't want to start by asking when the 2.4 kernel will be out...

    Torvalds: ...I hope it will be this year...

    c't: ...but I am curious about the problems with the new kernel.

    Torvalds: There is a basic difficulty that isn't technical in nature: most people don't want to upgrade to a new kernel at all. They are satisfied with the 2.2 kernel and don't have any major problems with it - why should they try a developer kernel? Only a special group of users tests new kernels, and before a new stable kernel is released at least part of these users have to have tested it. The developers are focused on new versions; we need external users for the testing. That's not only a problem for Linux; some software producers even pay people to test beta versions.

    But there are also still a few technical difficulties. We know of some real bugs for which there are already solutions, but not all developers are convinced yet that they are good. In this regard a few questions remain open: often the developers want better solutions that guarantee that a certain problem can't arise again in the future.

    Beyond that, there are also communication problems. The people who find the bugs are normally not developers themselves, and they don't describe the problems the way a developer would. That simply takes a lot of time.

    c't: You have already talked about the kernel extensions of the Linux distributors. SuSE, for example, ships its 2.2 user kernels with the Logical Volume Manager and ReiserFS. The kernel developers have thoroughly discussed ReiserFS and come to the conclusion that it shouldn't be incorporated into the "official" kernel yet. What do you think about such unilateral moves? You surely had your reasons for deciding against ReiserFS.

    Torvalds: Mainly last year, new groups of users joined, and SuSE - without wanting to speak for SuSE - has worked a lot with big customers who are interested in LVM. Managing hundreds of disks requires such tools. And even if the system doesn't crash but is only rebooted occasionally, e2fsck runs of several hours are unacceptable, so ReiserFS is preferred. Such applications have only arisen recently, and it simply takes time to integrate things like that. LVM has been in the 2.3 developer kernel for half a year, but we were still working on it just last week. Before 2.4 I wanted to keep ReiserFS out of the kernel in any case, because I kept thinking we were right before the code freeze, so I didn't want to bring entirely new questions into the discussion. SuSE and others have tested ReiserFS in the meantime, so we will probably incorporate it in version 2.4.1.

    What users do is never wrong. I certainly can't dictate to Linux users what they should do. I have always seen it this way: whatever people want to do is OK. I can only make decisions about what the architecture that makes it possible should look like, or give hints on how you can reach the same goal with a different approach. ReiserFS will come, and I can't simply say no to it. For me it's only a question of timing and maybe of a few changes to integrate ReiserFS better into the kernel. XFS is a different matter. It hasn't got as far as ReiserFS, and I can't say whether it will be part of the standard kernel in a year. ext3fs, again, is a different matter. The code is already there, and there are users who already use it. ext3fs could well be integrated into the 2.4 kernel series or at an early point into the 2.5 kernel. What I care about is flexibility. Open source means that you can do anything with the code.

    That doesn't mean that I use ReiserFS or ext3fs myself. I am interested in something else. ReiserFS, XFS and ext3fs will obviously have a lot in common. What does that mean for the virtual file system [the kernel structure that constitutes the interface to the file systems]? Perhaps we will take the parts of the code that are common to several of the file systems - even if they do different things, in the end it's all the same - and try to build a common interface. It will probably take two or three years until the VFS itself can deal with journaling; but then the file systems won't have as much work with it any more. That's the sort of question I occupy myself with.

    c't: What are the most interesting technical developments around Linux at the moment?

    Torvalds: Most of them don't concern the kernel at all. Of course there are fascinating developments, e.g. scalability - that was extremely interesting technically. But the really fascinating things are done by other people. The whole business around DVD was interesting, even if it was perhaps also a bit disappointing. And then of course the desktop, and things that are actually uncommon for Unix. When I watch TV, for example, I do it with a Linux box that uses its hard disk as a VCR. Once you have used such a device you never want to touch a classic VCR again. I only use such devices for films that are not yet available on DVD.

    c't: And in the IT world generally? You work in a high tech company, after all.

    Torvalds: All these wireless things. I have a great cell phone, for example, plus a laptop and a Palm. When I am away from home I use my laptop to read email, so I want to use the cell phone as a modem. But that doesn't work; this kind of communication simply doesn't work yet. I think in five years all these devices will be able to communicate with one another. The technical aspect is less interesting than the applications.

    c't: When you look back at the long history of Linux development: Were there things that surprised you?

    Torvalds: Very few. Of course I would have been very surprised at the beginning if I had known where Linux would go. When I published version 0.01 on the internet I was prepared for comments. Perhaps there were a few more reactions than I would have thought, but I don't even think so. After a few months there were 50 instead of the 5 people I had expected, then a few hundred; that surprised me a little. But since I witnessed the development from 5 and 10 to 20 and 50 users, there wasn't a point at which I said to myself: "My God, what's happening here?" Then the commercial interest, the media coverage - most people think all that happened in the last two years, but in fact it developed slowly over the last nine years. Companies started to support Linux - sometimes it was surprising to see to what degree, e.g. IBM. Nobody thought that IBM would really go that far. But there wasn't a point there either where I would have been really surprised.

    c't: Has there been anything that made you angry?

    Torvalds: Not a lot. The most uncomfortable surprise was probably the Mindcraft study [a study financed by Microsoft in April 1999 in which Linux looked very bad compared to Windows NT]. I can remember how angry I was then. Not any more, because it turned out well in the end. What is most surprising is perhaps the generally positive reaction to Linux. The developer community was very friendly from the beginning, despite all those discussions about the Linux kernel that can sometimes seem violent.

    c't: There are a lot of ugly discussions...

    Torvalds: Yes, when it comes to discussions about their technical ideas people become very passionate and unfriendly.

    c't: Isn't that typical for the open source community? E.g. this strong aversion against M$...

    Torvalds: No, not only for that community. Mac users are very similar there. The internet makes it simple to just say something, and that easily generates flame wars. You don't know the people you are arguing with, so you easily overdo it. That's definitely not only true for Linux - if you look at all the "advocacy groups" out there... it's amusing. The arguments between Linux and FreeBSD users, for example, are again much more violent because these groups know each other well and know where it hurts. People just like to argue. It's a social competition, a way of showing that you are superior to others. Many of these basic debates have died out entirely, e.g. the argument of vi vs. emacs.

  • Actually, I think there is a possible reason why Linux users are less interested in beta testing a new kernel than Windows users might be.

    Unlike Windows, most people running Linux find it quite stable and usable. Unless you want to do things like USB or FireWire (for which there are patches for 2.2 anyway), there's no real NEED to move over. People running Windows will often move to the newest beta in hopes that it will actually work. With Linux, there's no such need. You have to really want to test the new system.

    I have two systems at home, but one is used by my roommates and me on a regular basis (including as an MP3 sound source for the stereo). (Un)fortunately, we have ADSL, so I also have a Web page on it, and we dual boot over to Windows way too often for game playing. As a result, I'm transitioning my second box to serve the web page so that I don't have to worry about it being unavailable. In other words, I have two machines that I just don't want going down for random reasons. If I get a third box (quite likely), I'll dedicate it to testing the new kernel and things like that.

    It's not that I don't trust the new kernel to be at least as stable as your average Wintendos box (kof, kof). I don't trust me. This really is just for testing. I don't want to find that I'm spending a week trying to reproduce a weird problem -- with my roommates wanting to listen to their music or people wondering why my web page is down.
    `ø,,ø`ø,,ø!

  • by jbridge21 ( 90597 ) <jeffrey+slashdot ... g ['ad.' in gap]> on Saturday October 21, 2000 @02:20PM (#686173) Journal
    I think that once the word about 2.4 starts buzzing around, there will be a lot more people who want it... I am running a dual-CPU setup, and I have seen a remarkable difference between 2.2 and 2.4.

    Things I like about 2.4:
    -- much better SMP support (more deserialized)
    -- better disk caching (doesn't waste twice the space it needs, integrates the read and write buffers)
    -- USB support, built in
    -- same with firewire
    -- good AGP/DRI support for XFree4.0.1 (more on this in a minute)
    -- integrated UDF && DVD support
    -- all of these integrated into one kernel

    Specifically, I have two Celeron 366s, and trying to play back DVD video did not work well with kernel 2.2 and XFree3. The audio would skip a lot. Now, with kernel 2.4 and XFree4 and the Xvideo extension and DRI and so on, it works *so* much better. Same with the OpenGL stuff.

    To sum it up: 2.4 ROCKS.

    -----
  • After all, that's what "looking around here" tells you, and people wouldn't be repeating such things if they weren't solid truth, right?

    Look around here - Almost nobody is running a 2.3/"2.4Test" kernel. Because it's too damn far from being ready,

    First of all, there's about ten times as many people here as there were when 2.1/2.2 came out. I wouldn't be surprised if there's more cautious Linux users, and even more Windows users, than there used to be.

    Second, exactly how scientific a poll are you taking? I'd like to hear how many people contributed to the conclusion of "almost nobody".

    I've been running them since 2.4.0-test7 (which I dropped into Red Hat 7 with no problems), with no crashes. NVidia's binary drivers leak memory like a sieve, but with closed source software whatcha gonna do? The open nv driver in XFree86 4.0.1 still works great, so I'm not too concerned.

    I still think there's plenty of people who would love to beta test a new kernel. They just don't want to alpha test one.

    Excellent. Then they can download a 2.4.0 test kernel any time, as they've been beta quality for longer than I've been using them. "Too damn far from being ready" is a subjective term, and I'd like to hear just what objective problems it's supposed to be insinuating. I'm sure there's still serious bugs that need fixing (knowing that is one of the nice things about a public TODO list), but nothing I've personally encountered.
  • by Veteran ( 203989 ) on Sunday October 22, 2000 @07:14AM (#686176)
    There was a story once about the devil finally defeating God, and finding out that he was required to take on many aspects of the divinity when he did so.

    Your comments are very interesting, and since you asked to be shown the emperor's clothes I'll try to explain the position on debuggers.

    There is a certain class of programmers who don't use debuggers very much because mostly their code is so well designed and thought out that they don't put very many bugs into it in the first place. Such people can see that the code produced by the other types of programmers - who heavily require debuggers - is sloppy and the result of confused thinking.

    Those who do use debuggers heavily are incapable of understanding the thought processes of those who don't use debuggers because the thought processes of the 'use a debugger' school are too confused to allow such understanding. Indeed, the people who depend upon debuggers lack the clarity of thought to even understand that another way of doing things might exist.

    The lack of a kernel debugger is deliberate on Linus' part: it serves as a barrier to keep programmers below a certain level from attempting to contribute their confused code to the kernel. Linus is too polite to explain that. Of course the barrier is no guarantee: this is a Yin and Yang world - some people who don't need a debugger are very bad programmers - some people who do are very good programmers. However as a first test of programming skill level it is one which works pretty well.

    As I said, this is a Yin and Yang world we live in. The mantra 'modular gooood - spaghetti baaad' is too simplistic to fit into the complexities of that world. Generally modular is the right way to do things - and spaghetti code is the wrong way. However, spaghetti code does have one virtue: it is often faster. A place where performance counts is a good candidate for the use of spaghetti code.

    Let me suggest that you attempt to create your own kernel based on your ideas of what a kernel ought to be. If you are right, everyone will be able to see that it is so. We will all proclaim you a Geek God, and the Playmate of the Year will collapse at your feet moaning "Take me, Digital Man, I'm yours". If, on the other hand, you discover you are not up to the task - then some honest reflection might cause you to see that you are the one who is confused, and that your anger might be misplaced.

  • A good one too; you just beat me by being 15 minutes faster... but that's what happens if you are not a karma whore :-) Anyway... thanks.

    Does Linus speak German? Just wondering, as the original German text is a bit odd at times (at least for a magazine), so I wonder whether it is a translation from English back into German, or from Finnish back into German, or just Linus' own spoken German (which would then be extraordinarily well put). Anyone know?
  • by 1010011010 ( 53039 ) on Sunday October 22, 2000 @07:31AM (#686178) Homepage
    The OOM-killer (at least as of test10pre3) does NOT kill kswapd or X11, unless those really ARE the villains.

    I cannot think of a single acceptable reason for the OOM code to kill kswapd. That's like the kernel wiping its nose with a revolver.

    Because if kswapd, a kernel-thread, is buggy, your system will die anyway. But it isn't, and it won't get killed.

    Kswapd regularly dies on my 2.2.16 + USB machine. 2.2.16 is supposed to be a stable kernel -- release 16 of a stable kernel. I can oops the kernel by merely plugging in my scanner to the USB port. Yay.
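
    As an aside, the victim-selection policy being argued about here can be sketched in a few lines of C. This is an illustrative toy only (the task structure and field names are hypothetical, not the real kernel's), but it shows why a kernel thread such as kswapd should simply never be a candidate: killing it frees no user memory and takes the rest of the system down with it.

        /* Toy OOM victim selection -- hypothetical structures, not kernel code. */
        struct task {
            const char   *comm;              /* command name               */
            unsigned long rss_pages;         /* resident memory, in pages  */
            int           is_kernel_thread;  /* e.g. kswapd                */
            struct task  *next;
        };

        struct task *select_oom_victim(struct task *tasks)
        {
            struct task *victim = 0;
            struct task *t;

            for (t = tasks; t; t = t->next) {
                /* Never shoot a kernel thread: it owns no user memory,
                 * and the system cannot run without it. */
                if (t->is_kernel_thread)
                    continue;

                /* Simplest possible heuristic: blame the biggest consumer. */
                if (!victim || t->rss_pages > victim->rss_pages)
                    victim = t;
            }
            return victim;
        }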

    But the VFS in Linux is not very hard to write a filesystem for,

    A simple one, no. One that must make use of the broken spinlocks and semaphores, while avoiding the wrath of bdflush and kupdated, is very hard to stabilize on Linux. There are many unintended consequences that pop up due to the spaghetti.

    ReiserFS wasn't working properly

    True, but interesting that it was stated during the 2.3 series that Ext3 would probably be included in the 2.4 kernel, even though it was less capable, less stable, and in less widespread use than ReiserFS at the time.

    This isn't about narrow-mindedness, it's about sanity and interoperability. It's about not making the same mistakes Microsoft keep doing over and over again. NTFS streams ARE a complete mess. Try to map them sanely into the Unix-world, and you'll see. Try to use tar to backup an NTFS-volume and see how much you'll preserve...

    I.e., they're not Posix, as I was saying. Linus and I were arguing that whether streams are Posix or not is irrelevant, because there are many existing filesystems that people want and need to use from within Linux that use streams and/or EAs, and Linux would do well to provide clean support for them. Like you said, to provide interoperability and sanity. No one will accuse the current HFS method of posixizing a streams-capable filesystem of being sane. The notion of providing a way to support filesystems that provide streams/EAs was met with a quite visceral reaction from Viro, Ts'o and especially Cox. How do you plan to support all the features of NTFS, XFS, BeFS, HFS and HPFS on a VFS that respects only Posix? And incidentally, the tar format supports extended attributes, even though the current tar tools do not. Pax also supports extended attributes. If interoperability were actually the goal, then there would have been a discussion of how to provide support for streams without breaking the semantics of filesystems that don't support them. We even submitted a thought-out suggestion for namespace augmentation that would provide the needed support and compatibility with Posix filesystems. Victim of the killfile.

    Oh, and about kernel-debuggers. Yes, Linus is violently opposed to those. But does that prevent YOU, or anyone else for that matter, from using one?

    Yes. His refusal to allow support for kernel debuggers means that they do not work well, or at all - unlike the kernel debuggers of other OSes. "Oops" output is not useful in a lot of situations, notably race conditions. I imagine there would be fewer race conditions inherent in the kernel if there were a built-in debugger, there would be less code with undefined behavior, and developers would be able to get a meaningful stack trace when their driver crashes. What good does printk do when the kernel resets precipitously and cannot flush its output? Staring at the sudden appearance of the BIOS screen isn't helpful.

    Pray tell me, if you don't want the kernel-developers to tell you to fork your own kernel, and you don't want to submit changes to the kernel to clean things up because you don't like the development process, why DO you worry?

    We do submit things. Cox even rejected a patch to provide 64-bit printk output on 32-bit platforms, twice. This, after developers are told they should use printk and not a debugger. Never mind that even printk is lacking.

    ...assignments for different CS-classes simply having the bugs patched over by if-clauses, from people that have traced the code to see where it fails

    Yeah... it really kept all the a[i]=i++ code out. And Linus doesn't have to accept code if he doesn't like it; why cripple the use of a debugger for the people who would actually make good use of it? The first-year CS students won't write worthy code whether or not they use a debugger, so why stop them from using one? Unless it's just social engineering.
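
    For anyone who hasn't met it, here is a minimal illustration (just a sketch, not from any real submission) of why a[i] = i++ is the classic blunder being alluded to: the read of i for the array index and the modification of i by ++ are unsequenced in C, so the result is undefined.

        #include <stdio.h>

        int main(void)
        {
            int a[4] = {0};
            int i = 0;

            /* a[i] = i++;    <-- undefined behaviour: the index read and the
             *                    increment of i are unsequenced               */

            /* A well-defined way to express the (presumed) intent: */
            a[i] = i;
            i++;

            printf("a[0] = %d, i = %d\n", a[0], i);
            return 0;
        }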

    v2.3-series has been spent rewriting the VFS and the Network layer to be clean,

    Not to mention the 2.4-test-pre-whatever series, which has also been spent rewriting the VM and VFS. Linus should never have attempted to force things by jumping the version number up. It didn't work, and merely caused a lot of frustration for a lot of people. Relabelling the kernel as "ready" didn't make it ready.

    It is VERY unlikely that people are uninterested in the v2.4test kernels because they've lost their faith in the kernel development process. Why? Simple. Because most people that don't hang around on LKML or Slashdot don't even know how the development process works. For that matter, most people hanging around on /. probably don't.

    They do see all the delay announcements. They see the kernel not being released for two years. They see features being backported so that Linux can just keep up. Who will believe that the 2.4 kernel is ready when they finally announce it? They've announced 10 versions of 2.4 already, and none of them are production quality.

    And why won't Linus use CVS? Accidental additions could be backed out, the commit logs would provide a start for documentation (since the inner circle of kernel developers themselves don't write any), and it would allow better control of the process at large. Right now, all the major decisions are made off-list between Viro, Cox, Linus, etc., and the rest of us get to hear about them only when the next patch is released. No intent or direction is ever advertised beforehand, and there's no documentation afterwards. If they at least used CVS and let people read the histories, we would have a better idea of what the few design decisions that are made actually mean.

    You know, you ARE really trolling.

    That same comment is made any time any criticism is made of any aspect of Linux. It's lost its sting.

    ________________________________________
  • I am saddened to see linux as the free system of choice

    Linux != GNU/Linux. I'm not complaining about your terminology; I'm saying that the reasons for using [GNU/]Linux as opposed to BSD are not really about the kernel, they're more about things like the Debian project, or commercial 24/7 support.
  • by nevets ( 39138 ) on Saturday October 21, 2000 @06:55PM (#686180) Homepage Journal
    Wow, I'm absolutely impressed with your comment. It has been the best thing I have read on /. for a long time.

    I was talking to a vendor at my work (I won't say who they are, but you know them). They told me that they want kernel support for 64 processors. Their CEO went to Linus himself and asked him straight out to please let the official kernel support this. Linus replied that he wanted no such thing.

    I replied to the vendor that they should go ahead and support it themselves. Yes, I would actually love to see Linux fork. And I have a hunch that so would Linus. This would really be a good thing.

    I look back at some bad forks, and nothing sticks out more than the many Unixes that were around. But the one thing that made them bad forks was that they were closed source. Each vendor tried to outdo the other, and none of them ended up pleasing anyone. But just think if you had the same thing, with the exception that you could take code that looks good in one - or an enhancement - and place it in the other. It may have its troubles in maintenance, but this could be worked out.

    Let's have several companies (HP, SGI, IBM, etc.) go out and create a new kernel based on Linux. But it would still need to be GPL, and thus open for all to see and use at your pleasure.

    In New York, I watched Linus Torvalds talk about having the same operating system run both your refrigerator and a mainframe. They may both be the same size and have the same cooling system, but they have two entirely different tasks, and they should only share the code where that makes sense. If you browse the Internet from your fridge as well as from a mainframe, then maybe the TCP/IP stack could be the same. I totally agree with this statement.

    I have no problem with the BSD fork, and I think Linux should have a similar fork, but all branches would need to have the same license (the BSD license is the only thing I don't agree with, but it's better than other choices :). Also, each would need a different focus. I imagine Linux might split in three directions: Small, Medium and Large. Maybe Linux could have clothing labels! Have LinuxS, LinuxM, and LinuxL, maybe even add LinuxXS and LinuxXL, and then go further with LinuxXXS (for molecule programming) and LinuxXXL (for a future OS that runs like SETI).

    I'm one of the biggest Linux advocates at my corporation and I am often asked about forking. My reply has always been (and still is): forking is good when it is open, and in an open environment forking only occurs when a real need has to be satisfied.

    I'm glad Gtk/Gnome came about. (I know this is not technically a fork, because they never were "one".) I never really liked the look and feel of KDE, although I just recommended KDE to a coworker because of his background - he seemed more the KDE type - and he is very happy with it. So I say, "to each their own, live with it!"
    Steven Rostedt
  • Thanks. That was a good reply.

    The mantra 'modular gooood - spaghetti baaad' is too simplistic

    It certainly is -- as is any type of fanaticism or dogma.

    However, spaghetti code does have one virtue: it is often faster.

    Even NT, which is very modular, compromised with the "fast path" I/O mechanism. However, they started from a position of modularity and added a hack. Linux is the other way around. We won't discuss Win32, because it's utter crap. When I say nice things about NT, it's only about the David Cutler kernel.

    the code produced by the other types of programmers - who heavily require debuggers - is sloppy and the result of confused thinking.

    I'm not disputing that; it's true. However, engaging in social engineering rather than simply rejecting patches is silly and counter-productive. Even very good programmers can and do use a debugger productively. Bad code is bad, regardless of whether a debugger was used. Providing a debugger won't make a good programmer bad, and withholding one won't make a bad programmer good. It would be better to provide coding standards and/or guidelines and recommended testing procedures, and allow a debugger, than to simply disallow a useful tool because it can fall into the wrong hands. If there were standards and even plans, then a lot of patches could be rejected because they don't meet the standards, rather than "Linus doesn't like it." And there would probably be fewer bad or useless patches if there was a plan and guidelines, because people would know from the outset what is expected. Take that one step further, and provide some public coordination and documentation of efforts, and I think it would be better still.

    Let me suggest that you attempt to create your own kernel based on your ideas of what a kernel ought to be. If you are right everyone will be able to see that it is so.

    This is correct, and I've been considering forking the kernel with the aid of the big outside players (IBM, SGI, etc.) and starting an Advanced Linux Kernel Project. Based on the reaction I've gotten to my post via email, /. replies and personal communication from my friends and co-workers, people want it to happen. Setting up the political structure will take some time. Want to help?


    ________________________________________
  • Since we can't read the article before posting (like we'd do that anyway :-) ), I'd like to know why people aren't interested in 2.4. Is it that it's been delayed so long it's like vaporware? Or is it because USB support, along with the rest of the new stuff, is available as patches?

    I wasn't around during the 2.0->2.2 transition; was all the new stuff in 2.2, like SMP, available as a back-patch? 2.2 also had huge delays, so I doubt that's the problem.
