Linus Confirms 2.4 In December

Lothsahn was the first to write to us about the latest statement from Linus regarding the Linux 2.4 Kernel release date. His statement says that he knows of no major showstoppers, and that he's asking the major devel houses to deploy the test kernels internally and start bug testing. Early December, hopefully, for a release.
  • "Market" as in "The market share of Linux is growing at the expense of Windows". Stop thinking about the market and watch how quickly Linux becomes yesterday's news.
  • I too had problems with 2.2.x in SMP mode. After applying a patch to irq.h my 2.2.x boxes have been VERY stable.

    Care to let us know which patch this is? My SMP box tends to hang every few weeks under 2.2.x (including 17). I'd love to get that patch... thanks.

  • I learned vi in about one hour when I needed to remotely edit a file on a linux machine. Arrow keys (or hjkl) are for moving, i is for inserting, Esc cancels out of insertion, :w saves, :q quits, :q! forces a quit without saving. Add cut, copy and paste, and that's all you need to know.

    For actual word processing, there is Star Office, which is a lovely product, or you could learn Emacs. (I haven't, but don't let that stop you =)

    I believe that people forget how hard it was to learn how to use a specific program on a specific OS, and when they try to learn a new one, they are startled at how difficult it is. With all new software, it's best to RTFM!
  • Lately I've been getting into Linux a lot more than previously, and one aspect of it that I don't see much advocacy discussion about is the fact that it is so modular compared to other OSes, so the functionality upgrade path is more flexible. For example, I am not likely to get any functionality upgrades to Windows Explorer on my Win2K box until Whistler or whatever. But yesterday I upgraded my Linux box with the latest Gnome & Nautilus. Spot the difference.

    This applies to the Kernel too, of course. Say this was MS Winux ;-) and was released as a whole OS, then a few service packs for the bug fixes, then another whole OS, and so on. Then we'd have been either waiting for the 2.4 Kernel before any of the 7.0's came out - so no-one would have the latest Gnome, XFree86 v4, etc. - or getting the kernel work rushed to make a market-driven release date. This modular system, I think, is a great strength of Linux.

    And of course, another point is that people are actually running real systems with 2.4 development kernels. Again - impossible in a closed source monolithic OS.
  • I haven't heard much mention of the change from ipchains to iptables [uni-karlsruhe.de]. I have been waiting for the production 2.4 kernel release to update my firewall from 2.2.17 and would like to know how it works from someone who has taken the plunge already.
  • In an interesting twist, 2000 is OK - it only locked up once while I was using it, pretty stable - but it just does not support my IDE chipset correctly, nor my gamepad of choice, while Linux supports both. Who would have thought that Linux would ever have better hardware support than a Microsoft offering :) I like the way Linux works better anyway (as far as user interaction and flexibility). But I am feeling like criticism of 2000 flakiness is not as called for as it used to be. In any case I'll continue on with my Unix of choice, but will say that for once Micro$oft has done a good job of producing a decent product, though 2000's price tag was (and still is) unreasonable and I would never have bothered if my company did not pay for it.
  • yeah, but at least we don't use ROT13! :)

    qvpxurnq

    cheers,

    Alex

  • Get a new version of modutils... see /Documentation/Changes
  • Ummm. what the hell are you talking about?

    Have you even used a 2.4.0-test?

    Cheers,

    Alex
  • You are absolutely right. In order for Linux to succeed, it has to meet its goal.

    Its goal was to make a geek (Linus) happy.

    I think it's safe to say that it has met, is meeting, and always will meet this goal.

    However, you missed the point. The stance they were arguing from was that of "What does Linux have to do to become THE major market player?" I will readily admit that this goal isn't what Linus cares too much about. But the magazines DO (they're paid to write about things, and this is definitely a "thing").

    So, you are right, but that wasn't the point they were trying to make.

    t14m4t

  • So, in other words: commercial test labs, paid by companies traded on the Nasdaq, do the final testing before release?

    Yes.

    What happened to the driving force behind the quality of open source: "thousands of eyes go over the code to find bugs"?

    You think this ever was true? I don't, and nowadays there's too much out there anyway. Think it over -- 90+n percent of all geeks out there do a ./configure at most, and that's it.

    Much more important is that those companies can test the kernel in an environment no one else could afford. It is they who have the databases and big machines and the means to put them under heavy load. I'm only too glad not to have the means to break the kernel: if it's good enough for them, it's for sure good enough for me too.

    Aside from that, if it were not good enough for them, it would remain the toy system it was so often called years ago.

  • The "9. To Do" list seems quite long!

    Will the 2.4 version be released before the "To Do" list is cleared?
    Strange...
  • by Anonymous Coward
    Linus does not like version control systems -- do a search on kernel traffic [linuxcare.com] regarding that issue.
  • Ah well. You don't have to be sorry for me though. You have to be sorry for all the people who should be reading this and won't, because it doesn't show up on their radar.

    Anyway, thanks for the . . .um. . .clap.

    KFG
  • I sometimes wonder if you people read the articles these posts are attached to. Yes, this was on topic. They said that it was impossible for Linus to push back the birth of his daughter; I pointed to a historical reference of when it wasn't. Man. Get a life.
  • I partially agree. For some people, Linux seems like it's 'catching up' because of the USB and larger SMP capabilities. For others, though, who need a solid web server that doesn't crash, it's MS who is 'catching up' to others with W2k. Some places don't need all the config utilities - often because, with a bit of training/experience, Linux is in some respects easier to configure (again, not necessarily 'out of the box', but after a while).
  • by Leto2 ( 113578 ) on Thursday November 09, 2000 @04:13AM (#636209) Homepage
    Major Linux distribution vendors such as Red Hat Inc. (stock: RHAT), Caldera Systems Inc. (stock: CALD), TurboLinux, and SuSe have developed their latest releases to exploit the kernel when it is ready.

    Aha! So it's true! All major Linux houses are in fact hackers trying to exploit the linux kernel!

    I knew it!

  • Down here, we've been using 2.4.0 since 2.4.0-test1 on our web farm (~30 web servers with Apache) and it works pretty well.
    The latest reboots of these computers were to upgrade the kernel or the hardware.

    We did the upgrade because the 2.2.X kernels were very unstable under heavy load (they froze within one or two days). AFAIK, there is still a problem with the memory management of 2.2.17.
  • by toofast ( 20646 ) on Thursday November 09, 2000 @04:15AM (#636211)
    Good point, and it's also inherently obvious from the first paragraph:

    the long-awaited Linux 2.4 kernel for commercial release.

    Commercial release? It sounds like someone is selling the Kernel, or that Linus is making money releasing the Kernel. What the article fails to say is that the Kernel is being released because it's _ready_, not because of market pressure or financial agony to release a product just for cash (Office).
  • Perhaps Linus finally grew a spine [geocities.com]?

    (as referenced in a recent issue of kernel traffic [linuxcare.com])

    (personally I just took the site as a humour thing though I feel the kernel is a bit overdue. but as i'm no k-hacker and just sit on the sidelines and bitch, what do I matter? :)
  • Did you try building both as a module and otherwise? I can't get the acenic driver to work in the 2.4.0-test kernels unless I build it as a module.
  • Does it make you feel all yucky inside if they use business terms? Grow up, for god's sake. They're using "market" to mean the segment of the world's population that will potentially use Linux 2.4. "Market" is just a phrase that saves breath and allows distro makers and commercial users, who DO think in business terms, to quickly understand what they're talking about.
  • OK. I'm going to try this question again as me.
    Please let me know how you got Q3 working with kernel 2.4test10 and XFree 4.0.1?
    I've been trying every which way with no luck on my voodoo 3.
    michael.rychlik@pp.inet.fi
    Despite what your user info implies, you DO have something interesting to say!
  • by Anonymous Coward
    khttpd was made as a result of the Mindcraft tests, which showed Linux lagging as much as 4x behind NT 4.0 running IIS.

    IIS was integrated into the Windows kernel, making it much faster. The problem is that it made NT/IIS more unstable in the process. A direct memory call, for example, could create a GP fault and take down your whole website!

    Did you know certain web scripts can take out a mission-critical IIS server?

    absolutely incredible!

    The only reason big conservative companies chose IIS is Mindcraft FUD and MS marketing hype from the MCSE employees.

    khttpd is for benchmarks only, and Apache in full ring 3 mode is better for real 24x7 production use. I strongly discourage anyone from betting their jobs on khttpd for a mission-critical solution.

    I believe Linus himself stated after the Mindcraft tests that some of his developers might produce a ring 0 web server for benchmarks if the scalability attacks continued. khttpd, I believe, was the result.

    Also, the new Apache 2.0 is much faster than the older 1.x series that was used during the Mindcraft test over 18 months ago. With the new improvements you may not need khttpd at all.
  • If your TV card has ATI anywhere in the name, you deserve instability for buying an ATI product.
  • To tell the truth, it all depends on what you use it for. On my system with Detonator 3 NVIDIA drivers, Win2K has locked up at least a dozen times this month. It's not random instability; Win2K just doesn't like Windows Media Player 7 and OpenGL. As long as you avoid weird GL apps (not Quake, but stuff like the demo programs you get on flipcode) and Media Player, it is quite solid. Unfortunately, WMA and OpenGL are the only reasons I need Win2K, so it is a little self-defeating. Methinks it needs some more elbow grease to work the kinks out.
  • by darylp ( 41915 ) on Thursday November 09, 2000 @03:59AM (#636219)
    When's 2.6 coming out?
  • In the test kernels, NetFilter (the official name) is not that much fun, but not terribly bad either. You have to use a program called iptables instead of ipchains (duh), which never seems to be compiled correctly for the kernel modules I have at the time. The syntax for iptables is a little different, but my setup is fairly basic so I can't help much there. However, you still have the option to use ipchains in the 2.4 kernel if you have extensive config stuff already written (see the sketch below).

    You can find some FAQs and HOWTO's (scroll down) at the NetFilter homesite. [kernelnotes.org]
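
    If you just want to carry an existing ruleset over while testing 2.4, here's a minimal sketch. It assumes the ipchains backward-compatibility module was built for your kernel, and that your rules live in a script such as /etc/rc.d/rc.firewall (that path is only an example); as far as I know you can't load it alongside the iptables modules at the same time.

    # Load the 2.4 ipchains compatibility module instead of the netfilter/iptables modules
    modprobe ipchains
    # Re-run the existing 2.2-style ruleset unchanged (script path is an assumption)
    /etc/rc.d/rc.firewall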

  • KFG wields a blessed +5 Insightful comment.
  • Then that's not the smartest thing to do. There is a lot of stuff in a server that you don't need in a desktop OS, a lot of stuff that isn't there that you do, and optimizations that work for one but not the other. For example, on the TCP/IP stack of a desktop OS, you probably want to optimize the stack for fast response on a couple of connections, while on a server you want to be able to handle many connections consistently. The VM management also has to be different (on a desktop, the foreground process needs a lot more CPU than the rest of the system). The scheduling quantum has to be different (WinNT Server: 120ms fixed length; WinNT Workstation: 20ms background, 20-60ms foreground, variable length; Linux 2.2: 100ms fixed; Linux 2.4: 50ms fixed; BeOS: 3ms fixed), the priority management has to be different (most computation time, or interactive speed?), disk buffer management has to be different, etc. If you try to do it all (and no, I'm not going in and changing these values in the Linux source code!) then you get a sub-optimal experience for all.
  • I can't even get the damn thing to install. It comes up with 'Loading device drivers' and then tries to load loads of drivers for hardware I don't have and then just stops, but never on the same device. There's no way to skip anything, you have to do the three finger salute. So I doubt I'll ever get the chance to see a W2K BSOD on my machine. On my dad's machine however, it installed fine first go. Weird.
  • Why no CVS for Linux?

    Linus believes he is a 'CVS with taste'.

    If that's not good enough for you, try here [innominate.org].
    --

  • I'm a big fan of Linux. I am happy every time it beats any other operating system in performance (especially an OS from Redmond). But those stats you just linked to are the worst I've seen. They mean absolutely nothing. The Linux computer had 4 to 8 times as much memory in those tests. If you are going to make a benchmark, all hardware has to be identical.
  • They're behind the times. Slackware was so well improved that it skipped from v4.0 to v7.0. It's currently at v7.1 and counting. Way to go Slackware for winning in the Linux Distribution Version Number War!

    And it might be more appropriate for them to release 2.4 in late December. What a nice *gift* that would be :).

  • I take that back. Mandrake is at v7.2. Dammit Pat! Release release release! It's all about the high numbers!

  • Did anybody else hear about the new Linux kernel having a built-in web server? I found it to be rather odd myself, but then some coworkers began getting emails about it. Anybody have any insight?
  • by Anonymous Coward on Thursday November 09, 2000 @04:18AM (#636229)
    Question is, which year?
  • Lbh unir gbb zhpu gvzr ba lbhe unaqf...
  • by JatTDB ( 29747 ) on Thursday November 09, 2000 @04:22AM (#636231)
    The root of the problem is that open source software has never really had to worry about release dates before. Just about everyone who was working on it tended to be doing it in their spare time. Those who used it usually had a pretty good understanding that code done in one's spare time is not necessarily going to be completed by any given date. This new age of open source mixed with corporations causes us to have to worry about many of the same issues as traditional software companies do, including release dates, feature demand, and other nasty things that don't always jibe with the "I wrote it to scratch a personal itch" mentality.

    Personally, I'd rather wait for a release and know the code has been tested and is done right, rather than demand the developers set a release date, build a few binaries, run em overnight, cross their fingers, and ship.

  • Market? And which particular 'market' are we talking about here? Can't these people not think in business terms?

    On the contrary, I would argue that a market exists regardless of price (or lack thereof). I don't think the 2.4 kernel is as major a release to the Linux community as Win2k is to the Windows community. I think in this case the word "market" refers to an operating system market, which Linux is a part of despite the fact that it's free. Perhaps the lack of a stable 2.4 kernel (and knowing that a stable kernel should be out Real Soon Now) might sway an IT admin to stay with NT-based machines.
  • Comparing a brand new product (2.4) against NT (fairly old) is just not a fair comparison.
    Hmm, let me take a look in my crystal ball ...

    Oops, I see [spec.org] bad news for MS. I guess [spec.org] that MS will have some trouble benchmarketing in the future, at least concerning web performance.

  • I know this is a troll, but...

    Have you _used_ 2.4?

    Have you taken advantage of the multi-threaded TCP/IP stack? The USB support? The improved VM?
    To say that this isn't helping those who choose to run Linux at home is ridiculous. That's like saying that rack-and-pinion suspension or power steering has no place in a consumer car.

    Look at the feature list, if it's not what you want... keep running 2.2 or BSD or Mac or Windows or whatever. But the way it stands now, you better change your clothes, because your ignorance is showing.
    ---
    RobK

  • I suggest you use TeX and Metafont, then. Their versions have been approaching pi and e for some time now, and when Knuth dies, they will equal those beautiful numbers and will be perfect. Any `bugs' found after that point will really be features, and will never be `corrected'. I think I much prefer Knuth's versioning scheme, but I suppose that's mature software, while Linux would in a few years have hundreds of digits of pi for its version.

  • Does anyone know if there will be support for files > 2 GB in the 2.4 kernel? I want to set up a bioinformatics cluster, and dang if the genome isn't 3.2 gigabases long. We will split the genome sequence into smaller chunks to make file access and database searching faster, but we need to handle monolithic big files before we can do anything. Any info is greatly appreciated. -fbj
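
    For what it's worth, a quick sanity check for large-file support on a given kernel/libc/filesystem combination is simply to try writing past the 2 GB mark. The sizes and paths below are only illustrative, and the userland tools need to be built with large-file support too for this to succeed:

    # Try to write roughly 3 GB of zeros; this only works end-to-end if the kernel,
    # C library and filesystem all handle files larger than 2 GB
    dd if=/dev/zero of=/tmp/bigfile bs=1024k count=3000
    ls -l /tmp/bigfile    # should show about 3,145,728,000 bytes if it worked
    rm /tmp/bigfile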
  • Offtopic? Hmmm... moderators, could you pass me some of that crack? I need a hit. I, too, have noticed this piece of extreme oddness, though -- I can't hit certain web sites (nor can I hit news.giganews.com on its NNTP port) from 2.4.0-test10. Weird. Additionally, the solution listed below ("echo 0 > /proc/blah/blah") didn't work either :( ... I've been extremely impressed with 2.4.0's performance and reliability. Takes a bit longer to compile, but damned if it ain't easy to configure :) Has anyone worked out the network oddness yet?
  • It needs *ONE* geek sitting up in his room at three in the morning going "Oh wow."
    _______________
    you may quote me
  • Actually, the guys at BeDope are winning here. Check out this press release [bedope.com]

    Installation instructions can be found here. [bedope.com]

    PS> You knew you were asking for this.
  • by be-fan ( 61476 ) on Thursday November 09, 2000 @03:29PM (#636240)
    I've always wondered. Does Linus use CVS? I can understand him wanting to be the central relay station for any and all patches to the kernel, but how does HE manage the source? Does he have thousands of source archives of different patches on his hard drive? Does he manually diff and check in patches? Does he have only the one working copy of the kernel?! Maybe it would be easier on him if he used CVS himself, but made sure only a trusted few (him and Alan, probably) had commit privileges, so he could still be the grand poobah but have access to the other features (rollback to any state, automatic merging, source management, etc.) of CVS.
  • It needs *ONE* geek sitting up in his room at three in the morning going "Oh wow."

    Sorry, I just had to point this out... You do know what you're telling geeks to do, right? :)


    _______________
    you may quote me
  • Running Test10 and iptables 1.1.2 at the moment on my home firewall...

    Iptables is a VAST improvement. Things I like:
    * Forwarded traffic is no longer processed by the INPUT and OUTPUT chains
    * There are separate match rules for incoming and outgoing interfaces.
    * State tracking module lets you match on the state of the connection (ESTABLISHED, NEW, RELATED, etc.)
    * Limit tracking module lets you match the burstiness of incoming packets (so you can block huge bursts of pings, for instance)
    * Separate NAT table allows for more flexible NAT setups (for instance, source NATing out of an IP alias, or destination NATing instead of port forwarding, which again with aliases would allow one to have multiple port forwards on the same port with different IPs. Trying to do this in 2.2 is possible, but a pain.) On a playful note: I got Crimson Skies with full MS Zone support working with 2.4 masquerading last night. 3 commands were all that was necessary.
    * Only thing it's missing at the moment is an "iptables-save" and "iptables-restore", but the latest CVS snapshots have the beginnings of them.
    * PREROUTING and POSTROUTING chains, where DNAT and SNAT are done normally. This makes it MUCH easier to tell how packets are being filtered through the system from a logical point of view. Masquerading on the forwarding rule always made me a bit confused.

    (A minimal sketch of rules along these lines is below.)


    Tim Gaastra
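
    For anyone curious what the new syntax looks like, here is a minimal sketch of rules along the lines described above. The interface name, the ping rate limit, and the overall policy are placeholders for illustration (and the relevant netfilter modules are assumed to be available), not a recommended firewall:

    # Default policies: drop inbound and forwarded traffic, allow outbound
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT

    # State tracking: accept packets belonging to established or related connections
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Limit match: let echo requests through, but only at a modest rate
    iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/second -j ACCEPT

    # NAT table: masquerade traffic leaving the external interface (eth0 is an example)
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE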
  • Just because something is 'free' doesn't mean that it isn't part of a market. To have a market, all you really need is supply and demand; there is, for example, a high demand for homeless shelters in the housing market. They are part of the housing market even though those people who use the shelters don't pay for them.

    Similarly, there is a market demand for Linux just as there is for Windoze and Mac OS. By market, in this context, they are referring to public/corporate (released) availability... and it was late coming in terms of the 'market schedule'.

  • My copy arrived yesterday. I'm a happy boy :) :) :)
  • Go to www.google.com [google.com], type "ppp atm linux" in the search box and hit the "I'm Feeling Lucky" button.
  • showstopper: "A hardware or (especially) software bug that makes an implementation effectively unusable; one that absolutely has to be fixed before development can go on. Opposite in connotation from its original theatrical use, which refers to something stunningly *good*." I thought that was interesting, anyway. Sort of like flammable and inflammable meaning exactly the same thing.
  • by timothy ( 36799 ) on Thursday November 09, 2000 @06:43AM (#636247) Journal
    Dear Linus Claus:

    My birthday is the 18th of December. I would appreciate it if you could release the kernel on that date. Since I'm now too old to get any good presents from my parents, and my girlfriend won't give me a present until I find out who she is and where she's been hiding, I would really like a new kernel as consolation prize.

    Sincerely,

    Tim

    p.s. A little early is OK, too.

  • Does anyone know where I could get a complete list of updates in the 2.4 kernel? In particular, the current kernel only supports semaphores in thread-related apps, but I need semaphores in certain tasks in my application. Unfortunately Linux can't support this (QNX, on the other hand...). Any ideas as to where I might find information like this?
  • hear hear!
  • Okay, if business-speak bothers you, read it as: "...the enhanced kernel, which at this point is a year behind its original schedule..."

    After the 2.2 kernel took so long to release, the widely publicized plan was to move to smaller, more frequent, incremental releases. The 2.4 kernel was expected to be out by the end of 1999. That didn't happen. They're just pointing out this fact.
  • I think that is why they are pushing a little harder now. And probably in 2002 they will be coming out with 2.6, for your enjoyment, to add a whole slew of features that they missed out on last time.
  • As others have pointed out, I was using the term 'Windows NT' generically, not really caring whether you used NT 3.51, NT 4.0 or W2k.

  • The improved VM?

    Actually, most distros started shipping Vim 5.7 quite a while ago. But I'm an nvi man myself.

    Oh, wait...


    All generalizations are false.

  • Would doing this: echo 0 > /proc/sys/net/ipv4/tcp_ecn have any effect on this setting? If it's meant to, it doesn't :( ... my kernel doesn't have CONFIG_INET_ECN set. Smeg. I wish I could think of other troubleshooting steps -- I'm sure "it won't connect to port 80 of www.americanexpress.com or the NNTP port (forgot the number ;) at news.giganews.com" doesn't help anybody debug this. Heh, I suppose I should also mebbe bug the kernel list instead of whining on Slashdot :). Thanks for your suggestions, though!
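
    For anyone else chasing this, one more thing worth checking, as a sketch (assuming a stock /proc layout): as far as I can tell, the tcp_ecn toggle only does anything on kernels built with CONFIG_INET_ECN, so it's worth confirming how the kernel was configured.

    # If the toggle is there, turn ECN off at runtime
    test -f /proc/sys/net/ipv4/tcp_ecn && echo 0 > /proc/sys/net/ipv4/tcp_ecn
    # Confirm how the kernel was configured (the source path is an assumption)
    grep CONFIG_INET_ECN /usr/src/linux/.config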
  • by Patrick Hancox ( 208810 ) on Thursday November 09, 2000 @06:49AM (#636257)
    do they ever listen...

    IIS is NOT in the kernel, even a little bit, really. It is a userspace series of applications that are executed in the context of one or more service accounts. (No, the account(s) do not have to be given admin privileges.)

    IIS is faster in some cases for 2 reasons:

    1. IIS is highly multithreaded, Apache is not.
    2. IIS caches damn near everything, Apache doesn't.

    please let the kernel myth die already
  • No showstoppers, evil monkeys, or, I'm betting, a kernel. Judging from the LKML, there's still a LOT of bugs in it.

    ________________________________________
  • by hconnellan ( 31637 ) on Thursday November 09, 2000 @04:33AM (#636268) Homepage
    It has khttpd, which is a kernel module to serve static HTML pages faster. This will be used by the likes of Apache, so all web servers should benefit.
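
    For the curious, khttpd is configured through entries under /proc. The following is a rough sketch from memory of the khttpd documentation, so the entry names and example values are assumptions that should be checked against the docs shipped with the 2.4 tree before relying on them:

    # Load the accelerator module
    modprobe khttpd
    # Tell khttpd where the static files live (path is an example)
    echo /var/www/html > /proc/sys/net/khttpd/documentroot
    # Port khttpd itself listens on, and the port of the userland server
    # (e.g. Apache) that gets any request khttpd can't handle itself
    echo 80 > /proc/sys/net/khttpd/serverport
    echo 8080 > /proc/sys/net/khttpd/clientport
    # Start serving
    echo 1 > /proc/sys/net/khttpd/start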
  • Or complain to the web admins concerned. Cisco have already released a fix for (at least some of) their products. ECN has the potential to be a very useful 'tool' in making the best use of 'net resources, but it cannot do this while some (high-profile) sites reject connections which signal that they can handle ECN.
  • Yes, the article is done up in vapid, breathless 'IT Rag' speak designed to make a manager think their job is exciting and that they learned something important by reading the article.

    The article could be summed up in a paragraph or three as:

    The stable version of the Linux 2.4 kernel, which was expected to be released in the fourth quarter of last year, will instead likely be released this December. While the 2.2 kernel is quite functional and adequate for many people's needs, the 2.4 kernel has some nice, long-awaited features such as support for USB and better-tuned SMP performance, along with a rewritten networking stack.

    Many of the SMP and networking improvements were made because NT 4 beat the Linux 2.2 kernel in some benchmarks at the beginning of 1999, and it's hoped that the improvements will avoid a repeat of that embarrassment.

    Several Linux distributions are prepared to release a new version as soon as the 2.4 kernel becomes available, having already prepared for its release in the current versions of their products.

  • "A little retrospection shows that although many fine, useful software systems have been designed by committees and built as part of multipart project, those software systems that have excited passionate fans are those that are the products of one or a few designing minds, great designers. Consider Unix, APL, Pascal, Modula, the Smalltalk interface, even Fortran; and contrast them with Cobol, PL/I, Algore, MVS/370, and MS-DOS."
  • What planet are you on? Certainly not one with Win2000, that's for sure. NT5 does not get the BSOD (I haven't had one, anyway), BUT I have had many more reboots than with NT4 - yes, fewer BSODs and MORE reboots. Methinks the BSOD was too embarrassing and they'd rather have the thing reboot.

    Try MP3s across a network and open the CD-ROM at the same time; I crash it every time. Thank goodness it's the office puter.
  • You are conflating two servers. khttpd [demon.nl] is a static-page-only accelerator; TUX does both static and dynamic content.
  • TUX, which serves both static and dynamic content, does not share implementation with khttpd.

    Feel free to download TUX from ftp://ftp.redhat.com/pub/redhat/tux/ [redhat.com]. Read the README file first, of course. :-)

  • Good god, that damn dealer sold me tainted crack again. I ought to sue ;)
  • No, Linus doesn't use version control. He considered BitKeeper, but decided against it due to licensing concerns -- just as well, it's (extremely) buggy. He does the whole thing manually -- yes, it sucks.

    CVS also has some serious limitations. I won't go into them here -- let's just say they exist. There are something like three projects right now working on CVS replacements to fix these things. I happen to know some of the developers of one, and wish them luck; some of the features on the drawing board are extremely nifty. Hope for a release (GPLed, of course) by the end of January.

    In short, we need a VC system that doesn't suck, and don't really have one yet.
  • The problem is the Linux community is far from homogeneous. Some people think that "success" means Linux on every desktop, or a toppling of Microsoft. Some people think success is simply building a higher-quality OS. Some people think success is spreading Free software. Others think success is scratching their itch, or getting somebody else to think "Oh cool".

    But every time one faction gets any media or corporate attention, others stand up and say "Hey, you just don't *get* it. It's not about *that* it's about *this*. Duh!". Please... Linux needs whatever people want it to need, to succeed. For RedHat that might be getting a standardized UI (users are weird and like that sort of stuff), for Linus perhaps it's adding new technologies for embedded and big iron use. For Mr. Debian Hacker maybe it's maintaining a philosophically pure Free distribution. There's no one right answer, there are many.

    P.S. Moderators: I would like a large Insightful with a side of Informative, hold the Redundant
  • Why no CVS for Linux?

    Simple. Linus doesn't want it that way. I believe (I'm not involved with kernel development, or with Linus) that one of the reasons is that, in the end, Linus controls what goes into Linux, and using just the 'patch' system allows him better control over what goes in and out of the kernel.

    Even more likely is that he likes the way he's doing it, so why change because someone ELSE wants CVS? Geek stubbornness. We're all born with it.

  • Old old news. It's been in since kernel 2.3.14

    In other words, this will be the first release kernel with it. Not everyone runs on development kernels.
  • by Anonymous Coward on Thursday November 09, 2000 @05:06AM (#636298)
    Why no CVS for Linux?
  • Seems I remember a big flap from O'Reilly about there being little more than a few registry entry differences that prevent more than 10 connections to the workstation version - oh, and several hundred dollars (which rarely plays into the comparison; the only conclusion one can come to is that most MSFT advocates must pirate their software). And yes, the 'Advanced Server' has the sliding bar for "foreground application performance boost" set over at the 'none' end of the scale to give network services priority over the GUI.

    Every time I set up RH 6.2, it asks if you are setting up a server or a workstation.
  • Just checking if anybody's reading - yes, it's really supposed to be 'Algol'.
  • As I said in my other post, I was using the term NT generically, and I simply regard W2k as a new NT version, in the same way 2.4 is a new Linux version.

    An across-the-board comparison, including other Unix variants AND Win2k, would be interesting. Think we can leave Novell 3.11 out of the list, though!

  • by Anonymous Coward on Thursday November 09, 2000 @04:01AM (#636307)
    Redhat and Suse are both on Linux v7.0. I think that 2.4 must be a typo.
  • When correctly viewed,
    everything is lewd.
    I can tell you things about Peter Pan, and the Wizard of Oz,
    THERE'S a dirty old man. - Tom Lehrer
  • by kyz ( 225372 ) on Thursday November 09, 2000 @04:02AM (#636309) Homepage
    or evil monkeys [bbspot.com].
  • I left it overnight once to see if it was just a loop, and nothing changed. This is just a Compaq with pretty standard hardware, nothing special.
  • PPP has less to do with the kernel than the ethernet card. As for the DEC Tulip, I have a Linksys 10/100Mb card based on that, and am running test 9 (haven't had time to get 10 up on it yet); I got it working just great with my cable modem at home.
  • by wiredog ( 43288 ) on Thursday November 09, 2000 @04:52AM (#636318) Journal
    Understanding the Linux Kernel [oreilly.com] has been released by O'Reilly


  • by prs ( 18535 ) <djm@raffriff.com> on Thursday November 09, 2000 @04:54AM (#636320) Homepage

    The Linux 2.4 todo list can be found here [sourceforge.net], and an article detailing the new features of 2.4 is here [linuxtoday.com].

  • Makes sense though; the only reason IIS beats Apache at certain things is its integration into the kernel (and probably why web script errors can bring down NT). I just hope they've implemented it 'properly', or our famed web server uptime could suffer.
  • by Michael K. Johnson ( 2602 ) on Thursday November 09, 2000 @04:55AM (#636322) Homepage
    Actually, TUX (the kernel-accelerated web server developed by Red Hat) does not offload all non-static requests to "Apache/AOLserver/whatever". TUX is capable of serving complex dynamic requests both with in-kernel and user-space modules, as well as directly running CGI executables. TUX can also forward requests for which no TUX module has been written to Apache to serve, but unlike khttpd, it is possible to have a full website with dynamic content entirely served by TUX.

    Forwarding to Apache (or whatever) is most useful for complex modules that would be difficult to port to TUXapi. TUXapi is event-driven instead of connection-oriented, in order to provide maximum speed. This makes TUX modules harder to write than Apache modules. Forwarding to Apache lets you take advantage of the ease of writing Apache modules when speed for that particular module is not critical, while still allowing TUXapi modules to directly handle speed-critical tasks.

    Lots more detail is available in the /. interview with Ingo Molnar. [slashdot.org]

    (I'm not dissing khttpd; Arjan (author of khttpd) likes TUX. :-)

  • by signe ( 64498 ) on Thursday November 09, 2000 @05:21AM (#636323) Homepage
    What happened to the driving force behind the quality of open source: "thousands of eyes go over the code to find bugs". ?

    Thousands of eyes is great. I'm sure thousands of eyes found lots of bugs. But nothing compares to live load. This is something I learned when working on larger systems, like the ones at AOL. You can comb the code as much as you want, and test it for weeks. There will still be bugs that only a live system will show. What I would guess is that these large Linux shops' "test cycles" include things like running live load on the systems, and pushing them harder than they can be pushed on someone's desktop.

    Also, you're way off with comparing this to MS. It's not as if you can't pull a copy of the latest test kernel and run it on your boxen and find bugs and report them. Linus is not saying that these large Linux houses get to test the kernel exclusively. All he's saying is that that's where he's expecting to find most of the last minute bugs.

    The Linux community should consider itself lucky to have large shops that will test new releases internally. I have seen so much code that has been "released" by companies that are not known for bad software, that has completely fallen apart under live load. It tends to be true that the more load we put on a system, the more obscure the bugs we found. But as obscure as the bugs were, they were showstoppers to a large system. And these were things that the software companies couldn't find themselves in their QA labs because they just didn't have access to the load that we were placing on the systems.

    -Todd

    ---
  • by kfg ( 145172 ) on Thursday November 09, 2000 @04:58AM (#636324)
    The kernel cannot, inherently, be "late to market," not only because it isn't a 'market' that it's being released to, but because by *definition* it is "due" when it is done.

    They don't get this. There is not, and never has been, a projected 'release' date in the industry sense. There is Linus saying, "I think I can get it done by. . ."

    If he does he does, if he doesn't he doesn't.

    By the same token, everybody who says something along the lines of "Linux needs (Office, IEX, Magically Delicious Lucky Charms, etc.) to succeed" ALSO doesn't get it.

    What does Linux need to succeed? Glad you asked because I'm going to TELL you what Linux needs to succeed.

    It needs *ONE* geek sitting up in his room at three in the morning going "Oh wow."

    And anyone who gets THAT gets *it.*

    KFG
  • So you want to compare it to NT, a system that is 4 years old? Afraid it won't beat Win2k, a system that is almost a year old now?

    What I'd like to see is how it holds up against the latest Unix competitors like Solaris, AIX and the *BSD variants. That's IMHO much more relevant than comparing it to NT. Or do you think it's relevant to compare it to Novell 3.11 too? ;)
    --

  • The test code is available to everyone to test and find bugs. Using commercial test labs is just an extra precaution, since they can. No harm done, and many, many eyes still examine all the bugs; some testing is just more well documented and assured than the rest. Dumping the kernel as 2.4.0 because it looks right to the more experimental users, and then having many more unknown bugs to fix, may have been passable three or four years ago, but now Linux is sufficiently mainstream that such practices would be viewed as bad form, leave new users with a bad taste in their mouths, and forever be held up by M$ people as a shining example of the "weakness of Linux".
  • by Bazman ( 4849 ) on Thursday November 09, 2000 @04:05AM (#636329) Journal
    "In short, analysts say the enhanced kernel, which at this point is a year late to market..."
    Market? And which particular 'market' are we talking about here? Can't these people not think in business terms?
  • If December is the "projected" release date, that is certainly not the same as "confirmed." Let's not inflate hopes beyond reason here...
  • This list is almost always out of date, by definition, since kernel development moves so quickly.

    That could explain at least part of it.
  • by Michael K. Johnson ( 2602 ) on Thursday November 09, 2000 @05:33AM (#636339) Homepage
    I know I shouldn't respond to a troll, but I'll do it anyway.

    Linus's point is that he has asked all the major Linux houses to (if they had not done so already -- I expect that most, like Red Hat, had already started) add their testing resources to the other testing resources (i.e. individual users and developers) already deployed. Different developers (individuals and corporations) have different strengths. Your idea that Linux corporations are not part of the bazaar, not part of the thousands of eyes, not part of the Linux community, is, well, bizarre. :-)

    The theoretical framework of the bazaar model does not imply that all the participants are unpaid for their participation. Just because Eric Raymond wrote up what he thought the bazaar model looked like to him, and because his model was recognized by many people as a good description of the process, doesn't mean that his writeup was perfect, nor does it make his analysis prescriptive; it especially does not make others' misunderstandings of his model prescriptive.

    Individual users have the widest variety of hardware -- we as individuals do the best job of finding the odd hardware support bug.

    However, the Linux development houses have a major financial interest in stabilizing 2.4 in ways that are hard to do without more capital than the average user has, trying to find corner case bugs both by code inspection and by hammering on machines with lots of CPUs and lots of memory, using stress tests and correctness tests. I expect that all the other Linux development houses are doing this; I know for a fact that we at Red Hat are doing this and, as an example, we have (through stress testing) been helping discover elusive memory corruption issues recently, and (primarily through inspection) been discovering and fixing many filesystem race conditions. Those are just a few examples, and are only from my experience at Red Hat. I'm sure that developers from other Linux houses could talk about how their bug testing work has fit into this model as well.

    Relax, we're all in this together! Sit back, relax, and enjoy the ride...

  • I predict it's gonna ship Platinum.
    --

  • Personally, I'd rather wait for a release and know the code has been tested and is done right.
    I think that this version of the kernel is "done right." Most of the development now is centered around fixing bugs. When it comes out it is going to be a very stable piece of software. I'd personally ship one of the test kernels in a Linux distro aimed at high-end machines. I'd definitely do it before I put in an unstable compiler that wouldn't compile the kernel, like Red Hat did with 7.0. I personally wouldn't use such a distribution, but many others would. 2.4.0 will be a very stable kernel. 2.2 will still be used on slower machines due to optimizations favoring new chips, especially in the x86 architecture. 2.5 will probably go through some very active development early on if Linus can come to agreements with Hans Reiser and integrate other patches into the kernel. Hopefully when they are tested they will be backported as official.
  • Read the article carefully... Linus wants to get 2.4 out the door before his third kid is born later this month. You can't delay THAT release date!
    ---
