


NT Beats Linux in Round 2

strat writes "PC Week ran its own benchmarks, with Mindcraft, Microsoft and Red Hat. The margins were tighter this time, but NT still fared better. They specifically mentioned the lack of multithreading in the Linux IP stack as the main bottleneck. I wonder how 4.4lite would have fared? "
This discussion has been archived. No new comments can be posted.


  • Post it again!


    Linux failed a third (and fourth, and fifth) time! Oh my God! We need to ditch this loser OS worldwide before it destroys us all! And embrace our deliverer and saviour Bill Fucking Gates.

  • by Anonymous Coward
    Sure I can run my Toyota's engine at 9000 RPM and beat out Corvettes in short-term races. Of course my car will fly apart and die under such operating conditions before long.

    I have old Linux 1.2 boxes running as ftp/http servers and others exporting nfs filesystems that have been up for over a year. They just keep chugging along without crashing. NT has never even come close to this level of stability and reliability.

    Linux is like my Toyota, it may not be the fastest, but it runs and runs and runs and has outlasted many a faster car. Oh yeah, it also does its job on 386s and 486s. NT won't even install on this hardware even though this hardware is more than fast enough for this type of network usage.
  • by Anonymous Coward
    Rob needs to generate hits for his advertising buddies, so he reposts these inflammatory articles every couple of days. It's flame bait, pure and simple. Ignore it.
  • The German computer mag c't decided to do their own, more detailed benchmarks. Their conclusion was that the PC Labs/Mindcraft results sound plausible, but under more realistic scenarios, Linux & NT are neck and neck, or Linux beats NT soundly. Find the English version of the article here [heise.de]
  • BSD might have performed better for this test; however now that the problem is known I am sure there is going to be a patch released shortly that will fix the problematic locking. I think these tests were very useful; they pointed out several places where Linux could be improved / tuned.
    You forget. According to the ZDnet article, Microsoft has addressed this issue. They researched why their serving of files via Microsoft Networking was slow, and saw that a little tweaking could take care of it more easily than a code rewrite. With a commercial package, there are times that it is simply easier from everyone's perspective if you just tell people how to fix it than to try to deal with a new situation through a maintenance release. Perhaps this will be something included in a service pack of NT.

    Shaking your finger at them and saying "Bah! You're faster than we are, but who cares, because we continue to make changes that get us closer to where you were with all of your problems!" gets us nowhere. Be constructive. Think of Microsoft as the best teacher we could have. See what problems they have with their OS, and then look at Linux and see if maybe it doesn't have some of the same problems. If yes, then fix them. They've been around longer and so are bound to have experienced more problems.
  • Posted by hurstdawg:

    I think before we get a bunch of /.'ers yelling about how this is more Micro$oft FUD, we should step back and realize how good this will be for Linux. Ever heard the saying, "What doesn't kill you makes you stronger?" I think that applies great here. Micro$oft is now convinced that they are #1, and that Linux will never be able to compete. But, if we take this as a learning experience (I know, I know, you thought you were done with that in college... :) we can improve on our weaknesses, turn around with the upgrades (much faster than Redmond could) and blow them out of the water down the road, w/out them even expecting it. So I think it's important we don't just think of this as more FUD, but as a chance to find out where we need to improve, and to implement those improvements.
    --just MHO
  • Posted by grabes:

    I don't get it anymore. Now, I will say that speed is a major part of choosing an OS. I work for a company where we run all BSDI systems, and unfortunately one NT box. That NT box only hosts approx. 75 web sites. Every time someone publishes a page to that thing I can hear the drive grind from across the room. It has 256 megs of RAM. It has what I like to call the spiraling Windows syndrome: it's really fast when you first install it, and then it just keeps getting slower and slower. However, I am getting away from my main point. I have never judged NT to be a good operating system. For a server you need good remote administration (Linux: excellent; NT: virtually none). You need support (Linux: excellent; NT: 1-900-BILL-ME). I run Linux personally and I have no complaints; I like it much better than the $1000 BSDI I also use. However, older Unix lovers are hard to sway from their personal choice. Now, I'm not a C expert, but if the major kernel developers would pull together I think we could have a kick-ass OS by the end of the year, but that's in a perfect world. Instead we will sit here and cry about it for a few more weeks.
  • Posted by Synsthe:

    >You know, it's really funny to see all these
    >Linux Advocates tossing out FUD just like their
    >MS adversaries do.

    It's a common occurrence here on slashdot, as I'm sure you already know. =)

    Yup, it's true NT beat linux, but NT beat linux in benchmarks only; How long was each server up? A few minutes or hours before they ran each benchmark? What would happen if they ran the same after each had been up for a few weeks? What about after intensive use for a few weeks, the same kind a high traffic server might undergo? What about if halfway through you decided to install some new software? How'd they turn out then?

    Point being, benchmarks, although they give you an outward look at things, don't give you the real picture. They're only a very small part of the equation, and they cannot compare stability, or anything of the like.

    They can however show some areas where Linux needs improvement; good for them - because the developers working on Linux actually take heed to such things and do their best to improve areas that are lacking.

    Mark Waterous (mark@projectlinux.org)
  • Posted by Synsthe:

    Microsoft understands that Linux is a threat. They are allocating resources to deal with it. Don't turn your back on them.

    And this is exactly what will give Linux the edge to catch up in any areas it's lacking. MS has one objective: destroy competition. They can't have that - it's too hard on them to actually compete fairly. This is not good. They must do away with the competition.

    Linux, on the other hand, has no such ideal - the developers working on it don't give a rat's ass about trying to "destroy MS" (however, a lot of slashdotters, various trolls from each side, and the like, seem to have that as a target). While MS is expending resources and wasting time trying to do in Linux, Linux continually improves right under all the pressure.

    Like I've said before, neither should "win"; a marketplace without competition stagnates. However by taking this stance, they're just giving Linux the time it needs to narrow any gaps.

    Mark Waterous (mark@projectlinux.org)
  • Posted by Synsthe:

    If Linux is to succeed,

    If Linux is to succeed? I think you're confusing succeed with "beat MS". Linux has already succeeded; it's a very stable solution, a very powerful solution, and does its job very well for many, many tasks.

    Linux just needs to continue improving, and adding support for newer, greater things (as long as it doesn't sacrifice any of its stability or power to do so), and that's what it is doing.

    Linux has succeeded as far as Linus' original goals for it already.

    Now it's just going to keep getting better.

    Mark Waterous (mark@projectlinux.org)
  • Posted by rdobbs:

    I've seen soooo many postings about this subject where the author has made it glaringly clear he or she doesn't even know what they are talking about - and they then lie and make up credentials to back it up.

    Well, I'm not lying - and I'm not about to claim I know everything... But, here's a little info about me before I start smashing myths and urban legends:

    I am 22, and work as 3rd Shift Systems Operator at a Major Life Insurance firm. I'm a high school graduate, with the following certifications:

    * MCP - Implementing and Supporting Windows 95
    * CNA - Certified Novell Administrator - Netware v3.12
    * A+ - Industry Standard in Computer Technician training. Proved proficient in MS/DOS, and basic PC Hardware

    ...And I have the following working experience in the following systems:

    -Macintosh (PowerPC)
    -MacOS 8.6
    -FreeBSD 2.2.2Release
    -RedHat Linux 4.2
    -RedHat Linux 5.0
    -RedHat Linux 6.0
    -OS/2 2.1
    -OS/2 3.0
    -OS/2 4.0
    -Windows 3.1
    -Windows For Workgroups 3.11
    -Windows NT Server 3.51
    -Windows 95
    -Windows 98
    -Windows NT Workstation 4.0
    -Windows NT Server 4.0
    -RS/6000 (PowerPC)
    -AIX 4.2

    Whew... With that out of the way...

    • RAID:
    I've seen a few people question the reason to even test RAID performance on Linux, as it is a current weak spot. Big business uses RAID technology to support heavy load, and to break up the work of the filesystem among a group of equally capable drives. So it IS a feasible test - and just proves that Linux needs improvement in this department.

    • Multiple Network Cards:
    A few people have said that there is no REAL-life application for multiple NICs. Linux's support for multiple NICs is still immature - but it's promising (from what I can tell). I've built machines, and also maintained machines with multiple network cards - it IS a real-life factor. Another area of improvement...

    • Single Processors vs SMP:
    Businesses will mostly use single-processor machines - unless they are doing database-related applications, or other heavy CPU-utilizing activity, in which case they will usually spare no expense to get the proper machine. Linux has primitive support for SMP - but NT has better support. Linux needs to improve, so what.

    • Stability and Scalability:
    These are both important issues in the business world. Linux currently has pretty good ground as it comes to stability. However, both NT and Linux get pounded by Solaris (so I've heard) - again, a learning experience...

    I could go on and on and on... But why bother...

    I use RedHat Linux 6.0, as well as Windows NT, Windows 98, and even MacOS. Each platform has its own strengths and weaknesses - and any honest-to-goodness computer professional stakes his/her career on this maxim. Don't get caught in the hype war - 'cause it will only lead down the road to eventual failure...

    Ray D.

    PS: I will probably get flamed for this - but as I have learned - that is really the only thing left of the "Professional PC Person" world... Maybe I should have become a doctor? :P
  • Posted by rdobbs:

    Bravo! Level-headedness in a forum of Chaos! :)
  • Posted by airborn603:

    Most wouldn't expect much from an open source OS compared to an over-$500 "workstation", even if it is made by Microsoft. Therefore, Linux is incredible compared to NT, and all those tests by Microsoft-owned-or-hired companies about "Linux being slow", etc., should not affect the GNU/Linux/UFie/Anti-MS community in any way.

    * - I suspect that some people at ZD don't know this and continue to write columns about the bugs in IE5.
  • Posted by The Technical Revolutionary:

    Linux hasn't lost anything. Last time I checked, I wasn't running Linux because it had the fastest web server, and I could care less how fast Samba is. We need to stop chasing Microsoft's tail here! Microsoft is just trying to pull Linux into their overbloated marketing/benchmark/it-don't-matter-if-it-works-as-long-as-it-looks-good-on-paper world! Linux is a good, free, stable and open-minded/open source operating system. It's OK if they clean up some problems in the code, but not at the cost of forgetting what Linux is all about!!
  • Posted by saberk:

    I agree... This whole benchmarking shit is very annoying... The only thing Linux can claim now is stability. And can someone tell me who let the world know that Linux is faster than NT before testing and benchmarking it. Our (Linux) ass is kicked just because we claimed to be faster and now M$ is "proving" that it's not the case... Maybe it's time to focus more on tuning.
  • Exactly. I read somewhere (link from lwn.net?) of someone who had done a series of tests, and found that Linux does not perform well on load balancing across multiple NICs. With a single gigabit NIC, Linux would have done better; how much better, or how much better NT might have done, I don't have any idea.
  • It was probably reposted so those who read /. during the week can see it.
  • Maybe this is a lesser-known fact because it didn't appear in the PC Week stories, but the FreeBSD people were there too.

    Mike Smith's ZD labs test update [freebsd.org]

    Webbench results including FreeBSD and Solaris 7 [deja.com]

    The bottom line is of course that, given this configuration and test setup, neither of the free operating systems has much to boast about.
  • I know I entered the correct URL but Slashdot keeps adding an extra space in there. Just remove the extra character and it should work.

  • by Kostya ( 1146 ) on Monday June 28, 1999 @06:46AM (#1828880) Homepage Journal

    Ok, the results are in. The benchmarks, when administered properly, show that NT outperforms Linux in every category.

    We now need to prove the open source model instead of confusing the issue by nursing our bruised egos. I'm already seeing the "denial" posts that hark back to OS/2 days. "Well, they may have gotten better numbers in this, but that doesn't really matter." Yadda, yadda, yadda. What is wrong with you people? Can't you accept the results? Quit playing games and start making progress.

    Some would love to argue with me until they are blue in the face about how this doesn't mean NT's better, etc. Fine. That is NOT what I'm talking about. The benchmark has exposed an architectural flaw/oversight. We need to fix this in order to reach the performance numbers we need to be a server operating system. So let's do it. Let's fix it. Start downloading and start coding.

    I have linux installed on every one of my file servers. I will be able to fend off the criticism for now. I have faith in our community and in our talent. However, if we continue pointless arguments and pulpit pounding about how linux really is better, instead of making sure that it IS better, we're sunk!

    And if you can't code, you can always test. Get involved. This is your operating system.

    "Doubt your doubts and believe your beliefs."
  • This guy also needs to do some more research on some things..

    A) Why would anyone use Samba on a T1 line? Samba is a file server, the majority of which are used to serve files via a LAN, NOT a WAN. LAN speeds go from 10 Mbps (10Base2) to 1,000 Mbps (Gigabit Ethernet).

    B) The speed limit on I-93 is 65. I have 2 cars, a Ford Taurus and a Porsche. Both can do the speed limit, a little over every once in a while, EASILY.. Which do I choose WHEN COST is NOT the issue? (I am NOT saying that NT is a Porsche, but the tests make the comparisons look similar)
  • The test was not as screwed as you made it out to be.. They had SIMILAR results. The first test didn't tune Linux as well.. BIG DEAL.. Linux got its butt kicked from a technical standpoint, hands down.. It's a sad day, but I was afraid that this is EXACTLY what was going to happen..
  • Finally, a voice of reason.. We got our rears kicked. Now, to fix it and come back stronger..
  • A) Scripting:

    Please do a little more research.. ALL of these are available on NT already. Oh, and (*PPSSSTT*) ActiveState is working with Perl and Perl alone. If you want bash, sed, awk, grep, and all of the toys, you should be kissing Cygnus.. Oh, and did I mention gcc is there as well? ;-P

    B) X thru a 28.8 line? You're lying really badly if you say that it's any better than ANY sort of dial-up remote admin for NT.. MANY systems can be admined via telnet. And most of NT can be admined from a web browser (As can Linux)

    C) WWW Server Migration: Moot point, nothing to do with OSes here.. Just servers.. I can go from Apache/Linux to Apache/NT with no problem. Does NT now rock? ;-P

    D) A sysadmin who says 'Like Unix or AS/400's' scares the hell outta me.. Lemme set you straight.. Unix = OS type. AS/400 = hardware platform. As of last month, I worked at UPS, who had the largest installed base of RS/6000s and AS/400s in the friggen world, so please don't try to impress me. Also, 3 years and working as a sysadmin designing a multimillion dollar ecommerce project. Doesn't add up.
  • The problems that people are finding have nothing to do with what you're saying.. Linux hits a 200 mbit limit, along with an approx 2,000 transaction limit. This has to do with task scheduling, and the non-multithreaded IP stack..
  • I disagree.. If NT is perceived as being able to 'handle more' than Linux, then people will want the system that they feel 'can do more'. Same reason why there's so many jeeps, trucks, and sports cars. People don't NEED them, but they perceive that they need to buy what can 'do more'.
  • I always wondered this back in the days when WordPerfect, Lotus 1-2-3, etc. etc. ad infinitum, ruled the small business world. ZD pubs always pushed Micro$~1 apps instead.
  • A souped up '68 Camaro will get you from NY to LA much faster than a '99 Kia will...

    ...but your chance of death is much greater in the Camaro.
  • Actually, it was not striped across a set of drives. It was 4 partitions on the same drive. That is a BIG difference. And, I don't think it was unfair for MS to do that. They are using standard features of the OS to optimize it.
  • I'm no Linux guru and will be the first to admit that I am just a few notches over the knowledge level of a Windows 9* luser... But is Samba not mostly for use in connecting Linux boxes to Windows shares? If you're serving up a bunch of pages, wouldn't it make much more sense to use NFS or something that is more native to the operating system for sharing between machines on a LAN? Again... I'm no Linux guru.. just a mildly advanced user. (very mildly advanced).

  • One thing the Linux community forgets is just how STOOOPID the average person is. And, no I won't tone down the arrogance. Here's why:

    These tests suggest the equivalent of "my car will beat your car in a drag race". Pity, since many of us thought it wouldn't, but facts is facts.

    The problem is that we have an analogous situation where those that need to find a reliable car to get to and from work daily decide which car to buy on the basis of which one would likely win a drag race. Hence, STOOOPID.

    Certainly, the problems identified can, and should, be fixed. They don't strike me as biggies, actually. Unfortunately, M$ will leverage this for all it's worth, and the idiots out there will believe them. Let us not get caught up trying to perform damage control. Our response should be, "Yeah, yeah, whatever -- if you want M$, then buy M$. Me, I still wouldn't touch it with a ten foot pole."

    Smart money already knows what the better deal is, or very shortly will be.

    GNU and Linux have always shone when it came to helping the little guy, and the big guy will continue to throw money away. Money can buy a lot of ignorance.

  • More like the FreeBSD folks know that it hits the same limitations as Linux. FreeBSD, like Linux, is a work in progress. Both will overcome this hurdle and continue to be better than NT in many ways.

    May the source be with you!

  • ...but you might want to read an "objective" test done by the German magazine "c't".

    The (yep!) English article can be found here [heise.de].

    Sorry, there is no (I wonder why) German page available; the results can be found in c't 13/99 on pages 186 and following.

    Short summary: This test shows that Linux+Apache outperforms NT as a webserver if there is only one NIC to handle. It also shows that the number of processors doesn't matter that much when using Linux as the host system for Apache.
    NT runs much better than Apache on Linux when more than one NIC has to be served - which might be the result of not having a multi-threaded IP stack under Linux.

    Read ;-)


  • Does this test take into consideration real-world applications? Sure, you can beat on any machine for 5 days and hope it doesn't crash. What would happen to the NT machine in 10 days? 20 days? Would it BSoD itself to oblivion? From my experiences, it would, while the Linux box would still be chugging away.
    At any rate, even if NT is a little bit faster, I'd still run Linux, because in the long run, Linux DOES have the edge in overall performance. Linux doesn't crash for no reason. Linux doesn't require a reboot every x days to keep itself happy.
    When I reboot a machine, it's because I want to, not because I have to.
  • While a car can be bought as a showpiece, a computer usually isn't. So when buying a car, if money is no issue, you buy the car you like better. But when buying a computer, even if money is no issue, you still have to buy the computer that can do what you want, and do it the way you want.
  • yeah, my assumption is that people are thinking about what they are doing. but mostly it looks like an inappropriate assumption :)

    and here we have the main problem with this benchmark (and maybe all other benchmarks): a benchmark tells people that "in this particular setup, doing this particular thing, product X is Y times better than product Z". and people take it as "product X is always Y times better than product Z".

  • Multithreading could help with a single CPU and multiple NICs. A single thread could get blocked waiting for I/O; in this case a second thread could use the CPU to service requests from another NIC. Whether this is really advantageous depends on a lot of factors, including whether the IP code ever does block, and what other things can be done when one IP thread is blocked (what resources are locked during the blocking).

  • FreeBSD has excellent SMP support. In fact, that's something they pride themselves on.

  • The best lesson to be learned from this exercise is that we all benefit from the showdown between NT and Linux. Regardless of who "wins" the benchmark tests, we can quickly pinpoint the weaknesses of each OS and fix them. Even NT users will benefit, if MS can react in a timely fashion. This is clear support for the DOJ's case against MS, and support for open source development. I'm looking forward to the next showdown - bring 'em on!
  • I think of it like this..

    Do you know any sysadmins that have switched to NT from Linux(or anything else for that matter) of their own free will? I don't.

    People only use NT because they're either afraid of Linux (no crime there) or pointy heads make them.

    Maybe my impressions are wrong, but so far, everything I've seen and done indicated that once you've gone over to *nix, you never want to go back..

    Not to mention the other(more common) end of the spectrum. I'd like to see some benchmarks for P133's with 64 megs of RAM, or 486-66s for that matter..

    My little server is a cyrix 166 with 32 megs, and I only upgraded it from a 486-66 because I wanted to play mp3s...

  • It seems that Micros~1 inspected the linux code and found a weakness. They then publicized a benchmark that exploited that weakness. Good strategy.

    Will linux ever win benchmarks against NT if Micros~1 just benchmarks NT against linux's weaknesses? They will just find another one.

    The good side is that Micros~1 is actually funding linux development by finding weaknesses. Perhaps linux will one day be perfect and there won't be any weaknesses left :-).
    The bad part is that linux may never win any benchmarks, since only Micros~1 can afford to test them.

    E.g., why haven't we seen a gigabit Ethernet benchmark?
  • Nope. Samba is meant to allow Linux and various other OSes to create shares which Windows clients can then use.

  • This is a very important issue.

    The German computer magazine c't did benchmarks of Linux against Windows NT and found that Linux trounced NT in every case except when Linux had multiple Ethernet cards.

    If you're using Linux as a workgroup or departmental server, there's no point in having multiple ethernet cards in it. In those cases, Linux is still faster.

    The smart reseller benchmarks show just how much faster Linux is when you're dealing with situations that most businesses will have to deal with.
  • by malraux ( 5479 ) on Monday June 28, 1999 @05:34AM (#1828906)
    http://cs.alfred.edu/~lansdoct/mstest.html [alfred.edu]

    Kinda puts the whole shebang into perspective. Watch out for the twist of irony at the end.

  • Again with the free beer stuff? If NT is faster - whether because of raw performance, or because the cost of training is lower since it's easier to use - then it saves money. Time is money too.
  • Seriously... This was supposed to be a fair test, and until the RH team that was attending (and configuring the Linux machines for) the test says otherwise, we'll have to accept that.

    However, I think a Mindcraft III would fare _much_ better now. The fixes from Mindcraft I didn't make it in time for Mindcraft II (see an older article on Slashdot), and now the RH guys found even more things to fix. In addition, as the article pointed out, Apache is getting better, too. (What about Boa, or any other single-threaded web server, BTW?)

    Sorry, folks, this was a fair test, at least when it comes to biasing. However, as I've said, Linux was a bit unlucky with the time and date. (Microsoft, on the other hand, had already made their improvements, based on the data from the previous tests... Guess they've invested a lot of money in fixing it quickly...)

    All in all, however, this was a (minor) win for Linux -- the results were nowhere near Mindcraft I, even with Microsoft's new improvements.

    /* Steinar */
  • To me, multithreading here just looks like a `buzzword', to explain something more difficult to the end readers.

    Please get this right: A well-written, single-threaded function will beat (or equal) an equally well-written, multi-threaded function in _all_ cases, simply because somebody has to do the switch between threads (processes) in all cases. In the multithreaded version, the OS/CPU has to do it. _Multithreading_ _doesn't_ _mean_ _extra_ _speed!_ (That's why I wrote my single-threaded FTP daemon, BetaFTPD (search Freshmeat for it if you're interested). Now, I just had to put in a plug for my own (GPL'ed) `product'... Don't make that affect your judgement, please...)

    Now, got it right? :-)

    /* Steinar */
  • -> Process A requests a read() from hard drive 1. The kernel blocks A and begins handling the read.

    Well, you see, Linux and all other Unixes have a non-blocking mode on every file descriptor... You can turn it on with the FIONBIO ioctl (or the O_NONBLOCK flag via fcntl()).

    /* Steinar */
  • Yes, of course, but as far as I understood the article, this was true for the uni-CPU version, too.

    BTW, isn't 4 Ethernet cards a bit unrealistic? Wouldn't a single gigabit card be much better?

    /* Steinar */
  • -> First off, spawning a thread is a very lightweight operation when compared to fork()'ing a child as most web servers do (including apache).

    And not spawning off a thread is an even more lightweight operation! I'm not saying that Apache does it right (even Linus has said they don't).

    /* Steinar */
  • Remember that every process (as you referred to, threads are only lightweight processes, also on Linux) takes up additional memory. Less memory for cache, or ultimately more usage of swapfile.

    /* Steinar */
  • -> But doesn't this make the code more complicated than it would be if it were multithreaded?

    Yes, and that's why almost all server daemons today are multithreaded (often via a fork(), or via inetd). But the complicated code has to be somewhere, remember. With this step, you get much more control over what is executed where, and you can share much more code among the processes. This, combined with a lot of other things, keeps the memory requirements down, for one thing. Instead of a whole new process, you'll need a couple hundred bytes to hold the information.

    To sum it up: Yes, the code is getting more complicated, but I think it's worth it. Much of the same framework can be shared between applications, also, so it's not so much work as you'd expect. The only thing that gets slightly more difficult is the command parsing -- if you miss a char, you can't just issue a (blocking) getchar() and wait :-)

    /* Steinar */
  • One word: eBay


  • >> FUD, FUD, FUD....
    >> NT has no useful scripting, Linux has everything you can ever need
    > Windows scripting host will let you do almost anything using either VBScript or Javascript. There is also Perl for Win32.

    Hmm... Windows NT scripting is still far behind that of Linux/Unix; even the most hardened NT advocates recognise that. Windows Scripting Host can't compete with bash, zsh, (non-alien) Perl, awk, sed, etc. etc. That's why ActiveState are being funded by Microsoft: they have a lot of catching up to do (but respect to them, 'cos I couldn't face not having Perl when I use NT).

    >> you cannot remote administer NT (I'm not talking about fast connections here, where you could use VNC; try to administer NT over a modem line. Good luck)
    > I've used both PC-Duo and Carbon Copy to do remote NT administration just fine over a 28.8 line.

    Hmm... like you'd get those through a halfway decent firewall. I have to go down 5 floors to admin our servers from the console, because no remote admin tools were included, and being a lowly System Administrator I wouldn't be able to get funding for a remote admin tool, let alone get it through our firewall.
    Sorry, but NT lags sooo... far behind on this one. I can use X through secure shell from a trusted host with X.509 certificate authentication to my Linux box; in fact, I haven't touched my Linux box's keyboard since I moved it into a different office.

    >> once you've made your decision to use IIS, you're completely stuck when it comes to changing to another type of webserver, or sometimes even when you want to transfer sites from one IIS to another
    > I've never had any problem with that... the MS tool works well, and it's not like on a Linux box you can move seamlessly from Apache to Zeus, for instance.

    I have. It can be a pig (not necessarily always), and any move can be damned horrid, but in my experience so far the UNIX (Linux/AIX) stuff is simpler to migrate because I am in more control.

    To put this in context: I am working on a multimillion dollar e-commerce project using NT, AS/400s and RS/6000s. If we had used Unix or AS/400s instead of NT/Intel it would have been completed by now, and more scalable to boot.

    I have been administering unix for several years and have 2 years + 1 year industrial placement of my BSc Honours Degree in Computer Systems & Networks under my belt so I know what I am talking about.


  • Why was RedHat chosen? A proven stable Linux server setup by a Linux professional would have been a fairer representation. In my experience, the workhorse of Linux has always been the Slackware server. Even Caldera or SuSE would have been a better choice. The NT machine was tweaked by NT professionals, so why not at our end? RedHat is not exactly known for its installation ease and stability, and usually requires a higher degree of expertise, so the test was handicapped right from the start.
  • How can you say that? The library differences alone make or break it. Let's see: we have libc5 5.3.12 vs 5.4.46, glibc 2.07 vs 2.1.1, BIND v4 or BIND v8. Were all the apps compiled for the current libraries, dynamic or static? Statically linked binaries are larger, but avoid dynamic-linking overhead. (FreeBSD, one of the proven performers, is a.out (2.2.8).) The number of differences is staggering, and their effect on performance absolutely critical - but that's the beauty of Linux. Build it your way.
  • I would be inclined to agree with you, if it weren't for the fact that the PHBs and bean counters are trying to pick between NT and Linux.

    If the management types are asking the question "NT or Linux", then why shouldn't the Linux community be interested in any data that could sway somebody's answer?

    The "right way", of course, is to pick the OS and software that does exactly what you need, etc. etc. Unfortunately, for arbitrarily complex real-world situations, with arbitrarily complex management, personnel and customer pressures, there isn't always the time to find the "perfect" system. So instead, people rely on benchmarks, press reviews, word-of-mouth, etc. And also, unfortunately, management often trusts the word of the outside world more than the word of its own employees. (Sometimes there are seemingly rational reasons for doing so, such as "suppliers X, Y, and Z provide more solutions for platform A than platform B, which is more important to me than the fact that Frank prefers platform B because 'he likes it better,'" never mind the reasons why Frank might like it better.) And so, monitoring the press and actually responding constructively (e.g. fixing performance problems, etc.) is a good thing.

    The key, of course, is to keep sight of what matters. Let's not get trapped into optimizing things that don't actually matter. For instance, optimizing Apache for sheer speed while compromising stability would be a bad thing. Or worse, bloating the kernel with benchmark-specific tweaks, etc. (After all, isn't that how Microsoft attains its legendary (lack of) stability?) Remember, while these sorts of highly-publicized benchmarks may point out some weak points, they could also just be serving as chaff to block the real threats from our radar screen.

    Perhaps that was the idea you were trying to get across, AC?


  • Isn't this the exact behavior that SCSI disconnects are supposed to help?

    I'm pretty sure the kernel doesn't block waiting for the I/O to complete. It issues the request to the device and puts the blocked task to sleep while it goes about handling Everything Else TM.

    Now, in the case of IDE, if both drives are on the same IDE chain, you're going to block waiting for the device driver. If they're on different chains, then they can run independently.


  • Well, you see, Linux and all other Unixes have a non-blocking flag on every file descriptor... It's set with the FIONBIO ioctl (or, equivalently, the O_NONBLOCK fcntl flag).

    But doesn't this make the code more complicated than it would be if it were multithreaded?
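For the curious, here's a minimal sketch of what that non-blocking flag buys you: one process polls its descriptors instead of dedicating a thread to each, and a read that isn't ready simply returns instead of stalling the whole server. This is Python for illustration only (the servers in question are C), and the function name is invented for the example.

```python
import select
import socket

def serve_one_round(server_sock, timeout=0.5):
    """Accept and read from whichever socket is ready, without ever blocking."""
    server_sock.setblocking(False)       # same effect as the FIONBIO ioctl
    readable, _, _ = select.select([server_sock], [], [], timeout)
    for sock in readable:
        conn, _addr = sock.accept()
        conn.setblocking(False)
        try:
            data = conn.recv(4096)       # returns immediately, ready or not
        except BlockingIOError:
            data = b""                   # nothing buffered yet; retry next round
        conn.close()
        return data
    return None
```

It does make the control flow more involved than "one thread per connection", which is exactly the trade-off the question raises: you get to skip thread overhead, but you have to manage partial reads and readiness yourself.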

  • by eponymous cohort ( 8637 ) on Monday June 28, 1999 @07:02AM (#1828922)
    When people choose to dump their MS OS and go with Linux, I've never heard "because Linux serves more pages per second than NT" given as a reason.

    The reasons are usually one of the following:

    Better Stability
    Tired of being locked into MS-only solutions
    No license fees
    Doesn't need an expensive hardware upgrade
    No annoying features like animated paperclips
    Linux doesn't pretend to be smarter than you, and then fail at it.
    No registry which causes more pain and grief than it actually solves.

    It doesn't really matter that NT beats Linux in this test. Sure, we should fix the problems, but NT is still NT, with all the liabilities it has become famous for.
  • Both the PC Week and Mindcraft test prove NT can outperform Linux at static webpage delivery. But this is not interesting to me. Most website development I'm involved with involves dynamic webpage delivery.

    Without a doubt, Linux (or any Unix-like system) would squish NT into the dirt if they tested dynamic content generation... especially if they did it with CGI. By design, NT is process-heavy and has considerably more overhead to deal with when forking processes (i.e. for CGI scripts).

    While I agree the Linux and Apache coders have some work to do to get static delivery up to and past the performance of NT, I'd like to see some numbers for dynamic content delivery (the reason I use Linux and not much else).
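For reference, classic CGI means the server forks and execs a brand-new process for every hit, just to run something like the toy responder below. The script and its names are invented for illustration; nothing here comes from the benchmarks themselves.

```python
#!/usr/bin/env python
import os

def respond(query_string=""):
    """Build a tiny dynamic page; CGI hands the query string over in the environment."""
    params = dict(p.split("=", 1) for p in query_string.split("&") if "=" in p)
    name = params.get("name", "world")
    body = "<html><body>Hello, %s!</body></html>" % name
    # A CGI program emits its own headers, a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(respond(os.environ.get("QUERY_STRING", "")))
```

The cost being debated isn't in this code at all - it's the fork/exec and interpreter startup that wraps every single invocation of it, which is precisely where a process-heavy OS pays the most.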

  • in the article, and acknowledged this problem:

    After having dealt with static pages, both systems also needed to prove their ability to handle dynamic contents. The smallest common denominator for tasks of this kind is the Common Gateway Interface (CGI). [...]
    Of course, this configuration is not first choice for generating dynamic HTML pages on a Windows system. An NT administrator would normally prefer Active Server Pages (ASP) and VBScript, which is why these results should rather be read like a CGI or Perl usability test for NT than a proper system comparison. Since Microsoft's solution isn't portable to other systems we unfortunately couldn't make the reverse comparison. And the figures for CGI and ASP aren't really comparable due to their entirely different underlying concepts.

    But the main point is the overall impression. It's a much more differentiated view than the other tests. Particularly interesting, IMHO, are IIS's problems with delayed requests (3 sec). This seems to be the downside of the threading approach.
  • Could you please be more precise?
    C't is always biased in favor of the more advanced technology (guess what the "t" in c't stands for), and that means they are open to BeOS, Linux/*BSD, (in earlier times) OS/2, and, as they try to be comprehensive, MacOS (X and below).

    As for the c't tests, I've yet to find someone who proves serious factual errors therein.
    On the contrary, c't tests tend to be very sophisticated and neutral. No wonder, as c't employs the best computer tech journalists in the German-speaking world, probably even world wide.
    Maybe you'd recall

    the PIII processor ID software switch by processor guru Andreas Stiller,

    the SoftRAM scandal, where everybody from APC to Ziff-Davis hailed it as an ingenious way to get cheap 'software' RAM, until c't showed that the company had just added non-functional (unused) code to the standard MS libraries...

    Well, there are many other things I could mention, but in general it's safe to say c't is one of the best computer mags worldwide, certainly the best I know (and I know quite a number of US/UK/French ones as well).
  • This isn't a reality check ... because you missed an obvious application: intranets.

    6 T1s is not as fast as 10 Mbit Ethernet... but many servers are connected to 100 Mb or Gb Ethernet. At that point, your bottlenecks are going to be:

    a) The internal I/O architecture of the server hardware.
    b) Processor speed and blocking within the application (multi-threading in the kernel).

    The web-server example was contrived ... unless you're running a site like yahoo (top 1% of servers), a web server serving external clients won't bog the system.

    The test should have been done with a database server (Oracle for NT and linux) or a Notes server (also available for both). Then there's a better mix of processors/drive access/network.
  • You know, it's really funny to see all these Linux Advocates tossing out FUD just like their MS adversaries do.

    I've been reading PC Week for almost a year now, and I don't find it to be particularly biased in either direction. I use both Linux and NT on my home and work machines, so I'm not an MS crony (as I'm sure I'll be accused of in reply).

    I'd have to say that the tests were fair and NT beat Linux. The first set of un-audited tests weren't. I believe their results are true. No amount of FUD from this community can dissuade me.

  • It's well known that ext2fs is slow, inefficient,
    non-scalable, and not very fault tolerant.
    I'm not surprised at all.
    What is surprising is that the Linux IP stack, which was supposed to be very efficient, proved to be quite ineffective.
    It would be interesting to see how FreeBSD would deal with such a load; FFS or Vinum are much faster filesystems, and the BSD stack is much more efficient. We'll see..
  • I know. I have 2 servers right here with uptimes
    in the hundreds of days. I only recently noticed that they were only using 64 MB when there was 128 MB of RAM.
    With the 2.2 kernels, I can testify that problem is no more.
  • Why do you BSD posters commonly act holier than thou?
    Think about how you feel when Windows NT is `in the limelight' in some area even though Linux is a technically better solution.

    Ok, now you know how the BSD folks feel when Linux gets the limelight even in circumstances where a BSD is the technically better solution.

    There's no real reason that Linux should be in the front of everybody's minds, and BSD hardly heard of at all. It's just marketing; Linux folks are a lot louder than BSD folks.


  • I like how the exchange went between Steve Jobs and Bill Gates just as Jobs realizes that Bill had stolen from the ideas that Jobs had stolen from Xerox:

    Jobs says: "Our stuff is better..."
    Gates replies: "You're missing the point. THAT doesn't matter..." (something like that, anyway :)

    Cool irony huh?
  • I have heard that Windows NT uses a BSD stack. That technology has been around for a while, and the Linux networking code derives from a similar background. How can there be such a performance difference if Bill (Gates) snagged BSD code? Could someone please explain this to me; I may not have my facts straight.

    Also, NT uses 7 MB MINIMUM just to load (according to NTLDR), plus CPU time to run the GUI, plus disk access time for virtual memory. Is their GUI so well written that it minimizes the usage of such system resources? Well, I am a computer tech and I have seen what NT can do. I optimized an NT system where I work, and later I installed 95 on the same system and optimized it (disk cache sizing, VM adjustments, path/file cache, etc.). You would think the Pentium 200 I set it up on (32 MB RAM / 3.2 GB disk) was a Pentium 100 when it ran NT.

    I know NT should probably have more disk and double or quadruple the RAM, but why? Why do you need such a beefy system just to run the stupid operating system? Why a GUI? It will just sit there most of the time doing nothing but wasting resources. How can NT still come out on top? I want some answers.
  • This was probably a pretty unbiased test.

    Now, instead of whining and bitching, let's get serious and fix the problems. This is not the last time "Bill's Boys" will beat us, but if we keep on whining instead of fixing the problems we may just give up here and now! We have to remember a few important things:

    1) There is no such thing as bug-free S/W. Linux is good, but not perfect.

    2) Linux and OSF/FSF have come a long way in a short time. Let's not lose focus now.

    3) "Bill's Boys" will of course look at this and try to be even better where Linux is deficient.

    4) Linux is _way_ more stable than NiceTry; let's open that gap up even more by removing even more bugs.

    5) Whining and crying foul will only hurt us all.
  • The federal government using something overpriced and low quality... what a surprise. The last time it was rebooted was in January, huh? ...I guess you haven't tried to install any new software or change any configurations. You also must be doing some pretty simple stuff with it... file serving, etc... not a development machine, obviously.
  • A properly tuned and maintained NT server will be very fast and pretty darn stable too. It won't approach the stability of Linux, perhaps, but it will function a whole lot better than many Linux advocates will suggest.
    ----------------------------------------------------
    Jamin Philip Gray
  • Microsoft would never have pushed for these tests if they weren't 100% sure they would win.

    They are possibly doing their own tests with linux in order to find scenarios where they can win. Then they announce a test, possibly inviting linux people to tweak the linux machine in order to gain credibility. Then they can say "They worked hard at it - but we still won!"

    This isn't all bad, as we get hints on what we should improve. But we should set up tests ourself too, balancing the opinion by showing off some real-life scenarios where linux wins already.

    How about a stability test with a machine that runs many different services (file/print + database + dynamic web + ...) for months? Or performance tests in an area where linux wins today?

    People who want a purely static webserver now know to use Solaris or NT. Some other benchmarks might let them know when to use Linux too. Red Hat and other Linux vendors could surely fund some tests themselves.
  • loser. After having put up with FreeBSD for 2 years and observing how second-rate it was in functionality, performance, and stability to linux, I was glad to replace it. The few recent comparison benchmarks I have seen confirm that, and regarding your signature, no, linux has many ports than netbsd!
  • many _more_ ports than netbsd...

    Even your web server is offline!
  • Absolutely! They can't possibly compare the monolithic NT against an almost infinitely configurable system. "Oh, ip stack wasn't MT? Just replace a few library calls and recompile with the best MT library" (RH's patch already available!). Or egcs, or pgcc, or other commercial compilers already available. Recompile a minimal kernel, turn off extraneous processes, pin files and executables in memory...
  • well, guy, it's not that we are blind to M$'s good products. For most of us, we have had to _endure_ these second-rate, poorly designed excuses for "productivity" software until we found the superior solutions available in the open-source world. You obviously have no idea what "scripting" in unix is about if you think VBscript even qualifies.
  • yes, supposed to be intelligent... Financial benefits? I have no doubt that I earn almost twice as much as a consulting Unix sysadmin than a Microserf does. Microsoft is NOT the dominant player. It has a monopoly in the PC niche, but that's all. Unfortunately, its unearned wealth gives it inordinate influence over the media and PHBs. Our energies are devoted to supporting what we perceive through our experience to be a significantly better way of doing things.
    P.S. No, I haven't used ASPs; I suppose they could be one of the few exceptions to the rule that all Microsoft stuff is goofy, bug-ridden toys.
  • Of course MS is going to win these things. I'm sure they have a team of coders who concentrate specifically on making NT perform well in benchmarks.

    What people are failing to note is that NT costs hundreds of dollars a seat, whereas Linux is free, as in free beer. A global Linux infrastructure for a corporation costs $0. Of course professional support is much more than free, but the fact that the code is so accessible means that an organization can have its own on-site support staff with access to every bit of information they would need to fix a problem. There's a lot of money to be saved in using Linux, and people will figure it out. When they do, NT's going to have a tough time.

  • You kind of hit on a good point here...lemme elaborate on it a bit more...you also kind of missed a point. :)

    You say a web server serving external clients won't bog the system...that's not exactly true...but it bogs the system for a different reason.

    The benchmarks were run with directly connected networks, no routing, no wan links, etc. What does this mean to how the typical web hit is processed? The typical web hit on a benchmark is (I would guess...don't have real numbers here...educated guesses) about 1 to 2 seconds long...the connection is made...the data is passed very rapidly because you're dealing with 100Mbps pipes all the way through, no routing, etc., and the connection is shut down.

    In a typical Internet web serving situation, each hit will be much longer-lived. How long does it take, start to finish, to load a typical web page on a typical v.90 modem connection? 30 seconds? A minute? I may be high on these numbers, but the idea stands... it's not going to be (for the most part) a 1 or 2 second connection duration.

    With the longer duration connections, you're going to have more apache processes running (remember, you have one apache process per hit...and apache processes aren't typically small), you're going to have kernel structures allocated, etc.

    The meaning of all this being...that 1 processor 256 MB RAM box, is likely *not* going to even get close to serving 6 t1's worth of data to typical Internet clients before it falls over.

    To generalize that further... while this benchmark has exposed some limitations of Linux in serving large amounts of data... and this guy's article does give some good perspective on it... take his conclusions on how much a Linux box will serve with a grain of salt... yes, he does acknowledge that dynamic content has a huge effect on the serving... but that's not all that affects it.

    Yet another posting to point out the goofiness of benchmarks...yes, they have their uses, but they're pretty limited.
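The parent's point is easy to put rough numbers on. A back-of-envelope sketch (every figure below is an assumption for illustration, not a measurement from either benchmark):

```python
def concurrent_connections(t1_lines=6, page_bytes=30 * 1024, conn_seconds=30):
    """Estimate simultaneous connections: pages/sec the pipe can carry,
    times how long each slow client holds its connection open (Little's law)."""
    t1_bps = 1.544e6                      # capacity of one T1 line, bits/sec
    pages_per_sec = (t1_bps * t1_lines / 8) / page_bytes
    return pages_per_sec * conn_seconds

# With assumed 30 KB pages and 30-second modem-speed hits, six T1s imply
# on the order of a thousand live connections - and with pre-fork Apache
# that means a thousand resident processes, not a thousand cheap sockets.
```

Shrink the connection duration to the 1-2 seconds of a LAN benchmark and the same pipe needs only a few dozen concurrent processes, which is why the benchmark box looks so much more comfortable than a real Internet-facing server would.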

  • Now, what corporation would use a loaf machine like yours to do corporate web serving and/or file sharing? If you are reaching this far in order to refute the truth, why don't you mention that NT won't load on that PS/1 you have sitting in your corner... or on that 8088 with dual bad-ass 5 and a quarters?

    OTOH, it is feasible for a startup or smaller corporation to be using a single-CPU Pentium III with a measly 256 MB of RAM.
  • Who cares what distribution they used? What matters is the versions of the kernel and servers. Since they had people from Red Hat there to tune the system, it made sense to use the distribution that they felt most comfortable with.

    Distributions only matter in performance if you're not going to take the time to update and configure your software, or if you're not going to disable irrelevant services only provided by some distributions.

    Arguing over distributions for benchmarks is like arguing over Gnome vs. KDE when you're not running X.
  • by crow ( 16139 ) on Monday June 28, 1999 @05:41AM (#1828946) Homepage Journal

    (Found at LWN.) This page is full of technical analysis of the Linux kernel and Apache, explaining a number of performance problems that the benchmarks brought to light, as well as solutions for many of them.

    It sounds like a repeat test in another month or two would bring things even closer. With khttpd, we might even win.
  • by mw ( 16262 ) on Monday June 28, 1999 @08:28AM (#1828947)
    One of the most important points why more and more companies choose Linux/Apache
    has nothing to do with speed or price of the OS or the webserver. The really
    expensive part when running webservices are administrative costs. NT & IIS may win here
    for *very* small sites, and UNIX wins when it comes to housing more than a few dozen
    sites - enough to make NT and especially IIS unmanageable.

    * NT has no useful scripting, Linux has everything you can ever need

    * you cannot remotely administer NT (I'm not talking about fast connections here
    (where you could use VNC); try to administer NT over a modem line. Good luck)

    * once you've made your decision to use IIS, you're completely stuck when it comes
    to changing to another type of webserver, or sometimes even when you want to
    transfer sites from one IIS to another. Microsoft has a tool to do that job, but -
    guess what - it crashes on even the smallest problems. Apache's configuration
    files can easily be converted to another text-based config - use sed, awk or
    whatever you're experienced with.

    * if something goes wrong with IIS, the event log will contain such useful error
    messages as "could not bind instance XXX. The data is the error code. 43 00 00 6c".

    * IIS is hell when it comes to logging. All logging is done asynchronously, so your
    only chance to see what's going on is to wait a few minutes for IIS to sync() the
    logs. Really a pain when you want to study the logs...

    * Sometimes under NT, the MMC console simply gets stuck. Then your only chance to
    get it running again is to restart the system; simply logging in as a different
    user does not help. Very annoying.

    * And finally, those beloved situations where those windows pop up:
    "Your system is running low on virtual memory...". When you check the Task Manager,
    it will show you that neither applications nor the system itself seems to use
    that much memory. Again, your only chance is to restart the system.
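One concrete illustration of the point above about Apache's text-based configuration: migrating it is an ordinary text-processing job. The converter below is hypothetical (the target "key = value" format is invented, and a sed or awk one-liner would do the same work):

```python
def convert_directives(apache_conf):
    """Turn Apache-style 'Directive value' lines into a generic key/value listing,
    skipping comments and <Section> tags - the 'it's just text' argument in code."""
    out = []
    for line in apache_conf.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("<"):
            continue                      # comments and section tags need real parsing
        directive, _, value = line.partition(" ")
        out.append("%s = %s" % (directive.lower(), value))
    return "\n".join(out)
```

A real migration would also have to handle nested sections and continuation lines, but the point stands: everything is visible, greppable plain text, with no opaque metabase in the way.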
  • These tests remind me of when PCs and Macs were being tested. The tester determined who would win: if it was a Mac mag the Mac would win, if it was a PC mag the PC won. It's protecting one's investment in a platform. Microsoft (the evil empire) doesn't have to approve in the slightest. I suspect that even if Bill Gates himself sent them nasty letters they'd keep doing this sort of FUD.

    The way it's done is easy. You cripple it in some major way that the public isn't going to notice.

    In the Mac vs PC tests it was done by pulling/disabling processor cache. It can also be done by disabling features in the Linux kernel (not hard...) or enabling features that might interfere with the performance.

    An easy way to beat out NT is to do pretty much the same thing to NT and optimize Linux. Same results.

    I say let the FUD fly. This FUD is aimed at the techno-savvy. It's like telling a car expert a Kia is better than a Lamborghini, and proving it by putting a lawnmower engine in the Lamborghini. It sounds desperate. Would you believe someone who regularly makes false claims?

    OK, it's a matter of catching them in the act... consider them caught.

    A reminder to my fellow zealots... DO NOT FLAME ANYONE... and DO NOT send e-mail to people who issue FUD. Leave them ALONE.

    Those that need to know, know better...
    and if they can't tell the difference between fact and FUD then we are better off with them running NT, because a bad admin will crash a system faster than anything, and that makes whatever OS they are running look bad.
  • It shouldn't be too long before someone fixes the bottleneck. I doubt that's the real reason, however.
    Why? All the FUD, the history of these tests, and the general attitude of people who want Windows NT to be thought of as superior to Linux.

    Microsoft has its own zealots, and some of them are in the PR department.

    Linux is a good OS and so is Windows NT.
    They're just not good for the same reasons.
  • ... and Linux came out marginally on top. Conditions were a good bit more real-world, and the article is pretty well-written. Read it in English at http://www.heise.de/ct/english//99/13/ 186-1/ [heise.de]

    (Info found at http://www.lwn.net/daily/ [lwn.net])
  • by iapetus ( 24050 ) on Monday June 28, 1999 @11:08AM (#1828970) Homepage
    1. They cheated.
    2. It's not fair to use more than one processor.
    3. A zero got missed off the end of the Linux performance results.
    4. NT only outperforms Linux in non-real-world situations, like when both machines are turned on.
    5. It doesn't matter that NT was faster, because Linux is the right speed. Anything that goes faster is just dangerous.
    6. Once you reach a certain threshold (almost exactly where you get faster than Linux, in fact) reliability becomes more important.
    7. They used NT 4.0, and we only used Linux 2.2, so all the Linux scores should be doubled to make up for the version advantage.
    8. NT smells funny.
    9. Who cares about benchmarks anyway?
  • .... that the press is interpreting benchmarks as what they really are: a process implemented to improve results. In other words: OK, the benchmark has been done, and now the Linux community knows what needs to be done. I applaud the press for this interpretation, instead of something along the lines of, "looks like the Linux Community is 'pulling a Microsoft' by burying the results in a dainty little PR statement, and not actually addressing the issues..."

    Now if only the Linux Community can learn to implement this interpretation of benchmarks ....

    That is to say: folks, benchmarks are our friends, EVEN IF they're biased !! So what if the NT box uses an Apache server ... I'd like to see WINE use IIS, just to say that we've tried it -- wouldn't THAT be a hoot !!

  • Oh, you wanted one that was actually connected to something? :)
  • Micro$oft is now convinced that they are #1, and Linux will never be able to compete.

    Don't you believe it.

    If Microsoft thought that Linux would never be able to compete, they wouldn't have bothered to publish the test results. They think that Linux is so popular that bad press is better than no press; that it is no longer under the radar. Never believe that a company's marketing reflects the internal realities; that goes double for Microsoft.

    As near as I can figure, Microsoft sees Linux as the new platform threat, and will deal with Linux accordingly. If they're smart, they will realize that FUD cannot destroy Linux, but can certainly slow it down. When MS markets a commercial product out of existence, the war has a limited duration and is over when the product's vendor pulls it. Since nobody can pull Linux, marketing and FUD wars could last for decades. But in the meantime, MS may find it useful to slow Linux growth until they can organize a better defense.

    If we're lucky, MS will be stupid and try to FUD us to death. Linux can beat any FUD, because it has more long-term viability than any proprietary software; we have forever to make Linux kick ass. I'm not going to count on that, however; MS shipped all their stupid people to federal court.

    They may be able to embrace and extend popular protocols (like TCP). They can put their proprietary ware on top of the open source Linux kernel, though they can't do much to the kernel itself. One interesting strategy might be to port Win32 to Linux as Microsoft payware. Thus, they get to collect their tax as you install MS-Office onto your corporate Linux desktop. I don't know if this approach would be beneficial or harmful to them.

    Microsoft understands that Linux is a threat. They are allocating resources to deal with it. Don't turn your back on them.

  • by Clark Kent ( 31188 ) on Monday June 28, 1999 @06:21AM (#1828987)
    Here's what the second benchmark showed:

    1. The Linux Advocates Were Right

    Mindcraft *did* seriously mis-configure Linux in the first test.

    The Linux/Apache peak performance in the second test was approximately 50% higher than in the first test.

    More important, the disastrous collapse of Linux/Apache at higher loads that occurred in Mindcraft's first test was nowhere to be seen in the second test - Linux/Apache performance remained high as the load increased. The performance drop-off in the first test was caused by Mindcraft's misconfiguration.

    2. The Anti-Linux Zealots Were Wrong

    Linux advocates did *not* oppose the first test simply because NT beat Linux. The opposition was based on valid concerns about how the test was run - concerns that have been borne out by the second test.

    There is little serious opposition to the second test, which is generally considered fair (within the limits of the benchmark). In fact, the knowledge gained from the second test has been welcomed by the Linux community, who look forward to the performance gains that will result.

    3. NT/IIS Beat Linux/Apache - Not That It Matters

    IIS on NT *did* achieve a higher benchmark result than Apache (or Samba) on Linux. But, as many have pointed out, the conditions of the benchmark are highly artificial. In the real world, where there is a greater mix of activity on the server, Linux's virtues in the areas of stability, task management, and I/O performance would play a greater role.

    As some have pointed out, when you're shopping for a reliable delivery van, the fact that it can be beat by a dragster is of little consequence. Or, to use another car analogy, by pouring on the nitrous, you can beat any other car on the track - for one lap (before burning out your engine).

    4. Linux/Apache Performance Was Excellent - Not That It Matters

    As others have pointed out, the Linux/Apache performance on one CPU was enough to handle the load of *twenty* T1 lines. But again, the test is too artificial for that to have much meaning.

    5. That MindCraft Guy is a Whiner

    There are always jerks on every side of every issue. For him to pick out the most obnoxious things said by some Linux supporters, and suggest that they mean something, is childish. Those mouth-offs don't represent the Linux community, any more than Ballmer represents the...NT...oh...wait... :^)
  • This is a really interesting spread of comments on this matter. We have on the one hand the Linux advocates (which I will happily count myself a member of). Opposite us are the NT advocates. Both groups make good (and bad) points, reflecting a range of reasons from excellent and well-spoken, to extremely close-minded.

    In short, it doesn't matter _why_ NT beat Linux so much as what it means to have this kind of attention. Yes, the hardware was more conducive to NT than Linux. Is this unfair to Linux? Nope. Does it mean that NT is better than Linux? Nope. It only means that NT outperforms Linux within the limited parameters of the test. Additionally, some limitations within Linux were uncovered that can be improved on. This is excellent news!

    All of this states quite firmly (to those who choose to look at the big picture instead of a few numbers, however captivating) that Linux and NT can be seen on a comparable scale. Not bad for an OS that scales well on a 386 or higher. Keep in mind just how flexible Linux can be, as well as how committed its development and support is. It's never shameful to be beaten in a contest. It's only shameful to refuse to improve yourself for the next contest. Peace.

  • Microsoft would never have pushed for these tests if they weren't 100% sure they would win. They surely anticipated our response to sketchy testing procedures, and sure enough, we continue to lose. Yeah, yeah, this isn't a drag race, but it would be nice if we could program software that wasn't this much slower than Windows when one of the aspects of Linux we've defended is speed and stability. So, let's do it.
  • by TheDeal ( 41885 ) on Monday June 28, 1999 @05:30AM (#1829024)
    we had this story, a couple days ago: here [slashdot.org]... it has the exact same url. this is the same story. doh!
  • Uh, maybe there is the fact that you can get Linux for free, as many users as you want for free, and run lots of free programs, without Big Brother looking over your shoulder. Anyone know what NT Server w/25 clients costs? Like $3500, I think?
    $3500 != free
    i think not...viva linux!
  • They were not completely objective when testing CGI. ASP would be a better choice, and ASP scripts could be created to match the functionality of the CGI scripts.
