Linux Beats Win2000 In SpecWeb 2000

PraveenS writes: "While not conclusive, the SPEC group released benchmarks for a variety of systems submitted by various manufacturers (e.g., Dell, Compaq, HP) and tested their Web-serving capability. Two very similar machines from Dell, one loaded with Linux and the other with Win2000, had very different results; Linux beat Win2000 by a factor of almost 3. Here's a synopsis of the results from LinuxToday. The actual SPEC benchmarks are available here for Win2000 and here for Linux."

As Marty of LinuxToday puts it, though, "What does this mean? In the real world, probably not as much as it would seem. Benchmarks in general are typically set up in an ideal environment. Real world environments tend to be quite different. However, this does indicate that Linux is moving in the right direction."

Zoran points out that "[o]ther current SPECweb99 results can be found here." They make an interesting comparison.

  • If you had bothered to read more of the site, you would have noticed that those results are all submitted by vendors with interests in getting good numbers. If Sun wants to enter a Sparc/Solaris combo, they can do that. If Apple for some reason decided it was in the HTTP server business (which it isn't (yet?) by any stretch of the imagination), it can run the test suite and submit the results.
  • by Ingo Molnar ( 206899 ) on Tuesday July 04, 2000 @10:45PM (#957701) Homepage
    i'm the one who designed/wrote most of TUX, and here are some facts about it.

    'TUX' comes from 'Threaded linUX webserver', and is a kernel-space HTTP subsystem. TUX was written by Red Hat and is based on the 2.4 kernel series. TUX is under the GPL and will be released in a couple of weeks. TUX's main goal is to enable high-performance webserving on Linux, and while it's not as feature-full as Apache, TUX is a 'full fledged' HTTP/1.1 webserver supporting HTTP/1.1 persistent (keepalive) connections, pipelining, CGI execution, logging, virtual hosting, various forms of modules, and many other webserver features. TUX modules can be user-space or kernel-space.

    The SPECweb99 test was done with a user-space module; the source code can be found here []. We expect TUX to be integrated into Apache 2.0 or 3.0, as TUX's user-space/kernel-space API is capable of supporting a mixed Apache/TUX webspace.

    TUX uses an 'object cache' which is much more than a simple 'static cache'. TUX objects can be freely embedded in other web replies, and can be used by modules, including CGIs. You can 'mix' dynamically generated and static content freely.

    While written by Red Hat, TUX relies on many scalability advances in the 2.4 kernel, also made by kernel hackers from SuSE, Mandrake and the Linux community as a whole. TUX is not one single piece of technology, but rather a final product that 'connects the dots' and proves the scalability of Linux's high-end features. I'd especially like to highlight the role of extreme TCP/IP networking scalability in 2.4, which was a many-month effort led by David Miller and Alexey Kuznetsov. We'd also like to acknowledge the pioneering role of khttpd - while TUX is independent of khttpd, it was an important experiment we learned a lot from.

    Other 2.4 kernel advances TUX uses are: async networking and disk IO, wake-one scheduling, interrupt binding, process affinity (not-yet-merged patch), per-CPU allocation pools (not-yet-merged patch), big-file support (the TUX logfile can get bigger than 5GB during SPECweb99 runs), highmem support, various VFS enhancements (thanks Al Viro), the new IO scheduler done by SuSE folks, buffer/pagecache scalability, and many other Linux features.
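The 'object cache' idea described above - cached static objects freely embedded in dynamically generated replies - can be sketched in a few lines. This is a toy user-space illustration, not TUX's actual API (which is in C and unreleased at the time of this thread); all names here are hypothetical.

```python
# Toy sketch of TUX-style object caching: cached static objects are
# embedded inside dynamically generated replies, so 'static' and
# 'dynamic' content mix freely. Names are illustrative, not TUX's API.

class ObjectCache:
    def __init__(self):
        self._objects = {}

    def put(self, name, data):
        self._objects[name] = data

    def get(self, name):
        return self._objects.get(name)

def dynamic_reply(cache, user):
    # A "module" builds a dynamic body but reuses the cached header object.
    header = cache.get("header.html") or b""
    body = f"Hello, {user}!".encode()
    return header + body

cache = ObjectCache()
cache.put("header.html", b"<h1>Site</h1>")
print(dynamic_reply(cache, "tux"))  # cached static part + dynamic part
```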

  • I guess it really depends. First of all, I probably would not use Apache for a benchmark; AOLserver is much faster. AOLserver also has persistent database connections (so does PHP).

    There are several issues.
    1) Database speed. In a typical web-based, read-mostly/write-rarely environment, MySQL would stomp on SQL Server; introduce frequent writes and the reverse would occur. For a transactional environment try InterBase; it's fast, robust, and stable as hell.
    2) Web server speed. This is a close one, but I think AOLserver might edge out IIS; either way it will be a close call.
    3) Middleware. This is where it gets very, very tricky. If you are writing simple ASP pages using ADO to open and close databases, AOLserver will trounce ASP, and so will PHP. In order to write scalable ASP pages you will need to utilize MTS heavily. You will need to write COM objects in either C++ or VB and register them with MTS. In other words, you will need to double or triple your development time and run into insane debugging problems. This is where the AOLserver or PHP environment really shines. Automatic database pooling and very rapid development easily pay for whatever performance hit you may take.

    Of course the real solution may be to use Java servlets. For complex web sites J2EE is a compelling solution and much easier to program than DCOM.
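The "automatic database pooling" advantage mentioned above comes down to reusing a handful of expensive connections across many requests instead of opening and closing one per page. A minimal sketch of the idea (the `connect` stand-in simulates an expensive database handshake; real AOLserver/PHP pooling differs in detail):

```python
# Minimal connection-pool sketch: 100 "requests" are served with only 2
# database handshakes, instead of 100 open/close cycles.
import queue

class ConnectionPool:
    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # pay the handshake cost up front

    def acquire(self):
        return self._pool.get()         # blocks if all connections are busy

    def release(self, conn):
        self._pool.put(conn)

connects = 0
def connect():
    global connects
    connects += 1                       # count expensive handshakes
    return object()                     # stand-in for a real DB handle

pool = ConnectionPool(connect, size=2)
for _ in range(100):                    # 100 page requests...
    c = pool.acquire()
    pool.release(c)
print(connects)                         # ...but only 2 connections ever made
```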
  • by Black Parrot ( 19622 ) on Tuesday July 04, 2000 @08:48PM (#957709)
    > Linux zealots scream bloody murder and inspect the process with a microscope. Someone else does a benchmark that shows Linux 3 times faster than Win 2k, and they are content that the Mindcraft fiasco has been avenged.

    Well, it could be that we notice that a standard benchmark was used rather than one tailored by a company with an axe to grind. Or it could be that the benchmarks were submitted by hardware vendors, whose primary interest is in making their hardware look good (i.e., it's really hard to imagine Dell fudging a benchmark to make Linux look better than Windows). Or it could be that c't already told us how Linux and NT measured up on more equitable benchmarks. Or it could be that Microsoft's own tests showed W2K performing worse than NT on systems with > 4Mb of RAM. Or it could be that testers have been saying that W2K needs +300 MHz in hardware to perform as "well" as NT did.

    In short, there's no reason for surprise at all. This benchmark is only quantifying what the attentive already knew qualitatively. If there are flaws with the benchmark, they almost certainly won't be enough to tilt it against what we already knew; if they do, we'll air our suspicions again.

    I do agree that it's still a benchmark, and is therefore susceptible to all the follies associated with benchmarks. But at least this one wasn't obviously rigged.

  • by khaladan ( 445 ) on Tuesday July 04, 2000 @11:13PM (#957710)
    For Linux they set the backlog at 3000. For W2k it's at 1000. Anyone see the difference? AFAIK, W2k can have a higher backlog, or even a dynamic backlog. I'd like to see a test where the backlogs are the same. Then there would actually be similar simultaneous connection counts! Right now, those numbers mean little if they are being compared.
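The "backlog" being compared here is the queue of not-yet-accepted connections the kernel keeps per listening socket; it is set by the argument to listen(), and the kernel may clamp it to a system-wide maximum (net.core.somaxconn on Linux). A sketch of where the tunable lives:

```python
# The benchmark tunable under discussion is the listen() backlog: how many
# pending connections the kernel queues before refusing new ones. A burst
# of simultaneous connection attempts survives a backlog of 3000 better
# than one of 1000.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 0))        # any free port on loopback
srv.listen(3000)                  # request a deep backlog (kernel may clamp)
host, port = srv.getsockname()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((host, port))         # sits in the backlog until accept()
conn, _ = srv.accept()
accepted = conn is not None
print("accepted" if accepted else "refused")
conn.close(); cli.close(); srv.close()
```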
  • by Greyfox ( 87712 ) on Tuesday July 04, 2000 @08:48PM (#957711) Homepage Journal
    What I'd like to see is how many clients you could serve with the biggest hardware that each OS can run on.

    For instance, on the Windows side you might have an 8-way Xeon with 2 gigs of RAM. On the Linux side you might have (for instance) an S/390 with a terabyte or two of RAM. Then just start loading them down with network clients until they start to stagger.

    I'd be interested in the outcome...

  • > In any case, if it's in kernelspace, it's most likely not a full-featured HTTP server like Apache, Zeus, IIS. So it can spit out static pages as fast as you'd possibly need. Big deal.

    Yeah, the more you look at it, the more it looks like they were just doing a Mindcraft '00. The most memorable joke then was that the only sites to use big static pages on such expensive hardware and matching network bandwidth would be the more profitable p0rn sites.

    If the configuration was bullshit then, it's still bullshit now. Someone please tell me that Red Hat hasn't spent the past year tweaking things just to beat Microsoft on Mindcraft '00. I guess these things play well with the PHB crowd, but surely there are better things Red Hat could be doing with their time and money.

  • by Ingo Molnar ( 206899 ) on Tuesday July 04, 2000 @11:26PM (#957718) Homepage

    You are confusing two completely different architectural concepts.

    "threads" (which get created) and "processes" (which get forked) are 'context of execution' entities. Linux has both, TUX 1.0 uses both.

    A "threaded TCP/IP stack" is a slightly mis-named thing, it means "SMP-threaded TCP/IP-stack", which in turn means that the TCP/IP stack has been "SMP-deserialized" (in Windows speak) - TCP/IP code on different CPUs can execute in parallel without any interlock/big-kernel-lock overhead or other serialization.

    A 'threaded TCP/IP stack' has no connection whatsoever to 'threads'.

    FYI, the Linux TCP/IP stack was completely redesigned and deserialized during the 2.3 kernel cycle; this redesign/deserialization was done by David Miller and Alexey Kuznetsov. The TUX webserver of course relies heavily on the deserialization, but this is not the only architectural element TUX relies on.
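The "deserialization" described above can be modeled as replacing one global lock with per-CPU locks. The toy below only demonstrates the structural difference (and that both arrive at the same answer); Python threads won't show the real parallelism win, which on SMP hardware comes from CPUs no longer contending for a shared lock.

```python
# Toy model of "SMP-deserializing" a kernel path: big-kernel-lock style
# funnels every "CPU" through one lock; the deserialized style gives each
# CPU its own lock and data, so there is no cross-CPU contention.
import threading

def handle_packets(n, lock, counter):
    for _ in range(n):
        with lock:                     # the serialization point
            counter[0] += 1

# Big-kernel-lock style: all 4 "CPUs" share one lock and one counter.
big_lock = threading.Lock()
total = [0]
workers = [threading.Thread(target=handle_packets, args=(1000, big_lock, total))
           for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
print(total[0])                        # 4000 packets, fully serialized

# Deserialized style: one lock and counter per CPU.
per_cpu = [([0], threading.Lock()) for _ in range(4)]
workers = [threading.Thread(target=handle_packets, args=(1000, lock, counter))
           for counter, lock in per_cpu]
for t in workers: t.start()
for t in workers: t.join()
print(sum(c[0] for c, _ in per_cpu))   # same 4000, no shared lock
```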

  • of course, what a nightmare that would be to configure for benchmarking.. i guess you'd have to use oracle on NT and on Linux, but it's a DOG on NT, and, as much as i hate windows, it just wouldn't be fair.

    Use a better database program then. DB2 UDB is available for both platforms (bias alert: I work on DB2) and from what I see, DB2 runs pretty well on both. That should even the playing field as far as database servers are concerned. There is no point in having one database vendor on one platform and a different vendor's database on the other. DB2 is faster than SQL Server on NT anyway, so you'd be biasing the results before you started.


    Toby Haynes

  • Your argument is flawed. Look here [] for an IBM/IIS SPECweb99 result done on a similar 4x 700 MHz Xeon system. Check out this [] IBM result as well. And there are HP and even Mindcraft submissions. Dell has the fastest Windows 2000 numbers, and it's fair to compare the fastest Windows 2000 results to the fastest TUX results, especially if they were done on similar hardware.

    You assume that IBM, HP, Mindcraft and Dell are all in a big conspiracy to make Windows 2000 numbers look bad - are you kidding? The reality is that there is fierce competition for best SPECweb99 numbers, and Linux/TUX is just plain faster.

    The other flaw in your argument is this [] TUX dynamic module. Check out the source code, TUX does dynamic modules. (besides, the SPECweb99 workload includes 30% dynamic load, so all SPECweb99 webservers must support dynamic applications.)

  • Now the only advantage Win2K has over linux is a transparent start menu.

    You haven't seen the enlightenment [] window manager yet, have you? Check out the EFM pages; yes, it has had transparent menus for a while. But it also antialiases fonts and alpha-blends them for the transparency.
  • You're sure W2K would fall apart? Why, have you actually done testing with it? FYI, both ZDNet and CNet (yea not exactly the pinnacle of pure PC power, but hey, they'll do) have tested W2K and found it to be extremely stable. Not quite as stable as any UNIX, mind you, but it would certainly not fall apart after 2 days.
  • I'm not commenting on testing methods exactly, but does it seem a little difficult to swallow that a Dell PIII/667 with one CPU running Linux beat a Compaq DS20 dual Alpha 667 running Tru64 by nearly 200 points?
  • It's modded at 2 because I have more than 25 Karma. I post at 2 by default.

    Thank you.

  • The tcp/ip stack runs in kernel space. In the context of the kernel, there are no threads and there are no processes. Both of these are concepts that userland programs can rely upon because this same kernel imposes these virtual constructs upon them.

    HTH. HAND.
  • by qbasicprogrammer ( 200734 ) on Tuesday July 04, 2000 @08:59PM (#957740)
    FreeBSD [] is actually the highest-performance server operating system. This is what FreeBSD vs. Linux vs. NT [] has to say about Linux and FreeBSD performance:
    :) FreeBSD is the system of choice for high performance network applications. FreeBSD will outperform other systems when running on equivalent hardware. The largest and busiest public server on the Internet, at, uses FreeBSD to serve more than 800GB/day of downloads. FreeBSD is used by
    Yahoo, USWest, and many others as their main server OS because of its ability to handle heavy network traffic with high performance and rock stable reliability.
    And Linux:
    :| Linux performs well for most applications, however the performance is not optimal under heavy network load.
    The network performance of Linux is 20-30% below [] the capacity of FreeBSD running on the same hardware as Linux. As long as you are not trying to squeeze the last ounce of performance out of your hardware, or performing mission critical transactions, Linux is a very good choice for a server OS.
    Windows NT has this description (Windows 2000 is NT 5.0):
    :( Windows NT is adequate for routine desktop apps, but it is unable to handle heavy network loads. A few organizations try to make it work as an Internet server. For instance, uses Windows-NT, as can be verified by the error messages that their webserver produces, such as this recent example:
    Error Message: [Microsoft][ODBC SQL Server Driver][SQL Server]Can't allocate space for object 'queryHistory' in database 'web' because the 'default' segment is full. For their own "Hotmail" Internet servers, Microsoft uses FreeBSD.

  • I read the disclaimer at the SPEC site. It seems the manufacturers each ran the tests themselves, which seems to mean there was no common environment or procedure to base these tests on.
    I'd like to see Linux win this battle, but let's do it again on common ground with the same clients, same cables and switches, etc. Standardization, yeah, that's the ticket.

    From the SPEC site disclaimer:
    These are submissions by member companies and the contents of any SPEC reporting page are the submittor's responsibility. SPEC makes no warranties about the accuracy or veracity of this data. Please note that other results, those not appearing here and from non-member companies, are in circulation; by license agreement, these results must comply with SPEC run and reporting rules but SPEC does not warrant that they do.
  • If I recall the Mindcraft period, people's "personal experience with Linux" consisted of "It kicks major ass on my P-120, therefore it must kick major ass on a 4-way Xeon. What? Oh shit. It doesn't!! Cheaters!!"
  • Cool, so it's like IIS, but without the bugs.
  • So ... what happens if, say, half of the Linux webservers switch to Tux over the next year or so? Do these webservers report as Tux servers, or Apache servers with a Tux kernel accelerator installed? The former could be a problem for purely stupid reasons: if 'Apache' held 30% market share, and 'Tux' held 30% market share, Microsoft would immediately claim victory in the 'web server war' -- as a result, all of the 'go with the market leader' people would begin installing IIS.
  • GREAT, thanks for the explanations! Now, is it possible to get more info about this TUX webserver? Is it open source? Is it available already? What kind of polling model does it use to share connections among threads? (sigqueues, poll(), something else?)
  • As far as I know, no details of "Tux" have been posted yet. The software availability is listed as "August 2000." However, it is definitely NOT stripped down for static pages. About 35% of the requests in SpecWeb99 are dynamic, including custom responses based on cookies, parsing and storing user registration results from POST requests, and doing real CGIs (must spawn a new process!).
  • Sure, more info is here []. (which happens to be a comment in this thread :-) )
  • Yup, dude, its all about freedom
  • by Paul Crowley ( 837 ) on Wednesday July 05, 2000 @01:16AM (#957761) Homepage Journal
    If I claim that I had a fight with Mike Tyson and he won, it's relatively unremarkable; the only implausible bit is that we might meet and fight in the first place, not that he wins. If I claim I had a fight with Mike Tyson and I won, such a claim is far less believable.

    Thus, if your personal experience tells you that Linux kicks the shit out of MS operating systems for Web server performance, a benchmark test whose results accord with that experience is more believable than one which contradicts it.

    That's just good sense, isn't it?
  • by orabidoo ( 9806 ) on Wednesday July 05, 2000 @01:20AM (#957762) Homepage
    do not confuse advocacy with information. FreeBSD and Linux are more or less at the same place when it comes to reliability, scalability, and network performance. at this particular point in time, I'd guess that Linux has the advantage with the improvements of the 2.4 kernel, but it doesn't really matter: FreeBSD and Linux are always catching up with each other; both teams are very good and neither will let the other OS get much better without getting better in the same (or equivalent) way. I'd say that, in choosing between Linux and BSD, you need to look specifically either at personal preference and familiarity, or at the actual support for the programs and services that you intend to run, and choose accordingly. Neither platform is overall significantly better than the other.
  • Maybe it's just me being stupid, but how can the Linux box score 200% higher in generated traffic measured in ops/sec and only 6% higher in kbit/sec?

    Does this indicate that the pages delivered were not identical?

  • > No it ain't. HW vendors like Linux because they no longer have to pay the "Microsoft Tax".

    Somehow that doesn't keep Dell from being the biggest MS suckup in the whole business.

  • NO. see this : 92/

    it's VERY important that Linux users are aware that the Linux development process is hitting a roadblock right now. Things don't look too bright.
  • by Anonymous Coward on Tuesday July 04, 2000 @06:55PM (#957778)
    It will be interesting to compare the comments of this article to the ones regarding mindcraft. I wouldn't be surprised if two thirds of the people here take this to be the truth simply because linux won.
  • by austad ( 22163 ) on Tuesday July 04, 2000 @06:57PM (#957782) Homepage
    It would be nice if someone would run some benchmarks with the same two machines, only have the W2k box serving up dynamically generated ASP and PHP pages, and the Linux box serving up a comparable PHP page. Whack up some identical code to perform fast Fourier transforms in the page and make it spit out the result. Once you get a database into the mix, you're also measuring the performance of it, and this is just a webserver test. Unless of course you have both boxes hit the exact same database, maybe a nice big Oracle database running on Linux. :)

    Everyone here knows that MS zealots will say "Yeah, but W2k can spit out dynamic content faster...". It would be nice to have proof either way. I know I'm very interested in seeing how PHP on Linux compares to ASP on W2K.
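The proposed dynamic workload - a page that computes an FFT and spits out the result - might look like the sketch below. The recursive radix-2 FFT is just an illustration; a real cross-platform test would run identical optimized code on both boxes.

```python
# Toy version of the suggested dynamic-content benchmark: render a page
# containing the magnitude spectrum of an FFT over some input samples.
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

def render_page(samples):
    # The "dynamic" part of the page: compute, then format as HTML.
    spectrum = fft(samples)
    rows = "".join(f"<li>{abs(c):.3f}</li>" for c in spectrum)
    return f"<html><body><ul>{rows}</ul></body></html>"

# An impulse input has a flat spectrum: every magnitude is 1.
print(render_page([1, 0, 0, 0]))
```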
  • by tecnodude ( 31328 ) on Tuesday July 04, 2000 @09:28PM (#957783)
    I thought about talking about this, but I'd have to use my experience and that is fairly subjective. I'm curious: what do you think the difference in human costs is? Let's see: take 30 minutes to install/configure Red Hat, 10 minutes to install the latest updates, 5 minutes to disable services that are not needed and block the outside world from anything except ports 80 and 22, 10 minutes to install OpenSSH, 5 minutes to load an existing website on the machine. Reboot the machine just to make sure and you're all set.

    Looks like about an hour to me. Maybe an hour and a half if you want Samba and FrontPage extensions installed.

    Just for kicks, let's take a look at NT.
    When I installed it yesterday it took about the same amount of time to install as Red Hat, so let's figure 30 minutes. Configure for network and reboot, 5 minutes; set up IIS, 15 minutes; add webpage to IIS, 5 minutes. Reboot the machine just to make sure.

    OK, looks like a total of 55 minutes. Great, MS just saved you 5 or 35 minutes depending on what you're looking for. Is it really worth a few hundred dollars, if not more, for an MS webserver if you really don't need one?

    Also, with the Linux box, I can ssh in and fix things remotely; I don't even have to be there to apply a patch when it comes out. As a consultant I find that very appealing. I just scp a file over, install it, restart the service, and I'm set. With NT I actually have to be there, and when some of my clients are almost 2 hours away, I'd much prefer the Linux method.
  • by queasymoto ( 185043 ) on Tuesday July 04, 2000 @06:57PM (#957784)
    I find it interesting that there are no Macintoshes or Suns in the test, although there are at least one Alpha and two RS/6000s. How can they claim to be a useful benchmark while concentrating mostly on Intel hardware and only running three HTTP servers? I'd think that the differences between different servers running on the same hardware could be just as large as between different hardware configurations; hell, even poorly configured vs. properly configured systems would make a huge difference...
  • It would be interesting to throw in Solaris or Tru64 as well, because they are not Open Source, but highly regarded as 'enterprise level', and thus expected to perform better than Win2k at least, and in all probability comparably or better than Linux.

    Personally, I want to see the BSD's go head to head. (as if they don't have enough rifts between them as-is) ;-)

  • Dell, like every OEM licensee of Windows, doesn't enjoy paying MS money on every machine it sells. Knowing that many of them will run Linux anyway, they offer to install it, coincidentally saving them the cost of Windows. Since they sell the box for the same cost either way, they're making more profit with Linux.
    The situation may be different in the U.S.A., but certainly here in Ireland, Dell do not charge the same for a system with Linux as opposed to Windows NT Server - the NT boxes are hundreds of Euro more expensive.
  • There is no discrepancy here. The SPECweb99 benchmark measures 'number of conforming connections', and the tester chooses the number of connections. The SPEC requirement is that every conforming connection must have an average bitrate of at least 320 kbits/sec.

    What does this mean? Vendors obviously try to maximize the number of connections, but they have to keep the bitrate above 320 kbits/sec to have a valid benchmark run. You could test with 1 million connections as well, but you'd get an invalid run because the rate would be somewhere around 0.1 kbits/sec. This is why you see almost identical kbits values (all a bit above 320 kbits/sec) but different connection and ops/sec values. I hope this explains things.

    See the SPEC-enforced Web99 Run Rules []; there are a lot of very strict requirements for a result to be accepted by SPEC.
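The arithmetic behind this explanation: the reported kbit/s figure is per connection and must stay just above 320, so it barely moves between submissions, while the conforming-connection count (and with it ops/sec) is what scales. The connection counts below are illustrative, not the actual Dell numbers.

```python
# SPECweb99 arithmetic: every conforming connection must average at least
# 320 kbit/s, so per-connection rates cluster just above 320 while the
# connection counts (and hence ops/sec) differ by large factors.
MIN_KBPS = 320

def aggregate_kbps(connections, per_conn_kbps):
    """Total offered bandwidth for a run; raises on a non-conforming rate."""
    assert per_conn_kbps >= MIN_KBPS, "non-conforming run"
    return connections * per_conn_kbps

linux_conns, win_conns = 3000, 1000        # ~3x ratio, as in the story
print(aggregate_kbps(linux_conns, 321))    # aggregate scales with connections
print(aggregate_kbps(win_conns, 323))      # per-connection rate barely differs
```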

  • it's really hard to imagine Dell fudging a benchmark to make Linux look better than Windows.

    No it ain't. HW vendors like Linux because they no longer have to pay the "Microsoft Tax".

  • Then what you should do is test on the 2GB machine in both OSes, compare results, then test on the 4GB machine for the OSes that support it, showing how much performance is gained at that configuration.

    An example would be, if computer A couldn't use AGP video with the current bios, but computer B could, you could benchmark both with a Voodoo 3 PCI card, then benchmark computer B with a Voodoo 3 AGP card, and say "We can't test directly, but we suspect computer A is missing out on n% performance by not properly supporting AGP." where n% is the difference on computer B. It still shows computer B kicking ass in the default config, but instead of making the whole machine look superior, it narrows the results down to the problem areas.
  • It's a good point, and my immediate response was that the Win2k results are so much worse that something must be very wrong.

    However, if you go further down the results list, Mindcraft have also submitted a set of benchmark results which are broadly comparable to the Dell results on a different but comparable setup. It doesn't seem likely that both companies have made the same crippling mistake.

    So it looks to me as though Red Hat have done some serious magic with their threaded web server. Will they release the source, I wonder?
  • by JordanH ( 75307 ) on Wednesday July 05, 2000 @02:14AM (#957805) Homepage Journal
    • ...that might cast a pall of illegitimacy. Anyone have the inside scoop?

    How can anyone claim that any MS sponsored benchmark has any legitimacy whatsoever as long as MS insists that there be no benchmarking of their products in the EULAs?

    I wonder how SPEC was able to perform this benchmark?

    -Jordan Henderson

  • MS says they?ve ?improved? the parsing speed [...]

    But they still send question marks for quotes and ticks... :-)

  • One of the outcomes of the Mindcraft saga was this wonderful [] set of benchmarks by C't.

    One of the things that they did is force tests that stressed various parts of the OS. For me one of the more telling ones was the selection against many files, where the ability to serve off of disk (as opposed to out of RAM) was being pushed.

    Linux won, of course. But I wonder whether Win2K is better at this than NT was...

  • Well, that is a bit strange, yes, but what I did also see was that they don't seem to care much for security either. I think the Slashdot peeps should help them with their servers (ridiculous, so many ports open):

    # nmap -nvvO

    Starting nmap V. 2.52 by ( )
    No tcp,udp, or ICMP scantype specified, assuming vanilla tcp connect() scan. Use -sP if you really don't want to portscan (and just want to see what hosts are up).
    Host ( appears to be up ... good.
    Initiating TCP connect() scan against (
    Adding TCP port 32771 (state open).
    Adding TCP port 4045 (state open).
    Adding TCP port 80 (state open).
    Adding TCP port 21 (state open).
    Adding TCP port 873 (state open).
    Adding TCP port 32773 (state open).
    Adding TCP port 22 (state open).
    Adding TCP port 25 (state open).
    Adding TCP port 111 (state open).
    Adding TCP port 32772 (state open).
    The TCP connect scan took 11 seconds to scan 1520 ports.
    For OSScan assuming that port 21 is open and port 1 is closed and neither are firewalled
    Interesting ports on (
    (The 1509 ports scanned but not shown below are in state: closed)
    Port State Service
    21/tcp open ftp
    22/tcp open ssh
    25/tcp open smtp
    80/tcp open http
    111/tcp open sunrpc
    139/tcp filtered netbios-ssn
    873/tcp open unknown
    4045/tcp open lockd
    32771/tcp open sometimes-rpc5
    32772/tcp open sometimes-rpc7
    32773/tcp open sometimes-rpc9

    TCP Sequence Prediction: Class=random positive increments
    Difficulty=286136 (Good luck!)

    Sequence numbers: 7472F6AA 747E8F6E 748CD0F1 74931A18 7498F243 749AF9A2
    Remote operating system guess: Solaris 2.6 - 2.7
    OS Fingerprint:
    T1(Resp=Y%DF=Y%W=FFF7%ACK=S++%Flags=AS%Ops=NNTNW ME)

    Nmap run completed -- 1 IP address (1 host up) scanned in 39 seconds
  • Seriously, if your kids can understand it, that's NOT a good indicator that other adults will understand it. I know -- I used to be that "6 year old whiz" myself.

    Unfortunately, they aren't "whiz kids". They are quite average. I just wish they spent the same amount of effort on school work that they do memorizing fscking Pokemon cards.

  • by maelstrom ( 638 ) on Tuesday July 04, 2000 @07:05PM (#957831) Homepage Journal

    Everyone here knows that MS zealots will say "Yeah, but W2k can spit out dynamic content faster...". It would be nice to have proof either way.

    Kinda like when us Linux zealots said, "Yeah, but Linux can spit out dynamic content faster.." ;) I do agree that it would be more meaningful to see dynamic benchmarks. After all you can saturate a T-1 with a Pentium if you are just spitting out flat HTML.
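The "saturate a T-1 with a Pentium" claim is easy to check on the back of an envelope: a T-1 carries 1.544 Mbit/s, so only a handful of static pages per second fills it. The 14 KB average page size below is an assumption for illustration, not a figure from the thread.

```python
# Why flat-HTML serving speed stops mattering on a T-1: the link, not the
# CPU, is the bottleneck at very low request rates.
T1_BITS_PER_SEC = 1_544_000        # T-1 line rate
PAGE_BYTES = 14 * 1024             # assumed average static page size

pages_per_sec = T1_BITS_PER_SEC / (PAGE_BYTES * 8)
print(round(pages_per_sec, 1))     # a handful of pages/sec saturates the link
```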

  • FreeBSD has pretty poor SMP capabilities compared to Linux (2 way SMP,
    and I think no SMP support in their TCP/IP stack), so I think it
    would get roasted on this setup. FreeBSD's strengths are elsewhere.

    One of the great advantages of open source is that one can have a
    high-level of confidence that the OS doesn't cheat on benchmarks
    (ie. by making changes to behaviour that increase benchmark
    performance at the expense of overall performance). The temptation to
    do so in a closed source environment must be pretty much irresistible.

  • Linux has a long way to go before it hits the mainstream? Well, that may be true about the "desktop", but NOT the server!

    Ever use any other UNIX platforms? Linux is actually the easiest to get going out of the box, because so much crap is preloaded.

    Solaris is a very popular server OS (on Sun hardware), and isn't "Windows user-friendly". One could say the same about almost any UNIX platform that REAL servers run. Linux is actually a pretty easy OS to use, as 'nixes go.

    Although I keep wondering why FreeBSD keeps getting ignored. FreeBSD makes a really nice server OS, and has its own zealots too (many of whom are professional sysadmins, not college students). Oh, and FreeBSD is actually a *faster* OS than Linux.
  • It would be interesting to see these benchmarks run twice, once with single-NIC servers and once with multiple-NIC servers. Windows NT2K aced the Mindcraft benchmarks by effectively coupling CPU's and NIC's. (One may wonder whether the Mindcraft tests were also run with single NIC machines, and the benchmark's sponsor [MS] elected not to have those results published.) Linux's IP stack has been continually improved since Mindcraft, so it would be nice to see how far the kernel developers have come since then.
  • > I wouldn't be surprised if two thirds of the people here take this to be the truth simply because linux won.

    Or perhaps we will merely take it to be the truth because it jibes with our personal experience with Linux and Microsoft products?

  • Web administrators generally like to cluster machines with no more than four drives: any more and the OS spends more time searching for the file than delivering it.

    Wrong! The OS doesn't search the drives for a file; that would be an extremely stupid way of doing it. The OS knows which drive the file is on and gets it from there. This leaves the other drives free for other work, such as serving other files.

    Assuming the network traffic they built up was the same in each test (again, they are a little shaky on that as well), Windows is taking more time to search across 7 drives vs. Linux's 5.

    Except it doesn't work that way. More drives give better performance - not worse. Windows merely was incapable of taking advantage of a better setup.

    The main reason for not putting more than about 4 drives in a machine is bottlenecks. More than 4 drives on a SCSI bus may saturate it. (You don't get worse performance, it just doesn't get better either.) That's easily fixable by using several SCSI adapters; then the next obstacle is a saturated PCI bus. You may then use a machine with several PCI buses, or simply use two machines. The latter might be cheaper.
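The bus-saturation point above is just min() arithmetic. The figures below are assumed for illustration (roughly an 80 MB/s Ultra2 SCSI bus and ~20 MB/s sustained per drive of that era), not measurements from the thread.

```python
# Why ~4 drives per SCSI bus: past that point the bus, not the drives,
# limits aggregate throughput. Bandwidth figures are era-plausible
# assumptions for illustration.
BUS_MB_PER_SEC = 80        # assumed Ultra2 SCSI bus bandwidth
DRIVE_MB_PER_SEC = 20      # assumed sustained rate per drive

def effective_throughput(drives):
    """Aggregate MB/s from `drives` disks sharing one bus."""
    return min(drives * DRIVE_MB_PER_SEC, BUS_MB_PER_SEC)

for n in (2, 4, 7):
    print(n, effective_throughput(n))   # throughput flattens at 4 drives
```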

  • This is annoying. Why is it that everyone tests the highest-end hardware possible? I'd really love to see the performance of a 2-processor 2u server with less than 1GB RAM and processor speeds of less than 700MHz. Why? Because if I'm going to buy a rack of 20, that's what I'll be buying. I don't waste money on the bleeding edge when I can get more for less with stability.

    Of course, in the MS world, you probably need 8GB RAM and 4 processors to run a Web server....
  • by Jason Earl ( 1894 ) on Wednesday July 05, 2000 @08:39AM (#957850) Homepage Journal

    First of all, these benchmarks (both the Win2k benchmarks and the Linux benchmarks) were posted by Dell, not by some random Linux zealots. Not only is that the case, but the other WinTel vendors have very similar scores for their WinTel hardware. Does this suddenly mean that all of the W2K vendors are conspiring to make Linux + TUX on Dell hardware look good? Or could it possibly mean that all the research Microsoft funded in the Mindcraft benchmarks is coming to fruition? My guess is that the folks at Microsoft are going to start to truly understand the power of release early, release often. While W2K has sat relatively still, basking in its Mindcraft glory, the Linux community has targeted the specific problems that caused Linux to do poorly in the Mindcraft benchmarks, and has rectified them.

    Second of all, this is a SPECweb benchmark. The "web" part of SPECweb would tend to indicate that it is a benchmark of HTTP performance. If you read the spec you will notice that it specifically measures both static and dynamic HTTP content serving. So while this does not necessarily mean that Linux is better than Windows 2000, it probably does mean that Linux + TUX is better than Windows 2000 + IIS (for the things measured by the benchmarks).

    Your observation that most Internet facing sites don't have anywhere near this sort of bandwidth is certainly correct. However, my Intranet server does have this much bandwidth (not that I would appreciate it if it saturated this bandwidth). Besides, if you are going to let bandwidth be the limiting factor then it really doesn't matter what kind of web server you are using. A 486 running Apache will happily saturate a T1 with static content.

    Not that any of this matters. The two most important features, to me anyway, of Linux are 1) Freedom, and 2) Cost. Linux wins hands-down if these are the factors that you value most.

    From the results you must conclude either that Dell (and the rest of the WinTel vendors) are trying to make Windows 2000 look bad, or that Linux + TUX is going to make one heck of a compelling case as a web platform.

    Either way it looks bad for Windows 2000 as a web server OS.
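    The T1 point above is easy to check with a little arithmetic. A quick sketch (the 10 KB average page size is an assumed figure for illustration):

```python
# How little web-serving capacity it takes to saturate a T1 with
# static content. The average page size is an illustrative assumption.

T1_BITS_PER_SEC = 1_544_000      # T1 line rate
PAGE_BYTES = 10 * 1024           # assumed average static page size

t1_bytes_per_sec = T1_BITS_PER_SEC / 8
pages_per_sec = t1_bytes_per_sec / PAGE_BYTES

print(round(t1_bytes_per_sec))   # 193000
print(round(pages_per_sec, 1))   # 18.8
```

    Under twenty pages a second is well within reach of even very old hardware, which is why bandwidth, not the web server, is usually the limiting factor on an Internet-facing box.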

  • by Ingo Molnar ( 206899 ) on Wednesday July 05, 2000 @08:43AM (#957853) Homepage

    No, I don't think there is any such divide, and I think TUX does not contradict Unix concepts. CPUs get faster and protocols get more complex every day. Right now the HTTP protocol is common enough to be accelerated in kernel space, just like the TCP/IP protocol got common enough 10-15 years ago to move into the kernel in many other OSs.

    The question thus is not "should we put HTTP into the kernel," but rather "*when* and *how* should we put HTTP into the kernel." Think of this as an act of "caching": the OS caches, and should cache, commonly used protocols.

    Where is the limit? There is no clear limit, but the limit is definitely being pushed outwards every day. HTTP is becoming a universal communication standard; with the emergence of XML, the role of HTTP cannot be overstated, I think.

    And the last, but not least, argument: if you don't need it, you can always turn CONFIG_TUX off.

  • You know, it is rather amazing how many benchmarks between Windows and Linux I have seen since I started reading Slashdot so many years ago. The best part is that you can clearly see each of these benchmarks show one OS to be better than the other. La la la. It really means nothing, but some people think that this will persuade people to move to OS "whatever."

    The reality is that most web servers are needed for stability and uptime; performance comes second. The company I work at has both Linux and Solaris, and I think there may be some BSD here too. Solaris is stable, and its hardware is also pretty good, but not great. Most of the problems we have had were because of hardware failures or defective hardware, not the OS. We also use multiple servers, so clustering is a must, and load balancing too.

    Who puts a 4-processor box on a web site? We do! We have many boxes with 2 to 8 processors. Of course, we don't use that many Intel boxes either, except for Linux.

    What makes this really funny is that the people who defend Windows all the time probably don't realize that Yahoo, MS Hotmail, and yes, Slashdot all use Unix or Linux. Yahoo uses Solaris and FreeBSD; Microsoft's Hotmail does too. And guess what: Slashdot uses Linux, Perl, and MySQL. Hmm, you visit a site that runs software you hate. Aren't you the hypocrite?

    What Windows 2000 really needs to prove is not that it can outperform Linux or Solaris, because I am sure you can tweak it to be just as good if not better; it needs to prove that it can have 200+ days of uptime on an extremely busy web site. So what company will be the first to run a large-scale site on Win2k?

    Lastly, we recently bought a site that uses Windows servers, and we are moving them all to Solaris. Hmmm. Can we see some Solaris 4-processor boxes benchmarked against Linux and Windows? Oh, we have, and it blew both of them away!

    send flames > /dev/null

  • Maybe that is as funny as: [] ???

    I guess there are more people who don't trust Microsoft's Webservers ;-)

  • The three month difference in the testing was to allow Alan Cox and Linus to get the number of virgin sacrifices / gallons of goat blood needed for K&R to get SMP networking code working.
  • by Blue Lang ( 13117 ) on Tuesday July 04, 2000 @07:14PM (#957868) Homepage
    actually, for it to be a REAL e-commerce test, you'd want one or two web servers and then a database server behind it. no one puts web and dbase on the same box, and the latency involved with network database access is very relevant in e-commerce speccing.

    of course, what a nightmare that would be to configure for benchmarking.. i guess you'd have to use oracle on NT and on Linux, but it's a DOG on NT, and, as much as i hate windows, it just wouldn't be fair.

    maybe SQL Server + IIS on the NT boxen and Oracle + Apache on the leenuchs boxens.
  • As Marty of LinuxToday puts it, though, "What does this mean? In the real world, probably not as much as it would seem. Benchmarks in general are typically set up in an ideal environment. Real world environments tend to be quite different. However, this does indicate that Linux is moving in the right direction."

    I can't believe that people are sitting here saying "yeah, but this isn't the real world." Ok, no offense guys, I actually can believe it.

    In Big-O notation, any scalar factor is negligible; it's factors like the powers in the algorithm that aren't. But this ain't an algorithm, this is a server.

    If you haven't noticed, Win2K makes my computer at work crawl by relation to my computer at home... And my computer at work is MUCH faster. Trust me.

    It beat Win2k THREEFOLD. I don't care WHAT your real world situation is; THREEFOLD is a LOT. If it does THREEFOLD, that means that, daggonit, it's probably going to be faster in "real world" situations too. Wake up and smell the coffee. Win2k isn't the holy grail of computing. Linux isn't either, but it's serving 3 TIMES AS FAST, which is significant; unless, to skew the benchmarks, they were also running 50 copies of Photoshop...
  • So your theory is that all top PC vendors, which are in a cutthroat race with each other to get the best SPEC results out, somehow conspired to make *ALL* 16 Windows 2000 Advanced Server + IIS submissions in the past year look bad, and all this with the help and under the watching eye of Microsoft? :-)

    No, my theory is that they used the instructions for optimizing IIS 4.0 on NT 4.0 to set up IIS 5.0; which isn't good.

    This is borne out by doing a search for the settings used on the Microsoft website; they're taken straight from an IIS4.0 tuning document.

    There are separate and entirely different IIS 5.0 tuning docs out there.

    Not to mention that most of the settings aren't registry settings, and appeared to have been set in the registry; IIS 5.0 doesn't use the registry much at all for perf. reasons.

  • Owners of such equipment should certainly thank Mindcraft. It was thanks to the kick in the pants that benchmark gave Linux folks that the appropriate changes were made to fix the problem.

    We can still denounce Mindcraft as a test that would be representative of real-world conditions for very few people (those who could afford a $50,000 server).

    But in the end, it's good that kick was given - and we should congratulate everyone in the Linux community who worked hard to make those improvements possible.


  • by ArchieBunker ( 132337 ) on Tuesday July 04, 2000 @07:18PM (#957888) Homepage
    Not very close, most busy sites don't have all static content.

    On a side note I think you should all visit this address and see what is running: net

    Solaris, eh? What's the front page of Andover say?
    "Leading the linux destination." Great example you're setting there.
  • by ksheff ( 2406 ) on Tuesday July 04, 2000 @07:18PM (#957890) Homepage

    From []:

    SPECweb99 is the next-generation SPEC benchmark for evaluating the performance of World Wide Web Servers. As the successor to SPECweb96 [], SPECweb99 continues the SPEC tradition of giving Web users the most objective and representative benchmark for measuring a system's ability to act as a web server. In response to rapidly advancing Web technology, the SPECweb99 benchmark includes many sophisticated and state-of-the-art enhancements to meet the modern demands of Web users of today and tomorrow:
    • Standardized workload, agreed to by major players in WWW market
    • Full disclosures available on this web site
    • Stable implementation with no incomparable versions
    • Measurement of simultaneous connections rather than HTTP operations
    • Simulation of connections at a limited line speed
    • Dynamic GETs, as well as static GETs; POST operations.
    • Keepalives (HTTP 1.0) and persistent connections (HTTP 1.1).
    • Dynamic ad rotation using cookies and table lookups.
    • File accesses more closely matching today's real-world web server access patterns.
    • An automated installation program for Microsoft Windows NT as well as Unix installation scripts.
    • Inter-client communication using sockets.

    It certainly looks like they are testing Dynamic content as well as static. Check out api-src/ [] for the source for the dynamic content.

  • As stated in the spec disclosure page for Linux, each NIC was assigned a separate CPU (unlike in the Mindcraft experiment). Running both with CPUs and NICs aligned would give the fairest results.

    Doomy, old time /.er
  • by _underSCORE ( 128392 ) on Tuesday July 04, 2000 @07:20PM (#957893) Homepage Journal
    This is truly fantastic news, for years linux has held the lead over Windows in stability, usability, remote access, and bugfixes. Now it's poised to take the lead in the one area in which it was lacking... meaningless benchmarks.
    Now the only advantage Win2K has over linux is a transparent start menu.
  • by mindstrm ( 20013 ) on Wednesday July 05, 2000 @06:47AM (#957895)
    Are we that low? Do we pull a Mindcraft whenever we want?

    These systems, although very similar, are not identical. Different drive arrangements, different scsi controllers.
    And, to boot, one is running IIS 5 and one is running Tux 1.0 (whatever that is...).

    What does this prove about the individual Operating systems? ABSOLUTELY NOTHING!
    It shows that operating system 'A' running web software 'B' on machine 'C' is faster than operating system 'X' running web software 'Y' on machine 'Z'.

    What the hell is "Tux 1.0"? Yes, I could look it up. Why not at least benchmark Apache, so you could say "benchmark of the most common internet platform for each OS" or something?
  • by Billy Donahue ( 29642 ) on Tuesday July 04, 2000 @07:21PM (#957896)
    I'd use Linux if Windows was 200% faster. A faster Windows still locks me into its stupid upgrade treadmill. Benchmark results are just statistics, and as you know, there are "Lies, Damn Lies, and Statistics."

    You can't just jump up and down when Linux beats Windows on a benchmark. Then you're setting yourself up to hang your head when Linux loses one every now and then (Mindcraft). In so doing, you're missing the point: the speed, usability, or even stability of free software is not the driving force behind its existence. It's the FREEDOM!

    On Independence Day, of all days, you lose sight of this? I'm so tired of these benchmarks.
  • also, how do you figure that Macs are any less "desinged for this" than x86 boxes?

    Because Macs--and yes, even your "quality-built" G4--are terrible at the most important factor for web-serving performance: memory bandwidth. "Apple's systems generally have had only about 60 to 70% of the effective memory bandwidth of contemporary x86 systems. This is due to Power Mac configurations that run the system bus at lower clock rates than comparable x86 PCs, and the simple fact that Apple's system ASICs cannot match the technical excellence of the best x86 chipsets like the 440BX." (source: Paul DeMone's Mac performance article [] at

    Furthermore, as you'll learn if you read the rest of that article, Apple refuses to submit any Macs to any standard, fair benchmarking organizations, and in particular to SPEC, instead preferring to use decade-old discredited benchmarks incorrectly (BYTEmark) or make up their own. I wonder why?

    the G4 is a damned powerful, quality built piece of equipment--better than most x86 boxes slapped together at some cheap ISP.

    First off, it's OEM, but I'm sure that was just a typo. More serious is your perception that the OEM does anything which impacts the performance of the computer other than pick the components. The only thing that could possibly make a computer "quality-built" by an OEM would be making sure everything is screwed in tight. What matters is that the components themselves are quality-engineered. And in the case of chipsets--again, the most important part of a good web server--a plain old Intel 440BX knocks today's Mac chipsets silly.

    And let's not even get into the x86 chipsets which are actually built to be used in web servers. Apple simply doesn't have anything to compete.

    And there's no reason why they should. Apple has never ever pretended their boxes make good web servers. And considering all the things Apple has pretended over the years, that fact alone should clue you in that they probably don't.
  • It beat Win2k THREEFOLD. I don't care WHAT your real world situation is; THREEFOLD is a LOT. If it does THREEFOLD, that means that, daggonit, it's probably going to be faster in "real world" situations too. Wake up and smell the coffee. Win2k isn't the holy grail of computing. Linux isn't either, but it's serving 3 TIMES AS FAST, which is significant; unless, to skew the benchmarks, they were also running 50 copies of Photoshop...

    As I said when the Mindcraft results came out - anything more than 30% difference in performance is suspect. 300% difference stinks of an error in the benchmarking procedure.

  • Casting my mind back to my university days, I seem to recall that SPEC benchmarks were done using very artificial tests.

    I mean sure, they did throughput tests, CPU tests, etc., but they were very calculated tests designed to test only one thing at a time (or something like that) and had little bearing on how a system/subsystem/software would perform in real-life situations.

    So the fact that Linux outperformed Win2k by a factor of 3 is pretty much useless as a comparison of real life performance.

    Of course I could be wrong. I'm at work at the moment and can't get my hands on those dusty uni notes ... :-)

  • I'm absolutely loving reading all these comments right now. It's very amusing to me how all of you Linux zealots have jumped on this bandwagon proclaiming, "Linux is the greatest OS."

    Yes, there were problems with the Mindcraft benchmarks, and yes, there are problems with this one. Namely, what in God's name are they comparing? They certainly aren't comparing operating systems; there are way too many differences in this case to do that objectively. Next time somebody runs benchmarks between the two OSes, please try to keep the following things in mind:

    (1) USE THE SAME HARDWARE! I cannot stress this point enough. What you people may call minor differences may often have a MAJOR effect on the outcome of a benchmark such as this.

    (2) Use the same web server software. How in God's name can you blame or credit the operating system for any of these benchmarks? Both are using completely different HTTP servers (one of which isn't even publicly available and shouldn't have been used: TUX). If you want a legitimate operating system benchmark and not an HTTP server benchmark, try comparing Win2k running Apache for NT against Linux running Apache. Otherwise, climb off your high horses right now; these are web server benchmarks, NOT OS benchmarks. I for one will say that Apache for NT consumes a lot less memory than IIS 5.0, though on my small intranet site I've yet to notice any speed difference.

    (3) The results are unrealistic. What kind of server has 4 gigabits of bandwidth?

    (4) Also, make a point of configuring both servers equally. It seems to me you guys scrimped here and there on the IIS configuration; I wonder why?

    If the Linux world wants credibility, it's time to grow up and earn it. You guys sure talk a great game, but when it comes down to the numbers, you are either whining when you get trounced or creaming in your pants over benchmarks which are obviously flawed.

    While I'm on my soapbox, let me say this also: It's amazing how many of the news stories on this so-called "News for Nerds" site appear to blatantly attack Microsoft and promote Linux. It's obvious that whatever sense of objectivity Slashdot once had (if ever) has long been lost to the horde of pre-pubescent teenagers who have only one goal: to get something for nothing.

    So there you go - take it or leave it - I dont really care. You may either post a reply or email one to

  • I find it interesting that there's no Macintoshes or Suns in the test, although there are at least one Alpha and two RS/6000s.

    Suns, sure, but Macintoshes? I don't think I'm aware of anybody using Macs for even semi-serious webserving. Neither the OS (OSX is a different ballgame, granted) nor the hardware is designed for this kind of thing. Correct me if I'm wrong, please :)

    As regards the number of HTTP servers, maybe they just ran out of time and money to benchmark ten squillion different configurations, and chose the ones they believed were in most common usage. Testing more of them would certainly be a good thing, though.

  • We all feel that way. We all feel that MS products are hogs. We all feel that Linux is clean and fast...

    Let me say, though: Win2k feels much smoother and runs much cleaner than previous NT versions. It's more stable. It *does* work better. It *IS* faster. And regardless of what everyone says, including me from time to time, at its core the NT kernel is *good* technology. I just wish MS would quit fucking it up. It's what they chose to do with it that sucks, not the kernel itself.

    And you are right. This benchmark is absolutely meaningless.
    OS 'A' running server 'B' on hardware 'C' beat out
    OS 'X' running server 'Y' on hardware 'Z'.

    That is meaningless.

  • I realise that some guy from LinuxWorld effectively made the same comment as me and it is in the original news post - oops. I wrote this post in response to reading a few reader posts about how great Linux is cos it beat Win2k...
  • Thanks to Linux's open nature, it has recently been ported to IBM's mainframes. I truly doubt that Windows 2000 could compete with Linux on a S/390. Which is sort of ironic, because it also can't compete with Linux on an old 386.

    According to this survey it would appear that Dell doesn't even think that Windows 2000 competes with Linux plus TUX for web serving on Dell hardware.

    I wonder what that could possibly mean :).

  • by jonnythan ( 79727 ) on Tuesday July 04, 2000 @07:27PM (#957925)
    When the Mindcraft benchmarks came out, every Linux zealot screamed and cried that there were problems with the benchmark. They were right. Some sensible people pointed out something interesting I remember..

    They said that when someone performs a benchmark in the future and it shows Linux outperforming Windows NT or 2000 by a sizeable margin, the Linux zealots will claim that THIS benchmark is the correct one and Mindcraft will be PROVEN wrong.

    This post seems to me like exactly that behavior. Mindcraft doesn't tune Linux the right way and WinNT trounces it. Linux zealots scream bloody murder and inspect the process with a microscope. Someone else does a benchmark that shows Linux 3 times faster than Win 2k, and they are content that the Mindcraft fiasco has been avenged.

    Take a look at yourselves. I'm not a Linux lover. I think it has a long, long way to go before the mainstream starts to take it seriously. There are so many problems with it right now: installing programs, removing them, X Window interface complexity, simple text editors... the list goes on. Honestly, I don't think it will ever become mainstream; it will get replaced by something else that will, before long.

    I don't love Windows either. There are of course many problems with it. However, it's not the spawn of Satan and Linux is not the Great Hope or messiah.

    Be objective, people. Please. You'll do your "cause" some good.
  • by konstant ( 63560 ) on Tuesday July 04, 2000 @07:27PM (#957927)
    The two major distinctions between these benchmarks and the unjustly maligned Mindcraft benchmark (whose results were later confirmed by PC Labs):
    1) these tests compare Win2k to Linux. By contrast, the Mindcraft study compared WinNT4.0 to Linux.

    2) in the "Operating System" column of the Linux boxes, we see a revealing note:
    Operating System: Red Hat Linux 6.2 Threaded Web Server Add-On

    It seems as though RHAT has taken the trouble to render its TCP/IP stack into a multi-threaded model, rather than the forked model I understand it used to use. This was identified as the primary deficiency in the previous benchmarks.

    At the time, Linux aficionados claimed that the superiority would be short-lived. Assuming these stats are otherwise legit, it seems as though they were right, and in such a brief period of time as well. I'm impressed! Keep pumping out impressive turnarounds like this one, and very soon commercial entities will have to give open source its just props as a development model.

    I am slightly curious whether this "web server add-on" is available to consumers, and also whether it is a fully-featured web server. If not, and this is just a hack, that might cast a pall of illegitimacy. Anyone have the inside scoop?

    Yes! We are all individuals! I'm not!
  • by Chalst ( 57653 ) on Wednesday July 05, 2000 @06:57AM (#957929) Homepage Journal
    Is there a significant performance difference for web servers using WinNT and Win2k?
  • > An NT admin can be had for much less, which is important when figuring out TCO.

    Which probably explains things like this [].

  • I do agree that it's still a benchmark, and is therefore susceptible to all the follies associated with benchmarks. But at least this one wasn't obviously rigged

    I disagree; would anyone care to explain why:

    The Linux setup had on-NIC buffers of 300 bytes, whereas the Windows setup was set to use buffers of 10,000 bytes - thus giving higher latency?

    The Linux setup was set to use (from the get-go) 10Mb of memory for its TCP/IP buffers, whereas (it looks like) Windows was set to use 17Kb?

    The size of the TimeWait buckets buffer in the Linux configuration was HALF that of the Windows NT configuration?

    Why was the logfile on the Windows box set to flush every 60 seconds instead of the default of every 30 seconds?

    The thread pooling settings on the NT box are suspect; they seem artificially high, which can degrade performance.

    Sure, this could all be moot. But before jumping on the "this Benchmark is THE WORD OF GOD" bandwagon, I'd like to see why these changes were made.
  • Although I would love to see Linux on every machine at work... I have to say:

    The administration style used to administer and install linux boxes is *NOT* the same that is required for NT boxes. Not that this is a good thing, mind you, but if you approach NT work in the right manner, it can be done quickly.

    I roll out new workstations in ~20 minutes now, unattended. (Norton Ghost, several other nifty autoconfiguration things I whipped up, some neat network stuff)

    Also, the choice of servers is usually due to the fact, still, that *nobody out there has Linux/Unix experience*. It is still perceived as something that the general admin "cannot understand." THAT is the REAL PROBLEM.

    I've tried to roll out Linux servers at many companies. Their main reason for not doing it is that they CANNOT SUPPORT IT. They don't have any Linux people.

    Believe me.. if the other admin in the company was a linux nut like me, I would have *no* problems convincing management to go with it.
  • Actually this depends completely on the amount of experience required. If you want an NT admin with the same amount of experience as your "experienced" Unix admin, then you will generally find that the NT admin is more expensive. Likewise, if you want a Linux admin with the same level of expertise as your recent MCSE graduate, then you will probably pay less than you would for the NT admin.

    This doesn't even get into the fact that with Linux upgrades are free, and hardware stretches a lot further. Nor does it recognize the fact that most Linux admins are capable of adminning far more hosts.

    The fact of the matter is that Microsoft has been pitting the skills of entry-level MCSEs against hardcore Unix veterans in their TCO evaluations for some time now. They have completely glossed over the fact that hardcore NT veterans are often more expensive than their Unix counterparts. The popularity of Linux, and its down-to-earth prices, have made it relatively easy to get hold of junior-level Linux admins at rational prices. Heck, colleges these days are pumping out kids that know Linux like it was going out of style.

  • by Ingo Molnar ( 206899 ) on Wednesday July 05, 2000 @12:45PM (#957946) Homepage

    1) the maximum filesize in the SPECweb99 benchmark is 900 KB, which is why there is a 1MB limit set. Your claim that there are 1MB objects in the benchmark is false.

    2) the CGI executable is mandated by the SPECweb99 Run Rules: a process must be created and destroyed. But CGI requests are only 0.1% of the workload! The other 99.9% was handled with IIS "low application priority" modules, i.e. a DLL loaded into IIS's address space, not a .EXE.

    3) the IIS object cache was set to 2GB (not 2MB). It's set to 2GB because Windows 2000 + IIS has a serious limitation: threads (such as the IIS threads) can only address 2GB. This is a design flaw in Windows 2000, which haunts them in the enterprise now.

    4) are you really seriously promoting the idea that the top 4 PC OEMs (Dell, IBM, Compaq, HP) and Microsoft did not tune IIS to the max and somehow conspired in making Linux+TUX numbers look good?

    Fact is, the only reason the TUX result was compared to the same Dell system is that the Dell system also happened to have the fastest Windows 2000 result. Your whole line of argumentation is obviously flawed if you compare IBM's similar Windows 2000 SPECweb99 result to the TUX result [].

  • Yeah, but most of that time was spent looking for the screwdriver.

  • Warning: This post _will_ be considered flamebait by Linux zealots. It'll also be considered flamebait by Micro$oft zealots (do any of those exist?). In fact, the only people who will probably like this post are Be zealots. So sue me. Moderators: just because you don't agree with what I say doesn't mean you should moderate it down. At least that's the mature way to look at things. There has to be someone out there who's mat... nevermind ;-)

    I've stopped caring about linux now. I think Open source is a great thing. I like CLI's, so naturally I like the unix idea. But until something major changes, I don't think Linux will take over from microsoft in the consumer-arena. Why you ask? Because
    1) It's very difficult to configure. I have very mainstream hardware. Nothing funky on the motherboard: 3Com NIC, graphics card from Diamond, SB sound card, etc. But I could never get everything to work at once. Keep in mind I'm fairly computer literate (I've only built my own computers since I was in 8th grade...) and I know what I'm doing. But I could never get everything to work together, and this is with three different distros, keep in mind. If someone like myself, who knows about computers, can't get the damn thing working, what makes people think that average joe-consumer and idiot-boss will be able to make it work on their computers? And that's not even getting into installing software ("make install" my ass; there are better ways to do things if you want it made easy). "Well, just buy it pre-installed then!" you might say, which brings up
    2) A Micro$oft OS is pre-installed on almost every computer on the planet. "But dell has linux preinstalled on some laptops!" If they do, they're not making it very visible, a search a couple weeks ago in their home-user laptop section turned up nothing with linux. ditto for small-business section. Which leaves the average joe to install it himself. Refer to 1) for the impossibilities of that happening.

    Now, you're probably wondering why I said "Be zealots" up there, right? Well, that's my solution. I think with the right pushing, Be actually has a chance against the gorilla. Unfortunately, it doesn't look like that's going to happen; it looks like another OS/2 (that was a fun one to play with, btw). Coulda, shoulda, but didn't, because of piss-poor advertising. Make no mistake, I think Be is great; I use it as often as I can. It's easier than anything to set up (just install it) and it works great out of the box. I urge everyone to try it. Hell, it's even free []. And parts of it have been open sourced.

    My rant is done. I guess I could sum it up by saying " We're screwed, the good stuff always gets squashed by the gorilla ". Have a nice day :)

    Predictions for the moderation: Troll, Offtopic, Flamebait. Lets see how close I get.

    Linux is only Free if your time is worth Nothing

  • > When Linux didn't support 4GB of RAM, that was a liability. If Phil had a 2GB NT box and Joe had a 2GB Linux box, and they both needed more performance, Phil could pop in 2 more gigs and get it. Complaining about this particular point is like losing a tennis match and yelling "Hey, no fair, you've been working on your serve!".

    OK. I want to do a Linux vs. W2K benchmark. I'm going to run it on an IBM S/390. Fair?

  • Hrmm, this doesn't appear to have been bent in Linux's favor. Many different systems/configurations were benchmarked by different vendors and the higher end Linux setup beat them all.

    I'm sure each vendor did everything they possibly could to improve their SPECWeb99 results, since it's in their best interests. Does this mean that Linux is just better overall? Does it mean that it can be twisted the most to win any benchmark if you try hard enough?

    The impression you get from reading linux-kernel is that the developers are dead set against patching the kernel just to win a specific benchmark, yet it obviously did very well in this one.

    Then again, there are lies, damn lies, AND BENCHMARKS. I see this as being more credible than the Mindcraft benchmarks (Mindcraft, haha, that sounds suspicious) since it wasn't simply NT vs. Linux and multiple vendors are involved.

    It would have been interesting to see FreeBSD thrown in, just because it's another open source system. Maybe there's a trend here? Easier to tweak open source systems to win benchmarks? Maybe they're just clearly better? Hmm.

  • by Phallus ( 54388 ) on Tuesday July 04, 2000 @08:01PM (#957961) Homepage
    The problem here is that these figures aren't aimed at freedom lovers - most of us take benchmarks with a pillar of salt anyhow. They are aimed at business users. Most business users don't play politics - they want results, and they want comments from the press they can show to their managers.

    So while the benchmarks don't directly affect us, their influence over business computing does give them some significance.

    tangent - art and creation are a higher purpose
  • by WasterDave ( 20047 ) <(moc.pekdez) (ta) (pevad)> on Tuesday July 04, 2000 @07:42PM (#957972)
    These numbers seem hugely high to me. I mean... 4,200 simultaneous connections at 350 kbit/sec is around 1.5 Gbit/s. To do that you'd need some fairly serious NICs. A closer inspection of the test setup reveals the server was pushing 4 networks through 4 gigabit Alteon network cards.

    Reality check guys. Does anyone have 4 gig of external connectivity? And doesn't 4,200 simultaneous connections of 350kbit/sec each represent, like, Yahoo? (without doing the sums)

    This would also seem to spur a more serious debate about web performance testing. If we can get a single server to push through this kind of throughput, why have clusters of servers at all? Clearly real-world servers perform nothing like as well as this, and we need to take a better look at why.

    Dave :)
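
    Doing the sums WasterDave skipped, the aggregate bandwidth figure is easy to check with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the benchmark's aggregate bandwidth.
conns = 4200            # simultaneous conforming connections reported
kbit_per_conn = 350     # per-connection rate, kbit/s
total_gbit = conns * kbit_per_conn / 1_000_000
print(f"{total_gbit:.2f} Gbit/s")  # → 1.47 Gbit/s
```

    So the "around 1.5 Gbit/s" figure holds, and it indeed needs more than one gigabit NIC to carry.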

  • I believe that Apache 2.0 (currently in beta) will be multi-threaded. Of course, they could have run the test with the open-source AOLserver, which is multi-threaded and extremely fast, or even with the built-in kernel HTTP server that will be available with 2.4. That's the nice thing about Linux: lots of choices.

    What surprises me is that even though NT relies heavily on threads and IIS is so tightly integrated with the OS, it still lags behind.
  • by tecnodude ( 31328 ) on Tuesday July 04, 2000 @08:04PM (#957978)
    Ok, I agree with you about benchmarks being statistics and how they can be manipulated. This is still a good thing.

    Think about it from the point of view of someone trying to justify a Linux web server in a business environment. I'm going to assume that most businesses have a budget for what they're going to spend for the year on equipment and software. Isn't it worth proving that you could save hundreds of dollars on Windows and the licenses if Linux meets the business's needs?

    Say the web server is going to be spitting out static HTML over DSL or a T1; what's the point of having an NT/Win2k box for that when Linux or BSD would do the job at a considerable savings? The money saved would be money for another project.

    If you're working for a company, that's money to spend on replacing some of the old junk that gives you problems (10 Mbit NICs, hubs where you need switches, a few larger hard drives, etc.).

    If you're consulting, that's more value delivered to the customer while accomplishing the project. The last thing I want to do is give an improper solution when my reputation is on the line.

    I think that the more "fair" benchmarks out there, the better. Even Mindcraft's benchmarks were helpful because they showed how far Linux had to go in certain situations. Right now, in my opinion, the more Linux gets talked about the better; it needs to become a household name before a lot of business owners will consider it.
  • The next time I'm building a web site that will get 12 000 hits a second, this kind of benchmarking will be really useful. Until then... I'm sticking with Linux because of its flexibility and freedom.

    I've got a 66 MHz 486 running GNU/Linux, 450 days uptime, serving up to 10 000 hits a day...

    Danny [].
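
    For perspective, the gap between the hypothetical 12 000 hits/second site and that 486's workload spans several orders of magnitude:

```python
# Compare a 12,000 hits/second site with a 486 serving 10,000 hits/day.
big_site_per_sec = 12_000
small_site_per_sec = 10_000 / 86_400      # seconds per day; ~0.12 hits/s
ratio = big_site_per_sec / small_site_per_sec
print(f"{ratio:,.0f}x")  # → 103,680x
```

    Which is the point: benchmark-scale numbers say almost nothing about the hardware most sites actually need.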

  • Slashdot as pseudo-intelligent cache, anyone?
  • Suns, sure, but Macintoshes? I don't think I'm aware of anybody using Macs for even semi-serious webserving. Neither the OS (OSX is a different ballgame, granted) nor the hardware is designed for this kind of thing. Correct me if I'm wrong, please :)

    ok, you're wrong. Mac OS X Server has been out for well over a year, and it does a handy job of serving up web pages with Apache. also WebObjects, from the NeXT world, is a nice piece of software for delivering web-based applications.

    more info can be found here [].

    also, how do you figure that Macs are any less "designed for this" than x86 boxes? the G4 is a damned powerful, quality-built piece of equipment--better than most x86 boxes slapped together at some cheap ISP. sure, it may not be a high-end Sun box, but there's no reason a Macintosh can't serve web pages with the right software (and better than an x86).


  • I'm not rich. So sue me. Some of us can't always throw more fucking bandwidth and hardware at the problem. Especially those of us who make minimum wage

    But businesses can. Maybe they can't pay more than minimum wage cause they spent all their money on hardware and software licensing...

    because none of the godamn computer companies want to hire us because we don't have "degrees" or "credentials" or any of that bullshit

    It doesn't take "degrees" OR "credentials" to at least make more than minimum wage, if you have any sort of technical intelligence at all. You may not have your dream job, but you can definitely make above minimum wage...

    Besides, our high school has a several thousand dollar tech budget and a T1 line and the shit still crashes every time you turn it on. That's no exaggeration, that's the literal truth

    Uhm.. "several thousand dollars" doesn't go very far when it's spread out across an entire school. Does that cost include the T1 line? If so, you have even less money to spend.

    "Why can't I be a network administrator making 6 figures? I mean, I know I'm still in high school and have never had a job before, but just wait 'til you see the job I'll do! I'll take all those servers and reload linux on them, and they'll run so much faster you won't even need half of them! Then I'll take them home and make a beowulf cluster out of them to crack DVDs and encode MP3s."


    [ok, so maybe i overstereotyped at the end a little bit]
  • Well, I'm sure that Microsoft will repeat the test (in exactly the same way that the Linux mob repeated the Mindcraft ones) soon and we'll see how that goes.

    If you think that those Windows 2000 systems are not tuned well enough, then more power to you; I'm sure you'll be hired immediately by any of these companies, since good SPECweb99 performance is a top priority for every hardware vendor.

    No thanks; did that for a couple of years (I used to work on capacity planning tools for mainframe and server applications). I'm much happier writing cool applications for Sierra.
  • by Anonymous Coward
    Don't be surprised. NT has no fork function, and its CreateProcess API call is extremely slow. MS moved IIS to threads very early on, primarily because starting new processes on NT is so darn slow. Apache got bitten by the same bug/feature. When Apache was first ported to NT, it was dog slow because it was acting like a good Unix program: it forked. Later Apache on Win32 used threads. There's just no way to get good performance on NT if you need to make new processes. NT is a VMS derivative, not a Unix derivative, and this is where it really shows.
  • Suns, sure, but Macintoshes? I don't think I'm aware of anybody using Macs for even semi-serious webserving. Neither the OS (OSX is a different ballgame, granted) nor the hardware is designed for this kind of thing. Correct me if I'm wrong, please :)

    You mean besides the military []. A G4 running OS X Server is nothing to sneeze at [], and if memory serves correctly, web serving on the mac using WebObjects is a pretty sweet combo... If you are going to include commodity x86's then you should include macs...

    course, if you're running Linux PPC, then you can run Apache on a G4 and really rock and roll...

  • by Black Parrot ( 19622 ) on Tuesday July 04, 2000 @08:31PM (#958008)
    > the unjustly-maligned Mindcraft benchmark

    No, the malignance was just. Even though Mindcraft II addressed some of the obvious technical problems with the first round, they still ran an extremely odd benchmark on an extremely odd selection of hardware, which left them open to charges of having tuned the test to provide the desired results. (I.e., "Here's one we can win!") These charges were confirmed by the suite of benchmarks run by c't at about the same time, where Linux won on almost every test, even though there was a realistic and reasonable variety between the specific c't tests, rather than a single bizarre test as in Mindcraft. (Another poster has given a link to those results.)

    Even though Red Hat was foolish enough to participate in Mindcraft II [*] and thereby gave the benchmarks an appearance of legitimacy, many of us said in advance that we would not accept the results if they did not use a more realistic benchmark on a more realistic selection of hardware. I, for one, still stand by that.

    It's absurd to put any stock in a benchmark that is sponsored by a company with a direct interest in the outcome and that does not even reflect a standard benchmark.

    [*] Or not, as the case may be. Perhaps they were just trying to get a close look at the behavior so that they could get started on their "add on". Indeed, this may be what happened - see the details [] and notice the "Each NIC IRQ bound to a different CPU; Each TUX thread's listening address bound to 1 NIC's associated network", which sound like a direct response to Mindcraft.

    > I am slightly curious whether this "web server add-on" is available to consumers

    The linked page says that the "HTTP Software", "Operating System", and "Supplemental System" (whatever that is) will be available in August 2000, so it does sound a bit vaporous.
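
    The IRQ binding quoted from the disclosure is a standard Linux tuning step. A rough sketch of what it looks like in practice; the IRQ numbers here are hypothetical, so check /proc/interrupts on the actual box:

```shell
# Bind each NIC's IRQ to a different CPU so no single CPU services all
# network interrupts. IRQ numbers 24-27 are made up for illustration;
# find the real ones with:
grep eth /proc/interrupts

# smp_affinity takes a hex CPU bitmask: 1=CPU0, 2=CPU1, 4=CPU2, 8=CPU3
echo 1 > /proc/irq/24/smp_affinity   # eth0 -> CPU0
echo 2 > /proc/irq/25/smp_affinity   # eth1 -> CPU1
echo 4 > /proc/irq/26/smp_affinity   # eth2 -> CPU2
echo 8 > /proc/irq/27/smp_affinity   # eth3 -> CPU3
```

    Spreading interrupt load this way is exactly the kind of tuning that matters at 4-NIC benchmark throughput, which is why it reads like a direct response to Mindcraft.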

  • by danheskett ( 178529 ) <> on Tuesday July 04, 2000 @08:36PM (#958010)
    Good point on objectivity.

    I noticed one potential problem, which may be a bug or a feature of Win2k.

    The notes on the Win2k server indicated that the home directory was set to both 'execute' and 'script' access.

    My experience with Win2k has been that unless you specifically change it, all files (even those with a .HTML or .HTM extension) get run through the ASP parser before being served. The ASP engine is hideously slow. If in fact the tests were on plain, run-of-the-mill HTML pages, this is a big waste!

    I can't say for sure whether that's the case in this situation, but I always change the access to just 'read' on the home directory and move the scripts to a separate directory which is given 'execute' access. It really makes a difference on all the non-script pages.

    Just a few thoughts from a recovering NT admin.

  • Dell, like every OEM licensee of Windows, doesn't enjoy paying MS money on every machine it sells. Knowing that many of them will run Linux anyway, they offer to install it, coincidentally saving them the cost of Windows. Since they sell the box for the same cost either way, they're making more profit with Linux.

    So why not make sure Linux wins a benchmark in an area where they know Linux is popular already? Fudged numbers and fudged hardware aside (we'll assume they were honest), it would certainly seem a logical thing for them to do. As is evident from the results page [], they blew away the competition with TUX 1.0. I've been unable to find any information on this (please enlighten), but it appears that it's a kernel patch, because the options are set with the kernel interface. Is this the khttpd that was discussed after the Mindcraft fiasco?

    In any case, if it's in kernelspace, it's most likely not a full-featured HTTP server like Apache, Zeus, IIS. So it can spit out static pages as fast as you'd possibly need. Big deal. Fireworks accelerate faster than space shuttles, but you wouldn't create dynamic content with fireworks. (Erm... where was I?)

    My point is, Dell has proven that a specially designed static page server is faster than servers with more features. That doesn't really tell us anything we didn't know. It doesn't demonstrate that one OS is better than another, nor does it make deployment decisions any easier (except for fools).
