
Mindcraft Study Validated

!ErrorBookmarkNotDefined writes "Another study has appeared validating the Mindcraft comparison of Linux and NT. This time, PC Week benchmarked Solaris, Linux, and NT. Using a monster machine, NT handily defeated Linux. The study mostly found fault with Apache. (For low-end machines, Linux would easily beat all comers; but how far along is Linux in the high-end market?) "
  • by Anonymous Coward
    Well, I'd like to see them keep slamming the servers with hits like that for about a week straight, and then we'll see whose servers are still up and running and which machines will require a reboot.
  • by Anonymous Coward
    A basic problem I have with the benchmarking practiced by PC Week and Mindcraft is that they look at a single high-powered machine. A real test should look at several different machines with varying processor, disk, and memory configurations.

    I would recommend a course in statistics before making such a bold claim that the numbers prove it. Numbers in themselves do not prove anything. A single data point for a quad-processor machine means nothing to the real user.
  • by Anonymous Coward
    Everybody knows that benchmarks are just benchmarks. I would like to compare the situation between NT and Linux servers to that of Mercedes F1 and regular cars.

    If you have a good track and fair weather, you'll get some pretty fast lap times with the F1 car, tuned to the maximum. This approach is, however, not practical. In an F1 car you get a bumpy ride, the engine blows after about 1000 km, and you have to refuel and change tires every 200 km.

    An F1 car will also carry only one person at a time (one prototype with a passenger seat has been built). It's also a painful, tough exercise to drive an F1 car.

    But, put a little water and a few LARGE bumps into the circuit and the situation changes. In your regular Mercedes you can enjoy the ride and still make the trip in quite a good time, while listening to music and not having to repair the engine and refuel as often. You can even bring along a few friends to make the trip much more enjoyable.

    My advice: don't waste money and time on building an F1 car for the public; instead, enjoy the ride that you get from your ordinary Linux.
  • by Anonymous Coward
    Looking at the file-serving tests: isn't this just a problem of Linux not being able to bond the 4x100Mbps cards into a full-duplex 400Mbps pipe? Linux performance falls off at around 200Mbps, just what I'd expect for one card. Or am I missing something obvious?
  • by Anonymous Coward
    I think you guys mean RegClean.exe; I didn't see anything about regedit in the last two links to the Risks archives.
  • The survey claims (italics/bold are mine):
    A few weeks before our testing began, Linus Torvalds (creator and keeper of the Linux kernel) released the Linux 2.2 kernel. It promised to overcome the lack of multiprocessor scaling that has hampered performance on our previous tests.

    Unfortunately, we could not test the potential improvement in processor scaling, as the 2.2 kernel included a TCP/IP stack improvement that broke its communication with our Windows 95 clients. This fix was not in the 2.0 kernel, which was the latest available in standard distributions during our testing, so we tested with Caldera's 2.0.35 kernel distribution.

    For some reason my Win95 clients hardly noticed when I upgraded to 2.2.x

    Does Caldera come pre-configured with encrypted password support?

    Did they just forget to run Win95PlainPassword.reg ?

    Is it some evil plot by the Linux community to keep support costs down by keeping the clueless far away from Linux? :)

  • Remember when WebStar was still one of the top 3 servers on the net (outnumbering IIS) and was getting slammed in all the benchmarks for all the same reasons that Linux is now?

    In reality, WebStar running on a low-end Mac under Apple's brain-dead TCP/IP stack could saturate a T1 line. Running on Apple's top-of-the-line hardware it could match a Sun box at up to T3 speeds. Yet all the benchmarks were at 100Mbps LAN speeds and showed WebStar getting butchered when over 50 simultaneous clients started hammering it. Sound familiar?

    Back then everyone who knew better stayed quiet (you know, all the geeks who admin server farms and read slashdot all day) since it was "just Apple, so who cares." Maybe if people had complained we wouldn't still be seeing these benchmarks 3 years later.
  • by Anonymous Coward
    Linux evolving faster than the competition. Myth.

    The shortcomings of the Linux kernel have been known for ages. Linux first appeared in late 1991. In early 1992 already Linus acknowledged that a microkernel design would have been better.

    "True, linux is monolithic, and I agree that microkernels are nicer. [...] From a theoretical (and aesthetical) standpoint linux looses."

    They had years to fix the shortcomings. Fact is, the Linux kernel _architecture_ evolves at a snail's pace. Just because a new kernel gets released every other day doesn't imply that it evolves in any meaningful way. Don't forget that there's a new release for every new driver, and there's practically NO serious internal testing performed by Linus.
    Proprietary kernels probably evolve much faster but you don't get to watch it.

    Lastly, good kernel programmers are rare. If you were one of the few, would you rather spend your working hours coding for love and Linus, or earning good money instead? Fact is, everything about Linux is _mediocre_. No great inspirations, no brilliant minds involved, no breakthrough progress. Many people don't mind that. It's "good enough" for them. I personally can't stand it.

  • Don't be fooled by the rants of some of the slashdot audience. The kernel developers are doing what they can to address shortcomings in the area of scalability. It's not so long ago that Linux lacked simple things such as SMP and out-of-the-box support for more than 64MB of memory. Linux is not terribly scalable (especially out of the box), but the developers are making leaps and bounds (2.2 itself is a great improvement over 2.0, though it's still not great).

    While the Mindcraft study does not seem credible (for a start, it was not independent), this study is not severely flawed. However, all the study shows is that Linux has its limitations as a high-end server.

    What is also interesting, however, is that NT is actually a weak value proposition on a high-end machine. Take a look, and you'll see that it can't hold a candle to Solaris (even x86 Solaris), especially considering Solaris' superior reliability. HAND, -- AC

  • by Anonymous Coward

    We need to design and build a real-world benchmark: one that has a very realistic mix of static, dynamic, and secure HTML, plus high- and low-speed user connections. Also include content searches, like looking up T-shirts and catalog IDs in a catalog. I also want the client machines to look like a mix of machines, and to behave in a manner similar to a real user. That is to say, one connection calls for a page, pauses for a random number of seconds, then asks for another page, or does a search, or something. Each page returned will need to have a reasonable mix of images and other content.

    To do this benchmark some things will be needed:

    • Server page content - A set of web pages for the server to serve. I'm thinking of something like a catalog retailer.
    • DB Content - A database of information that dynamic catalog requests are generated from. This should have entries for all products.
    • Client Programs - Simulate the activity of a user. Slow connections will need to be simulated; after all, not everyone has a T1 or faster to their home. The client programs will have knowledge of the site's content and will make dynamic queries based on that content. Queries should be designed to return listings with a wide range of numbers of appropriate items.
    • The clients will check the returned selection to see if it is complete and correct. I.e., if a query for part number X returns part number Y, it is wrong. Similarly if a request for "T-Shirts" returns only 5 items when there are 23 T-shirt items in the DB.
    Results will be graded on correctness as well as performance. So if server one gets 1000 hits/sec but has a 25% error rate, it really only served 750 hits/sec; on the other hand, if server two did 800 hits/sec and managed 100% accuracy, then it really did do 800 hits/sec.
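
    The grading rule above can be sketched in a few lines of Python; the function name and the figures are mine, purely for illustration:

```python
# Hypothetical scoring helper for the proposed benchmark: raw hits/sec only
# count when the returned result is correct. Function name and numbers are
# illustrative, not part of any real benchmark suite.

def effective_hps(measured_hps: float, error_rate: float) -> float:
    """Discount raw hits/sec by the fraction of responses that were wrong."""
    if not 0.0 <= error_rate <= 1.0:
        raise ValueError("error_rate must be between 0 and 1")
    return measured_hps * (1.0 - error_rate)

# The two servers from the example above: raw speed alone is misleading.
print(effective_hps(1000, 0.25))  # server one: 1000 hps, 25% wrong -> 750.0
print(effective_hps(800, 0.00))   # server two: 800 hps, all correct -> 800.0
```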
  • by Anonymous Coward
    I have used IIS and Apache/mod_perl. Apache/mod_perl performs better on a P133 than IIS does on a PII 200. It's easier to use, more extensible, and comes with cool goodies like ImageMagick. Unix is far easier to develop on than Win32. Just try to build an ISAPI DLL with ODBC vs. mod_perl with DBI and you will see.

    It's stupid to take such a test completely out of context of the real world. Use both, then use the one that sucks less and you will probably use Linux/FreeBSD.

    Besides, it's free!
  • by Anonymous Coward
    Slashdot regulars do not seem to notice that, concurrent with these tests showing Linux's speed lagging on certain tests with certain hardware, the mainstream Windows-loving computer press has run several very long, very positive feature articles about Linux, highly recommending it as a server vs. NT. I'm talking about within the last few days, with full knowledge of the disappointing results in these tests. One even had a chart with a feature-by-feature comparison in which Linux came out better. Not to mention the A+ Red Hat 6.0 review featured here.

    Sites include several ZD subsidiaries (PC Week, etc.), CNet, Wired's Webmonkey, and others I can't remember. Check the Linux Center site (French or English version) for a list.

    While there is certainly pressure within certain divisions of ZD and their ilk to run performance tests tilted towards MS, in general the journalists are being very fair, if not giving Linux the sweetheart treatment. And, at least in the test which is the topic of this thread, Solaris was also thrown in, perhaps to show that a Unix system designed for high-end equipment easily beats NT even with hardware and conditions designed to show NT in the best possible light. NT won't even run on what Linux and Solaris will run on (non-Intel), but even in foreign territory Solaris performed better.

    I am not a sysadmin and don't have much knowledge of networking, but business is business. It seems that to make Linux look better, it may be necessary for companies basing their business on Linux, like Red Hat and Caldera, to pay for their own tests under conditions highly favourable to Linux - and to publish the results prominently.
    IT IS NOT UP TO THE "COMMUNITY" to do any of this.
    Linux is not a company, but companies basing their business on Linux are in direct competition with Microsoft.

    Mainstream computer journalists have prominently published charts comparing Linux to NT point for point - it seems, to counter MS's "challenge" page, on which very unfair and dishonest charts comparing NT and Linux were prominently displayed.
    Are ZDNet and Wired doing Red Hat's job for them? It seems so.

    Whining in posts here about what is wrong with tests showing Linux unfavourably just compounds the damage. It comes across as sour grapes.

    Run your own tests and publish the results, email Red Hat and Caldera to do that, or shut up. Hell, nobody cares if you fake the data. Who will know?

    I hope the people responsible for the kernel have better sense than to make unwise modifications just to make Linux perform better in certain artificial benchmarks. It seems that a better approach is just to keep developing in a way which is natural to Linux, and eventually these benchmarks will take care of themselves.

    In the meantime, if your heart bleeds for Linux, publish and advertise the many areas in which Linux excels as a server, and thank the journalists in the mainstream media - whom you seem to hold in contempt - for taking the initiative to do that for you. At least they deserve a thank you.

  • As soon as Linus and Cox improve the kernel, Microsoft will release Windows 2000 and Windows will take another step forward; and then when Linus and Cox try another kernel release, Microsoft will release yet another.

    Linux is currently improving at a much faster rate than NT. How old is NT 3.5, and where was Linux when it came out? How far has NT come since 3.5? In terms of the work being done on the kernel, and in terms of third-party support, Linux has closed the gap, with SMP support appearing in 2.2 and several (big) third parties standing behind Linux (IBM, Dell, Oracle, Corel).

    As for the number of developers... more is not better, and the Linux kernel shows this. Linux is not perfect (it lacks scalability) but in some ways is miles ahead of NT (more reliable, better security features, etc.).

    BTW, your point about all the MS developers being too busy with Win95 to work on NT doesn't wash: don't forget, Win95 is about 4 years old now. In 1995, Linux was in its infancy.

    Face it. It's lost. Linux will continue as a hobby for Linux users, but in 5 or 10 years NT will run everything from your car to your toaster to your house security system.

    Fire away with your predictions, but at present Linux continues to gain market share at the expense of NT. What makes you think that the rapid adoption of Linux will suddenly halt? All factors that have the potential to influence growth are on the rise. Factors such as third-party support, critical acclaim, availability of applications, mindshare, large-scale deployment, and the OS itself have taken leaps and bounds over the last year, whereas NT has either stayed still or even slipped a little. I wouldn't bet on NT killing Linux any time soon.

    -- AC

  • by Anonymous Coward
    Ok, here's the deal. IIS looks like it kicks the crap out of Apache (speed-wise), but I would like to see NT/Apache vs. Linux/Apache. Guarantee Linux will win easily. The same goes for comparing Microsoft networking: WinNT to another Win32 box, of course WinXX-to-WinXX will be faster than Samba to WinXX.

    Using stock binaries (Apache) from Red Hat or Debian or anywhere ain't gonna cut it.

    I think it's time the Linux community makes some moves and we show how Linux can compete in the enterprise market, instead of letting these bullshit benchmarks make it look like shit!
  • by Anonymous Coward
    So what if NT outperforms Linux on high-end hardware... I'm sure that by the time these systems (quad-processor 500MHz PIIIs) are commonplace in people's homes, Linux will run faster on them than NT does.

    So what if Apache is slower than IIS... it is certainly more reliable and configurable than IIS, and since the average site doesn't have 100Mbps of bandwidth, raw speed is a moot point.

    In the real world, the only companies that can afford the systems tested here are the same ones that can afford to pay for loads of NT licenses and the support staff that would be required to run the servers. Chances are these same companies are already in bed with MS and are using NT anyway. Linux's target audience on the other hand is the little guy, who can't afford all the expensive crap and wants a system that doesn't have to be watched 24/7. Maybe they're a college student or someone with a home business... the point is they understand price/performance tradeoffs, and know how to make the smart decision.

    In short, Linux is open, and even if it only performs half as well as its nearest competitor, that makes it twice as good.
  • by Anonymous Coward
    Look at the cost.

    NT4 Server = ~$809.00 (5-user pack)
    Linux/FreeBSD/etc. = free, unlimited users.
    Throw in ~$80 for a secure server.

    If you want email and such:
    BackOffice/Exchange = up to $5000.00, depending on the number of users.

    Sendmail/Imap/Pop = Free

    We all know you get what you pay for in this world. Considering prices, the numbers (which don't mean crap anyway), and the reliability of Linux/FreeBSD/etc., free does pretty damn well in this area and in my book. Cost is the ultimate equalizer.

    Why would I spend thousands of dollars and trade reliability/stability for about a 10-20% increase in speed when my speed is already adequate? Are we all stupid?

    We should benchmark purchasing stupidity or gullibility instead.
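
    A back-of-the-envelope tally of the prices quoted above, in a short Python sketch (the figures are the poster's circa-1999 numbers, not anything current):

```python
# Tally of the licence costs quoted in the parent post. All prices are the
# poster's circa-1999 figures, reproduced here purely for illustration.

nt_stack = {
    "NT4 Server (5-user pack)": 809.00,
    "BackOffice/Exchange (upper bound)": 5000.00,
}
free_stack = {
    "Linux/FreeBSD (unlimited users)": 0.00,
    "Commercial secure server": 80.00,
    "Sendmail/IMAP/POP": 0.00,
}

print(f"Microsoft stack: up to ${sum(nt_stack.values()):,.2f}")
print(f"Free-software stack: about ${sum(free_stack.values()):,.2f}")
```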
  • by Anonymous Coward
    I don't think this is so much to worry about. Solaris comes out on top. The moral of the story is that Solaris drubs NT as a high-end server (which we knew anyway) and Linux kills it as a low-end server. Which leaves us asking: what the hell is NT good for, then? Precious little, it appears. Solaris for the high end, Linux for the low end. End of story (-;

    However, the tests also show that if Linux wants to go head to head with the "heavy duty" Unix flavours such as Solaris, there's work to be done.


  • by Anonymous Coward on Sunday May 16, 1999 @05:19PM (#1889761)
    Linus himself has stated that he believed OS design was well understood by the 1970s, and that he considers microkernels to be "stupid", Plan 9 to be "stupid", etc.
    While he is undoubtedly a highly talented programmer, I think that there are engineers in the world who are at least as skilled, if not more so, working for Sun, CMU, Microsoft, DEC and suchlike, whose work has proved Linus very wrong.

    Pardon, could you please tell me exactly which of the above comments (microkernels, Plan 9) were proven wrong by Microsoft engineers?
    I don't want to say Linus is good and everything he says is right, etc., but I want to see plain facts.

    But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.

    I'm far from saying Windows NT should be avoided at all costs - heck, I use what does the job best for me. But do you want to say that for very-high-traffic, dynamic web sites you would like to use Windows NT?
    Ok, this is not a server issue, but it is Microsoft, so here follows an example description of "mature" software and the answer to the question: Why is regedit so big? [ncl.ac.uk] (The Risks Digest, Vol 20;35)

  • by Anonymous Coward on Sunday May 16, 1999 @08:47PM (#1889762)
    MVS is an IBM mainframe OS. As an OS, it is known for its efficiency, extreme stability, and great process management. It is used in business data centers around the world. It was not designed to be easy to use (at the admin level) or easy to program.

    VMS is The Other Unix. It was designed by DEC, and Cutler was the primary architect. It paralleled Unix in many ways, although it was not as consistent in design, nor as easy to use (as a system). However, it has auditing features and access controls that Unix (and Linux) could really use. VMS was designed to control every little security detail, whereas Unix was designed around trust and flexibility.

    K&R were hardly "amateurs." They were working at Bell Labs on Multics, which was going to be a "real" multiuser OS. But its design was too baroque, with too much squeezed into it; so Ken Thompson, with Dennis Ritchie, built a better, simpler system in his spare time. (Bell Labs is a very open environment.)

    Now, here's a question to ponder: Multics failed because of its complexity. Can you think of any other operating systems that try to squeeze everything into the OS? If so, can you defend that design in light of history?
  • The shortcomings of the Linux kernel have been known for ages. Linux first appeared in late 1991. In early 1992 already Linus acknowledged that a microkernel design would have been better.

    "True, linux is monolithic, and I agree that microkernels are nicer. [...] From a theoretical (and aesthetical) standpoint linux looses."

    Man, this has to be one of the worst misrepresentations I have ever seen. Yes, Linus did write that in the famed "discussion" (really a flame war) with Andrew Tanenbaum, but he was defending his architectural choice, not conceding a mistake.

    Basically, the argument that popped up in the discussion was that a monolithic kernel is quicker and easier to implement.

    If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now. ---Linus

    Anyway, people can go read the USENET thread in question [kde.org] themselves.


  • by rngadam ( 304 )
    Ouch! That hurts! But from what I gather, the link offered for this story is mostly a biased synthesis of the PC Week report by a pissed-off MCSE.

    The real report still has some nice things to say about Linux, and hopefully this whole mess will give us the kick in the butt to start making everything go faster and be better.

    As customers, we especially need all the Linux distributors and hardware resellers to start working together instead of wasting time "fighting" each other. An industry-wide consortium to develop better hardware, with everyone contributing a fixed percentage of their net profit, would be nice. That money could be funnelled to developers through something such as SourceXchange (http://www.sourcexchange.com).

    Still, it wouldn't have been possible just a few months ago to have a comparison of Linux with Solaris, NT, and Novell... And since those "mainstream" NOSes are often only affordable to bigger corporations, Linux has its market cut out for it!
  • But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.

    Well, this is the biggest load of crap I've read today (I read the ZDNet article last week). By this logic, we should all be running VMS!
  • Well, they said in the article that Intel was the only platform that ran them all. And, of course, NetWare _only_ runs on Intel platforms. But NT and Linux can both benefit greatly from the Alpha platform, and anyone who's seriously running Solaris is running it on SPARC. Which, of course, further invalidates the test, because the goal of the benchmark tests was supposedly to wring out all possible speed, yet 3 of the 4 OSes were running on sub-optimal platforms (speed-wise, anyway).
  • It's obvious from the way these guys messed around with their Linux and NetWare setups that they didn't know anything about either.

    1) Linux:
    Well, they used Red Hat 5.2 w/Linux 2.2. Of course, RH5.2 doesn't _come_ with Linux 2.2, so they compiled their own kernel. The possibility that they messed something up there is very high. Apache is at a severe disadvantage compared to the other HTTP servers, not only because of the lack of multithreading support (and I still wonder _how much_ slower that makes Apache) but also because of the lack of a reverse cache. Maybe that's Apache's fault, but it is easy to remedy, especially since Red Hat includes a Squid RPM.

    I especially find it interesting that elsewhere on ZDNet you can find not only the old-news test of NT vs. 3 Linux distros + Apache + Samba (in which Linux/Samba/Apache trounce NT), but also a newer article in which (Caldera, I think) Linux + Apache again does the same to NT. (I can't find the URL right now - I think it was in Smart Reseller?) Just goes to show that benchmark results depend as much on the benchmarker as on the benchmarkee.

    2) NetWare:
    First, these people start off dissing Client 32 (whose name now, BTW, is simply Novell Client). Am I the only person who realizes that *Microsoft Client* means *Client for Microsoft stuff*? Besides, if Microsoft had implemented the NetWare Core Protocol properly in Win9x, Novell wouldn't have *had* to write their own client software. In fact, Novell Client (or Client 32, or whatever) more or less *fixes* things wrong with Win9x networking so things run more smoothly. NC also has an adjustable file cache and can even restore network connections after a server has been rebooted (MS half does this). Novell Client is also a benchmarker's dream, since virtually every option can be tuned. NC also enables Novell Directory Services to manage PCs; in fact, it is the _best_ way to manage NT workstations (they sort of glossed over that). What did they focus on? Experimental oplock support: a feature that is not only useless in a shared environment but, more importantly, is a bad test of network throughput, since the file is only touched when you open and close it.

    And, going back to their opening paragraph, they remark that by porting NDS to other platforms, Novell is "leaving little to drive new...deployments" of NetWare. That's one of the good things about NDS, and is one of the few things driving Novell's return to profitability. You'd think they'd be happy NetWare uses NDS to work in heterogeneous environments, especially given their overall conclusion that no one NOS stands out in every field.

    Of course, they did realize the NetWare file-and-print services are still single-processor tasks, even with NetWare's new MPK (multiprocessing kernel), which is a real failure on Novell's part, IMHO.

  • You're not counting labor. Let's assume that NT admins are cheaper than Linux admins ($100/hr vs. $200/hr - I'm being lenient with NT and scorning the earning potential of an MCSE :) ).
    Let's also assume that both boxes require an equal amount of maintenance (not necessarily so!)... but the Linux admin does it remotely, where the NT admin has to make house calls or wear a beeper. We'll average it out over the course of a year, say the NT admin spends 2.5 times as many hours working, and call it six minutes of maintenance over the 2200 hits/sec for the Linux box and 15 minutes over the 2200 for the NT box.
    • ($800 + $25) / 2200 = $0.37500 per hit/sec
    • ($80 + $20) / 2200 = $0.04545 per hit/sec
    It is not necessarily _true_ that Linux admins are paid twice what an MCSE earns, or that NT requires only 2.5 times the maintenance in billable hours (remember the 'server in the closet' paradigm Linux has always had: unglamorous but low-maintenance). However, even slanted towards NT in this heavy-handed way, NT is eight times the cost for the same performance, _including_ administration.
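
    The parent's arithmetic, spelled out in a small Python sketch; the licence prices, labour rates, maintenance minutes, and the 2200 hits/sec figure are all the poster's assumptions:

```python
# The parent's cost-per-capacity arithmetic, spelled out. All inputs are
# assumptions taken from the post above, not measured data.

def cost_per_hps(licence: float, admin_rate_per_hr: float,
                 admin_minutes: float, hits_per_sec: float) -> float:
    """Dollars of licence plus labour per hit/sec of serving capacity."""
    labour = admin_rate_per_hr * admin_minutes / 60.0
    return (licence + labour) / hits_per_sec

nt = cost_per_hps(licence=800, admin_rate_per_hr=100,
                  admin_minutes=15, hits_per_sec=2200)
linux = cost_per_hps(licence=80, admin_rate_per_hr=200,
                     admin_minutes=6, hits_per_sec=2200)
print(round(nt, 5))        # ($800 + $25) / 2200 = 0.375
print(round(linux, 5))     # ($80 + $20) / 2200 ≈ 0.04545
print(round(nt / linux, 2))  # roughly 8x the cost per hit/sec
```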
  • Good GOD! (!!!)
    Yah- I laughed! But I don't know whether I should be crying instead! 96 servers? That's an _awful_ lot of their own dogfood to be eating, wouldn't you say?
    Meanwhile, here in Brattleboro, somebody has sold the local Co-op a cash register system that all works on W98 (very possibly including some sort of NT server), and they're still struggling with it. It's easy to see those 'uh-oh' dialog boxes popping up and think the whole problem is unreliability, but it's worrying to think that even _if_ the system works perfectly (which it doesn't seem to be doing) the Co-op has no idea what kind of financial trouble it's now in, maintaining and paying off that system. How soon until they are deemed to need another NT server or three? :P
    Laugh _and_ cry. This sort of thing will kill stores you love, and make many people poorer.
  • Make two distributions.
    One, regular Apache, which would be used for actual HTTP serving.
    Two, 'Apache Pro!' which is tuned for static page serving at all costs and obliterates any other purpose including reliability, stability, dynamic pages, whatever, just to produce benchmarks.
    Then people can go on using Apache for _real_ web servers, but for the benchmarks, you ask them 'Why the hell aren't you using Apache Pro? You trying to handicap the race here?' and get them to use Apache Pro against NT- the 'bytemark version' (!)
    Wouldn't that work? It has to be called 'Apache Pro' though, because it has to have the name Apache and it has to seem like the 'more industrial strength' version. _We_ know that it'll be better to just run Apache, but PHBs and test runners will find it impossible not to use Apache Pro- they'll be trapped by their own assumptions of 'upgrading' and 'standards'. It would be much harder to get another webserver used in benchmarks, but if you call it 'Apache Pro'...
  • Again... Zues!

    Did you mean Zeus perhaps?

    How many times does the "Linux Community Inc." need to tell these people that Apache wasn't meant for speed!? Why is Apache designated as the One True web server? Benchmarking static Apache vs. static IIS is pointless. Any programmer worth his salt could cook up a few dozen lines of code that would outperform both servers on pure static content.

    True, but no webserver should be that horrendous at serving static pages. While not the main purpose of most enterprise servers, some major servers do serve a lot of static data, and most of the rest serve at least a significant amount, so static serving speed is indeed important. Apache needs to improve in this field.

    Rather than bitching about the benchmarks, fix Apache, then you won't have to bitch about the benchmarks anymore.
  • Friday afternoon, the network slashdot was on was messed up for around five hours. However, after it came back up, the slashdot box itself was down for around 2 hours (I kept getting "connection on port 80 refused" from it, so it was obviously up and responding, just FUBARed).
  • The problem is that they don't want OS benchmarks. They want complete system benchmarks of the OS running with the best webserver available for the platform. For NT that's IIS. For Linux that's Apache. You can't say "no fair, they have a better webserver than Linux does" and expect them to downgrade to Apache for the tests.
  • by Thornton ( 600 ) on Sunday May 16, 1999 @04:49PM (#1889774)
    From the Apache homepage, we find that "Apache exists to provide a robust and commercial-grade reference implementation of the HTTP protocol."

    In my experience, Apache is the most stable of all web servers, and the only one that comes close to implementing the whole HTTP protocol.

    Speed is not the Apache Group's primary concern, and folks whose main concern is speed might consider looking elsewhere. Despite that, Apache is more than powerful enough to saturate a T1 with a relatively low-end machine (we have saturated a T1 with a Pentium 90 with 96MB RAM running Linux), and a fine-tuned Apache can easily outperform just about any other web server (when we load mod_mmap we get performance tens or even hundreds of times what IIS can do on a good day).

  • Check this out: http://www.zdnet.com/pcweek/stories/news/0,4153,2256098,00.html [zdnet.com]

    195Mbps vs 114Mbps

    It's not all bad news!
  • Posted by The Mongolian Barbecue:

    1) Performance of NT as massive numeric processor for simulations.

    2) Performance of NT as specialized SEM driver.

    On second thought, screw the second one. It's really not possible to go very fast unless you can add speed hacks to your driver.

    If NT wins the first one I'll switch! (It never, ever will.)
  • by gavinhall ( 33 ) on Sunday May 16, 1999 @05:14PM (#1889778)
    Posted by ZeeC:

    I love Linux; I run an ISP with it. I am also working for a company that wants to release Linux on their high-end Intel boxes (hint: they currently support NT and do Unix on really colorful boxes). Anyway, I am having a tough time finding drivers and HOWTOs for doing high-end stuff with Linux, like Fibre Channel SCSI, Gigabit Ethernet, heck, even handling more than 9 SCSI drives at a time. The OS needs to grow out of the "keep a 386 useful" mindset to a higher level now.

  • You obviously don't understand what is going on here. Code has nothing to do with it. The test data is such that it is static and just the right size to fit in IIS's cache and not the cache of other servers. This test data does not represent anything in the real world.

  • Unless you can point to the DejaNews articles verifying Mindcraft's claims to have asked for help in more than one "appropriate forum", it's hard not to conclude they're flat liars.
  • Comparing (stable, slower) Apache + no Squid to (fast, unstable) IIS instead of something like Zeus or Boa is, in fact, bullshit. Apples and oranges -- or have you never run a goddamn server?

    The next question is who has the money to buy a quad Xeon with quad fast Ethernet NICs, but can't scrape together the change to get a gigabit NIC and switch instead? Uh, I'll take "no one" for $1000, Alex.

    You're not nearly as realistic as you think.

  • The old data I had when I tried to run NT.

    NT is the only reason these servers were quad-processor PIII/500's, and NT's resource hogging is exactly the reason that I don't run it on my personal box. I have it running on a P133, and I laugh at it sometimes.

    If I had a high-end server, I'd try Linux and NT again, and... well, I bet I'd be laughing at NT pretty soon. It isn't exactly impressive running on uniprocessor PII/400's, but the bluescreens are cute. Really, how much CPU does an OS need just to crash? :)
  • Correct on all counts... however you're missing the point of these benchmarks. Microsoft doesn't care what people run on their desktops (okay, this is simplifying things drastically) because they make their bucks on servers and high end installations. Obviously they want to show that they're still a contender with all the positive (and not so positive) exposure Linux is getting lately.

    These benchmarks expose nothing new: Microsoft will always try to bend and twist information to suit their needs (the first study was an absolute joke), and Linux has a long way to go before we can call it scalable and SMP friendly. It's just not ready for the enterprise. Not that I'd sleep well at night knowing my systems were running on NT mind you ;).

  • All of these benchmarks remind me of a sucker bet a friend of mine made.

    It seems that some guy boasted that his car could outrun anything present over any distance. My friend bet him $20 that he could beat the car on foot. Naturally, the challenge was accepted. My friend then marked the 'course' of 10 feet from start to finish, and the race was run. My friend won. According to that 'benchmark', my friend can outrun the car. (no car can outrun a person in reasonable health for the first 10 feet).

    These benchmarks are the same. Now, I know that the next time I want maximum speed for 30 minutes at a time from a web server w/ 4 100baseT cards (and a network that can keep them busy), I should choose NT. Of course, if the constraint is multiple T3 and minimal downtime, Linux and Apache are the way to go. Guess which is the more likely scenario.

  • >These people haven't a clue what they are benchmarking.

    Of course they don't. The people who really know their stuff when it comes to computers wouldn't be caught dead working at places like PC Week. You might've found them on the staff of the old COMPUTE! or Byte magazines, when those really did cover multiple platforms like the Atari, Commodore and Apple machines and PC clones. That kind of knowledge just doesn't exist anymore in today's computer magazines. Just take a look at what's on the magazine racks of your local bookstore or K/Walmart and compare it to what was on those very same shelves 5-10 years ago. It's really quite sad, actually.
  • >Linux' 15 minutes of fame are up. It's time to get serious

    You wish. Linux's future is only beginning. NT's "15 minutes of fame", as you put it, is what's up. Take a look around you. Nobody's talking about NT dominating anything anymore. Take a look at what the school kids are running on their home machines. It's *NOT* NT. As they enter college they will bring their Linux/Unix knowledge with them. The only reason NT really got a foot in the door was that people were pretty much only familiar with Windows back in the early-to-mid '90s. This is what is changing, and why pro-Microsoft people like yourself are yesterday's news. Get over it.
  • This article makes a number of very valid points. Certainly linux performance on low-end hardware is markedly superior to NT's, but the question is, how many people deploy such hardware in production? Of course, that's not to say that there's no place for low-end deployments, but the skilled engineer picks the right tool for the job in any situation. If I had a very low budget and just needed to deploy personal homepages and a POP3 server, for example, then linux would be a good choice (along with *BSD, naturally).

    But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.

    However, for midrange work, linux simply isn't up to par yet. I seem to recall Linus himself stating that he believed OS design was well understood by the 1970s [linuxworld.com], and he considers microkernels to be "stupid", plan9 to be "stupid" etc etc.

    While he is undoubtedly a highly talented programmer, I think that there are engineers in the world at least as skilled, if not more so, working for Sun, CMU, Microsoft, DEC and suchlike, whose work has proved Linus to be very wrong. And as such, linux is crippled.

  • Maybe, if you were just serving static documents, you could get a dozen boxes, copy your site to each one and use round-robin DNS.

    But add in session management, personalisation, real-time news feeds, content archives, commerce, access control, extensible templating, dynamic page generation and all that other stuff we do in the real world, and your solution starts to look quite naive.
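
    For reference, the round-robin DNS setup being debated here is nothing more than repeated A records for one name; a hypothetical BIND zone fragment (hostnames and addresses invented for illustration):

    ```
    ; Hypothetical zone fragment: three identical web servers behind one name.
    ; Most name servers rotate the order of the A records between responses,
    ; spreading requests roughly evenly -- with no session affinity or failover.
    www    IN  A   192.0.2.10
    www    IN  A   192.0.2.11
    www    IN  A   192.0.2.12
    ```

    Which is exactly why the caveats about session management apply: nothing here knows whether a given box is up, or which server last saw a given user.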
  • by Kaz Kylheku ( 1484 ) on Sunday May 16, 1999 @05:35PM (#1889792) Homepage
    It shows that NT needs a behemoth computer to run well, even though it is targeted at the low-end market, which does not use behemoth computers.

    Believe me, NT does not run well, for instance, on my 450 Mhz PIII with 128 megs of RAM, all things considered.

    Linux has a history of keeping abreast of reality. When nearly everyone has a four or eight CPU monster, then Linux will run like hell on them, and so will applications such as Apache, etc.

    When everyone had a 386, Linux ran well on a 386. When everyone had a 486, Linux ran well on that (and still does!). Linux is made to fit a need, not to participate in Olympic events.

    I have an 8MB 486 at work on which I need Linux to run well. It does. In all likelihood, NT 4 won't even boot on such a machine. The machine has no keyboard or monitor, yet I can completely administer and upgrade it. NT would be useless on it since it requires a graphical display, mouse and keyboard for administration.
  • From the slashdot stats page:

    uptime: 52mins

    Last time I looked it was 3 days. Maybe Rob should explain what's going on with /.

    perl -e 'print scalar reverse q(\)-: ,hacker Perl another Just)'
  • by Matts ( 1628 ) on Sunday May 16, 1999 @05:39PM (#1889795) Homepage
    For those interested in IIS and benchmarking, please take some time to read this [wgg.com] rather long article. It's from a friend of mine who was the manager of the O'Reilly WebSite Pro development team. Some of the key issues were that MS made changes to the Winsock API specifically for IIS (AcceptEx, TransmitFile, Fibers and IOCompletionPort). Should Linux do this to make Apache/Zeus faster just for benchmarks, when really it does fine in the real world? No. Of course not.

    The other interesting point is the fact that ZD came up with the IIS benchmarks specifically to show how good IIS is. Such things as fitting the test harness in the cache, and only doing ISAPI dll's for dynamic content (vs CGI on other servers).

    There are lies, damn lies, and ZD benchmarks. I'll use what works, and live happy in the fact that I won't have to reboot my server this year.


    perl -e 'print scalar reverse q(\)-: ,hacker Perl another Just)'
  • by Jeff Licquia ( 2167 ) on Sunday May 16, 1999 @05:40PM (#1889797) Homepage

    This is really starting to get old.

    Apache running all CGI is compared against IIS running ISAPI, and - surprise! - IIS kicks Apache's butt. I wonder how things would look if we ran a mod_perl test and compared that to IIS running CGI. "News: Linux/Apache Provides 3.5 Times More Hits Than NT!" I will observe, for the record, that Apache, IIS, and Netscape all provided exactly the same behavior on CGI; no dynamic test was ever done with Apache, so we'll never know, but I bet a mod_perl test on Apache would have produced at least somewhat similar numbers to IIS and Netscape.

    And what's all this about Apache modules having to be compiled into the server? My Apache install has a directory full of dynamically loaded shared libraries. Exactly the same way IIS implements ISAPI modules. Only on IIS, you don't have the option of static linking for whatever reasons (less overhead, security, whatever).

    I especially loved all the "process vs. thread" crap. Both PC Magazine and Wugnet (yes, the true authorities on Linux) were all over Apache's "process" model vs. IIS's "thread" model. But on CGI, you invoke a new process with each client request, no matter how many servers you've preforked or how many threads are idle. Presto: poor performance, no matter what the preforking parameters are.

    You know, I wouldn't be all that surprised if NT beat Linux on this high-end hardware for various things in a fair benchmark. I'm just sick of hearing this kind of drivel from the MS camp. I almost hope Linus & Co. do Mindcraft III just so we can have a decent benchmark to compare against and some future directions for development instead of all this blatant lying.
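
    For the curious, the DSO setup mentioned above (a directory of dynamically loaded shared libraries) is just a few httpd.conf lines in an Apache 1.3-era build; the module names and paths below are illustrative, not from the benchmark configuration:

    ```
    # Illustrative httpd.conf fragment: modules built as shared objects
    # are pulled in at startup, much as IIS loads ISAPI DLLs.
    LoadModule perl_module   libexec/libperl.so
    LoadModule php3_module   libexec/libphp3.so
    ```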

  • For this Shoot-Out of network operating systems (see the Shoot-Out Scorecard), PC Week Labs was unable to find a single server the vendors could agree on (due largely to Solaris 7's and Linux 2.2's limited hardware RAID device support). However, each vendor supplied servers that fit our desired configuration: four Pentium III 500MHz CPUs with 2GB of RAM.

    Excuse me? Who are they referring to as "the Linux vendor" in this situation? Some company like Penguin Computing? RedHat? Linus?

  • by P.J. Hinton ( 3748 ) on Sunday May 16, 1999 @05:23PM (#1889805)
    Michael Surkan, who is usually the first to come to Microsoft's defense, minimizes the significance of PC Week's tests.

    http://www.zdnet.com/pcweek/stories/columns/0,4351,402634,00.html

    It is surprising that he sings the praises of non-NT OSes for their ability to use resources more efficiently on non-high-end machines.
  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Sunday May 16, 1999 @08:35PM (#1889807)
    In early 1992 already Linus acknowledged that a microkernel design would have been better.

    "True, linux is monolithic, and I agree that microkernels are nicer. [...] From a theoretical (and aesthetical) standpoint linux looses."

    Umm, he said that it loses "from a theoretical (and aesthetical) standpoint". That is not the same as saying that it "would have been better" from a pragmatic standpoint.

    Proprietary kernels probably evolve much faster but you don't get to watch it.

    Perhaps, perhaps not. The fact that "you don't get to watch it" means you can only guess (unless you happen to be one of the people who "get to watch it").

    How much have, say, the Solaris or NT kernel architectures changed, relative to the extent that the Linux (or *BSD) kernels have changed? (BTW, neither of them are what I would call a microkernel, not even NT - NT's device drivers, file systems, and networking stack run in kernel mode, for example.)

    Lastly, good kernel programmers are rare. If you were one of the few would you rather spend your working hours coding for love and Linus or earning good money instead?

    I have the impression that at least some of the developers of kernel code for free OSes do both.

  • by LunaticLeo ( 3949 ) on Sunday May 16, 1999 @05:18PM (#1889808) Homepage
    This review was just testing the ability to deliver static documents, so the configuration he suggested is valid for the benchmarks he was responding to.

    I think what every one of these "benchmarks" of apache misses is that delivering static content is the least of the reasons to use apache.

    Slashdot is a real example of a dynamic website. No one is benchmarking dynamic content delivery through web servers.

  • Maybe Rob should change to a NT system :-)

    Actually, that would be quite interesting. Take an NT server (I have a copy I would gladly donate for this test) and install Perl, Apache, MySQL and mod_perl. Copy Slashdot over to the new machine and transfer the load and see what happens.

    I'd wager that it'd fall all over itself.

    The wheel is turning but the hamster is dead.

  • Step out of that box. Quit promoing Linux as the be-all and end-all. Promo NetBSD as *the most appropriate solution* to server needs. Promo BeOS as *the most appropriate solution* to multimedia needs. And so on.

    NetBSD may some day become the most appropriate solution; it isn't yet. Chuck Cranor has done a very good job on UVM, but it *is not finished*. Of the free Unices, the only one with a virtual memory system that is state of the art as of today is FreeBSD. NetBSD and OpenBSD will probably get there; I doubt Linux will (due to the very strong defensive reactions Linus has towards some aspects of the Linux code). In some ways, I hope I'm wrong; it would be a pity if that many people were left with an inferior VM system :-(


  • I'm sure that's the world we want. No OS choice. Windows NT everywhere?

    Don't you NT zealots see that statements like this only help Linux?

    It gives those of us that actually enjoy computers for what they are more incentive to make sure things don't get worse than they are.

    One company running the computer industry is just as bad as the railroad tycoons, or any monopoly that controls every facet of computers. I'm not sure even you would want to live in a world like that unless you were a Microsoft shareholder.

  • by edgy ( 5399 ) on Sunday May 16, 1999 @04:42PM (#1889828)
    These benchmarks weren't a complete loss. Hidden in them was a rather interesting admission, found in this story on Linux Today (note, I don't work for them, but they do have some good stories sometimes):

    http://linuxtoday.com/stories/5906.html [linuxtoday.com]

    This story reveals that Linux with Samba achieved 197Mbps, which was significantly higher than the Mindcraft benchmarks, severely invalidating the original Mindcraft benchmarks. Also, Apache did MUCH better on these benchmarks than on the original Mindcraft tests.

    The article also shows that NT achieved only 150Mbps against NT clients, 31% slower than Linux. In tests with 60 clients, Windows NT managed only 110Mbps throughput, compared with 183Mbps for Samba.

    So, we got something out of these benchmarks. Linux serves Samba to NT clients 31% faster than NT on high end hardware!

    Now, if they had only tested IIS against Zeus to make the static tests fairer, Linux wouldn't look so bad overall.

    I don't see how these new benchmarks validate Mindcraft at all.

  • 2200 hits/second of **HTML**... No one has such a problem, because that's a damn lot of hits, fast -- The problem with NT is UPTIME... Let's subject both OS's on the same hardware to the same load, have them serving cgi/php, and see which one crashes first.... Uptime is a MUCH bigger concern with a webserver than if it can serve 1000 or 2000 _TEXT_ pages per second...
  • by FFFish ( 7567 ) on Sunday May 16, 1999 @04:41PM (#1889836) Homepage
    It appears that most Linuxheads have finally come around to admit that Linux doesn't perform well as a server. Yet.

    But it's pretty well acknowledged that NetBSD kicks ass in that department.

    Time for Linux groupies to take the blinders off. Quit getting your shorts in a knot about the unfair Mindcruft tests, quit trying to pit Linux against NT in server applications...

    ...and start *heavily* promoting NetBSD as the ultimate server solution. Mob the media with it.

    As long as you play by Microsoft rules, you lose by Microsoft rules. And fiercely protecting one's "turf" is a Microsoft rule.

    Step out of that box. Quit promoing Linux as the be-all and end-all. Promo NetBSD as *the most appropriate solution* to server needs. Promo BeOS as *the most appropriate solution* to multimedia needs. And so on.

    This tactic will emphasize to the media that people should make active choices re: their OS needs; emphasize that Windows is not the most appropriate OS for most cases; and emphasize that the Linux community plays big and puts the user first and foremost.

    It's a no-lose situation. Choice is the ultimate goal.
  • by Lazy Jones ( 8403 ) on Monday May 17, 1999 @01:10AM (#1889839) Homepage Journal
    I propose the following as a fair benchmark for web servers:

    Every day, /. chooses a web server on the internet at random. It then presents a link to that server somewhere on the start page, calling it the "benchmark link" or whatever (so people know what it's for). It is then /.-ed by the readers, and at the same time monitored for its uptime. Its server OS and software are determined (where possible) and, as the days pass, statistics are put together for the average time a server OS lasted under that strain.

    Not entirely serious, but a good "real world" benchmark, and I'd enjoy that.

  • It's hard for me to believe that NT is faster than anything from my experiences with it. Few sites that I run across that end in .asp are in any hurry to dish out the content I request.

    Anyway. Instead of simply blasting benchmark results that don't match what we expect, we should work to fix the problems in Linux, Apache, Samba or whatever is causing our bottlenecks. The fact that we can do that is one area of significant advantage that we have over NT.
  • please note the monster hardware they had to throw at that to keep up with all those hits.
  • ZDNet's benchmark is relatively correct on several points. I would say they "touch" reality. I know that because I work with every OS shown in those benchmarks. Well, that's not completely true... I *worked* with Windows NT. But two months ago I scrapped the last box at my work. (Farewell, Microsoft. You recently lost 2500 users. Soon you will lose 3-4000 more.)

    The article presents some points one may usually run into when working with Linux, Solaris or Novell. However, some points are really the result of not bothering to do any tuning. Besides, the article is purely biased toward "choosing the ONE final high-end".

    Somehow the article invites one to compare Linux & Solaris. Well, sorry, both systems have their ++ & --. However, I agree that Linux is better fitted to an average computer than to a super-high-end machine like an UltraSPARC 4500. Here Solaris beats it.

    But does that mean that Linux is not a high-end system? Well, let's not forget cluster systems. Even Linux has a place in the top 500 supercomputers, and it beats some serious machines there.

    Somehow someone may have forgotten one of the contenders here. Novell NetWare is a very specific system, oriented mostly toward a very specific sort of task. But it does its job much better than Linux or Solaris, both in safety and performance. And don't dare compare it with NT: one machine, now in its 7th month running Novell, couldn't even hold a simple transfer of 100 megs over the net when it ran NT, never mind performance (hey, Redmond, I also like to burn some time with my family!).

    Really, the NT stuff there is pure hype. In my "practical benchmark", NT servers lived no more than 1 month of real, serious work. After that very sad experience we returned to Novell. As for workstations, we recently wiped every MazDiee soft off 70 machines serving more than 2000 users. And as for "high-end", we don't even dream about Redmond. Everything runs either on Solaris, AIX or Linux.

    Some people say that my relation with NT is due to the fact that I didn't taste the "real thing", that I should have been more systematic in "tuning" it. I know only one _real_ thing. Two months ago I had several machines running with miserable performance and suffering several crashes every day. Now I don't hear complaints about slow performance, and the majority of workstations carry uptimes of almost two months.
    Two months ago my wife had almost forgotten that someone else lived in the same apartment. Now I have some time to share with my family.

    Oh besides. Now we have the chance to make an high-end machine out of this workstation stuff :)
  • by Arkham ( 10779 ) on Sunday May 16, 1999 @05:19PM (#1889856)
    Well, I work for a very large worldwide online/television news organization. Our main web site gets about 130,000 hits/minute on a normal day. That comes out to about 2200 hits/sec. This doesn't include any of our partners, which each generate a good bit of traffic. We have 6 T3s and an OC3 for our bandwidth, and we run multiple servers to balance the load.

    What do we run? Netscape web servers on Solaris. When big news like the Starr report came out, all the servers at MSNBC running NT came crashing down under the load, but we didn't. That's what UNIX (and Linux) are about, reliability. Apache can be performance-tuned if you need it to be fast (Netscape's server is the same code base as Apache), but for most of us it's fine as-is. I bet that Microsoft.com doesn't get 2200 hits/sec.
  • Before the NT marketing machine went into full steam, after Windows 95 shipped, OS/2 servers were the #2 servers used.

    I'd be willing to bet that 90%+ of those OS/2 servers have been replaced by Windows NT. Don't forget that a large portion of the OS/2 server market in the early 90s was Microsoft LAN Manager.

    Warp 5 is essentially just a bone thrown to the legacy customer base - it's no shock they didn't test it.

  • I suggest you do just that -- Call up your IBM rep and ask them if they are actively selling "OS/2 Warp for e-Business" to any customers who are not currently running OS/2.

    What you will find is that IBM is selling AS/400s and Windows NT support services. Sorry.

    (Note that I never said that Warp wasn't/isn't a capable OS. It's just a capable OS that's seen its last major release.)
  • The new data would simply corroborate the old data.

    What old data?

    The only thing I have dug up is a Linux/Samba blurb on ZDNet, where no lab data was given. (It turns out that a member of the Samba team tuned Linux.) That and a bunch of anecdotal evidence that Linux runs faster and better than about anything on a Pentium-90 with 48MB.

    I'm not saying that the recent benches are fair by any means, but Linux has gotten larger than a bunch of guys on the Internet. That means that objective data is going to come in (something that hasn't necessarily happened yet, especially on high-end x86 hardware), and some of that data is going to be sponsored by competitive vendors, and some of it is going to be cooked.

    There isn't a commercial software product available that hasn't withstood this sort of 'objective' marketing attack, and especially when you're dealing with Microsoft, you have to do more than yell and maintain moral superiority. Someone (err, RedHat, Caldera, and SuSe) is going to have to post their own benchmarks and their own data.

    (And, yes, Linux has enough commercial interests attached to it that you can count certain distros as commercial operating systems.)

  • If what I heard was correct, the NT conversion at Hotmail failed because of limitations in MS Exchange, *not* IIS.

    I don't think there's any question that Exchange has its problems.
  • How much have, say, the Solaris or NT kernel architectures changed, relative to the extent that the Linux (or *BSD) kernels have changed?

    Not that this destroys your point, but in the early 90s Sun was willing to dump the SunOS kernel, and Microsoft was willing to dump the OS/2 kernel. So it's possible for a big commercial vendor to switch over completely to a new kernel.

    I'm no computer scientist, but it seems that the maxim that microkernels are slower than monolithic kernels is only true until it isn't. Perhaps something will come out of Apple's Darwin.

  • I wouldn't say that they are; only that they carry considerably more weight than personal testimonials with those making buying decisions.

    (Now that I used the word "Objective", I have the sudden dread that an Ayn Rand person is going to jump on me!)

  • Use the right operating system and hardware for your job. If all you want to do is pump out files via HTTP at a tremendous rate, tune Apache and run it on FreeBSD. If you actually want to get some work done, use Solaris/UltraSPARC for the high end or GNU/Linux for the low to middle end. If you want to run a server for Outlook 98 clients, use NT.
  • by Zico ( 14255 ) on Sunday May 16, 1999 @07:19PM (#1889879)

    Admit it folks, if the tables were turned and Linux was beating NT in these benchmarks, we wouldn't be hearing all these excuses about the relevance of the benchmark.

    Not that this is a new thing, since it happens every single time that someone shows that Linux might not be the best solution for everything under the sun. Whether it's lack of certain quality applications available on other OSes, or poor performance by Linux on a certain benchmark, we can always be assured of hearing the shriek of, "But nobody needs to do that anyway!"


    And no, Linux doesn't actually suck at this current benchmark, but it definitely doesn't measure up to NT or Solaris in it.

    Slashdot Realist

  • Agreed. It says quite clearly in Apache's "Why Use Apache?" doc that it is built not for speed but for stability. All these benchmark people are pretty brain-dead, if you ask me.
    Even in some "benchmarks" that compare AIX vs NT, NT churns out more static pages than AIX but chokes under high loads, while AIX just keeps chugging along. I have a good idea that on one and two CPUs, Linux is exactly the same.
    If you gave me the choice between stability and reliability vs speed, I would take stability and reliability any day! No one ever got fired for having stable and reliable servers!!
  • by Raven667 ( 14867 ) on Sunday May 16, 1999 @09:19PM (#1889885) Homepage
    I wish that they would have continued the load until the graphs started to fall off. Riding it out to peak performance and then stopping doesn't tell the whole story. Most people that use NT say that NT craps out under high loads. That wasn't tested here.

    All in all though I think that this is a good test and points out some flaws in Linux and the software that people use on it. Yes folks, Samba doesn't always work right, Apache isn't the best web server for every job and Linux doesn't scale up on multi-processor systems the way the big boys do. Hint: Run these tests on a monster 32+ processor, multi GB RAM computer and see the results--compare with a single CPU 1GB RAM with the same NOS.

    The winner in this test, IMHO, is Solaris. All the free publicity for Linux is publicity for UNIX in general. While you might put Linux on a small local server you aren't going to use it on an E10K sized computer.

  • ...unless you make it matter. Remember what Linus said at his keynote address? He said: let's focus on the low end server and desktop market. It may sound "sexy" but we're not focusing on high end machines with bunches of processors.

    These are two totally different areas, and Linux was always designed with the lower end in mind. How convenient then for them to do all these tests on huge computers nobody would actually use for a web server, unless they run one of the top 100 sites on the internet! Not to mention the fact that this is more of an apache benchmark than a linux one.

    If you run a huge smp machine and want to squeeze every last drop of speed out of it, you probably won't run linux anyway. It's not that linux isn't "good enough", it is designed for a different purpose. For a job like that, you would want Solaris or FreeBSD (still not NT)

    NT has its own design purposes, which are different from any unix type system. There are two main design goals I can see in NT: 1. Be easy for even an idiot to maintain, since most of the time all he will have to do is follow wizards or reboot the machine. 2. Be monolithic and slow, but for benchmarking purposes, have a way for those few people who know the OS inside and out to tweak it to insane levels for one or two particular services, at the expense of stability and resemblance to "real life" situations.
  • It's funny you should say that, since Quote.com, Inc has been doing all those things you mention using a load-balanced multiple web-server environment. What's more, they've been doing it for at *least* three years.

    It's not a naive solution...it's very workable, sensible, and much more affordable than the "one giant box" business model.

    Do a little more research...load-balancing is far more involved than just slapping multiple IP addresses into a DNS record.
  • Netscape's server is the same code base as Apache

    Err... No, it isn't. I agree with most of your post, but there are significant differences between Apache and Netscape's server software. Netscape in fact might perform better on the high-end hardware for static pages than Apache does because I believe it uses a different (threading) model than Apache (forking).

  • by Tardigrade ( 17769 ) on Sunday May 16, 1999 @05:17PM (#1889897)
    The study was flawed and Apache is slower; the two are not mutually exclusive. It is possible to make something appear worse than it actually is. They were also using Apache's slow performance to deride Linux, which is not good practice.

  • by schporto ( 20516 ) on Sunday May 16, 1999 @05:00PM (#1889907) Homepage
    If you look at the numbers and not just the graphs, aren't these numbers just a little ridiculous? I'm not quite sure what is meant by them, but here's my reading. At 60 clients, Linux has 2000 requests per sec, Solaris has 5000, and NT has 4000. That means each client is requesting ~33 items/sec for Linux, ~83 items/sec for Solaris, and ~66 for NT. That's either really quick clicking or really complicated pages. Then there's the other test, throughput vs client load, which is more understandable. Even so it's not bad: 300Mb/sec vs 200Mb/sec for 60 clients means each client gets about 5Mb/sec or 3Mb/sec. For most uses this is probably fine. I expect a multi-Mb document to take a while to open, even from my own hard drive. Bandwidth is more of a concern at this point, I'd imagine.

    That said, let's actually look at the graph for a minute. On the WebBench test, 60 clients is about the point where NT seems to level off; can't really tell, since they cut the graph off. Yet the quote below has you believing different: "Solaris and NT had plenty of CPU cycles to spare." And Linux wasn't exactly losing ground at that point; okay, a little lower, but not much. It seemed to hit a stable point. What about more clients? And then there's the NetBench graph. I mean, look at NT plummet. Linux hits 16 clients and levels off at 200Mbps. NT hits 48 clients with 350Mbps, then falls to 300Mbps with the addition of 12 clients. Linux added 44 clients and lost maybe 50Mbps. To me this looks like Linux acts like a marathon runner: getting to a pace, setting cruise control, and holding steady. NT, on the other hand, is like a sprinter, working hard and burning itself out quickly; it won't last real long. Yeah, the sprinter will beat the marathoner in a 1-2 mile race, but look out for that 5-10-26.2 mile race.

    My point is you get fine, predictable performance, regardless of the amount of work asked of Linux. Meanwhile NT seems fine for small amounts, but the more you ask, the less likely you are to get it. I want to see the benchmarks with higher numbers. I'd expect Linux to hold around its same mark, and NT to fall steadily. Why 60? Why not 100? 100 is even(ish), why not 50? 60 just seems like an odd number.
  • These types of "studies", whether accurate or not, make me as livid as the next person. However, aren't we missing the point? For years, whenever any "newbie" (and Microsoft really is a newbie in the Linux world - they have barely contributed anything to the GNU/Linux/OSS/Free Software/*BSD (or whatever you wish to call it) community) said anything about a "feature" they considered "missing", or that something was "not good enough", what was the answer? "Fix it yourself!" (I paraphrase here...) Why, now, have we changed our tune? Our answer, in one voice, to Microsoft and others like them should be: If you don't like it, then fix it!
    The next time one of these comes out, then how's about Everyone who posts on Slashdot, posts "Please, Microsoft, if you don't like it, then please contribute some resources to fixing it, otherwise, shut up!" or something like that.

    Microsoft are still in the business mould... What do we care!!!!???? [Idea for a poll: how many people's incomes depend on the market share of Linux against NT/Solaris/Netware etc.? OK, so the likes of Red Hat/SuSE/Caldera etc. do, and I wish them well, but your average kernel hacker is (as Microsoft themselves pointed out) just in it for the recognition/fun etc.]
    Therefore, Linux will still be around, as long as people still wish to develop it, and when they don't, then it will die, and there will be no one who cares, since we will be all hacking some other OS, or project or something! When NT dies, which I'm sure it will, there will be lots of MS shareholders who do care, and will be most unhappy... Justice!

    Eric the Cat

  • by Doke ( 23992 ) on Sunday May 16, 1999 @05:28PM (#1889916) Homepage
    NT is far less mature than the Unix family, of which Linux is a member. M$ foolishly ignored 30 years of research and accumulated wisdom. As a result, they've been repeating all the old mistakes.

    Most of NT (and other M$ code) was written by lower-echelon programmers, under the direction of computer scientists and managers. Many of them had only recently graduated from MSD training classes. In general they were operating under marketing-imposed time constraints. This shows in the quality of the product.

    If you want proof, try working with the IP routing table metrics under NT, or look at their publicly released code, i.e. the FrontPage extensions for Apache. Also look at a security model that requires everyone to buy third-party virus scanners.

    In contrast, most of Linux was based on an established tradition. Most of the major holes were already known. It was written by people who cared about the quality of their code. They loved programming, and their personal reputations were at stake. Then that code was reviewed publicly, and contributions were fed back to the author.

    I forget who said "If I have seen far, it is by standing on the shoulders of giants". M$ forgot it was ever said.
  • I think that this kind of benchmark is quite interesting to read, but not so helpful if I were to buy a server: in the first place, even the "low performance" of Linux is able to saturate a more than reasonable connection, and it is quite certain that a 10Mbit LAN is going to have some problems coping with the stream of data; secondly, I believe that serving *STATIC* HTML pages is not meaningful at all for "real-world applications". This is not to say that NT cannot outperform Linux on high-end hardware or in given configurations, but I want to stress that these results seem to make little sense. It is more or less like comparing the MIPS ratings of two completely different architectures and deducing that one is better than the other; not quite so and, more importantly, everything depends on the kind of task one has to have the machine doing! (For example, Alpha processors are just great when we consider floating-point performance; when running my code - aimed at symbolic calculus, not FP - they outperform Intel processors by just a factor of around 2 or 3.)
    Linux is "work in progress" and bad results should lead people to improve it, rather than complaining about how unfair the test has been (even if the test has been unfair, as seems to be the case); on the other hand, Linux (and the various *BSDs) has the huge advantage of a nice standard interface and of the availability of a huge code base. Security patches usually are released quicker for Unix-like systems than for NT, and this is a good thing. Now just a remark concerning my experience: when I first tried Linux (more than 5 years ago) it was neither very stable nor exceptionally fast, but it was a Unix-like system with OpenLook (which I was also using under Solaris on a SPARC), and it allowed me to easily share my code between the machine at the University and the one I had at home.
    The improvements Linux has made since then are quite impressive (and nowadays I think I prefer having Linux rather than SunOS 7 on a sun4m, and I consider it to be definitely better than DU 4), but there is still a long way to go; and even if it is now slower than NT at some tasks, stability and "user-friendliness" (at least for somebody who is writing this text in lynx and loves the command line!) are things I would not underestimate. Moreover, I guess that in 4/5 weeks we will have some patches that address the "lack of performance" these tests have shown.
  • by Josh Turpen ( 28240 ) on Sunday May 16, 1999 @04:56PM (#1889923) Homepage

    "Apache is a general webserver, which is designed to be correct first, and fast second."

    That is the first sentence in the performance tuning document.

  • by Josh Turpen ( 28240 ) on Sunday May 16, 1999 @08:15PM (#1889924) Homepage

    The pure humor of it all...

    From behind the scenes of www.microsoft.com [microsoft.com]


    Six internal Ethernets provide 100 megabits of capacity each
    2 OC12s provide 1.2 gigabits of capacity to the Internet
    Runs on Compaq Proliant 5000s and 5500s, with 4 Pentium Pro processors and 512 megabytes (MB) of RAM each.


    Microsoft Windows NT Server 4.0
    Microsoft Internet Information Server 4.0 (IIS)
    Microsoft Index Server 2.0
    Microsoft SQL Server 7
    Other Microsoft tools and applications

    Powerful Solutions

    www.microsoft.com started out as a single box beneath a developer's desk in 1994, handling about a million hits a
    day. That seems almost laughable now. A sleek data center in Redmond, Wash., receives more than 228 million
    hits a day while data centers in London and Tokyo shoulder the international load of about 12 million daily hits.
    How has the site handled its explosive growth while keeping its hardware to a minimum? How does it administer
    one of the largest databases in the world? How does it manage the challenges of a decentralized publishing
    environment? How does it come close to achieving 100 percent site availability? The answers lie in the
    strength of its software, according to site architects. The whole shebang runs on Microsoft Windows NT 4.0,
    IIS 4.0, and SQL 7.0. "Our site showcases Microsoft technology," says systems operations manager Todd
    Weeks. "We prove every day that we can run one of the largest sites in the world 100 percent off of
    Microsoft technology."

    The Challenge

    Not only is www.microsoft.com an enormous site with hundreds of thousands of pages of content. Not only
    does it receive millions of hits a day. Not only has its growth been unrelenting. Those are some of the
    easy challenges, site architects say. One of the most interesting challenges is that www.microsoft.com
    functions within a decentralized publishing environment. More than 300 writers and developers working in more
    than 51 locations around the world provide information for the site. These content providers are able to update
    their sites within the www.microsoft.com umbrella as often as eight times a day. In fact, 5 percent to 6
    percent of the site is updated every day. The complexity of that publishing environment is daunting
    when you consider that each of the 29 content servers in Redmond contains the nearly 300,000 pages of
    information that comprise www.microsoft.com. But the end result is that the information on www.microsoft.com
    is as current and up-to-date as possible. A team of about eight people staffs three shifts around the clock
    to ensure www.microsoft.com stays up and running 24 hours a day, seven days a week. "Our goal is to make the
    site available to users 99.8 percent of the time," Weeks says. So how do we reach that lofty goal of 99.8
    percent availability? (The 0.2 percent down time is required for routine maintenance.)

    First, the Hardware

    The physical architecture behind www.microsoft.com seems surprisingly modest. Twenty-nine servers host
    general Web content; 25 servers host SQL, 6 respond to site searches; 3 service download requests along
    with another 30 in distributed data centers; and 3 host FTP content. Additional servers overseas handle
    some of the international load.

    Did you count all of that? That's 96 Compaq Proliant 5000s & 5500s (Quad Pentium Pro boxes with 512Mb RAM) running
    www.microsoft.com using NT, IIS, Index Server, and SQL Server.

    Standard .message file for ftp.cdrom.com [cdrom.com]

    This machine is a P6/200 with 1GB of memory & 1/2 terabyte of RAID 5.
    The operating system is FreeBSD.
    Should you wish to get your own copy of
    FreeBSD, see the pub/FreeBSD directory or visit http://www.freebsd.org
    for more information. FreeBSD on CDROM can be ordered using the WEB at
    http://www.cdrom.com/titles/os/freebsd.htm or by sending email to

    Now, which site do you suppose has set more download records?

  • by Josh Turpen ( 28240 ) on Sunday May 16, 1999 @04:27PM (#1889925) Homepage
    How many times does the "Linux Community Inc." need to tell these people that Apache wasn't meant for speed!? Why is Apache designated as the One True web server? Benchmarking static Apache vs. static IIS is pointless. Any programmer worth his salt could cook up a few dozen lines of code that would outperform both servers on pure static content.

    They should benchmark how many dynamic perl generated pages NT can vomit out :)
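    (A sketch of that "few dozen lines" claim, using Python's standard library purely for illustration - a modern convenience, not what a 1999 benchmarker would have run. The port number and served directory are arbitrary choices for the example.)

    ```python
    import http.server
    import threading
    from functools import partial

    # Minimal static-file server.  The point is that pure static serving
    # needs almost no code -- not that this toy would win the benchmark.
    def serve_static(directory=".", port=8080):
        handler = partial(http.server.SimpleHTTPRequestHandler,
                          directory=directory)
        httpd = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
        # Serve in a background thread so the caller keeps control.
        threading.Thread(target=httpd.serve_forever, daemon=True).start()
        return httpd
    ```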

  • by DonkPunch ( 30957 ) on Sunday May 16, 1999 @04:59PM (#1889929) Homepage Journal
    Somebody please help me out here, because I evidently just don't get it.

    Why is SPEED the overwhelming issue? IMHO, there is so much more involved in choosing a server OS. Do we really need to measure the number of milliseconds it takes to rename a file on the server? Isn't that a little silly?

    Picking a hardware/operating system configuration is not a drag race. You care about cost. You care about uptime. You care about security. You care about support.

    The skills of your existing personnel are important too. If you have a staff of freshly-certified MCSEs, it's very unlikely that you will use a Unix-like system. OTOH, if your network admins love Unix, they will want to work in a familiar environment.

    In the end, speed is not really the same thing as "performance". Benchmarks like these provide nice soundbites for the winner (whoever it may be). They also improve magazine sales and web traffic for the publications. If you choose to commit your organization to an operating system based on them, however, then maybe you deserve what you get.

    As my mom used to say, "When that lawnmower cuts off your feet, don't come running to me."
  • (Note: this is an odder reference than most on Slashdot. The reference is to a Japanese anime by the name of Neon Genesis Evangelion. For further research, see http://www.anipike.com)

    Actually, the nice thing is that Linux already solves the problem that the original Magi triad had. Besides the inherent virus protection, the sheer number of daemons running around your average Linux box should be sufficient to defeat any attacking Angels.

    And don't get into a huff about Absolute Terror fields, either. Linux holds its own against Microsoft's AT field (well, what better description for FUD than Absolute Terror?).

  • I'll admit it; if Linux were beating NT in these benchmarks, we wouldn't be looking for reasons. We wouldn't need to. The new data would simply corroborate the old data. If Microsoft had a problem with the results, countering it is their problem.

    If you perform an experiment to study gravity, and you get a value for g of 32 ft per second squared, you don't go looking for what you did wrong--other experiments show this result as well. If you find that g=14 feet per second squared, you start analyzing the experiment rather than rewriting the physics texts.

    In the Real World, Linux appears to outperform NT. In most benchmarks, Linux solutions appear to outperform NT solutions. Two Microsoft organs create benchmarks, and the NT solution outperforms the Linux solution.

    We look for holes in the benchmarks because we smell a rat. We've found a rat or N--some big ones.

    What we have learned is that benchmarks can be easily cooked. If someone with a vested interest controls enough variables, one can create a pathological case where one's interests win.

    If it wasn't a cooked test, there would still be people yelling. This is not a good thing. However, this is a cooked test. Linux can beat NT in a lot of ways, including performance-wise. Linux isn't strong enough to beat NT with one arm tied behind its back, especially when Microsoft chooses the arm.

    OTOH, anybody know how well NT does at ray-tracing? IBM had some fun with Linux and ray-tracing a while back...

  • An IT with a billon to spend, "shure" don't will ever use Linux.

    An IT department with that sort of a budget will find Linux to be rather useful for some applications, actually. With that size of a budget, one can make an in-house Linux support team. Having such a team and using Linux keeps you from relying on a vendor's support team. Such a team allows you to implement mission-critical bug fixes on your schedule, not that of your vendor. And believe you me, if you are big enough to have a $1B budget, time is measured in thousands of dollars per minute. Waiting a month for a bug that takes a week to fix is expensive.

  • by remande ( 31154 ) <<moc.toofgib> <ta> <ednamer>> on Sunday May 16, 1999 @07:07PM (#1889934) Homepage
    This article makes a number of very valid points. Certainly linux performance on low-end hardware is markedly superior to NT, but the question is, how many people deploy such hardware in production? Of course, that's not to say that there's no place for low end deployments, but the skilled engineer picks the right tool for the right job in any situation. If I had a very low budget and just needed to deploy personal homepages and a POP3 server, for example, then linux would be a good choice (along with *BSD, naturally).

    If the question is "how many people deploy low end solutions?", then it is important to note that the situation being tested has no relation to the real world. If someone needed to serve thousands of static pages per second, they would be out of their gourd to select a quad Intel box in the first place, regardless of OS. Better to have a small, cheaper farm of lesser computers to do this job.

    Given that one has chosen Linux on any hardware platform for this task, one would also be out of one's gourd to choose Apache. Apache engineers will tell you this. Apache is built for flexibility at the expense of performance. Thus, the simpler the job, the slower Apache is, on any platform, for the job.

    If you are comparing OS speeds for Web serving, either use the same Web server on both sides or use optimal Web servers on each platform. Apache engineers will be the first to admit that other Web servers can outperform Apache, on any platform, for this test.

    But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.

    Maturity is relative. NT has more runtime hours than Linux, so there has been a longer time to detect bugs. Linux, due to its huge potential developer base, may well have more developer hours invested in it. It also has more debugging hours invested in it, because most Linux users are potential debuggers. When NT fails, one just reboots. When Linux fails, one often looks at the messages file (or has a sysadmin do the same) and tracks it through the tech support infrastructure.

    The specification for Linux draws heavily from Unix, which is an incredibly mature model for high-volume computing. Most of the specification for Linux predates DOS, never mind any flavor of Windows.

    Linux is developed by engineers at their best. The best engineers are hackers, really; they're the ones who build software for the love of building software. And Linux hackers contribute with only their best.

    If you are building a commercial software product, you are expected to put 5-6 days of work into the project each week. You cannot maintain top productivity, top quality, on that sort of a schedule. Employers understand this, and they deal with it. The code produced in a commercial setting tends to be "good enough".

    When someone contributes software, the same drive that causes them to want to do something like this for no money causes them to work at peak performance. It also causes them to work at precisely what they're good at. Sure, they may only put in 100 hours in 3 months, but you will get their best 100 hours, easily worth 300-400 average engineering hours. Commercial software is produced by marathon. Free software is produced by relay sprint.

    Besides, if commercial OSs are superior by virtue of being developed by engineers rather than hackers (that is, by virtue of being commercial), then why are shops like Sun putting so much effort and money into Linux? Methinks that the Solaris guys see something in Linux that they envy, and I don't think that it's just the salaries.

    However, for midrange work, linux simply isn't up to par yet. I seem to recall Linus himself stating that he believed OS design was well understood by the 1970s, and he considers microkernels to be "stupid", plan9 to be "stupid" etc etc. Whatever you think of Linus' talents as a kernel hacker, the fact remains that Linux works. It works in commercial production environments. Sysadmins have been disobeying management by deploying Linux where NT was requested--and they've been doing it for years. This isn't politics.

    A sysadmin has one overriding virtue: laziness. Larry Wall gave us the prototype: more on this in the Camel book. They want to do the job once, they want to do the job right, and they want to forget the whole affair afterwards. These sysadmins have been putting Linux in the back room because those boxes do their jobs and are easy to handle - and because the performance boost gives them fewer boxes to administer (and fewer hassles with acquisition budgets).

    While he is undoubtedly a highly talented programmer, I think that there are engineers in the world who are at least, if not more, skilled working for Sun, CMU, Microsoft, DEC and suchlike whose work has proved Linus to be very wrong. And as such, linux is crippled.

    Linus doesn't have to be the best programmer on the planet. In fact, he needs never write another line of code. There probably are better kernel hackers writing code for their respective companies--and also writing code for Linux.

    Linux isn't an optimal OS. There are places that it is the best one out there, and other places where it does poorly. Like every OS, however, it evolves. Its openness simply lets it evolve faster than the competition. Per Darwin, evolve or die.

    It doesn't matter how wrong Linus is in his coding, because it does well enough for commercial use today. Maybe microkernels have overriding advantages. GNU has a microkernel OS (GNU Hurd) in beta or GA by now. If it outperforms Linux, it will only be a matter of time before somebody cross-pollinates them. Then RMS will have a better case to call it GNU/Linux. Whenever somebody finds a major improvement one can make to an OS, somebody else will port that improvement to Linux. Perhaps every line of Linus' original code will be optimized out.

    Linux is far from crippled. By my lights, it is the first OS to sprout wings.

  • by rnt ( 31403 ) on Sunday May 16, 1999 @08:07PM (#1889936)
    I realise that the webserver benchmark is just a small part of the tests, but practically all of the benchmarks make claims like "NT beats Linux" and substantiate that by giving a lot of numbers on how many webpages were served by a Linux server and an NT server.

    Apache runs on a whole bunch of other platforms, even on MS-Windows. Probably even NT... Wouldn't it make more sense to make claims like "Apache on NT beats Apache on Linux"?

    That wouldn't prove the superiority of NT over Linux either, but it would IMHO make just a little bit more sense...

    The same goes for Samba: Samba runs on Linux but also on other systems.

    All these tests only test NT-running-some-software versus one-of-many-Linux-distros-running-other-software and then make claims like "NT kicks Linux' ass".

    "Linux" is just the kernel... or have I gotten things completely wrong?

    Benchmarkers should at least prove that bad scoring is caused by Linux (kernel) and not a program they're running on top of that!

    If a webserver running Apache on FreeBSD is doing better than Apache on Linux, that would be an indication of shortcomings in the kernel (although some people may dispute that as well).

    Ah well, I never really cared for benchmarks anyway...
  • Look out there. You're pointing out a weakness, from the point of view of commercial users, when you amplify the fact that there's no single (or even authoritative) "Linux Vendor." Business clients need a vendor they can rely on, not a ragtag bunch of Usenet readers. Commercial support is an area where Linux is growing. It isn't an area where Linux is mature.
  • >But on CGI, you invoke a new process with each client request, no matter how many servers you've preforked or how many threads are idle. Presto: poor performance, no matter what the preforking parameters are.

    Isn't this one of the things FastCgi [fastcgi.com] is supposed to be fixing? Instead of launching one process per Perl script, it launches one Perl interpreter and passes it all the Perl scripts, hence less overhead and more speed (with the drawbacks that the scripts have to explicitly free memory and be slightly modified, with a loop around the script).

    Not quite thread-like, but definitely not one process per CGI request.
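    (A toy sketch of the idea - not the actual FastCGI wire protocol, and the function names here are invented for illustration: the interpreter starts once and loops over requests, instead of being launched fresh per request as plain CGI does.)

    ```python
    # Plain CGI pays fork + exec + interpreter start-up for every request.
    # The FastCGI idea: start the interpreter once and loop, so the
    # per-request cost is just the handler call.  Names are made up.

    def handle(params):
        # Hypothetical per-request work: emit a header and a body.
        name = params.get("name", "world")
        return "Content-Type: text/plain\r\n\r\nhello " + name

    def worker(request_stream):
        # The script is loaded once; each iteration reuses the warm
        # interpreter.  As noted above, the script must clean up its
        # own state between requests.
        for params in request_stream:
            yield handle(params)
    ```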


  • by Random Frequency ( 34459 ) on Sunday May 16, 1999 @04:29PM (#1889955)

    Thats what 2200 hits/sec gets you. You'll be doing 190 million hits/day. Pretty damn impressive. I'd like to work for you, considering the monster bw you'll have.

    I'd basically ignore any current benchmarks because they're based on versions of Linux that have known issues.

    You're also comparing a multi-process server, which works faster at lower loads, to a multi-threaded server, which scales better, although might not/does not return documents back faster.

    I'd like to see the avg connection times on these things.
  • There is nothing special about "engineers" that makes them better than "hackers". Those labels are not even exclusive; the best hacker I know has an engineering degree. You do not know what a hacker is if you think they are necessarily unaware of computer science and engineering principles; and in my experience, the more eager a person is to call themselves a "software engineer", the less competent they are.

    As for Cutler, his work on VMS doesn't give me great confidence in him. VMS is stable and useful to some, but it's far from being my favorite OS. He may be awfully serious about it; he may be awfully serious about NT, too, but that doesn't mean I want to spend any time using it, or that it meets my needs.

    Linus has a proven track record of writing solid code and coordinating a massive development effort. He does not just say that microkernels are stupid--he demonstrates by example that the monolithic approach is still viable. As elegant as I think microkernel architectures are, Linux is still what runs on my servers.

  • by zatz ( 37585 ) on Sunday May 16, 1999 @05:04PM (#1889966) Homepage

    Ah, but the PC Week test was just static documents! Red Hat 6.0 comes with an RPM for Squid, but instead of installing that, they use Apache and then gripe about how expensive it is to fork for each request.

    It's unclear to me what use there is for a web server that is eating bandwidth about the way ftp.cdrom.com does, anyway. That doesn't strike me as a typical "enterprise application". That part of the benchmark is obviously contrived.

  • by zatz ( 37585 ) on Sunday May 16, 1999 @05:36PM (#1889967) Homepage

    The numbers get even more interesting when comparing the results of NetWare with and without Opp Locks. When we turned on Opp Locks, NetWare's overall performance improved by about 40 percent.

    However, this gain is deceiving. With Opp Locks enabled, almost every operation in NetWare actually slows by 25 percent. The exception is file write operations, which are faster by 300 percent. Because writing files takes up almost 40 percent of the NetBench test, it's no wonder we saw a huge overall performance boost in our results.

    These people haven't a clue what they are benchmarking. Opportunistic locks allow the client to do whatever it likes to a file (or regions thereof) without synchronizing with the server. Of course write speed increases; the network isn't involved anymore! You haven't increased server performance one whit, but rather prevented more than one client from opening the file for writing at the same time.

  • by blooher ( 40990 ) on Sunday May 16, 1999 @07:00PM (#1889970)
    You still don't get it, do you?

    What have people been saying here?

    In these benchmarks Linux did not do so well NOT due to the kernel, but due to the web server.
    You mean NetBSD can use less CPU while sending TCP/IP stuff?
    Or NetBSD uses less CPU while running many processes?

    In these tests people are actually comparing web server applications. They should run the SAME web server on all OSs if they wanted OS benchmarks.


  • by Xemu ( 50595 ) on Sunday May 16, 1999 @05:08PM (#1889979) Homepage

    Or Roxen. [roxen.com] Roxen is a pretty cool webserver, too. And it won the Best-of-Comdex award.

  • Who has a machine with multiple Ethernet cards handy?

    If I understood it right, the channel bonding has to be configured!
    I saw no hint that they did. Neither Mindcraft nor ZD.

    http://beowulf.gsfc.nasa.gov/software/bonding.html

    Someone's got experience?
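    (For reference, bonding setup on a 2.2-era kernel amounted to loading the bonding driver and enslaving the interfaces - roughly the fragment below. This is a sketch from memory, not taken from the Beowulf page above, so check that page for the exact syntax; the IP address is just an example.)

    ```
    # Sketch only -- exact syntax for a 2.2-era kernel may differ;
    # see the Beowulf bonding page linked above.
    insmod bonding                   # load the channel-bonding driver
    ifconfig bond0 192.168.1.1 up    # bring up the bonded interface (example address)
    ifenslave bond0 eth0             # attach the first NIC
    ifenslave bond0 eth1             # attach the second NIC
    ```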
  • The ``high-end web server market'' is a myth. You are better off with a bunch of servers and some form of load balancing.
