Linux Beats Win2000 In SpecWeb 2000
PraveenS writes: "While not conclusive, the SPEC group released benchmarks for a variety of systems submitted by various manufacturers (e.g. Dell, Compaq, HP, etc.) and tested their Web-serving capability. Two very similar machines from Dell, one loaded with Linux and the other with Win2000, had very different results; Linux beat Win2000 by a factor of almost 3. Here's a synopsis of the results from LinuxToday. The actual SPEC benchmarks are available here for Win2000 and here for Linux."
As Marty of LinuxToday puts it, though, "What does this mean? In the real world, probably not as much as it would seem. Benchmarks in general are typically set up in an ideal environment. Real world environments tend to be quite different. However, this does indicate that Linux is moving in the right direction."
Zoran points out that "[o]ther current SPECweb99 results can be found here." They make an interesting comparison.
Re:Where's the variations in hardware and software (Score:4)
more info about TUX 1.0 (Score:5)
'TUX' comes from 'Threaded linUX webserver', and is a kernel-space HTTP subsystem. TUX was written by Red Hat and is based on the 2.4 kernel series. TUX is under the GPL and will be released in a couple of weeks. TUX's main goal is to enable high-performance webserving on Linux, and while it's not as feature-rich as Apache, TUX is a full-fledged HTTP/1.1 webserver supporting HTTP/1.1 persistent (keepalive) connections, pipelining, CGI execution, logging, virtual hosting, various forms of modules, and many other webserver features. TUX modules can be user-space or kernel-space.
The SPECweb99 test was done with a user-space module; the source code can be found here [spec.org]. We expect TUX to be integrated into Apache 2.0 or 3.0, as TUX's user-space/kernel-space API is capable of supporting a mixed Apache/TUX webspace.
TUX uses an 'object cache' which is much more than a simple 'static cache'. TUX objects can be freely embedded in other web replies, and can be used by modules, including CGIs. You can 'mix' dynamically generated and static content freely.
While written by Red Hat, TUX relies on many scalability advances in the 2.4 kernel also done by kernel hackers from SuSE, Mandrake and the Linux community as a whole. TUX is not one single piece of technology, but rather a final product that 'connects the dots' and proves the scalability of Linux's high-end features. I'd especially like to highlight the role of extreme TCP/IP networking scalability in 2.4, which was a many-month effort led by David Miller and Alexey Kuznetsov. We'd also like to acknowledge the pioneering role of khttpd - while TUX is independent of khttpd, it was an important experiment we learned a lot from.
Other 2.4 kernel advances TUX uses are: async networking and disk IO, wake-one scheduling, interrupt binding, process affinity (not yet merged patch), per-CPU allocation pools (not yet merged patch), big file support (the TUX logfile can get bigger than 5GB during SPECweb99 runs), highmem support, various VFS enhancements (thanks Al Viro), the new IO-scheduler done by SuSE folks, buffer/pagecache scalability and many many other Linux features.
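The keepalive and pipelining behaviour mentioned above is plain HTTP/1.1 and can be demonstrated with any conforming server. Here's a minimal Python sketch (using the stdlib http.server as a stand-in, not TUX itself) that sends two pipelined requests down a single persistent connection before reading either reply:

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # enables persistent (keepalive) connections

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Pipeline: write two requests down one connection before reading any reply.
sock = socket.create_connection(("127.0.0.1", port))
request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
sock.sendall(request + request)

data = b""
while data.count(b"hello") < 2:      # both replies arrive on the same socket
    data += sock.recv(4096)
sock.close()
server.shutdown()

print(data.count(b"hello"))  # 2
```

Both responses come back over the one TCP connection, which is the whole point: no per-request connection setup cost.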
Re:dynamic content benchmarks? (Score:2)
There are several issues.
1) Database speed. In a typical web-based, read-mostly/write-rarely environment, MySQL would stomp on SQL Server; introduce frequent writes and the reverse would occur. For a transactional environment, try InterBase: it's fast, robust, and stable as hell.
2) Web server speed. This is a close one, but I think AOLserver might edge out IIS; either way, it will be a close call.
3) Middleware. This is where it gets very, very tricky. If you are writing simple ASP pages using ADO to open and close databases, AOLserver will trounce ASP; so will PHP. In order to write scalable ASP pages you will need to utilize MTS heavily. You will need to write COM objects in either C++ or VB and register them with MTS. In other words, you will need to double or triple your development time and run into insane debugging problems. This is where the AOLserver or PHP environment really shines. Automatic database pooling and very rapid development easily pay for whatever performance hit you may take.
Of course, the real solution may be to use Java servlets. For complex web sites, J2EE is a compelling solution and much easier to program than DCOM.
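The "automatic database pooling" win described in point 3 is easy to sketch. This is a hypothetical minimal pool (not the AOLserver or MTS API): connections are opened once up front and reused per request, instead of paying connect/teardown every time. sqlite3 stands in for a real database driver.

```python
import queue
import sqlite3

class ConnectionPool:
    """Hypothetical minimal pool: hand out pre-opened connections, take them back."""
    def __init__(self, size, factory):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self):
        return self._idle.get()    # blocks if every connection is busy

    def release(self, conn):
        self._idle.put(conn)

pool = ConnectionPool(2, lambda: sqlite3.connect(":memory:", check_same_thread=False))

def handle_request(pool):
    conn = pool.acquire()          # no connect() cost per request
    try:
        return conn.execute("SELECT 1 + 1").fetchone()[0]
    finally:
        pool.release(conn)

print(handle_request(pool))  # 2
```

Every page hit borrows an already-open connection, which is exactly the overhead the ADO open/close-per-page style keeps paying.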
Re:two words.. (Score:4)
Well, it could be that we notice that a standard benchmark was used rather than one tailored by a company with an axe to grind. Or it could be that the benchmarks were submitted by hardware vendors, whose primary interest is in making their hardware look good (i.e., it's really hard to imagine Dell fudging a benchmark to make Linux look better than Windows). Or it could be that c't already told us how Linux and NT measured up on more equitable benchmarks. Or it could be that Microsoft's own tests showed W2K performing worse than NT on systems with > 4Mb of RAM. Or it could be that testers have been saying that W2K needs +300 MHz in hardware to perform as "well" as NT did.
In short, there's no reason for surprise at all. This benchmark is only quantifying what the attentive already knew qualitatively. If there are flaws with the benchmark, they almost certainly won't be enough to tilt it against what we already knew; if they do, we'll air our suspicions again.
I do agree that it's still a benchmark, and is therefore susceptible to all the follies associated with benchmarks. But at least this one wasn't obviously rigged.
--
Why are the backlogs different? (Score:5)
An interesting test... (Score:5)
For instance, on the Windows side you might have an 8 way xeon with 2 gigs of RAM. On the Linux side you might have (for instance) an S390 with a terabyte or two of RAM. Then just start loading them down with network clients until they start to stagger.
I'd be interested in the outcome...
Re:And you assume Dell is objective because...? (Score:2)
Yeah, the more you look at it, the more it looks like they were just doing a Mindcraft '00. The most memorable joke then was that the only sites to use big static pages on such expensive hardware and matching network bandwidth would be the more profitable p0rn sites.
If the configuration was bullshit then, it's still bullshit now. Someone please tell me that Red Hat hasn't spent the past year tweaking things just to beat Microsoft on Mindcraft '00. I guess these things play well with the PHB crowd, but surely there are better things Red Hat could be doing with their time and money.
--
Re:the crucial difference (Score:5)
You are confusing two completely different architectural concepts.
"threads" (which get created) and "processes" (which get forked) are 'context of execution' entities. Linux has both, TUX 1.0 uses both.
A "threaded TCP/IP stack" is a slightly mis-named thing, it means "SMP-threaded TCP/IP-stack", which in turn means that the TCP/IP stack has been "SMP-deserialized" (in Windows speak) - TCP/IP code on different CPUs can execute in parallel without any interlock/big-kernel-lock overhead or other serialization.
A 'threaded TCP/IP stack' has no connection whatsoever to 'threads'.
FYI, the Linux TCP/IP stack was completely redesigned and deserialized during the 2.3 kernel cycle; this redesign/deserialization was done by David Miller and Alexey Kuznetsov. The TUX webserver of course relies on the deserialization heavily, but this is not the only architectural element TUX relies on.
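A user-space analogy for that deserialization (an illustration only, nothing like the actual kernel code): replace one global lock with per-shard locks, so that work belonging to different "CPUs" proceeds in parallel without ever contending.

```python
import threading

# One lock per shard, analogous to replacing a single big kernel lock
# with fine-grained locking: threads on different shards never contend.
NSHARDS = 4
locks = [threading.Lock() for _ in range(NSHARDS)]
counters = [0] * NSHARDS

def worker(shard, iterations):
    for _ in range(iterations):
        with locks[shard]:           # only serializes within one shard
            counters[shard] += 1

threads = [threading.Thread(target=worker, args=(i % NSHARDS, 1000))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(counters))  # 8000
```

The result is still correct under concurrency, but a thread updating shard 0 never waits on a thread updating shard 1 - which is what "SMP-deserialized" buys the TCP/IP code.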
Re:dynamic content benchmarks? (Score:2)
Of course, what a nightmare that would be to configure for benchmarking... I guess you'd have to use Oracle on NT and on Linux, but it's a DOG on NT, and, as much as I hate Windows, it just wouldn't be fair.
Use a better database program then. DB2 UDB is available for both platforms (bias alert: I work on DB2) and from what I see, DB2 runs pretty well on both. That should even the playing field as far as database servers are concerned. There is no point in having one database vendor on one platform and a different vendor's product on the other. DB2 is faster than SQL Server on NT anyway, so picking SQL Server would bias the results before you started.
Cheers,
Toby Haynes
Re:And you assume Dell is objective because...? (Score:4)
You assume that IBM, HP, Mindcraft and Dell are all in a big conspiracy to make Windows 2000 numbers look bad - are you kidding? The reality is that there is fierce competition for best SPECweb99 numbers, and Linux/TUX is just plain faster.
The other flaw in your argument is this [spec.org] TUX dynamic module. Check out the source code; TUX does dynamic modules. (Besides, the SPECweb99 workload includes 30% dynamic load, so all SPECweb99 webservers must support dynamic applications.)
Re:Linux leads the way (Score:2)
You haven't seen the Enlightenment [enlightenment.org] window manager yet, have you? Check out the EFM pages, and yes, it has had transparent menus for a while. But it also antialiases fonts and alpha-blends them for the transparency.
Re:What I would like to see (Score:2)
Hard to swallow. (Score:2)
Re:real world (Score:2)
Thank you.
--
Re:Threaded TCP/IP stack? (Score:2)
The tcp/ip stack runs in kernel space. In the context of the kernel, there are no threads and there are no processes. Both of these are concepts that userland programs can rely upon because this same kernel imposes these virtual constructs upon them.
HTH. HAND.Pi
Re:two words.. (Score:4)
Not really a benchmark? (Score:2)
I'd like to see Linux win this battle, but let's do it again on common ground with the same clients, same cables and switches, etc. Standardization, yeah, that's the ticket.
From the SPEC site disclaimer:
These are submissions by member companies and the contents of any SPEC reporting page are the submittor's responsibility. SPEC makes no warranties about the accuracy or veracity of this data. Please note that other results, those not appearing here and from non-member companies, are in circulation; by license agreement, these results must comply with SPEC run and reporting rules but SPEC does not warrant that they do.
Re:Looks like Mindcraft is now available for linux (Score:2)
Re:How about the other features (Score:2)
Apache vs. Tux == less market share? (Score:2)
--
Re:the crucial difference (Score:2)
Re:Red Hat Tux 1.0 ??? (Score:2)
--JRZ
Re:the crucial difference (Score:2)
Linux, the official OS of China (Score:2)
Yes, that would be reasonable. (Score:3)
Thus, if your personal experience tells you that Linux kicks the shit out of MS operating systems for Web server performance, a benchmark test whose results accord with that experience is more believable than one which contradicts it.
That's just good sense, isn't it?
--
Re: FreeBSD and Linux (Score:3)
Strange discrepancy (Score:2)
Does this indicate that the pages delivered were not identical?
Re:two words.. (Score:2)
Somehow that doesn't keep Dell from being the biggest MS suckup in the whole business.
--
Re:Linux moving in the right direction? (Score:4)
http://geocrawler.com/lists/3/Linux/35/150/3977
It's VERY important that Linux users are aware that the Linux development process is hitting a roadblock right now. Things don't look too bright.
Looks like Mindcraft is now available for linux (Score:4)
Comment removed (Score:4)
Re:Ignore this! (Score:5)
Looks like about an hour to me. Maybe an hour and a half if you want Samba and FrontPage extensions installed.
Just for kicks, let's take a look at NT.
When I installed it yesterday it took about the same amount of time to install as Red Hat, so let's figure 30 minutes. Configure for network and reboot: 5 minutes. Set up IIS: 15 minutes. Add webpage to IIS: 5 minutes. Reboot the machine just to make sure.
OK, looks like a total of 55 minutes. Great, MS just saved you 5 or 35 minutes, depending on what you're looking for. Is it really worth a few hundred dollars, if not more, for an MS webserver if you really don't need one?
Also, with the Linux box, I can ssh in and fix things remotely; I don't even have to be there to apply a patch when it comes out. As a consultant I find that very appealing. I just scp a file over, install it, restart the service, and I'm set. With NT I actually have to be there, and when some of my clients are almost 2 hours away, I'd much prefer the Linux method.
Where's the variations in hardware and software? (Score:4)
Re:Skewed results?? (Score:2)
Personally, I want to see the BSD's go head to head. (as if they don't have enough rifts between them as-is) ;-)
Re:And you assume Dell is objective because...? (Score:2)
SPECweb99 requirements (Score:2)
What does this mean? Vendors obviously try to maximize the number of connections, but they have to keep the bitrate above 320 kbit/s to have a valid benchmark run. You can test with 1 million connections as well, but you'll get an invalid run because the rate will be somewhere around 0.1 kbit/s. This is why you see almost identical kbit/s values (all a bit above 320), but different connections and ops/sec values. I hope this explains things.
See the SPEC-enforced Web99 Run Rules [spec.org]; there are a lot of very strict requirements for a result to be accepted by SPEC.
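The arithmetic behind that 320 kbit/s floor is simple: available bandwidth caps the number of conforming connections. A sketch with illustrative numbers (not taken from any particular submission):

```python
# SPECweb99 counts a connection as conforming only if it sustains at
# least 320 kbit/s, so raw bandwidth caps the number of valid connections.
MIN_RATE_KBITS = 320

def max_conforming_connections(total_bandwidth_kbits):
    return total_bandwidth_kbits // MIN_RATE_KBITS

# e.g. four gigabit NICs, roughly 4,000,000 kbit/s of raw bandwidth:
print(max_conforming_connections(4_000_000))  # 12500
```

That's why pushing the connection count past what the pipes can feed at 320 kbit/s each just invalidates the run instead of improving the score.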
Re:two words.. (Score:2)
it's really hard to imagine Dell fudging a benchmark to make Linux look better than Windows.
No it ain't. HW vendors like Linux because they no longer have to pay the "Microsoft Tax".
Re:the crucial difference (Score:2)
An example: if computer A couldn't use AGP video with the current BIOS, but computer B could, you could benchmark both with a Voodoo 3 PCI card, then benchmark computer B with a Voodoo 3 AGP card, and say "We can't test directly, but we suspect computer A is missing out on n% performance by not properly supporting AGP," where n% is the difference on computer B. It still shows computer B kicking ass in the default config, but instead of making the whole machine look superior, it narrows the results down to the problem areas.
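With made-up numbers, that estimate works out like this (every score below is assumed, purely for illustration):

```python
# Hypothetical scores: machine B benchmarked with both cards gives the
# AGP-vs-PCI penalty, which we then project onto machine A's PCI score.
b_pci, b_agp = 40.0, 50.0       # assumed fps for machine B with each card
a_pci = 35.0                    # machine A can only run the PCI card

penalty = (b_agp - b_pci) / b_agp        # fraction lost by running PCI
a_agp_estimate = a_pci / (1 - penalty)   # what A might do with working AGP

print(round(penalty * 100))      # 20  -> "missing out on ~20%"
print(round(a_agp_estimate, 2))  # 43.75
```

It's only an estimate, of course - it assumes the AGP/PCI gap transfers cleanly from one machine to the other, which is exactly the caveat the post states.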
Re:Fair Benchmark (Score:2)
However, if you go further down the results list, Mindcraft have also submitted a set of benchmark results which are broadly comparable to the Dell results on a different but comparable setup. It doesn't seem likely that both companies have made the same crippling mistake.
So it looks to me as though Red Hat have done some serious magic with their threaded web server. Will they release the source, I wonder?
Re:the crucial difference (Score:3)
How can anyone claim that any MS sponsored benchmark has any legitimacy whatsoever as long as MS insists that there be no benchmarking of their products in the EULAs?
I wonder how SPEC was able to perform this benchmark?
-Jordan Henderson
Re:Fair Benchmark (Score:2)
But they still send question marks for quotes and ticks... :-)
--Joe--
How many files are being served? (Score:2)
One of the things they did was force tests that stressed various parts of the OS. For me, one of the more telling ones was the selection against many files, where the ability to serve off of disk (as opposed to out of RAM) was being pushed.
Linux won, of course. But I wonder whether Win2K is better at this than NT was...
Cheers,
Ben
Re:real world (Score:2)
# nmap -nvvO www.andover.net
Starting nmap V. 2.52 by fyodor@insecure.org ( www.insecure.org/nmap/ )
No tcp,udp, or ICMP scantype specified, assuming vanilla tcp connect() scan. Use -sP if you really don't want to portscan (and just want to see what hosts are up).
Host (209.207.165.16) appears to be up
Initiating TCP connect() scan against (209.207.165.16)
Adding TCP port 32771 (state open).
Adding TCP port 4045 (state open).
Adding TCP port 80 (state open).
Adding TCP port 21 (state open).
Adding TCP port 873 (state open).
Adding TCP port 32773 (state open).
Adding TCP port 22 (state open).
Adding TCP port 25 (state open).
Adding TCP port 111 (state open).
Adding TCP port 32772 (state open).
The TCP connect scan took 11 seconds to scan 1520 ports.
For OSScan assuming that port 21 is open and port 1 is closed and neither are firewalled
Interesting ports on (209.207.165.16):
(The 1509 ports scanned but not shown below are in state: closed)
Port State Service
21/tcp open ftp
22/tcp open ssh
25/tcp open smtp
80/tcp open http
111/tcp open sunrpc
139/tcp filtered netbios-ssn
873/tcp open unknown
4045/tcp open lockd
32771/tcp open sometimes-rpc5
32772/tcp open sometimes-rpc7
32773/tcp open sometimes-rpc9
TCP Sequence Prediction: Class=random positive increments
Difficulty=286136 (Good luck!)
Sequence numbers: 7472F6AA 747E8F6E 748CD0F1 74931A18 7498F243 749AF9A2
Remote operating system guess: Solaris 2.6 - 2.7
OS Fingerprint:
TSeq(Class=RI%gcd=1%SI=45DB8)
T1(Resp=Y%DF=Y%W=FFF7%ACK=S++%Flags=AS%Ops=NNTN
T2(Resp=N)
T3(Resp=N)
T4(Resp=Y%DF=Y%W=0%ACK=O%Flags=R%Ops=)
T5(Resp=Y%DF=Y%W=0%ACK=S++%Flags=AR%Ops=)
T6(Resp=Y%DF=Y%W=0%ACK=O%Flags=R%Ops=)
T7(Resp=Y%DF=Y%W=0%ACK=S%Flags=AR%Ops=)
PU(Resp=N)
Nmap run completed -- 1 IP address (1 host up) scanned in 39 seconds
Re:Kids and computers (Score:2)
Seriously, if your kids can understand it, that's NOT a good indicator that other adults will understand it. I know -- I used to be that "6 year old whiz" myself
Unfortunately, they aren't "whiz kids". They are quite average. I just wish they spent the same amount of effort on school work that they do memorizing fscking Pokemon cards.
Re:dynamic content benchmarks? (Score:4)
Everyone here knows that MS zealots will say "Yeah, but W2k can spit out dynamic content faster...". It would be nice to have proof either way.
Kinda like when us Linux zealots said, "Yeah, but Linux can spit out dynamic content faster..." ;) I do agree that it would be more meaningful to see dynamic benchmarks. After all, you can saturate a T-1 with a Pentium if you are just spitting out flat HTML.
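Back-of-the-envelope numbers for that T-1 claim (the page size is an assumption, picked just to make the point):

```python
# A T1 is 1.544 Mbit/s; dividing by an assumed average page size shows
# how few pages per second it takes to fill the pipe with flat HTML.
T1_BITS_PER_SEC = 1_544_000
PAGE_BYTES = 10_000          # assumed average static page size

pages_per_second = T1_BITS_PER_SEC / 8 / PAGE_BYTES
print(round(pages_per_second, 1))  # 19.3
```

Roughly 19 pages a second saturates the line, which any static-file server on modest hardware can manage - so for bandwidth-bound sites, the OS benchmark gap is academic.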
Re:Skewed results?? (Score:2)
and I think no SMP support in their TCP/IP stack), so I think it would get roasted on this setup. FreeBSD's strengths are elsewhere.
One of the great advantages of open source is that one can have a high level of confidence that the OS doesn't cheat on benchmarks (i.e. by making changes to behaviour that increase benchmark performance at the expense of overall performance). The temptation to do so in a closed-source environment must be pretty much irresistible.
Re:two words.. (Score:2)
Ever use any other UNIX platforms? Linux is actually the easiest to get going out of the box, because so much crap is preloaded.
Solaris is a very popular server OS (on Sun hardware), and isn't "Windows user-friendly". One could say the same about almost any UNIX platform that REAL servers run. Linux is actually a pretty easy OS to use, as 'nixes go.
Although I keep wondering why FreeBSD keeps getting ignored. FreeBSD makes a really nice server OS, and has its own zealots too (many of whom are professional sysadmins, not college students). Oh, and FreeBSD is actually a *faster* OS than Linux.
How many Ethernet cards? (Score:2)
--
Re:Looks like Mindcraft is now available for linux (Score:2)
Or perhaps we will merely take it to be the truth because it jibes with our personal experience with Linux and Microsoft products?
--
Re:It all comes down to the hard drives (Score:2)
Wrong! The OS doesn't search the drives for a file; that would be an extremely stupid way of doing it. The OS knows what drive the file is on and gets it from there. This leaves the other drives free for other work, such as serving other files.
Assuming the network traffic they built up was the same in each test (again, they are a little shaky on that as well), Windows is taking more time to search across 7 drives vs. Linux's 5.
Except it doesn't work that way. More drives give better performance, not worse. Windows merely was incapable of taking advantage of a better setup.
The main reason for not putting more than about 4 drives in a machine is bottlenecks. More than 4 drives on a SCSI bus may saturate it. (You don't get worse performance; it just doesn't get better either.) That's easily fixable by using several SCSI adapters; then the next obstacle is a saturated PCI bus. You may then use a machine with several PCI buses, or simply use two machines. The latter might be cheaper.
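The bottleneck argument in numbers (illustrative throughput figures, not measured):

```python
# Aggregate drive throughput is capped by the bus: drives added past the
# saturation point neither help nor hurt, exactly as described above.
def effective_throughput(n_drives, per_drive_mb_s, bus_mb_s):
    return min(n_drives * per_drive_mb_s, bus_mb_s)

# e.g. 20 MB/s drives on an 80 MB/s SCSI bus:
print([effective_throughput(n, 20, 80) for n in (2, 4, 6)])  # [40, 80, 80]
```

Past four drives the bus is the limit, so the fifth and sixth drives change nothing - which is why more adapters (or more machines) is the actual fix.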
Why such honking machines? (Score:2)
Of course, in the MS world, you probably need 8GB RAM and 4 processors to run a Web server....
Re:Linux Zealots: come out and play! (Score:4)
First of all, these benchmarks (both the Win2k benchmarks and the Linux benchmarks) were posted by Dell, not by some random Linux zealots. Not only is that the case, but the other WinTel vendors have very similar scores for their WinTel hardware. Does this suddenly mean that all of the W2K vendors are conspiring to make Linux + TUX on Dell hardware look good? Or could it possibly mean that all the research that Microsoft funded in the Mindcraft benchmarks is coming to fruition? My guess is that the folks at Microsoft are going to start to truly understand the power of release early, release often. While W2K has sat relatively still, basking in its Mindcraft glory, the Linux community has targeted the specific problems Linux had that caused it to do poorly in the Mindcraft benchmarks, and has rectified them.
Second of all, this is a SPECweb benchmark. The "web" part of SPECweb would tend to indicate that it is a benchmark of HTTP performance. If you read the spec, you would notice that it specifically measures both static and dynamic HTTP content serving. So while this does not necessarily mean that Linux is better than Windows 2000, it probably does mean that Linux + TUX is better than Windows 2000 + IIS (for the things measured by the benchmarks).
Your observation that most Internet facing sites don't have anywhere near this sort of bandwidth is certainly correct. However, my Intranet server does have this much bandwidth (not that I would appreciate it if it saturated this bandwidth). Besides, if you are going to let bandwidth be the limiting factor then it really doesn't matter what kind of web server you are using. A 486 running Apache will happily saturate a T1 with static content.
Not that any of this matters. The two most important features, to me anyway, of Linux are 1) Freedom, and 2) Cost. Linux wins hands-down if these are the factors that you value most.
From the results you must conclude either that Dell (and the rest of the WinTel vendors) are trying to make Windows 2000 look bad, or that Linux + TUX is going to make one heck of a compelling case as a web platform.
Either way it looks bad for Windows 2000 as a web server OS.
HTTP accelerated by the kernel? (Score:3)
No, I don't think there is any such divide, and I think TUX does not contradict Unix concepts. CPUs get faster and protocols get more complex every day. Right now the HTTP protocol is common enough to be accelerated in kernel space - just like the TCP/IP protocol got common enough 10-15 years ago to move into the kernel in many other OSes.
The question thus is not 'should we put HTTP into the kernel', but rather '*when* and *how* should we put HTTP into the kernel'. Think of this as an act of 'caching': the OS caches, and should cache, 'commonly used protocols'.
Where is the limit? There is no clear limit, but the limit is definitely being pushed outwards every day. HTTP is becoming a universal communication standard; with the emergence of XML, the role of HTTP cannot be overstated, I think.
And the last but not least argument: if you don't need it, you can always turn CONFIG_TUX off.
Ah, more benchmarks (Score:2)
The reality is that most web servers are needed for stability and uptime; performance is second. The company I work at has both Linux and Solaris, and I think there may be some BSD here too. Solaris is stable and its hardware is also pretty good, BUT not great. Most of the problems we have had were because of hardware failures or defective hardware, not the OS. We also use multiple servers, so clustering is a must, and load balancing also.
Who puts a 4-processor box on a web site? We do! We have many boxes that are 2 to 8 processors. Of course, we don't use that many Intel boxes either, except for Linux.
What makes this really funny are all the people who defend Windows all the time, probably not realizing that Yahoo, MS Hotmail, and yes, Slashdot all use UNIX or Linux. Yahoo uses Solaris and FreeBSD; Microsoft Hotmail does too. And guess what Slashdot uses: Linux / Perl / MySQL. Hmm, you visit a site that runs hardware you hate. Aren't you the hypocrite?
What Windows 2000 really needs to prove is not that it can outperform Linux or Solaris, 'cause I am sure you can tweak it to be just as good if not better; it needs to prove that it can have 200+ days of uptime on an extremely busy web site. So what company will be the first to have a large-scale site and use Win2k?
Lastly, we recently bought a site that uses Windows servers, and we are moving them all to Solaris. Hmmm. Can we see some Solaris 4-processor boxes benchmarked against Linux and Windows? Oh, we have, and it blew both of them away!
send flames > /dev/null
Re:real world (Score:2)
http://www.netcraft.com/whats/?host=www.hotmail.com [netcraft.com] ???
I guess there are more people who don't trust Microsoft's webservers.
--
B10m
Re:Difference in Platform, Dates? (Score:2)
Re:dynamic content benchmarks? (Score:3)
Of course, what a nightmare that would be to configure for benchmarking... I guess you'd have to use Oracle on NT and on Linux, but it's a DOG on NT, and, as much as I hate Windows, it just wouldn't be fair.
maybe SQL Server + IIS on the NT boxen and Oracle + Apache on the leenuchs boxens.
Win2K & Benchmarks (Score:2)
I can't believe that people are sitting here saying "yeah, but this isn't the real world." Ok, no offense guys, I actually can believe it.
In Big-O notation, any scalar factor is negligible; it's factors like the powers of the algorithm that aren't. But this ain't an algorithm, this is a server.
If you haven't noticed, Win2K makes my computer at work crawl by relation to my computer at home... And my computer at work is MUCH faster. Trust me.
It beat Win2K THREEFOLD. I don't care WHAT your real-world situation is, THREEFOLD is a LOT. If it does THREEFOLD, that means that, daggonit, it's probably going to be faster in "real world" situations too. Wake up and smell the coffee. Win2K isn't the holy grail of computing. Linux isn't either, but it's serving 3 TIMES AS FAST, which is significant - unless, to skew the benchmarks, they were also running 50 copies of Photoshop...
Re:two words.. (Score:2)
No, my theory is that they used the instructions for optimizing IIS 4.0 on NT 4.0 to set up IIS 5.0; which isn't good.
This is borne out by doing a search for the settings used on the Microsoft website; they're taken straight from an IIS4.0 tuning document.
There are separate and entirely different IIS 5.0 tuning docs out there.
Not to mention that most of the settings aren't registry settings, yet they appear to have been set in the registry; IIS 5.0 doesn't use the registry much at all, for perf reasons.
Simon
Re:the crucial difference (Score:2)
We can still denounce Mindcraft as a test that would be representative of real-world conditions for very few people (those who could afford a $50,000 server).
But in the end, it's good that kick was given - and we should congratulate everyone in the Linux community who worked hard to make those improvements possible.
D
----
real world (Score:4)
On a side note I think you should all visit this address and see what andover.net is running:
http://www.netcraft.com/whats/?host=www.andover
Solaris eh? Whats the front page of andover say?
"Leading the linux destination" great example you're setting there.
Re:dynamic content benchmarks? (Score:5)
From http://www.spec.org/osg/web99/ [spec.org]:
It certainly looks like they are testing dynamic content as well as static. Check out http://www.spec.org/osg/web99/results/api-src/ [spec.org] for the source for the dynamic content.
Re:How many Ethernet cards? (Score:2)
Doomy, old time
--
Linux leads the way (Score:4)
Now the only advantage Win2K has over linux is a transparent start menu.
Oh please. Disgusting. (Score:3)
These systems, although very similar, are not identical. Different drive arrangements, different scsi controllers.
And, to boot, one is running IIS 5 and one is running Tux 1.0 (whatever that is...).
What does this prove about the individual Operating systems? ABSOLUTELY NOTHING!
It shows that operating system 'A' running web software 'B' on machine 'C' is faster than operating system 'X' running web software 'Y' on machine 'Z'.
What the hell is 'Tux 1.0'? Yes, I could look it up. Why not at least benchmark Apache, so at least you could say 'benchmark of the most common internet platform for each OS' or something?
Ignore this! (Score:5)
A faster Windows still locks me into its stupid upgrade treadmill... Benchmark results are just statistics, and as you know, there are "Lies, Damn Lies, and Statistics".
You can't just jump up and down when Linux beats Windows on a benchmark. Then you're setting yourself up to hang your head when Linux loses one every now and then (Mindcraft)...
In so doing, you're missing the point: the speed, usability, or even stability of free software is not the driving force behind its existence. It's the FREEDOM!
On Independence Day, of all days, you lose sight of this? I'm so tired of these benchmarks.
Re:Where's the variations in hardware and software (Score:2)
Because Macs--and yes, even your "quality-built" G4--are terrible at the most important factor for web-serving performance: memory bandwidth. "Apple's systems generally have had only about 60 to 70% of the effective memory bandwidth of contemporary x86 systems. This is due to Power Mac configurations that run the system bus at lower clock rates than comparable x86 PCs, and the simple fact that Apple's system ASICs cannot match the technical excellence of the best x86 chipsets like the 440BX." (source: Paul DeMone's Mac performance article [realworldtech.com] at realworldtech.com)
Furthermore, as you'll learn if you read the rest of that article, Apple refuses to submit any Macs to any standard, fair benchmarking organizations, and in particular to SPEC, instead preferring to use decade-old discredited benchmarks incorrectly (BYTEmark) or make up their own. I wonder why?
the G4 is a damned powerful, quality built piece of equipment--better than most x86 boxes slapped together at some cheap ISP.
First off, it's OEM, but I'm sure that was just a typo. More serious is your perception that the OEM does anything which impacts the performance of the computer other than pick the components. The only thing that could possibly make a computer "quality-built" by an OEM would be making sure everything is screwed in tight. What matters is that the components themselves are quality-engineered. And in the case of chipsets--again, the most important part of a good web server--a plain old Intel 440BX knocks today's Mac chipsets silly.
And let's not even get into the x86 chipsets which are actually built to be used in web servers. Apple simply doesn't have anything to compete.
And there's no reason why they should. Apple has never ever pretended their boxes make good web servers. And considering all the things Apple has pretended over the years, that fact alone should clue you in that they probably don't.
Re:Win2K & Benchmarks (Score:2)
As I said when the Mindcraft results came out - anything more than 30% difference in performance is suspect. 300% difference stinks of an error in the benchmarking procedure.
Simon
Hang on a minute! (Score:2)
I mean sure, they did throughput tests, CPU tests, etc., but they were very calculated tests designed to test one thing only at a time (or something like that) and had little bearing on how a system/subsystem/software would perform in real-life situations.
So the fact that Linux outperformed Win2k by a factor of 3 is pretty much useless as a comparison of real-life performance.
Of course I could be wrong. I'm at work at the moment and can't get my hands on those dusty uni notes ... :-)
Linux Zealots: come out and play! (Score:3)
Yes, there were problems with the Mindcraft benchmarks - and yes, there are problems with this one. Namely, what in God's name are they comparing? They certainly aren't comparing operating systems - there are way too many differences in this case to do that objectively. Next time somebody runs benchmarks between the two OSes, please try to keep the following things in mind:
(1) USE THE SAME HARDWARE! I cannot stress this point enough. What you people may call minor differences may often have a MAJOR effect on the outcome of a benchmark such as this.
(2) Use the same webserver software. How in God's name can you blame or credit the operating system for any of these benchmarks? Both are using completely different HTTP servers (one of which isn't even publicly available and shouldn't have been used = TUX). If you want a legitimate operating system benchmark and not an HTTP server benchmark - try to compare Win2k running Apache for NT and Linux running Apache. Otherwise climb off your high horses right now - these are webserver benchmarks, NOT OS benchmarks. I for one will say that Apache for NT consumes a lot less memory than IIS 5.0 - though on my small intranet site I've yet to notice any speed difference.
(3) The results are unrealistic. What kind of server has 4 gigabytes of bandwidth?
(4) Also - make a point to configure both servers equally - it seems to me you guys scrimped here and there on IIS configuration - I wonder why?
If the Linux world wants credibility - it's time to grow up and earn it. You guys sure talk a great game - but when it comes down to the numbers you are either whining when you get trounced or creaming in your pants over benchmarks which are obviously flawed.
While I'm on my soapbox - let me say this also: It's amazing how many of the news stories on this so-called "News for Nerds" site appear to be blatantly attacking Microsoft and promoting Linux. It's obvious that whatever sense of objectivity Slashdot once had (if ever) has long been lost to the horde of pre-pubescent teenagers who only have one goal: to get something for nothing.
So there you go - take it or leave it - I don't really care. You may either post a reply or email one to darkgamorck@home.com
Gam
Re:Where's the variations in hardware and software (Score:3)
Suns, sure, but Macintoshes? I don't think I'm aware of anybody using Macs for even semi-serious webserving. Neither the OS (OSX is a different ballgame, granted) nor the hardware is designed for this kind of thing. Correct me if I'm wrong, please :)
As regards the number of HTTP servers, maybe they just ran out of time and money to benchmark ten squillion different configurations, and chose the ones that they believed were in most common use. Testing more of them would certainly be a good thing, though.
Re:How Fair? How Conclusive? (Score:2)
Let me say though... Win2k feels much smoother and runs much cleaner than previous NT versions. It's more stable. It *does* work better. It *IS* faster. And regardless of what everyone says, including me from time to time, at its core the NT kernel is *good* technology. I just wish MS would quit fucking it up. It's what they chose to do with it that sucks... not the kernel itself.
And you are right. This benchmark is absolutely meaningless.
OS 'A' running server 'B' on hardware 'C' beat out
OS 'X' running server 'Y' on hardware 'Z'.
That is meaningless.
Just to clarify the point of my previous post ... (Score:2)
Re:Looks like Mindcraft is now available for linux (Score:2)
Thanks to Linux's open nature it has recently been ported to IBM's mainframes. I truly doubt that Windows 2000 could compete with Linux on an S/390. Which is sort of ironic, because it also can't compete with Linux on an old 386.
According to this survey it would appear that Dell doesn't even think that Windows 2000 competes with Linux plus TUX for web serving on Dell hardware.
I wonder what that could possibly mean :).
Re:two words.. (Score:5)
They said that when someone performs a benchmark in the future and it shows Linux outperforming Windows NT or 2000 by a sizeable margin, the Linux zealots will claim that THIS benchmark is the correct one and Mindcraft will be PROVEN wrong.
This post seems to me like exactly that behavior. Mindcraft doesn't tune Linux the right way and WinNT trounces it. Linux zealots scream bloody murder and inspect the process with a microscope. Someone else does a benchmark that shows Linux 3 times faster than Win 2k, and they are content that the Mindcraft fiasco has been avenged.
Take a look at yourselves. I'm not a Linux lover. I think it has a long, long way to go before the mainstream starts to take it seriously. There are so many problems with it right now... installing programs, removing them, X Windows interface complexity, simple text editors... the list goes on. Honestly, I don't think it will ever become mainstream - it will get replaced by something else that will before long.
I don't love Windows either. There are of course many problems with it. However, it's not the spawn of Satan and Linux is not the Great Hope or messiah.
Be objective, people. Please. You'll do your "cause" some good.
the crucial difference (Score:5)
1) these tests compare Win2k to Linux. By contrast, the Mindcraft study compared WinNT4.0 to Linux.
2) in the "Operating System" column of the Linux boxes, we see a revealing note:
Operating System: Red Hat Linux 6.2 Threaded Web Server Add-On
It seems as though RHAT has taken the trouble to render its TCP/IP stack into a multi-threaded model, rather than the forked model I understand it used to be. This was identified as the primary deficiency in the previous benchmarks.
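For what it's worth, the "threaded model" being described can be sketched in a few lines; this is purely an illustration of thread-per-connection serving using the Python standard library, not Red Hat's actual add-on (which, per the disclosure, is TUX):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

# Thread-per-connection: each accepted socket is handled on its own
# thread, instead of fork()ing a whole child process per connection
# as the older "forked model" does. Port 0 = pick any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Threads share one address space, so the per-connection setup cost is a thread spawn rather than a full process fork - which is the deficiency the parent post says the earlier benchmarks exposed.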
At the time, Linux aficionados claimed that the superiority would be short-lived. Assuming these stats are otherwise legit, it seems as though they were right, and in such a brief period of time as well. I'm impressed! Keep pumping out impressive turnarounds like this one, and very soon commercial entities will have to give open source its just props as a development model.
I am slightly curious whether this "web server add-on" is available to consumers, and also whether it is a fully-featured web server. If not, and this is just a hack, that might cast a pall of illegitimacy. Anyone have the inside scoop?
-konstant
Yes! We are all individuals! I'm not!
Re:the crucial difference (Score:3)
Re:Ignore this! (Score:2)
Which probably explains things like this [theregister.co.uk].
--
Re:two words.. (Score:2)
I disagree; would anyone care to explain why:
The Linux setup had on-NIC buffers of 300 bytes, whereas the Windows setup was set to use buffers of 10,000 bytes - thus giving higher latency?
The Linux setup was set to use (from the get-go) 10Mb of memory for its TCP/IP buffers, whereas (it looks like) Windows was set to use 17Kb?
The size of the TimeWait buckets buffer in the Linux configuration was HALF that of the Windows NT configuration?
The logfile on the Windows box was set to flush every 60 seconds instead of the default of every 30 seconds?
The thread pooling settings on the NT box are suspect; they seem artificially high, which can degrade performance.
Sure, this could all be moot. But before jumping on the "this Benchmark is THE WORD OF GOD" bandwagon, I'd like to see why these changes were made.
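For context, the Linux-side knobs in the list above are the sort of thing set through sysctl; a hypothetical sketch of that kind of tuning (the parameter names are the modern /proc/sys equivalents, not necessarily the exact knobs from the disclosure, and the values are illustrative only):

```shell
# Illustrative only - names and values are assumptions, not the benchmark's.
sysctl -w net.core.rmem_max=10485760          # max socket receive buffer (~10MB)
sysctl -w net.core.wmem_max=10485760          # max socket send buffer (~10MB)
sysctl -w net.ipv4.tcp_max_tw_buckets=180000  # cap on TIME_WAIT sockets
```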
Re:Ignore this! (Score:2)
The administration style used to administer and install linux boxes is *NOT* the same that is required for NT boxes. Not that this is a good thing, mind you, but if you approach NT work in the right manner, it can be done quickly.
I roll out new workstations in ~20 minutes now, unattended. (Norton Ghost, several other nifty autoconfiguration things I whipped up, some neat network stuff)
Also, choice of servers is usually due to the fact, still, that *nobody out there has Linux/Unix experience*. It is still perceived as something that the general admin 'cannot understand'. THAT is the REAL PROBLEM.
I've tried to roll out Linux servers at many companies. Their main reasons for not doing it are that they CANNOT SUPPORT IT. They don't have any Linux people.
Believe me.. if the other admin in the company was a linux nut like me, I would have *no* problems convincing management to go with it.
Re:Ignore this! (Score:2)
Actually this depends completely on the amount of experience required. If you want to get an NT admin with the same amount of experience as your "experienced" NT admin, then you will generally find that the NT admin is more expensive. Likewise if you want a Linux admin with the same level of expertise as your recent MCSE graduate then you will probably pay less than you would for an NT admin.
This doesn't even get into the fact that with Linux upgrades are free, and hardware stretches a lot further. Nor does it recognize the fact that most Linux admins are capable of adminning far more hosts.
The fact of the matter is that Microsoft has been pitting the skills of entry-level MCSEs against hardcore Unix veterans in their TCO evaluations for some time now. They have completely glossed over the fact that hardcore NT veterans are often more expensive than their Unix counterparts. The popularity of Linux, and its down-to-earth prices, have made it relatively easy to get hold of junior-level Linux admins at rational prices. Heck, colleges these days are pumping out kids who know Linux like it was going out of style.
Interpreting the results: The REAL bubble! (Score:3)
1) the maximum file size in the SPECweb99 benchmark is 900KB, which is why there is a 1MB limit set. Your claim that there are 1MB objects in the benchmark is false.
2) the CGI executable is mandated by the SPECweb99 Run Rules: a process must be created and destroyed. But the total amount of CGI requests is 0.1%! All the other 99.9% of the workload was handled with IIS 'low application priority' modules, which are DLLs loaded into IIS's address space, not .EXEs.
3) the IIS object cache was set to 2GB (not 2MB). It's set to 2GB because Windows 2000 + IIS has a serious limitation: threads (such as the IIS threads) can only address 2GB. This is a design flaw in Windows 2000, which haunts them in the enterprise now.
4) are you really seriously promoting the idea that the top 4 PC OEMs (Dell, IBM, Compaq, HP) and Microsoft did not tune IIS to the max and somehow conspired in making Linux+TUX numbers look good?
Fact is, the only reason the TUX result was compared to the same Dell system is that the Dell system also happened to have the fastest Windows 2000 results. Your whole line of argumentation is obviously flawed if you compare IBM's similar Windows 2000 SPECweb99 result to the TUX result [spec.org].
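The cost gap in point 2 - a fresh process per request (CGI) versus an in-process module - can be sketched with a toy measurement. This is pure illustration: the handler and the spawned command are made up, and the absolute numbers are machine-dependent; only the ratio matters.

```python
import subprocess
import sys
import time

def in_process_handler() -> str:
    # stand-in for a module living in the server's own address space
    return "Hello\n"

N = 20

t0 = time.perf_counter()
for _ in range(N):
    in_process_handler()
in_proc_s = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    # stand-in for CGI: create a fresh process, run it, tear it down
    subprocess.run([sys.executable, "-c", "print('Hello')"],
                   capture_output=True, check=True)
cgi_s = time.perf_counter() - t0

ratio = cgi_s / max(in_proc_s, 1e-9)  # guard against a zero-length timing
print(f"process-per-request cost is roughly {ratio:,.0f}x higher")
```

Even at 0.1% of the workload, that overhead is why the Run Rules single CGI out as a mandated worst case rather than letting vendors serve everything from in-process handlers.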
Re:Ignore this! (Score:2)
--
So what? (Score:2)
I've stopped caring about linux now. I think Open source is a great thing. I like CLI's, so naturally I like the unix idea. But until something major changes, I don't think Linux will take over from microsoft in the consumer-arena. Why you ask? Because
1) It's very difficult to configure. I have very mainstream hardware. Nothing funky on the motherboard, 3Com NIC, graphics card from Diamond, SB soundcard, etc. But I could never get everything to work at once. Keep in mind I'm fairly computer literate (I've only built my own computers since I was in 8th grade...) and I know what I'm doing. But I could never get everything to work together, and this is with three different distros, keep in mind. If someone like myself, who knows about computers, can't get the damn thing working, what makes people think that average joe-consumer and idiot-boss will be able to make it work on their computers? And that's not even getting into installing software ("make install" my ass, there's better ways to do things if you want it made easy). "Well, just buy it pre-installed then!" you might say, which brings up
2) A Micro$oft OS is pre-installed on almost every computer on the planet. "But Dell has Linux preinstalled on some laptops!" If they do, they're not making it very visible; a search a couple weeks ago in their home-user laptop section turned up nothing with Linux. Ditto for the small-business section. Which leaves the average joe to install it himself. Refer to 1) for the impossibilities of that happening.
Now, you're probably wondering why I said "Be zealots" up there, right? Well, that's my solution. I think with the right pushing Be actually has a chance against the gorilla. Unfortunately, it doesn't look like that's going to happen; it looks like another OS/2 (that was a fun one to play with, btw). Coulda, shoulda, but didn't because of piss-poor advertising. Make no mistake, I think Be is great, I use it as often as I can. It's easier than anything to set up (just install it) and it works great out of the box. I urge everyone to try it. Hell, it's even free [be.com]. And parts of it have been open sourced.
My rant is done. I guess I could sum it up by saying " We're screwed, the good stuff always gets squashed by the gorilla ". Have a nice day :)
Predictions for the moderation: Troll, Offtopic, Flamebait. Let's see how close I get.
Linux is only Free if your time is worth Nothing
Re:the crucial difference (Score:2)
OK. I want to do a Linux vs. W2K benchmark. I'm going to run it on an IBM S/390. Fair?
--
Skewed results?? (Score:2)
I'm sure each vendor did everything they possibly could to improve their SPECWeb99 results, since it's in their best interests. Does this mean that Linux is just better overall? Does it mean that it can be twisted the most to win any benchmark if you try hard enough?
The notion you get from reading linux-kernel is that they're totally against patching the kernel just to win a specific benchmark, but it obviously did very well in this one.
Then again, there are lies, damn lies, AND BENCHMARKS. I see this as being more credible than the Mindcraft benchmarks (Mindcraft, haha, that sounds suspicious) since it wasn't simply NT vs. Linux and multiple vendors are involved.
It would have been interesting to see FreeBSD thrown in, just because it's another open source system. Maybe there's a trend here? Easier to tweak open source systems to win benchmarks? Maybe they're just clearly better? Hmm.
Not everyone wants freedom (I do though) (Score:3)
So while the benchmarks don't directly impact on us, their influence over business computing does give benchmarks some significance.
tangent - art and creation are a higher purpose
Errrm, 4... GIG?? (Score:4)
Reality check guys. Does anyone have 4 gig of external connectivity? And doesn't 4,200 simultaneous connections of 350kbit/sec each represent, like, Yahoo? (without doing the sums)
This would also seem to spur a more serious debate about web performance testing. If we can get a single server to munge through this kind of quantity of throughput - why have clusters of servers at all? Clearly real-world servers perform nothing like as well as this, and we need to have a better look at why.
Dave
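Doing the sums the poster skipped, using his own figures (4,200 simultaneous connections at roughly 350 kbit/s each - SPECweb99's actual per-connection floor is a few hundred kbit/s, so treat this as rough):

```python
# Back-of-the-envelope aggregate throughput from the poster's figures.
connections = 4200          # simultaneous conforming connections
kbit_per_connection = 350   # poster's per-connection estimate
aggregate_gbit = connections * kbit_per_connection / 1_000_000
print(f"aggregate ~= {aggregate_gbit:.2f} Gbit/s")
```

That works out to about 1.5 Gbit/s of sustained throughput - still an enormous amount of external connectivity for the year 2000, though somewhat less than the "4 gig" in the headline.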
Re:the crucial difference (Score:2)
What surprises me is the fact that even though NT relies on threads heavily and IIS is so tightly integrated with the OS it still lags behind.
Re:Ignore this! (Score:3)
Think about it from the point of view of someone who is trying to justify a Linux web server in a business environment. I'm going to assume that most businesses have a budget dealing with what they're going to spend for the year on equipment and software. Isn't it worth it to prove that you could save hundreds of dollars on Windows and the licenses if Linux met the business's needs?
Say the web server is going to be spitting out static HTML on DSL or a T1, what's the point in having an NT/Win2k box for that when Linux or BSD would do the job for a considerable savings. The money saved would be money for another project.
If you're working for a company, that's money to spend on replacing some of the old junk that gives you problems (10mbit NICs, hubs where you need switches, a few larger hard drives... etc).
If you're consulting, that's more value to the customer while accomplishing the project. The last thing I want to do is give an improper solution when my reputation is on the line.
I think that the more "fair" benchmarks out there the better. Even Mindcraft's benchmarks were helpful because they showed how far linux had to go in certain situations. Right now in my opinion the more linux gets talked about the better, it needs to become a household name before a lot of business owners will consider it.
I really need one of these (Score:2)
I've got a 66MHz 486 running GNU/Linux, 450 days of uptime, serving up to 10,000 hits a day...
Danny [danny.oz.au].
More whore based comments. (Score:2)
Re:Where's the variations in hardware and software (Score:2)
ok, you're wrong. Mac OS X Server has been out for well over a year, and it does a handy job of serving up web pages with Apache. also WebObjects, from the NeXT world, is a nice piece of software for delivering web-based applications.
more info can be found here [apple.com].
also, how do you figure that Macs are any less "designed for this" than x86 boxes? the G4 is a damned powerful, quality-built piece of equipment--better than most x86 boxes slapped together at some cheap ISP. sure, it may not be a high-end Sun box, but there's no reason a Macintosh can't serve web pages with the right software (and better than an x86).
-j
Re:NT still sucks (Score:2)
But businesses can. Maybe they can't pay more than minimum wage cause they spent all their money on hardware and software licensing...
because none of the godamn computer companies want to hire us because we don't have "degrees" or "credentials" or any of that bullshit
It doesn't take "degrees" OR "credentials" to at least make more than minimum wage, if you have any sort of technical intelligence at all. You may not have your dream job, but you can definitely make above minimum wage...
Besides, our high school has a several thousand dollar tech budget and a T1 line and the shit still crashes every time you turn it on. That's no exaggeration, that's the literal truth
Uhm.. "several thousand dollars" doesn't go very far when it's spread out across an entire school. Does that cost include the T1 line? If so, you have even less money to spend.
"Why can't I be a network administrator making 6 figures? I mean, I know I'm still in high school and have never had a job before, but just wait 'til you see the job I'll do! I'll take all those servers and reload linux on them, and they'll run so much faster you won't even need half of them! Then I'll take them home and make a beowulf cluster out of them to crack DVDs and encode MP3s."
"Security!"
[ok, so maybe i overstereotyped at the end a little bit]
Re:two words.. (Score:2)
If you think that those Windows 2000 systems are not tuned well enough, then more power to you; I'm sure you'll be hired immediately by any of these companies - good SPECweb99 performance is a top priority for every hardware vendor.
No thanks; did that for a couple of years (I used to work on capacity planning tools for mainframe and server applications). I'm much happier writing cool applications for Sierra.
Threads, Processes & NT (Score:2)
Re:Where's the variations in hardware and software (Score:3)
You mean besides the military [slashdot.org]. A G4 running OS X Server is nothing to sneeze at [apple.com], and if memory serves correctly, web serving on the mac using WebObjects is a pretty sweet combo... If you are going to include commodity x86's then you should include macs...
course if you're running linux PPC, then you can run Apache on a G4 and really rock and roll......
Re:the crucial difference (Score:4)
No, the malignance was just. Even though Mindcraft II addressed some of the obvious technical problems with the first round, they still ran an extremely odd benchmark on an extremely odd selection of hardware, which left them open to charges of having tuned the test to provide the desired results. (I.e., "Here's one we can win!") These charges were confirmed by the suite of benchmarks run by c't at about the same time, where Linux won on almost every test, even though there was a realistic and reasonable variety between the specific c't tests, rather than a single bizarre test as in Mindcraft. (Another poster has given a link to those results.)
Even though Red Hat was foolish enough to participate in Mindcraft II [*] and thereby gave the benchmarks an appearance of legitimacy, many of us said in advance that we would not accept the results if they did not use a more realistic benchmark on a more realistic selection of hardware. I, for one, still stand by that.
It's absurd to put any stock in a benchmark that is sponsored by a company with a direct interest in the outcome and that does not even reflect a standard benchmark.
[*] Or not, as the case may be. Perhaps they were just trying to get a close look at the behavior so that they could get started on their "add on". Indeed, this may be what happened - see the details [spec.org] and notice the "Each NIC IRQ bound to a different CPU; Each TUX thread's listening address bound to 1 NIC's associated network", which sound like a direct response to Mindcraft.
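The "Each NIC IRQ bound to a different CPU" note quoted above corresponds to standard Linux SMP IRQ affinity tuning; a hypothetical sketch (the IRQ numbers here are made up - check /proc/interrupts on the actual machine, and the echoes need root):

```shell
grep eth /proc/interrupts             # discover each NIC's IRQ number
echo 1 > /proc/irq/24/smp_affinity    # hypothetical eth0 IRQ -> CPU 0 (bitmask)
echo 2 > /proc/irq/25/smp_affinity    # hypothetical eth1 IRQ -> CPU 1
```

Pinning each NIC's interrupts to its own CPU keeps interrupt handling cache-warm and avoids all four processors contending over one card's traffic, which is plausibly why the disclosure calls it out.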
> I am slightly curious whether this "web server add-on" is available to consumers
The linked page says that the "HTTP Software", "Operating System", and "Supplemental System" (whatever that is) will be available in August 2000, so it does sound a bit vaporous.
--
Comment removed (Score:3)
And you assume Dell is objective because...? (Score:2)
So why not make sure Linux wins a benchmark in an area where they know Linux is popular already? Fudged numbers and fudged hardware aside (we'll assume they were honest), it would certainly seem a logical thing for them to do. As is evident from the results page [spec.org], they blew away the competition with TUX 1.0. I've been unable to find any information on this (please enlighten), but it appears that it's a kernel patch, because the options are set with the kernel interface. Is this the khttpd that was discussed after the Mindcraft fiasco?
In any case, if it's in kernelspace, it's most likely not a full-featured HTTP server like Apache, Zeus, IIS. So it can spit out static pages as fast as you'd possibly need. Big deal. Fireworks accelerate faster than space shuttles, but you wouldn't create dynamic content with fireworks. (Erm... where was I?)
My point is, Dell has proven that a specially designed static page server is faster than servers with more features. That doesn't really tell us anything we didn't know. It doesn't demonstrate that one OS is better than another, nor does it make deployment decisions any easier (except for fools).