Mindcraft Study Validated
A reader writes "Another study has appeared validating the
Mindcraft comparison of Linux and NT. This time,
PC Week benchmarked Solaris, Linux, and NT. Using a monster machine, NT handily defeated Linux. The study found fault mostly with Apache. (For low-end machines, Linux would easily beat all comers; but how far along is Linux in the high-end market?)"
Keep them up (Score:1)
Hammer servers with hits like that for about a week straight and then we'll see whose servers are still up and running and which machines will require a reboot.
Re:once again the numbers prove it (Score:1)
I would recommend a course in statistics before making such a bold statement that the numbers prove it. Numbers in themselves do not prove anything. A single data point for a quad-processor machine means nothing for the real user.
F1 power on your server. (Score:1)
If you have a good track and fair weather, you'll get some pretty fast lap times with the F1 car tuned to the maximum. This approach is, however, not practical. In an F1 car you get a bumpy ride.
The engine blows after about 1000 km, and you have to refuel and change tires every 200 km.
An F1 car will also carry only one person at a time (one prototype with a passenger seat has been built). Driving an F1 car is also a painful, tough exercise.
But, put a little water and a few LARGE bumps into the circuit and the situation changes. In your regular Mercedes you can enjoy the ride and still make the trip in quite a good time, while listening to music and not having to repair the engine and refuel as often. You can even bring along a few friends to make the trip much more enjoyable.
My advice: don't waste money and time on building an F1 car for the public; instead, enjoy the ride that you get from your ordinary Linux.
4x100M cards? (Score:1)
Isn't this just a problem of Linux not being able to use the 4x100M cards to make a full-duplex 400M pipe?
The Linux performance falls off at around 200M, just what I'd expect for one card.
Or am I missing something obvious?
Re:Comparison (Score:1)
What is this infamous kernel 2.2 improvement (Score:1)
Does Caldera come pre-configured with encrypted password support?
Did they just forget to run Win95PlainPassword.reg ?
Is it some evil plot by the Linux community to keep the support costs down by keeping the clueless far away from Linux :) ?
This is like a bad rerun from 3 years ago ... (Score:1)
In reality, WebSTAR running on a low-end Mac under Apple's brain-dead TCP/IP stack could saturate a T1 line. Running on Apple's top-of-the-line hardware, it could match a Sun box at up to T3 speeds. Yet all the benchmarks were at 100Mbps LAN speeds and showed WebSTAR getting butchered when over 50 simultaneous clients started hammering it. Sound familiar?
Back then everyone who knew better stayed quiet (you know, all the geeks who admin server farms and read slashdot all day) since it was "just Apple, so who cares." Maybe if people had complained we wouldn't still be seeing these benchmarks 3 years later.
You're dreaming (Score:1)
The shortcomings of the Linux kernel have been known for ages. Linux first appeared in late 1991, and as early as 1992 Linus acknowledged that a microkernel design would have been better.
"True, linux is monolithic, and I agree that microkernels are nicer. [...] From a theoretical (and aesthetical) standpoint linux looses."
They had years to fix the shortcomings. Fact is, the Linux kernel _architecture_ evolves at a snail's pace. Just because a new kernel gets released every other day doesn't imply that it evolves in any meaningful way. Don't forget that there's a new release for every new driver, and also there's practically NO serious internal testing performed by Linus.
Proprietary kernels probably evolve much faster but you don't get to watch it.
Lastly, good kernel programmers are rare. If you were one of the few would you rather spend your working hours coding for love and Linus or earning good money instead? Fact is, everything about Linux is _mediocre_. No great inspirations, no brilliant minds involved, no breakthrough progress. Many people don't mind that. It's "good enough" for them. I personally can't stand it.
You're wrong. The linux crowd *are* taking notice (Score:1)
While the Mindcraft study does not seem credible (for a start, it was not independent), this study is not severely flawed. However, all the study shows is that Linux has its limitations as a high-end server.
What is also interesting, however, is that NT is actually a weak value proposition on a high-end machine. Take a look, and you'll see that it can't hold a candle to Solaris (even x86 Solaris), especially considering Solaris' superior reliability. HAND, -- AC
We Need A Real World Benchmark (Score:1)
We need to design and build a real-world benchmark: one with a very realistic mix of static, dynamic, and secure HTML, plus both high- and low-speed user connections. Also include content searches, like looking up T-shirts and catalog IDs in a catalog. I also want the client machines to look like a mix of machines, and to behave in a manner similar to a real user. That is to say, one connection calls for a page, pauses for a random number of seconds, then asks for another page, or does a search, or something. Each page returned will need to have a reasonable mix of images and other content.
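A hypothetical sketch of that pacing model (every name and parameter here is illustrative, not an existing tool): each simulated client draws up a plan of actions with random think-time between them, which a real harness would then execute over HTTP.

```python
import random

# Sketch of the "realistic user" pacing described above: each simulated
# client performs an action, pauses a random number of seconds, then
# performs another. build_session() only produces the plan; a real
# harness would execute it with an HTTP client against the server
# under test. Action names and pause bounds are assumptions.

ACTIONS = ["static_page", "dynamic_page", "secure_page", "catalog_search"]

def build_session(num_steps, min_pause=1.0, max_pause=10.0, seed=None):
    """Build one simulated user's plan: a list of (action, think-time) steps."""
    rng = random.Random(seed)
    plan = []
    for _ in range(num_steps):
        plan.append({
            "action": rng.choice(ACTIONS),
            "pause_after": rng.uniform(min_pause, max_pause),
        })
    return plan

if __name__ == "__main__":
    for step in build_session(5, seed=42):
        print(step["action"], round(step["pause_after"], 1))
```

A fleet of these sessions, run concurrently from a mix of client machines, would approximate the load shape the post asks for.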
To do this benchmark some things will be needed:
Re:Comparison (Score:1)
It's stupid to take such a test completely out of context of the real world. Use both, then use the one that sucks less and you will probably use Linux/FreeBSD.
Besides, it's free!
The Good News - ZD likes Linux (Score:1)
Despite performance lagging in certain tests with certain hardware, the mainstream Windows-loving computer press has run several very long, very positive feature articles about Linux, highly recommending it as a server vs. NT. I'm talking about within the last few days, with full knowledge of the disappointing comparisons in these tests. One even had a chart with a feature-by-feature comparison in which Linux came out better. Not to mention the A+ Red Hat 6.0 review featured here.
Sites include several ZD subsidiaries (PC Week, etc.), CNet, Wired's Web Monkey, and others I can't remember. Check the Linux Center (French or English version) site for a list.
While there is certainly pressure within certain divisions of ZD and the ilk to run performance tests tilted towards MS, in general the journalists are being very fair, if not giving Linux the sweetheart treatment. And, at least in the test which is the topic of this thread, Solaris was also thrown in, perhaps to show that a Unix system designed for high-end equipment easily beats NT even with hardware and conditions designed to show NT in the best possible light.
NT won't even run on what Linux and Solaris will run on (non-Intel), but even in foreign territory Solaris performed better.
I am not a sysadmin and don't have much knowledge of networking, but business is business. It seems that to make Linux look better, it may be necessary for companies basing their business on Linux, like Red Hat and Caldera, to pay for their own tests under conditions highly favourable to Linux - and to publish the results prominently.
IT IS NOT UP TO THE "COMMUNITY" to do any of this.
Linux is not a company, but companies basing their business on Linux are in direct competition with Microsoft.
Mainstream computer journalists have prominently published charts comparing Linux to NT point for point - it seems to counter MS's "challenge" page in which very unfair and dishonest charts comparing NT and Linux were prominently displayed.
Are ZDNet and Wired doing RedHat's job for them?
It seems so.
Whining in posts here about what is wrong with tests showing Linux unfavourably just compounds the damage. It comes across as sour grapes.
Run your own tests and publish the results, email
RedHat and Caldera to do that, or shut up. Hell, nobody cares if you fake the data. Who will know?
I hope the people responsible for the kernel have better sense than to make unwise modifications just to make Linux perform better in certain artificial benchmarks. A better approach is just to keep developing in a way which is natural to Linux; eventually these benchmarks will take care of themselves.
In the meantime, if your heart bleeds for Linux, publish and advertise the many areas in which Linux excels as a server, and thank the journalists in the mainstream media - whom you seem to hold in contempt - for taking the initiative to do that for you. At least they deserve a thank you.
Re:Well. Microsoft will just improve NT faster. (Score:1)
[Microsoft] will take another step forward, and then when Linus and Cox try another kernel release, Microsoft will release yet another.
Linux is currently improving at a much faster rate than NT. How old is NT 3.5, and where was Linux when it came out? How far has NT come since 3.5? In terms of the work being done on the kernel, and in terms of third-party support, Linux has closed the gap, with SMP support appearing in 2.2 and several (big) third parties standing behind Linux (IBM, Dell, Oracle, Corel).
As for the number of developers
BTW, your point about all the MS developers being too busy with Win95 to work on NT doesn't wash: don't forget, Win95 is about four years old now. In 1995, Linux was in its infancy.
Face it. It's lost. Linux will continue as a hobby for Linux users, but in 5 or 10 years NT will run in everything from your car to your toaster to your house security system.
Fire away with your predictions, but at present Linux retains growth in market share at the expense of NT. What makes you think that the rapid adoption of Linux will suddenly halt? All factors that have the potential to influence growth are on the rise. Factors such as third-party support, critical acclaim, availability of applications, mindshare, and large-scale deployment, as well as the OS itself, have taken leaps and bounds over the last year, whereas NT has either stayed still or even slipped a little. I wouldn't bet on NT killing Linux any time soon.
-- AC
NT vs. Linux (Come on!) (Score:1)
Using stock binaries (Apache) from Red Hat or Debian or anywhere ain't gonna cut it.
I think it's time the Linux community makes some moves and shows how Linux can compete in the enterprise market, instead of letting these bullshit benchmarks make it look like shit!
What Does it Matter? (Score:2)
So what if Apache is slower than IIS... it is certainly more reliable and configurable than IIS, and since the average site doesn't have 100Mbps of bandwidth, raw speed is a moot point.
In the real world, the only companies that can afford the systems tested here are the same ones that can afford to pay for loads of NT licenses and the support staff that would be required to run the servers. Chances are these same companies are already in bed with MS and are using NT anyway. Linux's target audience on the other hand is the little guy, who can't afford all the expensive crap and wants a system that doesn't have to be watched 24/7. Maybe they're a college student or someone with a home business... the point is they understand price/performance tradeoffs, and know how to make the smart decision.
In short, Linux is open, and even if it only performs half as well as its nearest competitor, that makes it twice as good.
Benchmark purchasing stupidity instead! (Score:2)
NT4 Server = ~$809.00 (5-user pack)
Linux/FreeBSD/etc. = free, unlimited users. Throw in ~$80 for a secure server.
If you want email and such:
BackOffice/Exchange = up to $5000.00, depending on number of users.
Sendmail/IMAP/POP = free.
We all know you get what you pay for in this world. Considering prices, numbers (which don't mean crap anyway), and the reliability of Linux/FreeBSD/etc., free does pretty damn good in this area and in my book. Cost is the ultimate equalizer.
Why would I spend thousands of dollars and trade reliability/stability for about a 10-20% increase in speed when my speed is adequate enough? Are we all stupid?
We should benchmark purchasing stupidity or gullibility instead.
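For what it's worth, the arithmetic on the list prices quoted above (the figures are the post's own 1999 numbers, not verified):

```python
# Tallying the list prices quoted in the post above. These figures come
# from the post itself and are not verified against vendor price lists.
nt_stack = 809.00 + 5000.00       # NT4 Server (5-user pack) + Exchange at the high end
free_stack = 0.00 + 80.00 + 0.00  # Linux/FreeBSD + ~$80 secure server + sendmail/IMAP/POP

print("NT stack:   $%8.2f" % nt_stack)
print("Free stack: $%8.2f" % free_stack)
print("Difference: $%8.2f" % (nt_stack - free_stack))
```

On these numbers, the NT stack costs over seventy times the free one before any per-seat licensing.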
And the winner is ... NOT NT (Score:2)
However, the tests also show that if Linux wants to go head to head with the "heavy duty" Unix flavours such as Solaris, there's work to be done.
--AC
Re:Comparison (Score:3)
[...]
While he is undoubtedly a highly talented programmer, I think that there are engineers in the world who are at least, if not more, skilled working for Sun, CMU, Microsoft, DEC and suchlike whose work has proved Linus to be very wrong.
Pardon, could you please tell me exactly which of the above claims (microkernels, Plan 9) were proven wrong by Microsoft engineers?
I don't want to say Linus is good and everything he says is right, etc., but I want to see plain facts.
But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.
I'm far from saying Win NT should be avoided at all cost - heck, I use what does the job best for me. But do you want to say for very high traffic, dynamic web sites you would like to use Windows NT?
OK, this is not a server issue, but it is Microsoft, so here follows an example of "mature" software and an answer to the question: Why is regedit so big? [ncl.ac.uk] (The Risks Digest, Vol 20:35)
Re:NT is less mature than the Unix design. (Score:3)
VMS is The Other Unix. It was designed by DEC, and Cutler was the primary architect. It paralleled Unix in many ways, although it was not as consistent in design, nor as easy to use (as a system). However, it has auditing features and access controls that Unix (and Linux) could really use. VMS was designed to control every little security detail, whereas Unix was designed around trust and flexibility.
K&R were hardly "amateurs." They were working at Bell Labs on Multics, which was going to be a "real" multiuser OS. But its design was too baroque, with too much squeezed into it; so Ken Thompson (with Dennis Ritchie) designed a better, simpler system in his spare time. (Bell Labs is a very open environment.)
Now, here's a question to ponder: Multics failed because of its complexity. Can you think of any other operating systems that try to squeeze everything into the OS? If so, can you defend that design in light of history?
Re:You're dreaming (Score:2)
Man, this has to be one of the worst misrepresentations I have ever seen. Yes, Linus did write that in the famed "discussion" (really a flame war) with Andrew Tanenbaum, but he was defending his architectural choice, not conceding a mistake.
Basically, the argument that popped up in the discussion was that a monolithic kernel is quicker and easier to implement.
Anyway, people can go read the USENET thread in question [kde.org] themselves.
---
Ouch! (Score:1)
The real report still has some nice things to say about Linux, and hopefully this whole mess will give us the kick in the butt to start making everything go faster and be better.
As customers, we especially need all the Linux distributors and hardware resellers to start working together instead of wasting time "fighting" each other. An industry-wide consortium to develop better hardware, with everyone contributing a fixed percentage of their net profit, would be nice. That money could be funnelled to the developers through something such as SourceXchange (http://www.sourcexchange.com).
Still, it wouldn't have been possible just a few months ago to have a comparison of Linux with Solaris, NT, Novell... And since those "mainstream" NOSes are often only affordable to bigger corporations, Linux has its market cut out!
Re:Comparison (Score:1)
Well, this is the biggest load of crap I've read today (I read the ZDNet article last week). By this logic, we should all be running VMS!
Only platform that ran _all_ OSes (Score:1)
These guys don't understand Linux or NetWare (Score:2)
1) Linux:
Well, they used Red Hat 5.2 w/Linux 2.2. Of course, RH5.2 doesn't _come_ with Linux 2.2, so they compiled their own kernel. The possibility they messed up something there is very high. Apache is at a severe disadvantage compared to the other HTTP servers not only because of the lack of multithreading support (which I still wonder _how much slower_ that makes Apache) but also the lack of a reverse cache. Maybe that's Apache's fault, but it is easy to remedy, esp. since Red Hat includes a SQUID RPM.
I especially find it interesting that elsewhere on ZDNet you can find not only the old-news test of NT vs. 3 Linux distros+Apache+Samba (in which Linux/Samba/Apache trounce NT), but also a newer article in which (Caldera, I think) Linux+Apache again do the same to NT. (I can't find the URL right now -- I think it was in Smart Reseller?) Just goes to show that benchmark results depend as much on the benchmarker as on the benchmarkee.
2) NetWare:
First, these people start off dissing Client 32 (Whose name now, BTW, is simply Novell Client). Am I the only person who realizes that *Microsoft Client* means *Client for Microsoft stuff*? Besides, if Microsoft had implemented NetWare Core Protocol properly in Win9x, Novell wouldn't have *had* to write their own client software. In fact, Novell Client (or Client 32, or whatever) more or less *fixes* things wrong with Win9x networking so things run more smoothly. NC also has an adjustable file cache and can even restore network connections after a server has been rebooted (MS half does this). Novell Client is also a benchmarker's dream since virtually every option can be tuned. NC also enables Novell Directory Services to manage PCs; in fact, it is the _best_ way to manage NT workstations (they sort of glossed over that). What did they focus on? Experimental oplock support. A feature that is not only useless in a shared environment but, more importantly, is a bad test of network throughput since the file is only touched when you open and close it.
And, going back to their opening paragraph, they remark that by porting NDS to other platforms, Novell is "leaving little to drive new...deployments" of NetWare. That's one of the good things about NDS, and one of the few things driving Novell's return to profitability. You'd think they'd be happy NetWare uses NDS to work in heterogeneous environments, especially given their overall conclusion that no single NOS stands out in every field.
Of course, they did realize the NetWare file-and-print services are still single-processor tasks, even with NetWare's new MPK (multiprocessing kernel), which is a real failure on Novell's part, IMHO.
Don't be ridiculous (Score:1)
Let's also assume that both boxes require an equal amount of maintenance (not necessarily so!)... but the Linux admin does it remotely, where the NT admin has to make house calls or wear a beeper. We'll average it out over the course of a year and say that the NT admin spends 2.5 times as many hours working, and then let's call it six minutes of maintenance/2200 for the Linux box, 15 minutes of maintenance/2200 for the NT box.
O_O (Score:1)
Yah- I laughed! But I don't know whether I should be crying instead! 96 servers? That's an _awful_ lot of their own dogfood to be eating, wouldn't you say?
Meanwhile, here in Brattleboro, somebody has sold the local Co-op a cash register system that all works on W98 (very possibly including some sort of NT server), and they're still struggling with it. It's easy to see those 'uh-oh' dialog boxes popping up and think the whole problem is unreliability, but it's worrying to think that even _if_ the system works perfectly (which it doesn't seem to be doing) the Co-op has no idea what kind of financial trouble it's now in, maintaining and paying off that system. How soon until they are deemed to need another NT server or three?
Laugh _and_ cry. This sort of thing will kill stores you love, and make many people poorer.
Fork Apache! (Score:2)
One: regular Apache, which would be used for actual HTTP serving.
Two: 'Apache Pro!', which is tuned for static page serving at all costs and sacrifices every other purpose (reliability, stability, dynamic pages, whatever) just to produce benchmark numbers.
Then people can go on using Apache for _real_ web servers, but for the benchmarks, you ask them 'Why the hell aren't you using Apache Pro? You trying to handicap the race here?' and get them to use Apache Pro against NT- the 'bytemark version' (!)
Wouldn't that work? It has to be called 'Apache Pro' though, because it has to have the name Apache and it has to seem like the 'more industrial strength' version. _We_ know that it'll be better to just run Apache, but PHBs and test runners will find it impossible not to use Apache Pro- they'll be trapped by their own assumptions of 'upgrading' and 'standards'. It would be much harder to get another webserver used in benchmarks, but if you call it 'Apache Pro'...
Re:Again... Zues! (Score:1)
Did you mean Zeus perhaps?
How many times does the "Linux Community, Inc." need to tell these people that Apache wasn't meant for speed!? Why is Apache designated as the One True web server? Benchmarking static Apache vs. static IIS is pointless. Any programmer worth his salt could cook up a few dozen lines of code that would outperform both servers on pure static content.
True, but no webserver should be that horrendous at serving static pages. While not the main purpose of most enterprise servers, some major servers do serve a lot of static data, and most of the rest serve at least a significant amount, so static serving speed is indeed important. Apache needs to improve in this field.
Rather than bitching about the benchmarks, fix Apache, then you won't have to bitch about the benchmarks anymore.
Re:Change to NT server please, Rob (Score:1)
Re:NT, Linux, NetBSD (Score:2)
The Goals of the Apache Project (Score:3)
In my experience, Apache is the most stable of all web servers, and the only one that comes close to implementing the whole HTTP protocol.
Speed is not the Apache Group's primary concern, and folks whose main concern is speed might consider looking elsewhere. Despite that, Apache is more than powerful enough to saturate a T1 with a relatively low-end machine (we have saturated a T1 with a Pentium 90 with 96M RAM running Linux), and a fine-tuned Apache can easily outperform just about any other web server (when we load mod_mmap we get performance tens or even hundreds of times what IIS can do on a good day).
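As a sanity check on that claim, the arithmetic on saturating a T1 is simple (the 10 KB average response size is an assumption; the 1.544 Mbps rate is the standard T1 figure):

```python
# Back-of-the-envelope: how many hits/sec does it take to saturate a T1?
T1_BITS_PER_SEC = 1.544e6      # standard T1 line rate
AVG_PAGE_BYTES = 10 * 1024     # assumed average response size, incl. headers

hits_per_sec = T1_BITS_PER_SEC / (AVG_PAGE_BYTES * 8)
print("~%.0f hits/sec saturate a T1" % hits_per_sec)
```

Under twenty 10 KB hits per second is well within even a Pentium 90's reach, which is consistent with the poster's claim.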
Samba FASTER than NT serving NT! (Score:1)
195Mbps vs 114Mbps
It's not all bad news!
OK, NT wins. Now let's benchmark this... (Score:1)
1) Performance of NT as massive numeric processor for simulations.
2) Performance of NT as specialized SEM driver.
On second thought, screw the second one. It's really not possible to go very fast unless you can add speed hacks to your driver.
If NT wins the first one I'll switch! (It never, ever will.)
Need more high end help. (Score:4)
I love Linux; I run an ISP with it. I am also working for a company that wants to release Linux on their high-end Intel boxes (hint: they currently support NT and do Unix on really colorful boxes). Anyway, I am having a tough time finding drivers and HOWTOs for doing high-end stuff with Linux: Fibre Channel SCSI, Gigabit Ethernet, heck, even handling more than nine SCSI drives at a time. The OS needs to grow out of "keep a 386 useful" to a higher level now.
Re:IIS and benchmarking (Score:1)
You obviously don't understand what is going on here. Code has nothing to do with it. The test data is static and just the right size to fit in IIS's cache but not the caches of other servers. This test data does not represent anything in the real world.
--Simon
It was a fraud beyond any doubt. (Score:1)
--
It's a myth because it's bullshit (Score:1)
The next question is who has the money to buy a quad Xeon with quad fast Ethernet NICs, but can't scrape together the change to get a gigabit NIC and switch instead? Uh, I'll take "no one" for $1000, Alex.
You're not nearly as realistic as you think.
This old data... (Score:1)
NT is the only reason these servers were quad-processor PIII/500's, and NT's resource hogging is exactly the reason that I don't run it on my personal box. I have it running on a P133, and I laugh at it sometimes.
If I had a high-end server, I'd try Linux and NT again, and... well, I bet I'd be laughing at NT pretty soon. It isn't exactly impressive running on uniprocessor PII/400's, but the bluescreens are cute. Really, how much CPU does an OS need just to crash?
Re:What this really shows... (Score:1)
These benchmarks expose nothing new: Microsoft will always try to bend and twist information to suit their needs (the first study was an absolute joke), and Linux has a long way to go before we can call it scalable and SMP friendly. It's just not ready for the enterprise. Not that I'd sleep well at night knowing my systems were running on NT mind you ;).
Benchmark vs. real world (Score:1)
All of these benchmarks remind me of a sucker bet a friend of mine made.
It seems that some guy boasted that his car could outrun anything present over any distance. My friend bet him $20 that he could beat the car on foot. Naturally, the challenge was accepted. My friend then marked the 'course' of 10 feet from start to finish, and the race was run. My friend won. According to that 'benchmark', my friend can outrun the car. (no car can outrun a person in reasonable health for the first 10 feet).
These benchmarks are the same. Now, I know that the next time I want maximum speed for 30 minutes at a time from a web server w/ 4 100baseT cards (and a network that can keep them busy), I should choose NT. Of course, if the constraint is multiple T3 and minimal downtime, Linux and Apache are the way to go. Guess which is the more likely scenario.
Re:oplocks (Score:1)
Of course they don't. The people who really know their stuff when it comes to computers wouldn't be caught dead working at places like PC Week. You might've found them on the staffs of the old COMPUTE! or Byte magazines, back when they really did cover multiple platforms like the Atari, Commodore, and Apple machines and PC clones. That kind of knowledge just doesn't exist anymore in today's computer magazines. Just take a look at what's on the magazine racks of your local bookstore or K/Walmart and compare it to what was on those very same shelves 5-10 years ago. It's really quite sad, actually.
Re:wakeup call for open source/linux (Score:1)
You wish. Linux's future is only beginning. NT's "15 minutes of fame," as you put it, is what's up. Take a look around you. Nobody's talking about NT dominating anything anymore. Take a look at what the school kids are running on their home machines. It's *NOT* NT. As they enter college they will bring their Linux/Unix knowledge with them. The only reason NT really got a foot in the door was that people were pretty much only familiar with Windows back in the early-to-mid '90s. This is what is changing, and why pro-Microsoft people like yourself are yesterday's news. Get over it.
Comparison (Score:2)
But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this.
However, for midrange work, Linux simply isn't up to par yet. I seem to recall Linus himself stating that he believed OS design was well understood by the 1970s [linuxworld.com], and that he considers microkernels "stupid", Plan 9 "stupid", etc.
While he is undoubtedly a highly talented programmer, I think that there are engineers in the world who are at least, if not more, skilled working for Sun, CMU, Microsoft, DEC and suchlike whose work has proved Linus to be very wrong. And as such, linux is crippled.
Re:Multiple servers + load balancing (Score:2)
but add in session management, personalisation, real-time news feeds, content archives, commerce, access control, extensible templating, dynamic page generation, and all the other stuff we do in the real world, and your solution starts to look quite naive.
What this really shows... (Score:4)
which does not use behemoth computers. NT needs the hardware to run well.
Believe me, NT does not run well, for instance, on my 450 MHz PIII with 128 megs of RAM, all things considered.
Linux has a history of keeping abreast with reality. When nearly everyone has a four or eight CPU monster, then Linux will run like hell on them, and so will applications such as Apache, etc.
When everyone had a 386, Linux ran well on a 386. When everyone had a 486, Linux ran well on that (and still does!). Linux is made to fit a need,
not to participate in olympic events.
I have an 8MB 486 at work on which I need Linux to run well. It does. In all likelihood, NT 4 won't even boot on such a machine. The machine has no keyboard or monitor, yet I can completely administer and upgrade it. NT would be useless on it since it requires a graphical display, mouse and keyboard for administration.
Re:Stupidity... (Score:2)
uptime: 52mins
Last time I looked it was 3 days. Maybe Rob should explain what's going on.
perl -e 'print scalar reverse q(\)-:
IIS and benchmarking (Score:5)
The other interesting point is that ZD came up with the IIS benchmarks specifically to show how good IIS is: things such as fitting the test harness in the cache, and only doing ISAPI DLLs for dynamic content (vs. CGI on other servers).
There are lies, damn lies, and ZD benchmarks. I'll use what works, and live happy in the fact that I won't have to reboot my server this year.
Matt.
perl -e 'print scalar reverse q(\)-:
Apache Lies (Score:3)
This is really starting to get old.
Apache running all CGI is compared against IIS running ISAPI, and - surprise! - IIS kicks Apache's butt. I wonder how things would look if we ran a mod_perl test and compared that to IIS running CGI. "News: Linux/Apache Provides 3.5 Times More Hits Than NT!" I will observe, for the record, that Apache, IIS, and Netscape all turned in exactly the same performance on CGI; no in-process dynamic test was ever done with Apache, so we'll never know, but I bet a mod_perl test on Apache would have produced numbers at least somewhat similar to IIS's and Netscape's.
And what's all this about Apache modules having to be compiled into the server? My Apache install has a directory full of dynamically loaded shared libraries. Exactly the same way IIS implements ISAPI modules. Only on IIS, you don't have the option of static linking for whatever reasons (less overhead, security, whatever).
I especially loved all the "process vs. thread" crap. Both PC Magazine and Wugnet (yes, the true authorities on Linux) were all over Apache's "process" model vs. IIS's "thread" model. But on CGI, you invoke a new process with each client request, no matter how many servers you've preforked or how many threads are idle. Presto: poor performance, no matter what the preforking parameters are.
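The process-per-request point can be made concrete with a rough timing sketch (this is an analogy, not a benchmark of Apache or IIS): a CGI-style hit pays for a fresh interpreter process on every request, while a resident handler (the mod_perl / ISAPI model) pays only a function call.

```python
import subprocess
import sys
import time

def cgi_style_hit():
    # Spawn a brand-new interpreter per request, as CGI does.
    subprocess.run([sys.executable, "-c", "print('hello')"],
                   capture_output=True, check=True)

def in_process_hit():
    # Handler stays resident in the server, as mod_perl or an ISAPI DLL does.
    return "hello"

def time_hits(fn, n=20):
    # Time n back-to-back "requests" through the given handler.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("CGI-style:  %.3fs" % time_hits(cgi_style_hit))
    print("in-process: %.3fs" % time_hits(in_process_hit))
```

The gap is orders of magnitude regardless of how many servers are preforked, which is exactly why comparing CGI-on-Apache to ISAPI-on-IIS says nothing about the servers themselves.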
You know, I wouldn't be all that surprised if NT beat Linux on this high-end hardware for various things in a fair benchmark. I'm just sick of hearing this kind of drivel from the MS camp. I almost hope Linus & Co. do Mindcraft III just so we can have a decent benchmark to compare against and some future directions for development instead of all this blatant lying.
What bugs me... (Score:2)
Excuse me? Who are they referring to as "the Linux vendor" in this situation? Some company like Penguin Computing? Red Hat? Linus?
Surkan: benchmark perf =/> good file server (Score:3)
http://www.zdnet.com/pcweek/stories/columns/0,4
It is surprising that he sings the praises of non NT OSes in their ability to use resources more efficiently on non high-end machines.
Re:You're dreaming (Score:3)
Umm, he said that it loses "from a theoretical (and aesthetical) standpoint". That is not the same as saying that it "would have been better" from a pragmatic standpoint.
"Proprietary kernels probably evolve much faster but you don't get to watch it."
Perhaps, perhaps not. The fact that "you don't get to watch it" means you can only guess (unless you happen to be one of the people who "get to watch it").
How much have, say, the Solaris or NT kernel architectures changed, relative to the extent that the Linux (or *BSD) kernels have changed? (BTW, neither of them is what I would call a microkernel, not even NT - NT's device drivers, file systems, and networking stack run in kernel mode, for example.)
I have the impression that at least some of the developers of kernel code for free OSes do both.
Re:Multiple servers + load balancing (Score:3)
deliver static documents. So the configuration he suggested is valid for the benchmarks he was responding to.
I think what every one of these "benchmarks" of Apache is missing is that delivering static content is the least of the reasons to use Apache. Slashdot is a real example of a dynamic website. No one is benchmarking dynamic content delivery through web servers.
-LL
Re:Change to NT server please (Interesting Idea!) (Score:2)
Actually, that would be quite interesting. Take an NT server (I have a copy I would gladly donate for this test) and install Perl, Apache, MySQL and mod_perl. Copy Slashdot over to the new machine and transfer the load and see what happens.
I'd wager that it'd fall all over itself.
The wheel is turning but the hamster is dead.
Re:NT, Linux, NetBSD (Score:2)
NetBSD may some day become the most appropriate solution; it isn't yet. Chuck Cranor has done a very good job on UVM, but it *is not finished*. Of the free Unices, the only one that has a virtual memory system that is state of the art as of today is FreeBSD. NetBSD and OpenBSD will probably get there; I doubt Linux will (due to the very strong defensive reactions Linus has towards some aspects of the Linux code). In some ways I hope I'm wrong - it would be a pity if that many people were left with an inferior VM system :-(
Eivind.
I'm sure that's the world we want (Score:2)
Don't you NT zealots see that statements like this only help Linux?
It gives those of us that actually enjoy computers for what they are more incentive to make sure things don't get worse than they are.
One company running the computer industry is just as bad as the railroad tycoons, or any monopoly that controls every facet of computers. I'm not sure even you would want to live in a world like that unless you were a Microsoft shareholder.
An interesting thing from these benchmarks (Score:5)
http://linuxtoday.com/stories/5906.html [linuxtoday.com]
This story reveals that Linux with Samba achieved 197Mbps, significantly higher than in the Mindcraft benchmarks -- which goes a long way toward invalidating the original Mindcraft results. Also, Apache did MUCH better in these benchmarks than in the original Mindcraft tests.
The article also shows that NT achieved only 150Mbps against NT clients; Linux was 31% faster. In tests with 60 clients, Windows NT managed only 110Mbps throughput, compared with 183Mbps for Samba.
So, we got something out of these benchmarks. Linux serves Samba to NT clients 31% faster than NT on high end hardware!
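One nit on the percentages: 31% is only correct with NT as the baseline. Checking the arithmetic on the quoted figures:

```python
# Throughput figures as quoted in the article.
linux_mbps = 197.0
nt_mbps = 150.0

# "Linux is X% faster than NT" takes NT as the baseline.
linux_faster = (linux_mbps - nt_mbps) / nt_mbps * 100    # ~31.3%

# "NT is Y% slower than Linux" takes Linux as the baseline.
nt_slower = (linux_mbps - nt_mbps) / linux_mbps * 100    # ~23.9%

print(round(linux_faster, 1), round(nt_slower, 1))
```

So "Linux serves Samba 31% faster than NT" is the accurate phrasing; "NT is 31% slower than Linux" would overstate the gap.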
Now, if only we had tested IIS against Zeus to make a fairer static-page benchmark, Linux wouldn't look so bad overall.
I don't see how these new benchmarks validate Mindcraft at all.
Stupidity... (Score:2)
NT, Linux, NetBSD (Score:5)
But it's pretty well acknowledged that NetBSD kicks ass in that department.
Time for Linux groupies to take the blinders off. Quit getting your shorts in a knot about the unfair Mindcruft tests, quit trying to pit Linux against NT in server applications...
...and start *heavily* promoting NetBSD as the ultimate server solution. Mob the media with it.
As long as you play by Microsoft rules, you lose by Microsoft rules. And fiercely protecting one's "turf" is a Microsoft rule.
Step out of that box. Quit promoing Linux as the be-all and end-all. Promo NetBSD as *the most appropriate solution* to server needs. Promo BeOS as *the most appropriate solution* to multimedia needs. And so on.
This tactic will emphasize to the media that people should make active choices re: their OS needs; emphasize that Windows is not the most appropriate OS for most cases; and emphasize that the Linux community plays big and puts the user first and foremost.
It's a no-lose situation. Choice is the ultimate goal.
Proposal: the ./ benchmark! (Score:4)
Every day, ./ chooses a web server on the internet at random. It then presents a link to that server somewhere on the front page, calling it the "benchmark link" or whatever (so people know what it's for). It is then ./-ed by the readers and simultaneously monitored for uptime. Its server OS and software are determined (where possible), and as the days pass, statistics are compiled on the average time each server OS lasted under that strain.
Not entirely serious, but a good "real world" benchmark, and I'd enjoy that.
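The monitoring half of the joke is genuinely easy to build. A minimal sketch (the URL is a placeholder): poll the chosen server and record what fraction of probes succeed, treating any HTTP error, timeout, or connection failure as downtime.

```python
import time
import urllib.request

def check_once(url, timeout=5):
    """One probe: True if the server answered at all; any HTTP error,
    timeout, or connection failure counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except Exception:
        return False

def availability(url, checks=3, interval=0.0):
    """Poll `url` `checks` times and return the fraction that succeeded."""
    ok = 0
    for _ in range(checks):
        if check_once(url):
            ok += 1
        time.sleep(interval)
    return ok / checks
```

Run it against the day's "benchmark link" with a long interval, and the availability number over a week is exactly the statistic proposed above.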
Two things (Score:2)
Anyway. Instead of simply blasting benchmark results that don't match what we expect, we should work to fix the problems in Linux, Apache, Samba or whatever is causing our bottlenecks. The fact that we can do that is one area of significant advantage that we have over NT.
Re:This is all kinda silly right now (Score:2)
So why reality is different? (Score:2)
The article presents some points one usually finds when working with Linux, Solaris or Novell. However, some points are really the result of not bothering to do any tuning. Besides, the article is purely biased toward "choosing the ONE final high-end".
Somehow the article suggests a person compare Linux & Solaris. Well, sorry, both systems have their ++ & --. However, I agree that Linux is mostly suited to an average computer rather than a super-high-end machine like an UltraSPARC 4500. Here Solaris beats it.
But does that mean that Linux is not a high-end system? Well, let's not forget the cluster systems. Even Linux has a place in the Top 500 supercomputers. And it beats some serious machines around there.
Somehow someone may have forgotten one of the contenders here. Novell NetWare is a very specific system, oriented mostly toward a very specific sort of task. But it does its job much better than Linux or Solaris, both in safety and in performance. And don't dare compare it with NT: one machine, now in its seventh month running Novell, couldn't even hold a simple transfer of 100Megs over the net under NT. Not to speak of performance (hey Redmond, I also like to burn some time with my family!)
Really, the NT stuff there is pure hype. In my "practical benchmark", NT servers lived no more than a month of real, serious work. After that very sad experience we returned to Novell. As for workstations, we recently wiped every bit of MazDiee soft from over 70 machines serving more than 2000 users. And as for "high-end", we don't even dream about Redmond. Everything runs on either Solaris, AIX or Linux.
Some people say that my relationship with NT is due to the fact that I didn't taste the "real thing", that I should have been more systematic in "tuning" it. I know only one _real_ thing. Two months ago I had several machines running with miserable performance and suffering several crashes every day. Now I don't hear complaints about slow performance, and the majority of workstations carry uptimes of almost two months.
Two months ago my wife had almost forgotten that someone else lived in the same apartment. Now I have some time to share with my family.
Oh, and besides: now we have the chance to make a high-end machine out of this workstation stuff.
Re:This is all kinda silly right now (Score:5)
What do we run? Netscape web servers on Solaris. When big news like the Starr report came out, all the servers at MSNBC running NT came crashing down under the load, but we didn't. That's what UNIX (and Linux) are about, reliability. Apache can be performance-tuned if you need it to be fast (Netscape's server is the same code base as Apache), but for most of us it's fine as-is. I bet that Microsoft.com doesn't get 2200 hits/sec.
Re:The OS not tested (Score:2)
I'd be willing to bet that 90%+ of those OS/2 servers have been replaced by Windows NT. Don't forget that a large portion of the OS/2 server market in the early 90s was Microsoft LAN Manager.
Warp 5 is essentially just a bone thrown to the legacy customer base - it's no shock they didn't test it.
--
Re:The OS not tested (Score:2)
I suggest you do just that -- Call up your IBM rep and ask them if they are actively selling "OS/2 Warp for e-Business" to any customers who are not currently running OS/2.
What you will find is that IBM is selling AS/400s and Windows NT support services. Sorry.
(Note that I never said that Warp wasn't/isn't a capable OS. It's just a capable OS that's seen its last major release.)
--
Re:It's only a "myth" because Linux sucks at it (Score:2)
What old data?
The only thing I have dug up is a Linux/Samba blurb on ZDNet, where no lab data was given. (It turns out that a member of the Samba team tuned Linux.) That and a bunch of anecdotal evidence that Linux runs faster and better than about anything on a Pentium-90 with 48MB.
I'm not saying that the recent benches are fair by any means, but Linux has gotten larger than a bunch of guys on the Internet. That means that objective data is going to come in (something that hasn't necessarily happened yet, especially on high-end x86 hardware), and some of that data is going to be sponsored by competitive vendors, and some of it is going to be cooked.
There isn't a commercial software product available that hasn't withstood this sort of 'objective' marketing attack, and especially when you're dealing with Microsoft, you have to do more than yell and maintain moral superiority. Someone (err, RedHat, Caldera, and SuSe) is going to have to post their own benchmarks and their own data.
(And, yes, Linux has enough commercial interests attached to it that you can count certain distros as commercial operating systems.)
--
Re:NT servers (Score:2)
If what I heard was correct, the NT conversion at Hotmail failed because of limitations in MS Exchange, *not* IIS.
I don't think there's any question that Exchange has its problems.
--
Re:You're dreaming (Score:2)
Not that this undercuts your point, but in the early 90s Sun was willing to dump the SunOS kernel, and Microsoft was willing to dump the OS/2 kernel. So it's possible for a big commercial vendor to switch over completely to a new kernel.
I'm no computer scientist, but it seems that the maxim that microkernels are slower than monolithic kernels is only true until it isn't. Perhaps something will come out of Apple's Darwin.
--
"Objectivity" (Score:2)
I wouldn't say that they are; only that they carry considerably more weight than personal testimonials with those making buying decisions.
(Now that I used the word "Objective", I have the sudden dread that an Ayn Rand person is going to jump on me!)
--
Use what's appropriate for the job. (Score:2)
It's only a "myth" because Linux sucks at it (Score:5)
Admit it folks, if the tables were turned and Linux was beating NT in these benchmarks, we wouldn't be hearing all these excuses about the relevance of the benchmark.
Not that this is a new thing, since it happens every single time that someone shows that Linux might not be the best solution for everything under the sun. Whether it's lack of certain quality applications available on other OSes, or poor performance by Linux on a certain benchmark, we can always be assured of hearing the shriek of, "But nobody needs to do that anyway!"
Uh-huh.
And no, Linux doesn't actually suck at this current benchmark, but it definitely doesn't measure up to NT or Solaris in it.
Cheers,
ZicoKnows@hotmail.com
Slashdot Realist
Re:Again... Zeus! (Score:2)
Even in some "benchmarks" that compare AIX vs. NT, NT churns out more static pages than AIX, but chokes under high loads, while AIX just keeps chugging along. I have a good idea that on one and two CPUs, Linux behaves exactly the same.
If you gave me the choice between stability and reliability vs. speed, I would take stability and reliability any day! No one ever got fired for having stable and reliable servers!!
Re:Numbers (Score:3)
All in all, though, I think that this is a good test, and it points out some flaws in Linux and in the software that people use on it. Yes folks, Samba doesn't always work right, Apache isn't the best web server for every job, and Linux doesn't scale up on multi-processor systems the way the big boys do. Hint: run these tests on a monster 32+ processor, multi-GB RAM computer and see the results -- then compare with a single-CPU, 1GB RAM box running the same NOS.
The winner in this test, IMHO, is Solaris. All the free publicity for Linux is publicity for UNIX in general. While you might put Linux on a small local server you aren't going to use it on an E10K sized computer.
Um, you realize none of this matters, of course... (Score:5)
These are two totally different areas, and Linux was always designed with the lower end in mind. How convenient then for them to do all these tests on huge computers nobody would actually use for a web server, unless they run one of the top 100 sites on the internet! Not to mention the fact that this is more of an apache benchmark than a linux one.
If you run a huge smp machine and want to squeeze every last drop of speed out of it, you probably won't run linux anyway. It's not that linux isn't "good enough", it is designed for a different purpose. For a job like that, you would want Solaris or FreeBSD (still not NT)
NT has its own design purposes, which are different from any Unix-type system. There are two main design goals I can see in NT: 1. Be easy for even an idiot to maintain, since most of the time all he will have to do is follow wizards or reboot the machine. 2. Be monolithic and slow, but for benchmarking purposes, have a way for those few people who know the OS inside and out to tweak it to insane levels for one or two particular services, at the expense of stability and resemblance to "real life" situations.
Re:Multiple servers + load balancing (Score:2)
It's not a naive solution...it's very workable, sensible, and much more affordable than the "one giant box" business model.
Do a little more research...load-balancing is far more involved than just slapping multiple IP addresses into a DNS record.
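For the record, "just slapping multiple IP addresses into a DNS record" is round-robin DNS, and the difference is easy to sketch. A toy illustration in Python (the host names are hypothetical): naive round robin keeps handing out a dead host's address, while even the simplest real balancer consults a health check first.

```python
import itertools

# Hypothetical web farm; a DNS round robin just cycles through these
# addresses with no idea whether each host is alive.
servers = ["web1.example.com", "web2.example.com", "web3.example.com"]
rotation = itertools.cycle(servers)

def naive_pick():
    """Round-robin DNS: next address, dead or alive."""
    return next(rotation)

# A real load balancer adds (at minimum) health checks, so a dead box
# stops receiving its share of the traffic. The health table is static
# here for illustration; real balancers probe continuously.
healthy = {
    "web1.example.com": True,
    "web2.example.com": False,  # pretend web2 just crashed
    "web3.example.com": True,
}

def balanced_pick():
    """Round robin over healthy hosts only."""
    for _ in range(len(servers)):
        host = next(rotation)
        if healthy.get(host, False):
            return host
    raise RuntimeError("no healthy servers")
```

With round-robin DNS, a third of your clients would keep hitting web2 until its record aged out of caches; the health-checked version never hands it out. Real products also weigh connection counts, response times, and session affinity, which is the "far more involved" part.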
Re:This is all kinda silly right now (Score:2)
Err... No, it isn't. I agree with most of your post, but there are significant differences between Apache and Netscape's server software. Netscape in fact might perform better on the high-end hardware for static pages than Apache does because I believe it uses a different (threading) model than Apache (forking).
Re:Again... Zeus! (Score:3)
Numbers (Score:5)
That said, let's actually look at the graph for a minute. On the WebBench test, 60 clients is about the point where NT seems to level off -- can't really tell, since they cut the graph off. Yet the quote below has you believing differently: "Solaris and NT had plenty of CPU cycles to spare." And Linux wasn't exactly losing any ground at that point -- OK, a little lower, but not much. It seemed to hit a stable point. What about more clients? And then there's the NetBench graph. I mean, look at NT plummet. Linux hits 16 clients and levels off at 200Mbps. NT hits 48 clients at 350Mbps, then falls to 300Mbps with the addition of 12 clients. Linux added 44 clients and lost maybe 50Mbps. To me this looks like Linux acts like a marathon runner, getting up to speed, then setting cruise control and holding steady. NT, on the other hand, is like a sprinter, burning itself out quickly; it works hard but won't last long. Yeah, the sprinter will beat the marathoner in a 1-2 mile race, but look out for that 5-10-26.2 mile race.
My point is you get fine, predictable performance, regardless of the amount of work asked of Linux. Meanwhile NT seems fine for small loads, but the more you ask, the less likely you are to get it. I want to see the benchmarks with higher numbers. I'd expect Linux to hold around its same mark, and NT to fall steadily. Why 60? Why not 100? 100 is even(ish); why not 50? 60 just seems like an odd number.
-cpd
Aren't we losing the plot here... (Score:2)
The next time one of these comes out, how about everyone who posts on Slashdot posts "Please, Microsoft, if you don't like it, then please contribute some resources to fixing it; otherwise, shut up!" or something like that.
Microsoft are still in the Business Mould... What do we care!!!!???? [Idea for a poll: how many people's incomes depend on the market share of Linux against NT/Solaris/Netware etc.? OK, so the likes of Redhat/Suse/Caldera etc. do, and I wish them well, but your average kernel hacker is (as Microsoft themselves pointed out) just in it for the recognition/fun etc.]
Therefore, Linux will still be around as long as people still wish to develop it, and when they don't, it will die, and there will be no one who cares, since we will all be hacking some other OS or project or something! When NT dies, which I'm sure it will, there will be lots of MS shareholders who do care, and they will be most unhappy... Justice!
Eric the Cat
NT is less mature than the Unix design. (Score:5)
Most of NT (and other M$ code) was written by lower-echelon programmers, under the direction of computer scientists and managers. Many of them had only recently graduated from MSD training classes. In general, they were operating under marketing-imposed time constraints. This shows in the quality of the product.
If you want proof, try working with the IP routing table metrics under NT, or look at their publicly released code, i.e. the FrontPage extensions for Apache. Also look at a security model that requires everyone to buy third-party virus scanners.
In contrast, most of Linux was based on an established tradition. Most of the major holes were already known. It was written by people who cared about the quality of their code. They loved programming, and their personal reputations were at stake. Then that code was reviewed publicly, and contributions were fed back to the author.
I forget who said "If I have seen far, it is by standing on the shoulders of giants". M$ forgot it was ever said.
Benchmark frenzy (Score:2)
Secondly, I believe that serving *STATIC* HTML pages is not meaningful at all for "real-world applications". This is not to say that NT cannot outperform Linux on high-end hardware or in given configurations, but I want to stress that these results seem to make little sense. It is more or less like comparing the MIPS rating of two completely different architectures and deducing that one is better than another; not quite so, and, more importantly, everything depends on the kind of task one has to have the machine doing! (For example, Alpha processors are just great when we consider floating-point performance; when running my code -- aimed at symbolic calculus and not FP -- they outperform Intel processors by only a factor of around 2 or 3.)
Linux is a "work in progress", and bad results should lead people to improve it, rather than complaining about how unfair the test has been (even if the test has been unfair, as seems to be the case here). On the other hand, Linux (and the various *BSDs) has the huge advantage of a nice standard interface and of the availability of a huge code base. Security patches are usually released more quickly for Unix-like systems than for NT, and this is a good thing.
Now just a remark concerning my experience: when I first tried Linux (more than 5 years ago) it was neither very stable nor exceptionally fast, but it was a Unix-like system with OpenLook (which I was also using under Solaris on a SPARC), and it allowed me to share my code easily between the machine at the University and the one I had at home.
The improvements Linux has made since then are quite impressive (and nowadays I think I prefer having Linux rather than SunOS 7 on a sun4m, and I consider it to be definitely better than DU 4), but there is still a long way to go. Even if it is now slower than NT at some tasks, stability and "user-friendliness" (at least for somebody who is writing this text in lynx and loves the command line!) are things I would not underestimate. Moreover, I guess that in 4-5 weeks we will have some patches that address the "lack of performance" these tests have shown.
Re:Again... Zeus! (Score:3)
"Apache is a general webserver, which is designed to be correct first, and fast second."
That is the first sentence in the performance tuning document.
It's funny. Laugh (Score:4)
The pure humor of it all...
From behind the scenes of www.microsoft.com [microsoft.com]
Hardware
Six internal Ethernets provide 100 megabits of capacity each
2 OC12s provide 1.2 gigabits of capacity to the Internet
Runs on Compaq Proliant 5000s and 5500s, with 4 Pentium Pro processors and 512 megabytes (MB) of RAM each.
Software
Microsoft Windows NT Server 4.0
Microsoft Internet Information Server 4.0 (IIS)
Microsoft Index Server 2.0
Microsoft SQL Server 7
Other Microsoft tools and applications
Powerful Solutions
www.microsoft.com started out as a single box beneath a developer's desk in 1994, handling about a million hits a day. That seems almost laughable now. A sleek data center in Redmond, Wash., receives more than 228 million hits a day while data centers in London and Tokyo shoulder the international load of about 12 million daily hits. How has the site handled its explosive growth while keeping its hardware to a minimum? How does it administer one of the largest databases in the world? How does it manage the challenges of a decentralized publishing environment? How does it come close to achieving 100 percent site availability? The answers lie in the strength of its software, according to site architects. The whole shebang runs on Microsoft Windows NT 4.0, IIS 4.0, and SQL 7.0. "Our site showcases Microsoft technology," says systems operations manager Todd Weeks. "We prove every day that we can run one of the largest sites in the world 100 percent off of Microsoft technology."
The Challenge
Not only is www.microsoft.com an enormous site with hundreds of thousands of pages of content. Not only does it receive millions of hits a day. Not only has its growth been unrelenting. Those are some of the easy challenges, site architects say. One of the most interesting challenges is that www.microsoft.com functions within a decentralized publishing environment. More than 300 writers and developers working in more than 51 locations around the world provide information for the site. These content providers are able to update their sites within the www.microsoft.com umbrella as often as eight times a day. In fact, 5 percent to 6 percent of the site is updated every day. The complexity of that publishing environment is daunting when you consider that each of the 29 content servers in Redmond contains the nearly 300,000 pages of information that comprise www.microsoft.com. But the end result is that the information on www.microsoft.com is as current and up-to-date as possible. A team of about eight people staffs three shifts around the clock to ensure www.microsoft.com stays up and running 24 hours a day, seven days a week. "Our goal is to make the site available to users 99.8 percent of the time," Weeks says. So how do we reach that lofty goal of 99.8 percent availability? (The 0.2 percent down time is required for routine maintenance.)
First, the Hardware
The physical architecture behind www.microsoft.com seems surprisingly modest. Twenty-nine servers host
general Web content; 25 servers host SQL, 6 respond to site searches; 3 service download requests along
with another 30 in distributed data centers; and 3 host FTP content. Additional servers overseas handle
some of the international load.
Did you count all of that? That's 96 Compaq Proliant 5000s & 5500s (quad Pentium Pro boxes with 512MB RAM) running www.microsoft.com using NT, IIS, Index Server, and SQL Server.
Standard
This machine is a P6/200 with 1GB of memory & 1/2 terabyte of RAID 5.
The operating system is FreeBSD. Should you wish to get your own copy of
FreeBSD, see the pub/FreeBSD directory or visit http://www.freebsd.org
for more information. FreeBSD on CDROM can be ordered using the WEB at
http://www.cdrom.com/titles/os/freebsd.htm or by sending email to
orders@cdrom.com.
Now, which site do you suppose has set more download records?
Again... Zeus! (Score:5)
They should benchmark how many dynamic perl generated pages NT can vomit out
Again with the speed thing (Score:5)
Why is SPEED the overwhelming issue? IMHO, there is so much more involved in choosing a server OS. Do we really need to measure the number of milliseconds it takes to rename a file on the server? Isn't that a little silly?
Picking a hardware/operating system configuration is not a drag race. You care about cost. You care about uptime. You care about security. You care about support.
The skills of your existing personnel are important too. If you have a staff of freshly-certified MCSEs, it's very unlikely that you will use a Unix-like system. OTOH, if your network admins love Unix, they will want to work in a familiar environment.
In the end, speed is not really the same thing as "performance". Benchmarks like these provide nice soundbites for the winner (whoever it may be). They also improve magazine sales and web traffic for the publications. If you choose to commit your organization to an operating system based on them, however, then maybe you deserve what you get.
As my mom used to say, "When that lawnmower cuts off your feet, don't come running to me."
Re:Magi (Score:2)
Actually, the nice thing is that Linux already solves the problem that the original Magi triad had. Besides the inherent virus protection, the sheer number of daemons running around your average Linux box should be sufficient to defeat any attacking Angels.
And don't get into a huff about Absolute Terror fields, either. Linux holds its own against Microsoft's AT field (well, what better description for FUD than Absolute Terror?).
Re:It's only a "myth" because Linux sucks at it (Score:2)
If you perform an experiment to study gravity, and you get a value for g of 32 ft per second squared, you don't go looking for what you did wrong--other experiments show this result as well. If you find that g=14 feet per second squared, you start analyzing the experiment rather than rewriting the physics texts.
In the Real World, Linux appears to outperform NT. In most benchmarks, Linux solutions appear to outperform NT solutions. Two Microsoft organs create benchmarks, and the NT solution outperforms the Linux solution.
We look for holes in the benchmarks because we smell a rat. We've found a rat or N--some big ones.
What we have learned is that benchmarks can be easily cooked. If someone with a vested interest controls enough variables, one can create a pathological case where one's interests win.
If it wasn't a cooked test, there would still be people yelling. This is not a good thing. However, this is a cooked test. Linux can beat NT in a lot of ways, including performance-wise. Linux isn't strong enough to beat NT with one arm tied behind its back, especially when Microsoft chooses the arm.
OTOH, anybody know how well NT does at ray-tracing? IBM had some fun with Linux and ray-tracing a while back...
Re:Multiple servers + load balancing (Score:2)
An IT department with that sort of a budget will find Linux to be rather useful for some applications, actually. With that size of a budget, one can make an in-house Linux support team. Having such a team and using Linux keeps you from relying on a vendor's support team. Such a team allows you to implement mission-critical bug fixes on your schedule, not that of your vendor. And believe you me, if you are big enough to have a $1B budget, time is measured in thousands of dollars per minute. Waiting a month for a bug that takes a week to fix is expensive.
Re:Comparison (Score:5)
If the question is "how many people deploy low end solutions?", then it is important to note that the situation being tested has no relation to the real world. If someone needed to serve thousands of static pages per second, they would be out of their gourd to select a quad Intel box in the first place, regardless of OS. Better to have a small, cheaper farm of lesser computers to do this job.
Given that one has chosen Linux on any hardware platform for this task, one would also be out of one's gourd to choose Apache. Apache engineers will tell you this. Apache is built for flexibility at the expense of performance. Thus, the simpler the job, the slower Apache is, on any platform, for the job.
If you are comparing OS speeds for Web serving, either use the same Web server on both sides or use optimal Web servers on each platform. Apache engineers will be the first to admit that other Web servers can outperform Apache, on any platform, for this test.
"But for high volume dynamically generated content, for example, or commerce, or databases, NT is more mature and benefits from being developed by engineers rather than hackers. DEC, from whence Cutler came, are very serious about this."
Maturity is relative. NT has more runtime hours than Linux, so there has been a longer time to detect bugs. Linux, due to its huge potential developer base, may well have more developer hours invested in it. It also has more debugging hours invested in it, because most Linux users are potential debuggers. When NT fails, one just reboots. When Linux fails, one often looks at the messages file (or has a sysadmin do the same) and tracks the problem down, rather than routing it through tech-support infrastructures.
The specification for Linux draws heavily from Unix, which is an incredibly mature model for high-volume computing. Most of the specification for Linux predates DOS, never mind any flavor of Windows.
Linux is developed by engineers at their best. The best engineers are hackers, really; they're the ones who build software for the love of building software. And Linux hackers contribute with only their best.
If you are building a commercial software product, you are expected to put 5-6 days of work into the project each week. You cannot maintain top productivity, top quality, on that sort of a schedule. Employers understand this, and they deal with it. The code produced in a commercial setting tends to be "good enough".
When someone contributes software, the same drive that causes them to want to do something like this for no money causes them to work at peak performance. It also causes them to work at precisely what they're good at. Sure, they may only put in 100 hours in 3 months, but you will get their best 100 hours, easily worth 300-400 average engineering hours. Commercial software is produced by marathon; free software is produced by relay sprint.
Besides, if commercial OSs are superior by virtue of being developed by engineers rather than hackers (that is, by virtue of being commercial), then why are shops like Sun putting so much effort and money into Linux? Methinks that the Solaris guys see something in Linux that they envy, and I don't think that it's just the salaries.
"However, for midrange work, linux simply isn't up to par yet. I seem to recall Linus himself stating that he believed OS design was well understood by the 1970s, and he considers microkernels to be 'stupid', plan9 to be 'stupid' etc etc."
Whatever you think of Linus' talents as a kernel hacker, the fact remains that Linux works. It works in commercial production environments. Sysadmins have been disobeying management by deploying Linux where NT was requested--and they've been doing it for years. This isn't politics.
A sysadmin has one overriding virtue: laziness. Larry Wall gave us the prototype; more on this in the Camel book. They want to do the job once, they want to do the job right, and they want to forget the whole affair afterwards. These sysadmins have been putting Linux in the back room because the boxes do their jobs and are easy to handle--and because the performance boost gives them fewer boxes to administer (and fewer hassles with acquisition budgets).
While he is undoubtedly a highly talented programmer, I think there are engineers working for Sun, CMU, Microsoft, DEC and the like who are at least as skilled, if not more so, and whose work has proved Linus to be very wrong. And as such, Linux is crippled.
Linus doesn't have to be the best programmer on the planet. In fact, he need never write another line of code. There probably are better kernel hackers writing code for their respective companies--and also writing code for Linux.
Linux isn't an optimal OS. There are places that it is the best one out there, and other places where it does poorly. Like every OS, however, it evolves. Its openness simply lets it evolve faster than the competition. Per Darwin, evolve or die.
It doesn't matter how wrong Linus is in his coding, because Linux does well enough for commercial use today. Maybe microkernels have overriding advantages. GNU has a microkernel OS (GNU Hurd) in beta or GA by now. If it outperforms Linux, it will only be a matter of time before somebody cross-pollinates them. Then RMS will have a better case to call it GNU/Linux. Whenever somebody finds a major improvement one can make to an OS, somebody else will port that improvement to Linux. Perhaps every line of Linus' original code will be optimized out.
Linux is far from crippled. By my lights, it is the first OS to sprout wings.
apache != linux, samba != linux, etc. (Score:5)
Apache runs on a whole bunch of other platforms, even on MS-Windows. Probably even NT... Wouldn't it make more sense to make claims like "Apache on NT beats Apache on Linux"?
That wouldn't prove the superiority of NT over Linux either, but it would IMHO make just a little bit more sense...
The same goes for Samba: Samba runs on Linux but also on other systems.
All these tests only test NT-running-some-software versus one-of-many-Linux-distros-running-other-software and then make claims like "NT kicks Linux' ass".
"Linux" is just the kernel... or have I gotten things completely wrong?
Benchmarkers should at least prove that bad scoring is caused by Linux (kernel) and not a program they're running on top of that!
If a webserver running Apache on FreeBSD is doing better than Apache on Linux, that would be an indication of shortcomings in the kernel (although some people may dispute that as well).
Ah well, I never really cared for benchmarks anyway...
Re:What bugs me... (Score:2)
what about fastcgi? (Score:2)
Isn't this one of the things FastCGI [fastcgi.com] is supposed to be fixing? Instead of launching one process per Perl script, it launches one Perl interpreter and passes it all the Perl scripts (wrapped in a loop), hence less overhead and more speed--with the drawback that the scripts have to explicitly free memory and be slightly modified.
Not quite thread-like, but definitely not one process per CGI request.
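The fork-per-request vs. persistent-interpreter difference can be sketched like this (a hypothetical in-process stand-in in Python; real FastCGI keeps the interpreter alive behind a socket protocol, and the names here are illustrative):

```python
import subprocess
import sys

def cgi_style(requests):
    # Plain CGI: launch a fresh interpreter for every request --
    # the per-request startup cost the parent comment complains about.
    out = []
    for req in requests:
        r = subprocess.run(
            [sys.executable, "-c",
             "import sys; print('hello ' + sys.argv[1])", req],
            capture_output=True, text=True,
        )
        out.append(r.stdout.strip())
    return out

def fastcgi_style(requests):
    # FastCGI style: one long-lived interpreter loops over the requests.
    # Globals persist between iterations, which is why scripts must be
    # modified to clean up their own state.
    return ["hello " + req for req in requests]

print(cgi_style(["a", "b"]))      # same answers either way...
print(fastcgi_style(["a", "b"]))  # ...minus one process launch per request
```

Both produce identical responses; the only difference is how many interpreter startups you pay for.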
--
This is all kinda silly right now (Score:5)
That's what 2200 hits/sec gets you: 190 million hits/day. Pretty damn impressive. I'd like to work for you, considering the monster bandwidth you'll have.
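The back-of-envelope arithmetic checks out:

```python
hits_per_sec = 2200
seconds_per_day = 24 * 60 * 60           # 86,400
print(hits_per_sec * seconds_per_day)    # 190,080,000 -- about 190 million/day
```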
I'd basically ignore any current benchmarks, because they're based on versions of Linux that have known issues.
You're also comparing a multi-process server, which works faster at lower loads, to a multi-threaded server, which scales better even though it may not return individual documents any faster.
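A rough Python sketch of the two models (toy handler; the executor pools stand in for Apache-style worker processes and IIS-style threads, so treat the names as illustrative):

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def handle(doc_id):
    # Toy request handler standing in for "return a static document".
    return f"document-{doc_id}"

def serve(pool_cls, doc_ids):
    # The work is identical either way; what differs is the cost of a
    # worker: a process carries its own address space, a thread doesn't.
    with pool_cls(max_workers=4) as pool:
        return list(pool.map(handle, doc_ids))

if __name__ == "__main__":
    ids = list(range(8))
    print(serve(ProcessPoolExecutor, ids))  # multi-process, Apache-like
    print(serve(ThreadPoolExecutor, ids))   # multi-threaded, IIS-like
```

The trade-off the comment describes is exactly this: per-request process cost up front versus cheaper workers that share one address space.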
I'd like to see the avg connection times on these things.
Linus has earned more of my respect than Cutler (Score:3)
There is nothing special about "engineers" that makes them better than "hackers". Those labels are not even exclusive; the best hacker I know has an engineering degree. You do not know what a hacker is if you think they are necessarily unaware of computer science and engineering principles; and in my experience, the more eager a person is to call themselves a "software engineer", the less competent they are.
As for Cutler, his work on VMS doesn't give me great confidence in him. VMS is stable and useful to some, but it's far from being my favorite OS. He may be awfully serious about it; he may be awfully serious about NT, too, but that doesn't mean I want to spend any time using it, or that it meets my needs.
Linus has a proven track record of writing solid code and coordinating a massive development effort. He does not just say that microkernels are stupid--he demonstrates by example that the monolithic approach is still viable. As elegant as I think microkernel architectures are, Linux is still what runs on my servers.
Re:Multiple servers + load balancing (Score:5)
Ah, but the PC Week test was just static documents! Red Hat 6.0 comes with an RPM for Squid, but instead of installing that, they used Apache and then griped about how expensive it is to fork for each request.
It's unclear to me what use there is for a web server that is eating bandwidth about the way ftp.cdrom.com does, anyway. That doesn't strike me as a typical "enterprise application". That part of the benchmark is obviously contrived.
oplocks (Score:5)
The numbers get even more interesting when comparing the results of NetWare with and without Opp Locks. When we turned on Opp Locks, NetWare's overall performance improved by about 40 percent.
However, this gain is deceiving. With Opp Locks enabled, almost every operation in NetWare actually slows by 25 percent. The exception is file write operations, which are faster by 300 percent. Because writing files takes up almost 40 percent of the NetBench test, it's no wonder we saw a huge overall performance boost in our results.
These people haven't a clue what they are benchmarking. Opportunistic locks allow the client to do whatever it likes to a file (or regions thereof) without synchronizing with the server. Of course write speed increases; the network isn't involved anymore! You haven't increased server performance one whit, but rather prevented more than one client from opening the file for writing at the same time.
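A time-weighted back-of-envelope shows how sensitive the headline number is to the operation mix (assuming "faster by 300 percent" means 4x, and treating the 40 percent as a share of test time; both are my reading, not PC Week's):

```python
def overall_speedup(write_share, write_speedup, other_speedup):
    # New total time = sum of each component's old time share divided
    # by its speedup; the overall speedup is the reciprocal.
    new_time = write_share / write_speedup + (1 - write_share) / other_speedup
    return 1 / new_time

# Writes 40% of the time and 4x faster, everything else 25% slower:
print(round(overall_speedup(0.40, 4.0, 0.75), 3))  # 1.111 -- only ~11% overall
```

That this weighting lands nowhere near the reported 40 percent gain only reinforces the point: without knowing exactly how NetBench weights its operations, the headline number tells you very little.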
Re:NT, Linux, NetBSD (Score:3)
What have people been saying here?
In these benchmarks Linux did poorly not because of the kernel, but because of the web server.
You mean NetBSD can use less CPU while sending TCP/IP traffic?
Or that NetBSD uses less CPU while running many processes?
In these tests people are actually comparing web server applications. They should run the SAME web server on all OSes if they want OS benchmarks.
Please.
Re:An interesting thing from these benchmarks (Score:4)
Or Roxen. [roxen.com] Roxen is a pretty cool webserver, too. And it won the Best-of-Comdex award.
Re:4x100M cards? (Score:2)
If I understood it right, channel bonding has to be configured explicitly!
I saw no hint that they did--neither Mindcraft nor ZD:
http://beowulf.gsfc.nasa.gov/software/bonding.h
Does anyone have experience with this?
Multiple servers + load balancing (Score:2)