The Internet

Web Server Comparisons 212

Anonymous Coward writes "ZDNet is running a story today (yes, I know it's Christmas) about several web servers. They have apparently benchmarked these servers. They tested Sun, Linux and MS servers. I'm not sure I like the way they tested; they didn't include BeOS or tune the configs. Check it out at their web site." My fault - this is totally out of date. Blame it on the egg-nog.
This discussion has been archived. No new comments can be posted.

Web Server Comparisons

Comments Filter:
  • ... so you know what web server to get for Christmas. :-)
  • by father_guido ( 124189 ) on Saturday December 25, 1999 @09:09AM (#1445438)
    May 6 1999? That's today?

    Man, I better get ready, the 4th is coming up soon!!! Gotta find the best fireworks vantage point...

    ;)
  • I wasn't too concerned that they didn't include BeOS, as it's more of a desktop O/S than a server. FreeBSD, on the other hand, should have been included, as it's definitely a good server O/S, and is used to power busy web sites such as Yahoo and Hotmail.


  • I'm not going to be a troll and say M$ was bought off, but ZDNet doesn't exactly have a reputation for being very objective about things. I wish I had the real numbers handy at this moment, but I wasn't imagining things when I saw a benchmark showing Linux beating NT under high loads (scalability).

  • they didn't include BeOS


    Is BeOS often used as a web platform? I would have liked to see them include FreeBSD, personally.
    ----

  • Uhhh... ya...

    I know Slashdot isn't exactly known for its up-to-the-minute reporting of stories, but this is just silly. May 6th? WTF? It's an interesting article, but /. is supposed to be 'News for Nerds.' News as in NEW. This is not new. :)

    Is the dateline wrong? I'm just plain confused. :)

    Maybe Slashdot needs a checklist for people to go through before they submit a story? Something like:

    • Is this less than 6 months old?
    • Are you sure?
    • Absolutely positive?
    • Willing to bet your life on it?
    • Willing to bet your computer on it?

    Etc. I dunno, this is just plain odd.

  • by Issue9mm ( 97360 ) on Saturday December 25, 1999 @09:18AM (#1445444)
    Nope, this isn't cutting it... I'm not that big a FreeBSD lover, but I am taking exception to the fact that it and its brethren are continually left out of these benchcrafts...er, marks...

    Because of its speed, ease of setup and administration, and a dizzying array of add-on and development products from Microsoft and third parties, we award Microsoft's Web platform--Internet Information Server (IIS) running atop Windows NT--our Editors' Choice.

    Of all the reasons NOT to choose a web server, I think they hit them all. Granted, speed is important. But ease of setup? Administration? (maybe) Add-on and development products? (Must mean ASP & other proprietary components... that "mixed web" environment they spoke of earlier in the article must not be that important.)

    Whatever happened to security? Or, since "ease of setup and administration" are such a factor, how about security "out of the box"? Granted, I'm glad to see Linux listed, but (no offense) Caldera's not exactly the distro I'd pick for my web serving. Nor would I use Stronghold. (Personal prefs, made through experience.)

    Just seems like ZDNet refuses to just get things right. Ordinarily I hate to be the crybaby bitching about the testing, the methods/materials, etc..., but I'm really becoming more and more disappointed in ZDNet's lack of integrity.

    Flame away...
  • by Signal 11 ( 7608 ) on Saturday December 25, 1999 @09:19AM (#1445445)
    I'm reminded of what another slashdotter recently posted:

    - Mindcraft is known for making benchmarks to suit the manufacturer.
    - Benchmarks can be wildly manipulated... - Hence we should call this practice benchcrafting!

  • Why do stories like this get posted to slashdot? This is months old and was posted on various linux sites back in May, perhaps even Slashdot. I personally have submitted numerous recent stories that have been rejected for various reasons, but this makes it? Who's running this thing anymore?
    ----
  • Out of curiosity, what would you use? I'm going to put Linux on one of my boxes, and I'm looking for a good web server to use. What distribution of Linux, and what web server do you guys recommend?
  • They tested using a 2.0.35 kernel. This is over a year old and doesn't contain support for improved SMP performance. While an accurate measure of Linux then, it's far from accurate now.

    Figures lie, and liars figure. What else do I have to say?
  • "A few weeks before our testing began, Linus Torvalds (creator and keeper of the Linux kernel) released the Linux 2.2 kernel..."

    This article is a bit dated - May 6, 1999; put into perspective by the upcoming 2.4.x release.


  • I didn't see any mention of BeOS or MacOS X, unless maybe I didn't look hard enough. BeOS *COULD* be a decent web server, I'd imagine; after all, it is loosely based on UNIX. MacOS X -- I haven't heard much mention of that these days, but I know it too has a web server and is also based on UNIX. It would be nice for once to see an objective report that doesn't waste all its time rambling about Windows and gives equal coverage to all these solutions.

    Ooh, I'm feeling insightful today :-)

  • Why is ZDNet comparing NT 4.0 and the latest versions of other operating systems to Linux 2.0.35? That kernel is quite old and lacks a lot of optimizations that the later kernels have.
  • They "tuned the cache" to the point that none of the servers went to disk for their workload. That right there gave NT an advantage. In a real-world scenario I'll be out to the disk all the time: to get my data during misses, to write my state info for transactions, to log things.
    How much logging was the NT server doing? If it wasn't a lot, then they took the disk subsystem out of the equation.
    The reason they gave for using the 2.0 kernel was fishy at best, too. The 2.2 TCP stack broke communications with their Win95 clients?
    Finally, why on earth would I want to do hundreds or thousands of SSL transactions in software? If you are doing more than a few a second you really need a hardware SSL brick or card, which works with all the tested platforms. These people obviously understand only one set of solutions.
  • I would have to agree wholeheartedly with all the comments you've made. Over many, many years, ZD publishing (in its many forms) has proven to be an unreliable, laughably biased source for Microsoft press releases.


    _________________________

  • by Alex Belits ( 437 ) on Saturday December 25, 1999 @09:28AM (#1445454) Homepage

    They repeat bogus reasons for using Microsoft servers ("if you have existing business logic, such as pricing strategies, written in Visual Basic..." -- hello, you want to call a Visual Basic script written by an accountant's assistant from an HTTP server in a secure environment?), take measurements with CGI scripts on Unix (heard of anything else?), run WebBench tests that have nothing to do with real-life environments other than that they use HTTP, never use Gigabit Ethernet cards in their low-latency tests, "demonstrate" that SSL servers are dominated by Microsoft IIS by splitting branded Apache derivatives into different categories, don't include Unix servers other than Apache and Netscape, omit *BSD, etc.

    In other words, usual advertisers-driven "Editors' Choice" stuff from ZDNet.

  • The primary reason is that this article is OLD. May 6, 1999. I have no clue why this made it to slashdot.
    ----
  • I don't understand how the 2.2 kernel didn't work for win 95 clients (as stated in the article). Was this ever true, and if so, for what version of the 2.2 kernel?

    Also, what is the state of threading in Apache? Is there currently any support for it? (The article states the 1.2.x version they used didn't support it at all.)

    Not a very bad article (it is somewhat old now), but maybe not too relevant any longer. (And they seem to think that web-based administration is easier than config files... well, whatever floats your boat.)
  • Just seems like ZDNet refuses to just get things right. Ordinarily I hate to be the crybaby bitching about the testing, the methods/materials, etc..., but I'm really becoming more and more disappointed in ZDNet's lack of integrity.

    I have no respect for ZDNet at all. I don't trust a word they say. I rarely visit their site, and when I do, I never, ever click on their banner ads. I also find some of their advertising offensive. (http://ads.x10.com/zdnetmacro/nov19m1.gif and its ilk.)

    They have no decent content; it's pathetic. And I hate that Berst guy. I guess I'm just not part of their target audience. Everything they do seems to be aimed at rich neophytes with more money than sense. (This might explain their bias towards Microsoft products.)

    I've just loaded up their homepage, zdnet.com. It's nasty and vile. Vile vile vile. The whole site is geared towards shoving expensive gadgets down your throat. It looks like a bloody online retailer. Half their 'content' is product reviews, and the rest product comparisons or 'howtos'.

    Heh, wow. I've counted the number of advertisements on their main page: just 4. zdnet.com/developer has 6. And 7 at gamespot.com. But they have to eat, right? At least the ads are targeted. :)

    Oh, and this isn't flamebait, I hope. :) So don't mark it down as such. Just a few opinions.

  • Linux really didn't seem to do poorly. In fact, I couldn't tell a difference between the platforms.
    When using CGI, they all seemed to do about the same. Yes, the two that ran NSAPI and ISAPI kicked the living crap out of the ones running CGI scripts, but I'll bet if they'd slapped mod_perl onto the Apache server, Linux would have caught right up.

    greg
  • I often hear people say that running a Linux/BSD web server is only cheaper if you have a lot of time on your hands. However, most people running websites don't do all the site administration themselves, so for them it's not really an issue.

    I've personally used NT for a lot of development and wanted to have a site hosted on Windows NT, because I knew some ASP and was familiar with ISAPI. But where I live (Europe; I've checked most countries here), Windows NT hosting is usually three times the price of Linux/BSD hosting. So I went for a Linux server and discovered (after trying out some other hosters for other sites) that having a database or other extensions installed is usually also very expensive on NT (try getting them to let you install your own ISAPI extensions), and that most Linux/BSD hosters would happily let me use their MySQL/PostgreSQL database and stuff like PHP and my own CGI scripts.

    Administration-issues are only important for a small bunch of people.

  • First, it would really depend on the application. For what most people do, any flavor of Linux will do. I like RedHat 6.1 for its "ease of installation and maintenance", which, quite honestly, is far easier to set up than NT. Granted, my machines were all built with *nix systems in mind, so compatibility is a key factor.

    If you're looking for something secure, (ie: web commerce, hosting, webmail, etc.) I'd recommend Open/FreeBSD with Apache & SSL. Can't get much tougher than that for the price.

    I hear RedHat has a Secure E-Commerce server, I think it's based on Stronghold, and have heard good things about it, as well as being
    Don't get me wrong, I think I came off kind of harsh in the original post. I'm not saying Stronghold and Caldera are a bad combo, but arguably, since they were primarily testing for speed, Apache would have made more sense. Apache without SSL would have SMOKED stronghold if given the chance.

    Word of advice, ZDNet did get one thing right. It all depends on what you're going to be using it for. Think hard about its application, and then figure out what's best suited for that purpose.

  • We admitted it for some class of setups after the Mindcraft benchmarks, and worked to improve the areas of deficiency. That still didn't make the benchmarks useful to 95% of people needing to deploy web-servers.

    And "wussies" is spelled "wussies". Although I'll probably be woosy come New Year's Eve...
  • What a load of bull! How much do you want to bet that they didn't turn on the SMP option in the kernel?
  • by rbrander ( 73222 ) on Saturday December 25, 1999 @09:50AM (#1445471) Homepage
    Their graphs show the worst servers flattening out at "Webbench" numbers of 600, where the best go up to 4000.

    They don't show the formula that gives the number, but from similar web benchmarking reviews, I know that even the worst ones are serving up hundreds of page-views per second. The best are maxing out multiple 10Mbps Ethernet cards - i.e. you need a T3 line to actually provide the bandwidth you're serving.

    If you can afford that, you aren't reading Ziff-Davis to make your product decisions or even find your shortlist.

    These kinds of servers are only needed by the big ISPs and the eBays of the world - the whole review is only of interest to a few thousand webmasters.

    My employer is a city government serving some 860,000 people with a mostly static, partly active web site about all their city services and taxes and utility bills - and it rarely exceeds a few tens of pageviews per second.

    Forget all the sniping about tuning and benchmark methodology; the really stupid thing about these product comparisons is that they imply that more than a fraction of one percent of their audience should even care about which one wins. For the rest of us, a free product running on a free OS and hardware that costs less than the monthly cost of our Internet bandwidth can meet all our needs.

  • well, what can I say?

    Let's drink to yet another Microsoft victory!!!

    hahaha...

    well, maybe they didn't configure the other boxes properly and whatnot, and maybe Microsoft's solution is not the best (no doubt you all think so; to prove it, this will be down to -1 in minutes!), but one thing I have to say is that Microsoft certainly seems to be getting its act together quite well lately!

    I have to say that Windows 2000 is an absolutely magnificent improvement on previous operating systems. I've been running it for months, ever since the first relatively stable releases, and I still haven't seen the blue screen of death! (Well, once, but that was because I was messing with my hardware and my RAM was screwy...) It's much more stable; I'm getting uptimes I've never dreamed of before! I just ended an uptime of more than a month! And I don't even try to run my system nonstop (so all you Linux buffs up for 318 days can shut it!)

    In any case it's an improvement on Win98... the default background color is a welcome change... (oh, and they even changed the tint of blue used on the BSoD; it too looks better now)

    But one definite loss... one definite indelible black mark on Microsoft's good name... I was much disappointed to find that Windows 2000 Advanced Server does not ship with that silly pinball game... (no wait, I probably haven't seen it around since I specifically didn't install the games... I'm starting to miss Solitaire, to tell you the truth :)

    But don't flame too hard, because I do run Linux, and I do think it's a much more hardcore OS when you get to know it. In fact, the only reason I don't use it full-time is because I can't be bothered to switch, and my current Linux box (an IBM-compatible DX/50) isn't really a supreme specimen to learn on. But I promise you, as soon as I get that new 1 gigahertz box I've always wanted, I'll retire this one to an exclusive Linux box, and then I'll really be able to use Linux!!

    (Apparently Linux doesn't like the PS/2 architecture... or old ATAPI SCSI CD-ROMs, for that matter... it took 24 solid hours of moving hardware around from different computers to finally get Red Hat installed... and even now it refuses to do some reasonable things! I'm putting it all down to the oldness of the computer it's on [rather than my lack of competence] -- that sig on the bottom is not a joke; after repeated attempts I couldn't get it to recognize my network card, so I eventually gave up...)

  • by Anonymous Coward
    They recommend NT & IIS as their editor's choice but Netcraft says:

    www.zdnet.com is running Netscape-Enterprise/3.6 SP3 on Solaris
  • According to historical evidence, Jesus was a pharisee rabbi crucified in 88 B.C.E. If he was 33 at the time, this means he is 2121 years old now.
  • if you have existing business logic, such as pricing strategies, written in Visual Basic...

    That's a good point -- who has core business logic written in Visual Basic? By that logic, any web server supporting COBOL would probably be more applicable. (Even MS's "DNA" strategy is to componentize business logic, allowing it to be language independent, so even from a Microsoft mouthpiece, this is bizarre.)

    Reading between the lines, what I think they are implying is that Microsoft's environment allows low-end VB developers to be converted to web developers without them having to learn a new language. That might be a consideration in some places, but it's hardly a huge point in IIS's favor. And you can't exactly take your typical VB client-server front end and push the "Recompile as Web Page" button.
    --
  • by SnowZero ( 92219 )
    If only speed matters, and not security, why on earth are they benchmarking SSL? Hello, anybody home?

    Linux still has catching up to do to be on par with IIS at serving some types of static content, and we can again thank ZD for a meaningless benchmark that is no help in such development efforts.

    Personally I can't wait to see what will happen with the 2.4 kernel with khttpd and phttpd on the horizon...
  • Maybe you'll consider (Score -1: Troll), because I'm suggesting rational discourse over religious whining.

    It appears from this thread of comments that the slashdot community is unhappy about all sorts of things that don't seem central to the issue. The comments decry ZDNet's advertising-vs-testing integrity and methodology, the Caldera distro's shortcomings, Stronghold's performance vs. other Apache releases, and why they didn't choose the EndOS-BeOS of solutions.

    The issue, as I would see it, is "what can Linux do, to fare better in third-party comparisons?"

    It appears from the article that there are several reasons that ZDNet listed why they felt Linux/Apache/Stronghold was limited. Let's start with those.

    • ZDNet chose to tune ALL the servers to have 68Mb of web source material and at least 68Mb of memory disk cache.
      Why did this give NT an unfair advantage? Why does Linux (or particularly the Caldera distro) solution not deal with RAM-rich servers as well as NT? I think the poster who complained of this meant the inverse: if it were a RAM-poor server, Linux would have the advantage in disk accesses.
    • ZDNet used multiprocessor servers.
      All religious handwaving aside, why did NT fare better by spinning threads than Apache could do by spinning processes? What is the big bottleneck in managing a process, that managing a thread doesn't have? They were using a brand-new MP kernel straight from Linus. Will the Linux kernel mature to deal with SMP situations and massive numbers of similar threads or processes better?
    • ZDNet suggested that in-process programming worked better for all the hairy e-commerce they decided to test.
      Though I think they should have configured some PHP or Mod_Perl into their mix, just as they had to bend to the SSL3-only constraint for another platform, they have a point. Writing modules is the way to go, to get inside the server and run fast. Besides PHP and Mod_Perl, where can Linux go to improve?

    Most of these seem to suggest that ZDNet could have configured their Linux servers more like real-world Linux admins would, and would have found better performance. This does reflect poorly on Linux's ease of use, though they lay it on thick when they cry about config files. This is an education issue.

    Slashdot is already slanted (pardon the pun) towards the Linux solution. Not every problem is a nail, and there are even different hammers for different nails. Let's be objective and constructive, instead of whining about every possible outside excuse. Improve the tool, and the tool will become the standard.

  • Remember that this article was reviewing e-commerce site tools, not just regular site tools. They made it pretty apparent in the beginning of the article that Linux has a virtual lock on things that don't involve e-commerce.
  • It seems that ZDNet doesn't like to acknowledge any non-commercial software, and went out of their way to test commercial products wherever possible. CodeWarrior won't run on Caldera, AFAIK. :)
  • Right on. I'll also add a few things:

    One of the knee-jerk posters complained that ZD didn't use the latest kernel, then somebody re-read it and realized that this article was written way back in May when that kernel wasn't available. Even if it had been written yesterday, the NT camp could then legitimately complain that ZD should have used Windows 2000, which is a better web server and is theoretically available today (check your newsgroups.)

    Every time one of these gets posted, people scream and moan about the box not being configured correctly. "If only we'd have been at the helm, we'd have won that race," they say, complaining about how ZD didn't do hours and hours of tuning work to get the Linux box just right. You know what? They didn't put hours and hours into tuning the NT box, either, and the result still stands.

  • hemos didn't say that ZDNet was running it today; that stupid Anonymous Coward said they were running it today. hemos was not wrong in this case, he was just kinda lazy not to edit the post or add a disclaimer: "actually, it's May 6th..." But in any case, May 6th, 1999 was a long time ago. That probably explains why they were running Linux 2.0.35 and not something reasonably current (i.e. 2.2.x).
  • First, the story is from May, so it doesn't even count for Xmas in July...
    Second, what is Stronghold and why didn't they use Apache? Was Stronghold a release name for an old Apache version? Am I missing something? They used IIS, which is standard for NT; isn't Apache the standard for Linux servers?
    I'm sure had they used MS FrontPage Personal Web Server for their NT benchmarks, the scores would have been more comparable.
    Ah well, standard benchcraft from the king of benchcrafting.

    Merry Xmas!
    NH
  • If you've ever used BeOS you will know that it's more of a single-user OS in the same class as Windoze, but with a better architecture. I mean, there's no login and everyone is root by default. The only good thing is the POSIX-compliant shell and ease of use. Mac OS X is based on BSD (and basically the same as any BSD). Note that this article is dated and not a serious benchmark effort in any case.
  • With regard to said sermon, a correction has been issued. Instead of 2000 years, the author meant, uh, 2121. Note also that said sermon was a joke, and that this is Christmas Day, and that some moderators have no sense of humor.

    I am the plausible religious zealot troll. No sermon or good wishes, express or implied, were meant.

    (C) Plausible Religious Zealot Troll, 1999
    All Rights Reserved
  • If comparing CGI scripts with ISAPI modules is acceptable, then I can easily "prove" that anything running in Linux is much faster than the equivalent program in Windows 2000(TM). Just compare an interpreted GW-Basic program in W2K with the same algorithm compiled in C on Linux.

    These benchmarks are just thinly disguised press releases, not responsible journalism.

    The question you propose, "what can Linux do to fare better in third-party comparisons?", is asked by thousands of users worldwide, not with marketing in mind, but with the objective of making our own systems run better and faster. Free code allows us to improve our systems, and many of us do so, or try to.
  • Some caches don't like expires=now, so it is common to just put in a date from the ancient past.
    The story is out of date, but not quite that much out of date ;-)
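    Purely as illustration (not from the article), here is a minimal CGI-style sketch in Python showing the trick: an Expires header pointed at a date safely in the past so caches treat the response as already stale. The date value is just a conventional placeholder:

```python
#!/usr/bin/env python3
# Minimal CGI-style response that marks itself as already expired.
# The far-past Expires date is the trick described above: rather than
# "Expires: now", an ancient date is sent so every cache sees it as stale.
import sys

headers = [
    "Content-Type: text/html",
    # Any date safely in the past works; this one is just conventional.
    "Expires: Thu, 01 Jan 1970 00:00:00 GMT",
    "Cache-Control: no-cache",
]

body = "<html><body>Fresh content, never cached.</body></html>"

sys.stdout.write("\r\n".join(headers) + "\r\n\r\n" + body)
```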
  • I don't think so. I just did an ftp install of both FreeBSD and OpenBSD recently (loading up a gateway/ftp server at home) and they were both MUCH shorter downloads than something like RH 6.1, which would have prolly taken longer than I am willing to wait. Personally I settled on OpenBSD as it was smaller, and who can ever argue with a very secure OS like that? The biggest thing that bugs me about that article is that they didn't even recognize the BSD family, any of which would have made a strong showing.
  • IIRC, they combined some features of NetBSD and FreeBSD to come up with what they used in Mac OS X.
  • ancient kernels [zdnet.com] available for download. 2.2.0 .... :-P
    ---
  • by ebrandsberg ( 75344 ) on Saturday December 25, 1999 @11:28AM (#1445509)
    Recently, I set up a Linux router with 8 10Mb/s feeds (full duplex) inbound, with one 100Mb/s feed going out. What happened is that at about 20K interrupts a second (about 70Mb/s), the system, which had been running fine (about 97% idle), started sucking up CPU time. By 30K interrupts it had saturated the CPU. In this setup I made use of the vlan code (it was attached to a VLAN switch) and used an Alteon AceNIC, which does interrupt coalescing. The AceNIC under the same loads ran with about 1600 interrupts a second, and continued to run with about 97-98% idle time at the same traffic levels that had pegged the system before. I'm wondering if special drivers that do interrupt coalescing are making the difference on the NT boxes. Assuming that each WebBench transfer generates 20 interrupts (or more), the 1K connections a second and the CPU load are really understandable. (Rough arithmetic is sketched at the end of this comment.)

    On another note, why not have different groups do benchmarking with a fixed dollar amount that they can use to purchase equipment, as well as a fixed dollar amount per hour that they have to spend configuring the servers? This would be a MUCH more realistic benchmark scenario, as the cost of equipment and time are real factors in the real world.

    Erik
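    To make the parent's interrupt numbers concrete, here is a back-of-the-envelope calculation in Python. The per-transfer interrupt count and connection rate are the assumptions stated in the post, not measured values:

```python
# Rough arithmetic behind the interrupt-coalescing observation above.
# Assumptions (taken from the parent post, not measured here):
#   - each WebBench transfer costs roughly 20 interrupts without coalescing
#   - the server handles about 1,000 connections per second
interrupts_per_transfer = 20
connections_per_second = 1_000

naive_interrupt_rate = interrupts_per_transfer * connections_per_second
print(f"Without coalescing: ~{naive_interrupt_rate:,} interrupts/sec")

# With a coalescing NIC like the AceNIC, the post saw ~1,600 interrupts/sec
# at similar traffic levels.
coalesced_rate = 1_600
print(f"With coalescing:    ~{coalesced_rate:,} interrupts/sec "
      f"({naive_interrupt_rate / coalesced_rate:.1f}x fewer)")
```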
  • The fact that they came up with a BS reason for using an old kernel, even when that was written back in May, is what people are really complaining about. I haven't heard any reports of any of the 2.2.* series' TCP/IP stack "breaking" connections with Win95 boxes. They should have used the latest RELEASED kernel, or just gotten whatever updates were available from Caldera. Seeing as Windows 2000 hasn't been RELEASED yet (I don't care to use a pirated beta version from a newsgroup), there is no good reason for them to have used it. I do agree that Linux might be a little harder to configure, but I prefer having one or two text files to tweak my web server, as opposed to tens of tiny menus and trying to remember where each option was.
  • Before the Common Era (IIRC). It's been a while since I gave two shits about being politically correct.
  • Well, to be fair, it did come from PC Magazine, so that could certainly justify leaving MacOS out. As for the rest, though, screw 'em.

    Good... bad... I'm the one with the gun.
  • I think this just about sums it up [jrray.org].
    --
    Why Ah Must Scribble GNU
  • They could have tested Solaris on an Intel box and they could have also tested Apache on NT. Obviously, the multi-processor issue with Apache will be sorted out in the future ... the Linux kernel just became multi-processor friendly.

    But, they fail to mention these issues with their tests. But, what can you expect from a pop-culture magazine like PC Magazine?
  • by tweek ( 18111 )
    Don't get your panties in a wad just yet. Stronghold is just a hardened Apache. It's like adding SSL yourself, but to the umpteenth power. These guys know Apache ;)
  • If Linux or BSD won everyone would be talking about how 'accurate' ZDNet was.


    NT and IIS might not be the best, but there is something to be said about ease of use. I can set up an e-commerce site that supports a high amount of traffic - with a minimum of server tweaking - in a very short time with a minimum of hassle.


    Linux is great... until you need to use it.

  • must mean ASP & other proprietary components...
    ASP is by no means a proprietary technology. Take a look at Apache::ASP [nodeworks.com]. It's a perl module, running under Apache/mod_perl, that lets you write ASPs in perl. ASP is actually a very neat solution if you care to look at it. And if you write ASPs in perl (instead of VB), you get to write your scripts in a very nice language, get a much more feature-full technology than CGI, and it's cross-platform too -- those perl ASPs run on Apache with the above module and on NT/IIS with ActiveState PerlScript [activestate.com].
  • Not bitter, just amused. Did you notice the squibs in the article about Linux's reliability? Unix servers (and this includes Linux) tend to have uptimes measured in years. When was the last time you saw an NT server go for more than a few months without BSOD'ing?
  • Comment removed based on user account deletion
  • And that also explains why it compares IIS 4, instead of IIS 5, which has been released as well.
  • Don't be a dumbass. Can't you see that they didn't even use the 2.2 kernel? In fact, if you read the article, the 2.2 kernel was too good for the test! Its improved stack implementation _BROKE_ the Win95 boxes! Why did they use WinBlows clients then, if they didn't work properly? Use some common sense.
  • In most cases, yes, but wouldn't it be silly to say that he was crucified 88 years Before Christ, since that is what 'BC' stands for, you know? Not just politically correct, but also logical.
    Pay attention to the context next time, not just your flame-trigger keywords.

    Good... bad... I'm the one with the gun.
  • What did they hope to prove by pitting the 2.0 kernel against NT and Solaris rather than the 2.2 kernel? Why don't they pit NT 3.5x against 2.0 and see which comes out on top? I think 2.4 will do some real damage, though. Hmmm, it would be easy to see BeOS go against NT... no competition there as far as workstation OSes go.
  • why don't they ever test it on anything but x86? one of the strengths of linux is that it runs on many different archs, so you can use decent archs and aren't stuck with x86 (as you pretty much are w/ windows). this seems like a huge oversight to me.
  • The best are maxing out multiple 10Mbps Ethernet cards - i.e. you need a T3 line to actually provide the bandwidth you're serving.

    Without addressing the issue of whether *this* benchmark is valid, what if you are providing web services over a local intranet with a gigabit backbone and most clients on switched-100? What if that web server was the web interface for a heavily-used ERP application, or a Product Data Management solution over HTTP-DAV? Wouldn't the web server that was able to perform at the high-end matter in this case?

    You don't need to be a big ISP or an eBay to make use of such bandwidth - if it's local. This scenario describes the company I work for, and we have only ~200 employees.
  • Banner ads are getting out of hand. All these sites have about 15k of sidebar links, 10k or so of banner ads, then usually less than 1k of actual content. Often it's just two paragraphs before you have to click "next". Another cheap trick is to have a top-10 list split into 10 pages + intro + conclusion. All these banner ads and blatant tricks make TV ads seem subtle.
  • by orcrist ( 16312 ) on Saturday December 25, 1999 @12:57PM (#1445537)
    Also, what is the state of threading in apache?

    Apache 2.0 will be a hybrid forking/threading server, thus giving it some of the speed advantages of threading while maintaining the advantage of multiple processes that none of these benchmarks ever mentions: stability.

    If for some reason one of the threads in a multi-threading server crashes, it can bring the whole server down with it. If one of Apache's child servers crashes... Apache forks a new one to replace it. The new design will be a compromise, with several preforked children, each of which is multi-threading. Then let's see what the benchmarks look like :-) (A toy sketch of that layout follows below.)

    Chris
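    As a rough illustration of the hybrid design Chris describes (a toy Python sketch, not Apache's actual code): a parent pre-forks a few child processes, and each child serves requests from its own small thread pool, so a crash takes out only one child's threads and the parent can fork a replacement.

```python
# Toy sketch of a prefork + threading hybrid server, in the spirit of the
# Apache 2.0 design described above (illustrative only, not Apache code).
import os
import socket
import threading

NUM_CHILDREN = 4        # preforked worker processes
THREADS_PER_CHILD = 8   # threads inside each worker

def handle(conn):
    try:
        conn.recv(1024)  # read (and ignore) the request
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello\r\n")
    finally:
        conn.close()

def child_loop(listener):
    # Each child runs a small pool of threads, all accepting on the
    # listening socket inherited from the parent.
    def worker():
        while True:
            conn, _addr = listener.accept()
            handle(conn)
    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(THREADS_PER_CHILD)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 8080))
    listener.listen(128)

    for _ in range(NUM_CHILDREN):
        if os.fork() == 0:          # child: inherits the listening socket
            child_loop(listener)
            os._exit(0)

    # Parent: if a child dies, fork a replacement (the stability argument above).
    while True:
        pid, _status = os.waitpid(-1, 0)
        if os.fork() == 0:
            child_loop(listener)
            os._exit(0)
```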
  • Notice that he/she implied that this May 6th article was posted today, and actually stated that there was _no_ tuning at all (at least slightly contrary to the article). This was from back in the Mindcraft days (or a bit prior), which in Internet time is nearly forever. Yeah, the article itself has a bunch of holes in it, but being from almost 8 months ago, isn't it just water over the dam?
  • Here's the link to info about Apache 2.0 which I actually wanted to include in my post above:

    http://www.apacheweek.com/features/apache20 [apacheweek.com]

    Chris
  • As of this past summer/early fall, support for Microsoft products on the Alpha was dropped. It's no longer "pretty much" x86, it's the only game in town. And linux _rocks_ on the Alpha :-)
  • junkbuster [freshmeat.net]
    ..
    "We must move forward, not backward, upward not forward, and always twirling, twirling, twirling towards freedom."
  • Ironically, since there is no year zero, Christ was born either *before* or *after* himself. My proposal for calendar reform is this: 1 before Christ is the year before he was born. He was born in year 0 DC (zero *during* Christ), and crucified in year 33 DC. The first year after his crucifixion would be 0 AC (zero after Christ). C-language-style array numbering works well for us Unix hackers. According to the canonical dating used by Dionysius Exiguus, we would now be in year 1965 AC, which would leave me enough time to retire without having to worry about Y2K. Merry Armageddon and a Happy Looting to all.
  • Just because something will be on computers shipping in 2 months does *not* mean it is released. Released means that it is *available* to the public.

    FYI, 2/17/2000 is the day that it will be released to the public. Read [microsoft.com] for yourself if you don't believe me.
    ..
    "We must move forward, not backward, upward not forward, and always twirling, twirling, twirling towards freedom."

  • Comment removed based on user account deletion
  • AD stands for anno domini, "in the year of our lord."
  • Actually big companies do care about price. The way an OS finds its way into IT shops is when they need a quick solution for some problem, often behind the PHB's back. If it works it sticks.
  • Comment removed based on user account deletion
  • The bad thing about these tests is that we know NT is faster. Does that mean I want to use it in mission-critical operations? NO.

    M$ can tweak NT/W2K so it serves pages faster, but it is still unreliable for most applications...

    People like to state that NT can be very stable. How many apps do you have running? How much downtime per year do you have? How much time do you spend fixing problems?

    My biggest gripe was that you couldn't telnet into NT and restart processes. W2K has that. NT can try to become more like a *nix platform, but it still has a long way to go...

    I'm going to stick with Solaris and Apache. I'm understaffed and overworked.
    NT is not the answer.

    Another sad thing is that most e-commerce packages are for NT.

  • by 703 ( 65046 )
    Apparently

    www.zdnet.com is running Netscape-Enterprise/3.6 SP3 on Solaris.

    Why did they choose that platform if the IIS-on-NT solution earns the Editors' Choice?

    Or do the webmasters of zdnet.com disagree with the editors?


  • You can't telnet into NT and restart a process? This is amazing news to me as I telnet into my NT box on a daily basis. Thanks for letting me know that I have a fake telnet service running, I'll switch to Linux immediately ;)
  • Well, at the homepage of Apple Computer Inc., they claim that AppleShare IP 6.3 provides a good web server. Are there any comparison results that include AppleShare IP?
  • Yeah, except for the part about doing work. NT may crash, but it does actually have useful programs (Photoshop, MS Office, etc.) as opposed to BeOS, which has...hmm...Gobe Productive?

    Jeremy
  • Heh. I've so far seen the busty chick with an obvious push-up bra, then several months ago I saw the blonde who looks like the poster child for being an airhead. Collect them all!
  • There is absolutely no historical support for such a statement. I'm sure all the people who read this and assume that "it's posted on slashdot, it must be correct" are thankful for your complete misinformation.

    I can find myriad historians who would date Jesus' birth to approximately 5-6 BC. I challenge you to find any substantial contingent of historians with proper credentials (ie. not "Harry's House of History" degrees) to state that Jesus was born prior even to 10 BC.

    Oh, and supposing you can find such information, how do you explain the fooling of billions of people into thinking that he was born ~115 years before they think he was?

    Jeremy
  • "Anonymity on Slashdot has become a haven for the ignorant and childish. I say remove it." I agree 100%
  • NT writes to the disk a lot!
    Error...Error...Error......
  • ZDNet used multiprocessor servers. All religious handwaving aside, why did NT fare better by spinning threads than Apache could do by spinning processes?

    Probably because people don't bother tuning the Linux kernel for good SMP performance on web tasks.

    Why would that be? Because it doesn't make sense to pay the premium for four processors in a box when for less money you can get four processors in four boxes and quadruple your disk and network bandwidth. People use SMP on Windows mostly because of software licensing costs, because of per-box colocation costs, or because it sounds good. In my experience, SMP on Linux is mostly used for getting really high performance on numerical and scientific tasks.

  • ZDNet suggested that in-process programming worked better for all the hairy e-commerce they decided to test. [...] Besides PHP and Mod_Perl, where can Linux go to improve?

    Apache has a perfectly good in-process module API; mod_perl is written in it. You can write that kind of code yourself if you want to write C/C++ extensions to the Apache server.

    I consider it pretty foolish, however, to write web server extensions for an e-commerce site in a language without runtime error checking or fault isolation. Application-specific modules simply can't receive the testing and debugging that Apache itself has received. You are lucky if the server crashes due to a bug; more likely, you are going to ship an unpredictable quantity of widgets to an unpredictable address as a stray pointer leads to overwriting some of your order data.

    The performance difference between native code and Perl, Tcl, Python, Java, or PHP3 simply doesn't matter in most web applications: applications are generally bandwidth- or database-bound. Given that simple fact, you should use the safest and easiest language to program in. For single-programmer projects, that's likely to be a scripting language. For large, component-based multi-programmer projects, that's likely to be Java, OO Pascal, Eiffel, or something of that kind.

  • Well, I'll go ahead and make the claim that since ZDNET and MSNBC have a "Content Exchange Alliance [ziffdavis.com] ," they cannot possibly be objective in their evaluation.

    Oh, and merry christmas.

    - tokengeekgrrl

  • I think someone touched on this, but this is IMPORTANT since even ZDNet said this:
    Apache and Linux's process-based system scales better by adding boxes than by adding processors/resources in a single box.
    So here's the deal: figure out how much it costs to buy the OS, software, and hardware, and to set up the system. Determine its optimal performance (which may NOT be peak performance). If Linux still doesn't have SMP at a cost-effective level, spec out single-CPU boxes and save the cash for entire other systems. (Don't flame me for saying it doesn't work; I'm saying it might be cost-effective not to use it until 2.4.) If Solaris' cost-performance is better on Sun hardware (duh), then use it. (A rough sketch of this arithmetic follows below.)
    Then, take those numbers and ask ZDNET to append them to their article.
    We know Linux has an automatic $1000 price advantage over NT/Solaris, which is about a third the cost of adding another server. I'm not sure about Stronghold's cost vs. IIS/NS server/etc., but I'm guessing it's not as expensive as IIS. And with the exception of the "custom" APIs (NSAPI, ISAPI), Linux performed as well as the other servers, even with the use of a twitchy old version of Caldera.
    Personally, I'd want to see the addition of a mod_perl'd server to represent the Linux equivalent of NSAPI and ISAPI, but with the bonus that perl stuff is PORTABLE, much more so than NS or IIS-only scripts.
    If we suggest this right to ZDNET, when they review the new W2k's web server and put it up against a decent server linux (w/kernel 2.4 we can hope) they may add a price/performance index.
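    A rough sketch of the price/performance arithmetic being suggested, in Python. Every dollar figure and benchmark score below is a made-up placeholder, not data from the article:

```python
# Hypothetical price/performance index of the kind suggested above.
# All numbers below are illustrative placeholders, not ZDNet results.
platforms = {
    "Linux + Apache": {"software": 0,    "hardware": 3000, "webbench": 2500},
    "NT + IIS":       {"software": 1000, "hardware": 3000, "webbench": 3500},
    "Solaris + SWS":  {"software": 1500, "hardware": 9000, "webbench": 3800},
}

for name, p in platforms.items():
    total_cost = p["software"] + p["hardware"]
    index = p["webbench"] / total_cost   # benchmark score per dollar spent
    print(f"{name:16s} ${total_cost:>6,}  score {p['webbench']:>5}  "
          f"index {index:.2f} per $")
```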
  • They only show Apache running on Linux, but seem to overlook the fact that it will run on other UNIX (like Solaris 2.7 on their Sun E-250 that they used in the test)...

    Is the Sun Web Server that much better tuned for Solaris? In the static page view comparison it blew just about everything else away, but they conveniently decided not to include Apache.


  • Shit, yes, I do huge file transfers and video conferencing over HTTP all the time!

    F'chrissakes, use a protocol that actually suits what you're trying to do. FTP, NFS or SMB for file transfers; one of the streaming protocols for video conferencing. (Actually, come to think of it, video conferencing over HTTP isn't really feasible; what are you doing, breaking the video stream up into frames, converting them to JPEGs and pushing them to the clients?!)

  • I believe that's spelled "woozy" ;)
  • I'm most impressed by the "multi-hundred megabyte CAD drawings". My biggest CAD drawing is of our 200,000 water services, 20,000 water mains, etc and runs only 25MB. (Microstation design file).

    Since this is an Intranet, have you considered just having the Web server provide only a "file:///" URL and letting a file server handle this massive load? They're much better tuned for it.

    The point is very well taken, however - bandwidth is not the limitation in a LAN. Still, the problem doesn't come up in my workplace - and we're talking 4000 seats. Our biggest Intranet server also maxes at a few tens of hits per second.

    Perhaps that's an indication of our Intranet usage being backward or something, but I don't think we're all that far behind.

    A larger factor is your computing philosophy - is your Intranet a highly centralized "mainframe" style with one provider of information to many-many-many? With so many diverse departments in a civic government, ours is more spread out among many servers, even though the IT department runs them all out of one room. If you put hundreds of functions from hundreds of information providers onto one web server, then its security arrangements and the tuning of the server become very complex.

    Lastly, if you have very heavy web usage because your corporation is practically run from a couple of major applications - say sales management or the accounting system - it may be better to consider that this is not best done with web apps but with a "traditional" client/server app installed on every machine.

    To sum up, with the options to serve lots of (or big) files with a file server, to split multiple services into many servers on the KISS principle, or admitting that not everything is best done as Web apps, I again return to my point: that not that much of the total Web server market cares about getting over 1000 pageviews/sec - even on Intranets.

  • Zag. You should know that it's not nice to tweak people.

    Especially since you're inaccurate.

    You CANNOT telnet into NT. You CAN find a shareware telnet service, but you cannot telnet into an OOB NT box. Period.

    I know you're in love with your Win2K box, but sheesh. Please note that he DID mention Win2K being telnetable.

    *SPANKSPANK*

    Bad boy! Now you have to listen to Reba West again!


    Chas - The one, the only.
    THANK GOD!!!

  • I know about H.323, but AFAIK the H.323 standard talks mainly about transmission over LANs and specifies nothing about encapsulating it with HTTP, so your last statement doesn't really mean much, does it? It's like saying, "A firehose is designed to carry water, so there's nothing wrong with encasing it in a garden hose." I mean, that's gotta kill your bandwidth.

    My point (which applies to HTTP-DAV as well) is that most people would be better off using a server and protocol that are designed for the way they're going to use it, rather than trying to stick a square peg in a round hole. Just because your users are too braindead to use ftp/ncftp/WsFTP/CuteFTP/Fetch for file transfers, don't try and convince others that your solution is the best and web servers should be made to handle it.
  • The most important factor in this is the hardware... RedHat supports some things better than NT, simple fact.

    On my home computer, when I tried out RedHat 6.1, already having the partitions set, it was 20 minutes from box to boot with the Workstation package. 20 minutes from the time I opened the box till I was able to boot into X. No config questions; it configured X automatically, detected my mouse, keyboard, etc... 20 minutes.

    On the other hand, NT doesn't like my video card; after setup (which easily took 45 minutes, with similar options installed), I had to hunt around on the net to find NT drivers for it.

    Anyway, as I expressed in my original post, which you obviously didn't pay much attention to, I purchase hardware with Linux in mind. Everybody's experience is different, this is mine.

  • Certainly, you're free to use the solution that suits your users, but saying things like:

    If we have to sacrifice some bandwidth to make this happen, so what?

    isn't going to make you many friends among most IT departments; not everyone runs Gigabit Ethernet (hell, the company I used to work at was using plain old 10Mb Ethernet for more than 350 clients (SMB, IPX, Ethertalk, TCP/IP) - without subnets or any other way of limiting broadcast traffic).
  • For some reason I thought it had always been spelled admission.
  • You're insulting him for being correct? Doesn't that make you an "utter complete moron"? eBay's backend uses Solaris with an Oracle database; their frontend servers use NT. The frontend does a tenth of the work the backend does.
  • You have to remember, though, that M$'s site runs on a cluster of 96 Compaq Proliants running NT. Hotmail uses about as many as that.
  • I hope someone moderates this post up.
  • Hi Ed.

    ZDNet chose to tune ALL the servers to have 68Mb of web source material and at least 68Mb of memory disk cache.
    Why did this give NT an unfair advantage? Why does Linux (or particularly the Caldera distro) solution not deal with RAM-rich servers as well as NT?


    Well, I don't think you have it quite right. It is not that they are exposing a Linux weakness in handling RAM-rich servers; it is that they are hiding a major performance bottleneck of NT -- its file system.

    On the other hand, you get something for your performance hit -- filesystem journaling. It will be interesting to see what happens when ext3 and Reiser FS become more common.

    In any case, I run a small web site that is 160MB in size, not counting databases. This is too large to be cached in the setup described, although not unreasonable to be entirely cached. However, if I was doing serious corporate intranet or a major ecommerce site, I would expect it to be much, much larger.

    I am not a conspiracy theorist, but does it not seem a tad unrealistic to devise a web benchmark test which totally discounts disk access?

    All religious handwaving aside, why did NT fare better by spinning threads than Apache could do by spinning processes? What is the big bottleneck in managing a process, that managing a thread doesn't have? They were using a brand-new MP kernel straight from Linus. Will the Linux kernel mature to deal with SMP situations and massive numbers of similar threads or processes better?

    Well, on Unix, forking is cheap -- very cheap. On Windows, launching a new application instance is very expensive. Therefore multithreading is a huge performance win -- multiprocessor or not -- on Windows, but a relatively smaller one on Unix. If you think about it, it hardly seems worth multithreading unless the threads are updating some common memory. If you write a thread with no critical sections, it may as well be a process on Unix, but on Windows it benefits from getting access to memory pages that would otherwise have to be laboriously set up (they are simply copied in a Unix fork). (A rough fork-cost timing sketch follows at the end of this comment.)

    I am guessing that multithreading Apache speed improvements in Unix will scarcely be measurable unless nearly all the data being served out is cached in memory.

    ZDNet suggested that in-process programming worked better for all the hairy e-commerce they decided to test.
    ... they have a point.


    Well, that is a matter of opinion. Ebay may use IIS, but they do everything through CGIs.
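    On the fork-vs-thread point above, here is a quick Unix-only Python timing sketch contrasting fork() with launching a fresh process image (standing in for the Windows-style "new application instance"); the absolute numbers are machine-dependent and purely illustrative:

```python
# Quick, Unix-only demonstration that fork() is cheap compared with
# launching a brand-new process image. Numbers vary by machine.
import os
import subprocess
import sys
import time

N = 200

start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:          # child: copy-on-write clone of the parent
        os._exit(0)
    os.waitpid(pid, 0)
fork_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    # Spawning a fresh interpreter approximates the cost of starting a
    # separate application instance rather than forking an existing one.
    subprocess.run([sys.executable, "-c", "pass"], check=True)
spawn_time = time.perf_counter() - start

print(f"fork + exit  x{N}: {fork_time:.3f}s")
print(f"spawn + exit x{N}: {spawn_time:.3f}s")
```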
