Apache Software

Apache vs IIS in Performance? 531

Greg Merideth asks: "In the midst of my proposal to sweep all of our company's web servers together, I was handed an advertisement from DELL by our CIO, with a big circle around two machines that DELL sells and an interesting note. They are identical machines, literally identical machines, but the claim is that the Linux 6.2 Apache-based machine only supports 20k-100k hits per day while the Windows 2000 IIS-based machine (remember, same config) supports 500k-1M hits per day. Now if DELL is claiming that NT, with the same config, will out-perform Apache in serving web traffic, how am I supposed to convince my company that Apache and open source is a great way to go? They don't care about open source or Linux, just the performance that they will get from the machines. Where can I get -credible- data to prove that Apache can outperform IIS?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    It's an odd fact that many of the most irrational and vociferous OS zealots are not themselves programmers (or at least not experienced ones). Linux, more than any other OS, suffers from a surfeit of testosterone-poisoned young men who know little but speak much, and the whole community suffers from it. They want things free simply because they don't want to pay. Saying that charging for software is ethically wrong is only a dodge; they just don't want to part with any dough. They are, in a word, punks. And Linux has far too many of them...
  • by Anonymous Coward
    From what I understand, Apache wasn't written to be SMP because the coders didn't believe SMP was stable enough. The idea was that it was better to cluster servers together for added speed (and uptime) than to add more processors and reduce stability (that, and a second processor under NT does *NOT* double the speed of NT, it merely adds 50% to it - and the more CPUs you add, the more that % drops). Now - take the cost of a single-proc box vs. a multiproc box... Here's a kicker - Win2K does *NOT* have a true multithreaded IP stack either. For every NIC you place in the box, you need a matching number of CPUs. In the end, Apache on a couple of medium-priced uniproc machines not only will outperform NT or Win2K on a single expensive multiproc box but will also be more stable and have better uptime. So what is more important? The OS or the results?
  • by Anonymous Coward
    So, to translate to a language the boss will understand:

    You've got 'Open Source' development tasks to perform, and if the boss orders an NT box, you won't have nearly as much code to upload to Freshmeat each week. Heaven help us, the server might be easy enough to administer that an IT flunky can keep it going. That wouldn't be good for Open Source because then your boss wouldn't pay you to develop non-business-related code to give away.

    Explain this and watch your boss jump in panic at the thought he might get flamed by Richard Stallman.
  • by Anonymous Coward

    Your CIO is in charge of what happens with your company's technology. What he wants will eventually happen, whether you like it or not.

    The possibilities I see are:

    (1) He's interested in making a thought-out and justifiable decision. Ads like this are evidence against your proposed solution. It's up to you to get evidence for it - evidence your CIO could show to his boss to justify having done it your way.

    (2) He's one of those gullible management types who believes anything he reads - the Microsoft solution is presupposed, regardless of its demonstrable merits. The ad is just a way of getting you to shut up. In this case, nothing you do or say will make a difference - I'd look for another job, and it's the current company's loss when you find one.

  • Erm, no, he was talking about using system RAM to cache the hard drive, not L2 to cache the system RAM.
  • Dunno what world YOU live in, but if you're a unix guy and you get fired for being a unix guy, then:

    (a) the place that fired you almost certainly wasn't a good place to work in the first place,

    and

    (b) you can get a much better job in no time flat.

    Mark my words, in today's labor market someone who gets fired for being a unix guy is much better off for it.

    - Duff, a Unix guy.
  • This is yet another example of why benchmarks are shite.

    To illustrate: I have a Pentium 200 w/32 megs of RAM running Slackware Linux 7.1 and Apache 1.3.12 compiled from scratch, and connected to the net with a T-1.

    Using the provided tool "ab" (Apache Bench), I have concluded (don't flame me, I realize that benchmarks are useless) that my piddly little P-200 could sustain traffic of 25 hits per second. If that happened for an entire 24-hour period, I'd be looking at 2.1 million hits per day... which is complete bullshiz.

    My point? Shame on Dell for posting results from two wildly different studies on the traffic any given hardware can support. Shame on Dell for trying to pose NT as the "high-traffic" solution and Red Hat as the low-traffic solution on the same hardware. If they start complaining about not selling any Linux servers, then ads like this will be to blame (I've seen the advertisement, and it's completely nuts).
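
    In case anyone wants to reproduce this, the run was essentially the following (ab's -n flag is the total number of requests and -c the concurrency; the URL and numbers here are just my setup, nothing official):

    ab -n 10000 -c 25 http://localhost/index.html

    Take the requests-per-second figure ab reports and multiply by the 86,400 seconds in a day: a sustained 25/sec works out to the 2.1 million hits/day quoted above.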

  • CGI to CGI is a fair comparison... however, of COURSE Apache with CGI loses "BIGTIME" to ASP. Apples to oranges, man. ASP vs. PHP3 would be a fairer comparison.

  • Yup... the bandwidth isn't really the issue here - it's how many hits per second the box can handle. Assuming when they say "per day" they're talking about a 24-hour period, that's 200,000 / (24 * 60 * 60) = 2.31 hits per second... minuscule. I'm not sure how big people are going with Apache these days, but I'm assuming 50-100 hits per second is doable, even with mod_perl and stuff like that.

    Even if they were talking about an 8-hour business day, that still only brings the number up to 7 hits per second.
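
    If you want to play with the arithmetic yourself, here's a trivial script (pure division, nothing vendor-specific) covering the ranges from the ad:

    #!/usr/bin/perl
    # average hits/second implied by a daily hit total, over 24h and over an 8h day
    for my $hits (20_000, 100_000, 500_000, 1_000_000) {
        printf "%9d hits/day = %6.2f/sec (24h), %6.2f/sec (8h)\n",
            $hits, $hits / 86_400, $hits / 28_800;
    }

    Even the top of the ad's Windows 2000 range averages out to under 12 hits per second over a full day.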
  • at http://www.zdnet.com/sp/stories/issue/0,4537,2196115,00.html

    complete with charts showing NT at the bottom.
  • I test NIC cards, and no matter what I use, gigabit or fast Ethernet, I can't get a total throughput of more than 370 Mbps.

    I tried a similar test with kernel 2.4.0-test8 and nearly got fired for shouting.

    I was able to max out my switch, and there is no upper limit in sight with the Cisco hardware I currently have available to me.

    How did you manage to exceed the PCI bus speed? The fastest that I've ever seen was a GSN NIC running on an RS/6000 with a 64-bit, 50 MHz PCI bus, transmitting at about 300 MB/sec, and that was 100% limited by the PCI bus.

  • If performance is an issue, you should be looking at the "Tux" kernel-accelerated webserver. It absolutely creams IIS for delivery of static content.
    --
  • No, but there have been several other exploits discussed on Bugtraq this past week with OpenBSD that definitely ARE root exploits (xlock, and format string errors in pw_error).

    I just posted the above because I thought it was humorous. OpenBSD is not secure just because Theo tells you it is, or because there is no open bug in an OpenBSD bug list.

    And no, a DoS is not a root hack, but in this case it was easily preventable. The above occurred because it slipped through OpenBSD's code review process, proving again that we are all human, that we all make mistakes, and that OpenBSD's open arrogance concerning their security doesn't amount to much of anything.

    (People in glass houses shouldn't throw stones.)

    Besides... remember when it was discovered how vulnerable NT was to teardrop attacks? That was also a DoS. It was due to poor coding on the part of the developers, and many of us in the security community held Microsoft very accountable for that stupidity.

    Why should OpenBSD get any better treatment, especially when they claim to be so secure, and their code review so intensive?

  • Web Server Market Share
    IIS 20%
    Apache 60%

    Hacked Web Server Market Share
    IIS 60%
    Apache 30%

    Which implies that systems running IIS are six times more likely to be hacked: (60/20) / (30/60) = 3 / 0.5 = 6

    (Source - netcraft + attrition.org)
  • ...but this seems like a typical response from a Linux enthusiast.
    "Oh oh, we don't have this feature yet, but it WILL be in the next version....."

    This is a characteristic of *all* software developers, be they RMS or Microsoft. Open source or closed-source. The difference with open-source is you can poke around in CVS and the mailing lists to find out how likely the marvellous new feature actually is, and how far away it is from appearing in a release that you'd actually consider using.

  • > Where can I get -credible- data to prove that Apache can outperform IIS?

    You can't. Apache's developers have never claimed apache is the fastest. One of them even replied to a tuning question with "...if by ``tuning'' you mean replacing apache with something that's actually fast."

    Now Zeus [zeustech.com] is another matter entirely when it comes to speed. You can get a free demo of it. And its admin interface is nice and purty too :)
  • Raw performance is only one variable in the equation. Sure, IIS may perform slightly faster under certain circumstances, but at what cost? The instability and babysitting that come along with NT should also be a factor in this equation.

    I couldn't care less if Apache has slightly less raw speed; at least I don't have to hold its hand and be on call 24/7.

    Basically it's freedom over servitude. Sure the work might get done a little quicker when the workers are being whipped, but I'd rather go a little slower and not be whipped.

    Dom

    "osm is an artist; I do not question his ways" --troll
  • Look at stability and total cost of ownership as well. In my experience, NT/IIS go down far more often. Also, if they cost you several hundred dollars on top of the price of the hardware, you could run that much better hardware with Linux/Apache. Kernel version, driver support for the chosen hardware, filesystem choice, etc. can all play a role in how many hits a machine can take. I'll tell you my favorite test: pull the plug in the middle of heavy transactions on both machines, plug them back in, and see which comes up quickest with the least loss of data. Gotta love ReiserFS. :) You also have several alternative webserver choices for Linux if you want speed more than the other features of Apache. *shrugs* I'd never use NT for anything mission-critical. Too much experience with those issues. ;>
  • Adding more memory does not necessarily increase performance. This is only the case if the tasks are memory bound and are using swap heavily.

    In case anyone out there read this and nodded and thought, "yup, ok, that sounds good," allow me to point out that this is 100% bullshit.

    RAM is also used for buffers and cache, and both of those will make a large difference in performance. Almost all applications are memory-bound, and any application that is swapping heavily is on an overloaded machine and has nothing to do with real-world web service anyway.

    In every case where you are actually using a server in a production system with more than a negligible load, adding RAM will increase performance - up to the point where your entire site is cached and all non-logging disk I/O stops.

    You will see, obviously, a much greater increase in performance by adding RAM to a machine that is swapping.
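
    If you want to see this on a running Linux box, plain old free shows it:

    free -m

    The "used" column counts buffers and cache, which is why a healthy server always looks nearly full; the "-/+ buffers/cache" line is the number that matters for how much memory applications actually have left to work with.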

    Obtopic: No one can answer your question without knowing what kind of machine it is and what kind of data you're serving. At best, we can speculate.

    See ya!
    Blue
  • Adding new Linux servers may be cheaper than adding new NT servers, especially after you get into the NT licensing fees.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Remember the day the satellite blew its zap and all those pagers stopped working a few years back?

    Well, our company hosts www.panamsat.com, the corporate site for the satellite company.

    On that day the site was running, along with 50 other shared hosting sites, on a Pentium 166 MHz w/64 MB RAM running Slackware and a 1.2.x Linux kernel.

    Prior to that day the site would take ~1000 hits a day, but by the end of the day just that site alone had taken 1.2 million hits.

    From that I know that a properly configured Linux server could take at least a few million hits a day.

    Forget the FUD, if you want scalable stability don't use Windows.
  • Congratulations, you just figured out how to spend 2.5 times as much by replacing an experienced
  • Damnit, should have previewed... :-(

    Congratulations, you just figured out how to spend 2.5 times as much by replacing an experienced < $100,000 salaried employee with a $250,000+ 'paper expert'. And make no mistake about it, you need someone full time to administer servers, especially ones that require as much tweaking and twiddling and handholding to keep running properly and reliably as NT/IIS does.

    Vendor-sponsored, multiple-guess "certification" programs like the MCSE are pretty nearly worthless for determining quality. While there may be a lot of MCSEs out there, how many of them really have good applied knowledge, experience, and problem-solving skills?

    Microsoft is selling the PHBs a false economy with the way they try to make it out that any idiot can administer NT/W2K. And hiring consultants for this sort of thing is just plain foolish dollar-wise anyway.

  • I convinced my PHBs to switch to Apache very simply. I set up IIS and let it run on one of our installations. It ran fine for a few months, when nobody was using it much. Then the system finally went full-time, and IIS started crashing every 3 hours. It couldn't handle the load (which was actually pretty minor). So I quickly configured Apache for NT, and set it up (after removing IIS) with the exact same dynamic content (CGIs mostly). It works without fail, and uses less processor time to do the same thing.

    Now I'm using that as leverage to make them see why NT sucks so bad for this system. Maybe I can push them to a Linux or BSD type solution.

    Sure, IIS may be a bit faster, but there's a huge difference between fast and stable.

    ---
  • I know an admin for Expedia.com. He claims that their NT web servers are rebooted more than once per day. I also heard a talk from Steve Ballmer where he said that Microsoft.com was run on ~70 NT web servers by ~70 admins! He jokingly said that each admin must spend their entire day sitting in front of "their" web server, repeatedly asking, "Did the server crash?! Did the server crash?!" If Microsoft can't keep NT up and running on their own Microsoft.com and Expedia.com web sites, how do they expect anyone else to do so?

    I also know a web developer from go2net.com. They're a Linux shop, but they acquired SiliconInvestor.com, which used NT. They routinely had to reboot their inherited NT web servers every morning.


  • Oooops.

    HEH, I should have checked my arithmetic.
  • 100k hits/day:

    (gdb) print 100000.0/60/24/24
    $1 = 2.8935185185185177

    Urm

    That's 3 HITS A SECOND.

    Time to fire your CIO.

  • Fuzzy math here. There are 86,400 seconds in a day. At 1.15 hits per second, you get 100k hits.

    I think my HP-200LX palmtop can serve that or better....

    I threw together a MySQL/Perl/Apache combo to replace a horrid SQL Server/NT app that sucked. With purely unoptimized SQL and half-assed CGI.pm code, I got about 4 hits per second (which included the DB lookups) versus 1 per sec on NT, with all of it running on a single-CPU AMD K6-350.

    I know, not definitive enough.... Don't forget to try ApacheBench for torturing the boxes. A company called Cyrano makes some good load-testing software too.

  • See http://www.spec.org/osg/web99/results/res2000q3/ for a refutation of the Dell claims.
  • No, no, no. Internet Explorer is not in kernel space. Or at least it wasn't last time I heard, but who knows what MS has done recently. But that's not what everyone else is talking about.

    We are comparing IIS - Internet Information Server - to Apache. They are web servers. IE is a web browser. IE is not the same as IIS.

    IIS runs largely in kernel space for high speed on static pages, but takes a big hit on highly dynamic pages.

    The new TUX web server for Linux is an optional kernel module which is kind of similar, but gives you the best of both worlds - kernel space httpd for very high speed on static pages, and it passes off requests for dynamic content to Apache which runs them very well.


    Torrey Hoffman (Azog)
  • IIS is known to be superior to Apache for *static content* -- but what about dynamically generated content, which is what most companies really care about *anyway*?

    The kernel httpd will help for performance with static content, but it's not out of beta just yet...

    My daytime employer handled 300,000,000 page views in August -- almost all dynamically generated, most on Apache/Linux, the rest on Apache/Solaris.

    Our server farm is around 60-ish servers I think... That includes the two E4500's at the back-end. (one live, one hot-spare)

    -JF
  • Erm, don't mean to pick nits, but 100,000 hits spread over the 86,400 seconds in a day is about 1.16 hits per second . . .

    You should still fire the CIO, though.
  • Comment removed based on user account deletion
  • I have no idea where Dell got its numbers. I maintain a website whose primary server runs on a Celeron 400, 128M RAM, and an IDE hard-drive using NT4/IIS4. It gets roughly 600,000 hits/day, and is only "lightly" loaded. But of course, these are small static web-pages that are mostly cached in RAM. I'm pretty sure any web server could match its performance (it is less than 300kbps).

    Back in the 1800s, Nietzsche proposed the idea of an Übermensch who had the sensitivity and intelligence to rise above the daily petty lives of normal men. This entire "who has the fastest web server" business is one of those petty squabbles. The fact is that each is fastest in its own way. If you want to prove how good your OS/httpd is compared to the other OS/httpd, then you'll certainly have enough statistics to back you up. So will the other side. The only way to win the war is not to fight it.

    DELL was almost certainly quoting a specific test, possibly one that compared dynamic content generation between Apache+CGI vs. IIS+ISAPI (CGI sucks vs. almost any alternative).

  • .. which is very exciting, but not as proven as Apache. People not wanting to be early adopters may want to wait for a while.
  • I'd have to agree here. It's ridiculous how much meaningless raw speed data is thrown about in this industry. For most sites, the ability to handle an astronomical number of hits doesn't really mean anything, since they'll never see that sort of traffic. Other things are much more important.

    First, ask your boss if your company realistically ever expects to get within more than a few percentage points of either supported hit rate. The idea here is that those numbers are meaningless.

    Second, ask about failover support. With the cost of NT/IIS, you could afford another server to provide backup support.

    Third, ask about reliability. The throughput of a computer is exactly 0 when it is rebooting, blue-screening, or otherwise taking a break from reality.

    Finally, if your boss insists that one small narrow view of the world is all that is necessary to make an important decision, then consider posting your resume on one of the many internet job sites. Your prospects look good in the current economy.

  • In the benchmarks done about 6 months ago, IIS did outperform Apache, BUT that wasn't Apache's fault. The fault was with the TCP/IP stack in Linux, and it was determined that Apache on *BSD would outperform IIS because of *BSD's better TCP/IP stack.
  • I like Apache a lot, but it is not built for raw speed -- it is built for manageability. It's a joy to work with. The 100K-hit figure seems a bit low if this is high-end Intel iron (it's only 1.15 hits per second); but even taking it at face value, I'd expect this to go up considerably with the 2.4 kernel (multithreading); Squid can take up some of the slack too if you need to increase static page performance several fold.

    It's probable there's no technical barrier to what you want to do using Linux/Apache. However, what the Dell folks have done is throw a sales barrier in front of you.

    If you are concerned about selling scalability to the PHB, why not AOLserver? It's open source, and AOL uses it to host their services to the tune of 28,000 hits/second, or about a hundred million hits/hour. Granted, this is not on anything resembling the iron you are buying, but if you're expecting to get into the rarefied 1M hit/day range it will be nice to know that you have a place to go without rearchitecting when you hit the wall. The money for a nice Solaris box won't seem like much if you can avoid months of porting software.

    Another thing to point out is that the max static pages benchmark is not a measure of speed, but of capacity. Within the reasonable range of the Apache performance, you won't see any difference in responsiveness. Thus if you are planning for less than 100K hits/day there's no reason to use this benchmark as a differentiator. Instead, focus on issues of manageability and TCO.

  • You miss the point: What is Linux 6.2?
  • Hey, look, the Apache website server status [apache.org] says they're serving 17 hits per second. That's around 1.4 million per day. Looks pretty fast. Looks like they're running Apache. Hmmmm.
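
    (Those numbers come from mod_status, by the way. If you want the same readout on your own box, the httpd.conf stanza is roughly this -- a sketch assuming Apache 1.3 with mod_status compiled in; restrict access to your own hosts:)

    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        order deny,allow
        deny from all
        allow from 127.0.0.1
    </Location>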
  • Slashdot is not running on a single machine; it's load-balanced. MSN.com runs on IIS and it gets probably 100 times the traffic that Slashdot does. Not because IIS is so much better than Apache, but because they've got plenty of machines handling incoming requests.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • There's that, and there's also the configuration of your entire system. Is it just going to be one machine sitting by itself on the Internet? Will it be behind a load-balancer? Will it be making requests to other machines? What are those machines running? Will they be behind balancers?

    There's so much that goes into setting up a serious web-server that the comparison between IIS and Apache really needs to be made at *your* site, using *your* settings.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Completely different issue. This was a problem with Samba, which handles file-sharing, not Apache, which handles webservices. It's my recollection that the specifics of that problem were worked out in the meantime.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • If you were really interested in performance, why wouldn't you use Tux or AOLserver? Both of them are much, much faster than Apache, and probably as fast as, if not faster than, IIS.
    Hey, it's great to have so many choices.

    A Dick and a Bush .. You know somebody's gonna get screwed.

  • Seems shortsighted to me. If you know there is going to be a significant improvement in the next couple of months, you'd be a fool to ignore that and just go with the best platform right now. In two months your competitors will be zooming by you.

    Having said that, you could always use AOLserver, which is god-awful fast, and PHP right now, and cook along, then go into overdrive when kernel 2.4 gets out.

    A Dick and a Bush .. You know somebody's gonna get screwed.

  • I have the same experience -- but my web server multihomes -- and I use multiple IPs to do it, with separate running Apaches -- I just counted 125 instances of httpd running, and the load average is 0.32 0.17 0.17. The most popular site gets only ~3k hits per day, but taken together it's probably a sh*tload.

    The server is a Compaq ProLiant with a 300 MHz processor and 128 megs of RAM.

    -Omar

    PS -- The same box is also our primary SMTP server for ~500 email users (qmail, of course).

  • Try this one [redhat.com] for more info.
    ---------------------------------------------------

  • Ouch. I assume that's in addition to the shared memory (of course -- what else could it be? You say "per worker process"). So I suppose you're looking at a 25-35 MB httpd daemon. That's pretty darned heavy.

    When you start using things like Embperl (a PHP-like extension to perl), and you have a large project (dozens of perl modules), all cached in memory, you can easily bring your perl space alone to 6-12 MB. With mod_perl off, Apache is like 2-3 MB per worker for me. With everything turned on, I usually get 10 MB... If you don't properly manage your perl code, then things like database accesses can consume a meg or more for a single variable. This is hard to de-allocate in such a way as to free up the process's memory space. That, coupled with fragmentation, can grow your memory size considerably. It's virtual memory, to be sure, but it still ends up swapping to disk.

    The size of the daemon totally depends on how much code is loaded at Apache startup time. If your .htaccess files actually load the code, then your workers are going to be larger than the central daemon. If the size is due to dynamically loaded data, then each worker will be a different size... I've had all of these situations occur.


    I suppose, though, that no-one thought to make perl interact well with copy-on-write memory semantics. It would be cool if it did.


    Well, so long as you initialize _all_ your perl code in your httpd.conf or vhost files, you'll get essentially that. Anything generated after the daemon spawns off workers is lost and needs to be reloaded/recompiled. In the first case, you take advantage of UNIX's copy-on-write. This means most of the constant data sections (namely the in-memory text copy of the perl source code) won't be physically duplicated.

    Unfortunately, unlike most programs, most of perl's memory is writable. Meaning: you have the core perl interpreter, which is static code, then you have the hundreds of K of perl code text, and its intermediate meta-data or compiled form. This compiled form does not get shared between forks, since the execution tree can change over time. At the very least, it can't be registered as "shared memory", since it's possible for the contents to change. At best, you have multiple forked processes temporarily sharing a segment with the copy-on-write flag set. If ANYTHING in that 2, 4 or 8K page changes, you have to copy out the code.
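
    To make that concrete, here's roughly what the preloading looks like (PerlRequire and PerlModule are standard mod_perl 1.x directives; the startup.pl path and the module list are just placeholders for whatever your site uses):

    # httpd.conf
    PerlRequire /usr/local/apache/conf/startup.pl

    # startup.pl -- compile everything once in the parent, pre-fork
    use Apache::DBI;       # must load before DBI to get connection caching
    use DBI;
    use CGI ();
    CGI->compile(':all');  # precompile CGI.pm's autoloaded methods
    1;

    Everything compiled here lands in the parent's memory before the fork, so the workers share it copy-on-write instead of each compiling a private copy.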


    Could you explain what you mean by "the cache lines are valid for all threads"? Obviously the code cache is going to be valid (just like in Unix), and the data cache is also going to be valid (unfortunately, not like Unix). But is the stack cache going to be valid?


    Ok, thinking this over, I may not have all my facts straight. I know that the x86 uses a legacy addressing scheme: unlike most modern CPUs, which have a flat virtual address space, the x86 uses segment selectors. This allows a process to utilize two segments that have the exact same virtual address but really be mapped to totally different physical addresses. A segment is an offset applied to the CPU address register prior to page-table mapping. It also provided an upper address bound as a first-tier memory-protection scheme. It worked great in the 386 days, but in our UNIX flat-virtual-memory model it's not terribly useful. I think NT still uses it for some optimization tricks.

    My problem is that I do not know exactly how Linux deals with this. My assumption was that since each process gets its own segment selector, it would have a completely different virtual memory offset, and thus the addresses that the cache lines get are different for each process. From this, I assumed that two context switches would require cache flushing. If anyone knows differently, please come forward. If this were true (as I assumed), then MT would have the added benefit of sharing a segment selector, and thus not requiring a cache flush.

    As you can see, if my original assumption was correct, then none of the code or data cache lines would be valid. If I was incorrect, then it's just like most any other processor: both your code and data lines are valid for all MT context switches. It's true that you have independent stacks for threads, but they're in the same memory space, so they shouldn't do too much harm to each other's cache. With MP, even here, you have the issue of starving neighboring processes' data caches, since for all copy-on-write operations you have additional memory to cache and compete against. Still, unless my original assumption was true, none of this really will affect much.


    Also, what on earth can you possibly mean when you say "the apache processes run round robin"? When did apache start including a task scheduler? Will it start coming with disk drivers soon, too?


    Ok, I have made assumptions based on empirical evidence. The daemon process opens port 80 (or whatever), THEN forks off a bunch of processes with port 80 still open. This is a very bizarre situation. Normally you'd think of it as simply having the forked processes maintain the file handle from the original guy. For STDOUT, it's just the luck of the draw who prints out in which order when they're both performing prints. But for server TCP sockets, things get weird. Let's say the central daemon never listens to the socket (its main role is to monitor the number of workers and fork off working copies as necessary), but all 5 of your workers ARE listening on that same socket. What happens when a browser establishes a connection on port 80? You have 5 guys listening and waiting to accept a new connection; which one gets it? Well, it's the OS's job to decide, and evidence on my installation suggests it's round robin.

    Here's how I did my experiment. I used mod_perl in order to have a persistent memory structure. I went to a web page that would print out its process ID (since it was mod_perl, it was the same ID as that of the worker). You'd think that you'd always use the first worker, and only use additional workers if the first was busy. But instead, I'd get a random worker. I don't recall if I worked out whether it was indeed round robin.
    Think of it like MT with cond_wait and cond_signal.

    All workers cond_wait on the port connection. A separate thread (the kernel, in this case) finds a new connection to port 80, so it issues a cond_signal, and some undetermined guy wakes up first and establishes the connection. The other guys go back to sleep until the next cond_signal. When the worker is done with the connection, it does another cond_wait, re-entering itself into the queue.

    The simplest implementation of such a conditional broadcasting structure would be a round-robin queue: cond_signal activates the guy at the head of the queue, and everyone initiating cond_wait goes to the end of the queue.

    All of this is hidden by the socket "accept" function call, which should just be an interface into the inet kernel routines. I don't know how much of libnet is done in user space and how much in kernel space, but I'm sure that no actual MT programming is going on, since I don't see any IPC resources being consumed when children listen to the same socket.

    To experiment on your own, just do the following (written in perl code for brevity).

    #!/usr/bin/perl
    use IO::Socket;

    # open the listening socket BEFORE forking, so every child shares it
    my $sock = IO::Socket::INET->new( Listen => 5, LocalPort => 5000,
                                      Reuse => 1 );

    # fork twice: 2^2 = 4 processes, all accepting on the same socket
    for ( 0 .. 1 ) {
        fork;
    }

    while (1) {
        my $client = $sock->accept;      # whichever process the kernel wakes wins
        $client->print("Server $$\n");   # report this worker's pid
        $client->close();
    }

    Then repeatedly telnet to port 5000, and watch how it cycles.

    I just tried it out, and it produces 4 processes. Telnetting shows me that, at least on my installation with this few processes, it is perfectly round-robin. This may not be the general case, but we can't assume that having 50 web servers will always wake the first one (thereby reusing its cache lines). If I am correct, then when you are underloaded, having 50 Apache processes will actually hurt your performance (especially if any of it goes out to disk).
  • Hmm, I'd be curious to learn more about TUX. I was wondering what kind of performance you could get if you used an X-style event model built on UNIX's select. Divide your apps into tiny chunks that take very little time each, give the apps priority queues where tasks that take longer than one time unit queue their subsequent activities on lower-priority queues, then periodically poll for I/O (both disk reads and socket activity).

    This is kind of like an RT system, where I/O is guaranteed not to be left pending longer than a certain amount of time. It allows thousands or millions of simultaneous connections with only the overhead of short-term context information.

    You get very fast connection-response time, and you could internally prioritize activities so as to minimize overloading (though this probably couldn't be true in general).

    The beauty of the system is that if you actually HAD multiple CPUs, you could run multiple worker processes in a very similar fashion to what Apache currently does. From this, there would be no incentive to run more workers than CPUs, since a given worker can handle an infinite number of connections (limited only by practicality).

    This obviously wouldn't support external APIs such as CGI or mod_perl, though it MIGHT work with FastCGI or servlets, since all you need do is propagate I/O.
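
    For the curious, here's a toy of that single-process event model in perl (IO::Select is a wrapper around select(2); this one just serves a canned response, with no prioritization or time-slicing):

    #!/usr/bin/perl
    use IO::Socket;
    use IO::Select;

    my $listen = IO::Socket::INET->new( Listen => 5, LocalPort => 5000,
                                        Reuse => 1 ) or die $!;
    my $sel = IO::Select->new($listen);

    # one process, no forks: every readable handle is a small unit of work
    while ( my @ready = $sel->can_read ) {
        for my $fh (@ready) {
            if ( $fh == $listen ) {
                $sel->add( $listen->accept );    # new connection to watch
            } else {
                my $buf;
                sysread( $fh, $buf, 4096 );      # read the request (and ignore it)
                print $fh "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
                $sel->remove($fh);
                close $fh;
            }
        }
    }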

  • putting something complex like a webserver (or browser for that matter) into kernelspace is just asking for it


    I assume you mean IE. I was under the impression that IE did NOT live in OS space. I base this on the fact that when IE dies, I can restart it without affecting independent processes (course, if anyone had a file dialog box open, they'd freeze too).

    Please correct me if I'm wrong, but Windows runs its utility DLLs in user space. It's only the VxDs (for the Win9x world, that is) that run in kernel space. Likewise, NT didn't originally let any utilities into that space; 4.0 moved all things video into the kernel, but that was it. No clue about 2K.
  • I was assuming that much. We're talking about 500 THOUSAND connections though. I seriously doubt that a single connection is going to have 499,999 image links. :)

    -Michael
  • If it's static content, wouldn't TUX be the way to go? Of course, I think TUX is tied to the 2.4 kernel, but I'm a little fuzzy...

  • by Anonymous Coward on Thursday October 05, 2000 @08:34AM (#729301)
    I'm using Infortrend SCSI-to-SCSI RAID modules on Dell PowerEdge 2XXX-series machines. Running BSDI 4.1, I serve ~50k page views a day, HIGHLY dynamic, using mod_layout and lots of SSI. My load is usually .08 with an average idle of > 95%. Yes, I run porn. It is worth noting that almost no Unix comes optimized for what you want. Anyway, my machines do not suffer from "20 second" lock-ups. Perhaps you've got a SCSI timeout issue going on there. If you're using a lot of cache between your host adapter and disks, try removing some; big caches that don't flush quickly enough cause SCSI timeouts.
  • by Suydam ( 881 ) on Thursday October 05, 2000 @09:25AM (#729302) Homepage
    1) Stock system vs. stock system NEVER EVER EVER happens in benchmarks. Both OSes allow for configuration during install.

    2) Stock systems are not what are used in the real world (i.e., webhosting companies), and therefore the results of a benchmark pitting stock system against stock system are completely useless. Web hosting companies DO hire talented system administrators, for NT or Linux.

    3) Your third comment is a complete troll. You should be moderated down.

  • by ydnar ( 946 ) on Thursday October 05, 2000 @08:42AM (#729303) Homepage
    I have found similar results. The scheduler seems to have gotten a nice kick as well. Under heavy concurrency load, I saw performance increases on the order of 800-1200% (!) on identical hardware.

    /me thinks apache + tux on that dell hw will perform nicely...

    ydnar
  • Reliability and configurability are in many ways more important. In all likelihood, with either server setup you will run out of bandwidth before you run out of server. So it really becomes a question of what your real needs are. Speed is nice, but not everything.

    The Cure of the ills of Democracy is more Democracy.

  • I was testing on a Compaq ProLiant, dual Xeon 800, with 1 GB of RAM and 8 Gigabit Ethernet cards running traffic across 8 separate networks to 16 clients.

    With 2.2.X, the driver I was testing could only muster a total throughput of around 370 Mbps.

    With 2.4.0-test8 (ia32), my Cisco switch was the bottleneck.

  • by mellon ( 7048 ) on Thursday October 05, 2000 @09:31AM (#729306) Homepage
    I really have trouble believing that a Linux+Apache combination is that slow. Think about it: a day is 86400 seconds. So that means that at worst, they're handling one hit every four seconds, and at best one hit about every .86 seconds. This is a really unbelievable amount of time to spend serving one hit on a static web page - it's pretty unbelievable even if you have a huge DB query going on for every hit. If these numbers were accurate, it would indeed be cause for embarrassment, but I doubt that they are.

    I don't want to accuse Dell of malfeasance, but if they really benchmarked these numbers, they must have badly misconfigured Apache. My guess is that they had Apache doing reverse DNS lookups on every query, and the queries all had to time out because there was no name server responding to them. That's the only way I can imagine that you could get such bad numbers.
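
    If you want to rule that particular foot-gun out on your own server, it's a single directive in httpd.conf:

    HostnameLookups Off

    (That's the default in Apache 1.3, but plenty of configs switch it on for readable logs.) You can resolve addresses after the fact with the logresolve utility that ships with Apache, instead of stalling every request on DNS.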

  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Thursday October 05, 2000 @10:30AM (#729307)
    > 'Three years without a remote hole in the default install!'

    DoS attacks are not root hacks. OpenBSD has never claimed to be immune to DoS attacks. And unless you can control the connection end-to-end (or at least to a hop away from your box), you can't ever be.
  • by tgeller ( 10260 ) on Thursday October 05, 2000 @08:17AM (#729308) Homepage
    You could also sell them on security and reliability, areas in which IIS simply falls on its face. A look through old Slashdot stories [slashdot.org] will give you ammunition. --Tom
  • by platypus ( 18156 ) on Thursday October 05, 2000 @08:36AM (#729309) Homepage
    I have two links for you, from an OS-agnostic source, the German computer magazine c't:

    The first [heise.de] is only in German and shows IIS and Apache at about the same speed (with a 2.4-pre kernel, IIRC).

    The second [heise.de] article is really interesting (and in English). They measured the downtime of business-class webservers and compared NT to Solaris and Linux.
    To quote the beginning of the article:

    To Be Up or Not To Be Up
    Analysis of Web Server Downtimes
    Stability is one of the major criteria for web server performance. Although it is commonly accepted that Windows NT and IIS cannot match Unix and Apache servers in this field, there are hardly any tests to confirm this assumption. An availability test of the major German internet businesses clarifies the situation.

  • by keepper ( 24317 ) on Thursday October 05, 2000 @08:11AM (#729310) Homepage

    If you are looking at static page serving, then Apache is not the speediest; it's fast, but not as fast as the competition. This is even the claim of the Apache Group.

    For dynamic serving, the story is a bit different, with Apache pulling even with or ahead of the competition (IIS, Netscape Enterprise, Zeus).

    If you want the absolute performance king, then the no-question winner is Zeus. It has for a while been the fastest at serving static content, and is a great contender on dynamic. The price is not a consideration, since it's not only well worth it but, when you are buying high-end servers, a very small add-on.

    Couple Zeus with FreeBSD or Solaris, and you've got yourself a mean combination.

  • by GauteL ( 29207 ) on Thursday October 05, 2000 @08:12AM (#729311)

    .. and I don't know how Apache compares to NT, but since you're obviously wanting to use Linux, why not monitor the situation with Tux and khttpd? khttpd is the kernel webserver: it provides a webserver in kernel space, while Tux is a mix of kernel space and user space.

    khttpd is exceptionally fast at delivering static content, and it hands all requests for dynamic content off to Apache. Apache being quite fast at dynamic content, this works out well.

    Tux, however, handles both dynamic and static content, and is exceptionally fast at both. Take a look at this slashdot story [slashdot.org].

  • by MarNuke ( 34221 ) on Thursday October 05, 2000 @08:36AM (#729312) Homepage
    Ah yes! We're back on this subject again!

    Let's get started on the same thing for the millionth time.

    FIRST! Dell server hardware can be summed up in one word: weird. Their PERC RAID controller blows goat nuts.

    Second. Most likely this was tested on a great big honking 8-way Xeon with globs of RAM. Yes, the 2.2 kernel has "good" SMP support, and yes, the 2.4 kernel has better support, but Apache 1.3 DOESN'T!!! It simply doesn't have multi-threaded support like IIS or whatever. Apache 1.3 runs best on many single-processor machines, clustered somehow, sharing content through NFS. If you want to do the Apache vs. IIS on a multiprocessor machine bit, test with Apache 2.0. I think it was shown once, and it scored 4 times better than IIS.

    Sure, one HUGE machine with no failover is what you might want for your intranet server, where only your own people will complain. However, in a co-lo, running a website that runs your company, many small cheap machines that can quickly be replaced, upgraded, and scaled up is what you want.

    Think about it like this: you can spend 4k on a small one-processor machine with an IDE drive, or 50k on a huge space heater with globs and globs of RAM and a huge SCSI RAID system. You'll need two of these space heaters; that's 100k. For 100k, how many of the 4k cheap-o machines can you buy? About 25. For about 4k you can get a single-processor 600 MHz box with a 20 GB IDE drive, about a quarter gig of RAM, a NIC, and a rack-mountable case. If you only get 20 single-processor machines, you'll be able to flood your connection and have money left over for a 250 GB RAID 10 system. My numbers may be off (WAY OFF), but my point is made.

    If a 4k machine blows up, so what, you have 19 other machines. If five blow up, oh well, you have 15 left. If the 50k 8-way space heater blows up, you lose half your site, and you'll be running at half power for the next few days until the onsite monkey tech gets there.

  • by Foogle ( 35117 ) on Thursday October 05, 2000 @10:07AM (#729313) Homepage
    And plenty of people have seen hundreds of NT boxes run under considerable load, and never crash once. I've seen Linux shit the bed from some Javascript under crappy versions of Netscape. I don't blame Linux though; I blame Netscape (for the most part).


    If your website is crapping out on you, the problem is likely to have something to do with your website - i.e., your application was poorly designed, and the author didn't truly understand the platform he was designing for.

    I'm not saying that IIS and MS SQL Server are perfect; they're not. But neither are MySQL and Apache.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • by Get Behind the Mule ( 61986 ) on Thursday October 05, 2000 @10:12AM (#729314)
    Think about it: a day is 86400 seconds. So that means that at worst, they're handling one hit every four seconds, and at best one hit about every .86 seconds.


    That's assuming that the hit rate is uniform across the whole day. That's an unlikely state of affairs, even for a site that is of worldwide interest and is getting hits from every time zone. There are always slower hours of the day, and there are always spikes in the request rate.

    But your point is well taken if we narrow the time frame. If we assume that all the visits come in an 8-hour window, then the best performance (at 100k hits/day) is a mean of about 3.5 hits per second. If they all come within 12 hours, then the best mean performance is 2.3 hits per second.

    Now, I've gotten Apache servers to serve up hundreds of pages per second on fast Pentium machines! If you're averaging less than 10 hits per second over a whole day, then you have a severely misconfigured server.

    I submit that the Dell figures are completely worthless without more details about the benchmark. Were they serving static pages, CGIs, servlets, PHP scripts, mod_perl scripts, FastCGIs, what? Were backend applications and/or DB queries involved? How did they tune the server? Did they tune it at all? Did they use a caching proxy like Squid? Without this information, the benchmark results are little better than random numbers.
  • by ostiguy ( 63618 ) on Thursday October 05, 2000 @09:15AM (#729315)
    Datacenter is only being sold on certified hardware, you nitwit. And please explain to me why anyone would want an 8-32 CPU web server?

    ostiguy, mcse
  • by dboyles ( 65512 ) on Thursday October 05, 2000 @11:40AM (#729316) Homepage
    Ok, so I'm a few hours late to the party, but maybe one or two people will see this post. First, a preface: I use Linux, I don't particularly like Microsoft products. With that said...

    ...how am I supposed to convince my company that Apache and open source is a great way to go?

    Well, you probably need to show them that Linux will make the company more profitable. As you said, the suits don't care about open source. And they really shouldn't. Personal agendas (e.g. "Microsoft is bad") shouldn't get in the way of what the company is there for: to make money. The same goes for you (the original poster). Just because you like OSS (and you should) doesn't mean that the solution for your company is to use it. There are far too many variables to consider.

    Where can I get -credible- data to prove that Apache can outperform IIS?

    That's really not a good way to go about it. If you're comparing an Accord and a Camry, you don't want to go looking for information that will prove that the Accord is better. You have to look at it much more objectively. I remember back in '92 when I was looking for a mountain bike. I decided what I wanted before I even started researching, and bought a bike that really wasn't the best for my money. I think the analogy fits this situation somewhat.

    Besides all that, as I'm sure a million other people have mentioned, benchmarks aren't very valuable. You've probably heard a quote attributed to Mark Twain: "There are lies, damn lies, and statistics." Look hard enough and you can dig up factual data to back up most anything. But it's rarely the full story.
  • by psailor ( 88197 ) on Thursday October 05, 2000 @08:50AM (#729317) Homepage
    I have found that IIS is faster than Apache. However, IIS runs, for the most part, as a single multi-threaded process (inetinfo.exe) running in kernel space. Therefore, when something goes wrong, the entire process goes down, which kills everything. Ever try stopping the web publishing service on an NT box that is having a problem? It usually comes back with a message saying it can't respond to the control function, and you have to reboot. With Apache, everything runs in user space as multiple processes - speed hit here? - but it is much more stable and reliable, because if one process fails, it can be corrected without taking the entire server down. IMHO, I would take a speed hit as a trade-off for uptime and reliability any day.
  • by user ( 88235 ) on Thursday October 05, 2000 @08:40AM (#729318)
    Then bring up the MS licensing costs
    From the original question:
    They don't care about open source or Linux, just the performance that they will get from the machines
    They're only concerned about performance. The license fee shouldn't be a deciding factor unless the two systems are otherwise quite similar.
    the Apache statistics on web presence, etc
    ...right, fine, but be prepared to completely ignore the bandwagon approach when you try to pitch the idea of running Linux on all the lab machines...
    along with an "Oh by the way I'm a Unix guy"
    Oh come on. That sure is a great reason to pick a web server, right? Besides, since a really good Unix-flavor admin is a member of the elite crowd, they should be able to pick up IIS in no time!

    The posted question peeves me a bit, since it asks: "Where can I get -credible- data to prove that Apache can outperform IIS?" rather than: "Where can I find independent validation of these results and/or data to refute them?". If you're really concerned about finding the best solution to your problem (web serving), you should find facts and then choose a solution, not pick a solution and then find facts to back it up. That's not fair to your employer, and it's not the way shops should be run.

    -User

  • Don't forget, IIS runs in kernel space, Apache in user space. That gives IIS a boost, but also causes it to take the OS down with it when it dies.
  • by Tassach ( 137772 ) on Thursday October 05, 2000 @10:59AM (#729320)
    There are 86,400 seconds in a 24-hour period, but 86,400 hits/day is NOT the same as 1 hit/second, because it is highly unlikely that your server is being pounded evenly 24 hours a day. You'll never see anything close to an even distribution. The vast majority of your hits are going to come in during a fairly narrow time frame.

    Say you run a Super Bowl ad with your URL, and 20,000,000 people decide to check out your site after the game. Those hits aren't going to be stretched out over the 24-hour period that comprises Super Bowl Sunday -- you'll get hammered HARD at half-time and after the game. If you are running an online brokerage, 99% of your hits are going to come during market hours, with probably half or more of those coming in during the first hour after the market opens. You've got to be able to handle PEAK loads, not just the sustained average loads.
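
    Just to put rough numbers on that: if even half of those 20,000,000 visitors arrive in the hour after the game,

    perl -e 'printf "%.0f hits/sec\n", 10_000_000 / 3600'    # prints 2778

    versus the roughly 230/sec a flat 24-hour average of 20M hits would suggest. Size for the peak, not the average.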

  • You could also sell them on security and reliability, areas in which IIS simply falls on its face.

    Linux has nothing to brag about when it comes to security. As for reliability, Linux perhaps had an advantage over NT/4, but not over Win/2000. On the other hand, if they want reliability, Linux is possibly the poorest version of Unix. I personally would use AIX, but just about any of the commercial unixes blows Linux away.


    --

  • All you need is a few ipchains or iptables lines to make a box almost totally secured.

    That's like saying that you can make a box totally secure by turning it off. Are the Linux services so riddled with security holes that you have to actually block all access to them?

    Unfortunately, the answer is yes, as you point out.

    Call me crazy, but I want a system that allows me to use ALL the services I want to use without fear of being cracked. Would you cut Microsoft the same slack if they came out with an advisory saying "in order to make a Windows machine secure, block all Internet services" as you advocate for Linux boxes?
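
    For reference, the "few lines" being talked about look something like this (an iptables sketch for a 2.4 box that serves nothing but HTTP; ipchains on 2.2 is the same idea with different syntax):

    # default-deny inbound, then open only what the box actually serves
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    Which rather proves the point above: that isn't securing the services, it's declining to expose them.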


    --

  • by man_of_mr_e ( 217855 ) on Thursday October 05, 2000 @08:34AM (#729323)
    I think the thing to keep in mind is that any given test is most likely not going to illustrate usage on YOUR server.

    The only way to do that is with tests performed in your environment, with your data, your pages, and your customers.

    Tell your boss: "Would you buy a car without test driving it and the competition? Why buy a web server without doing the same?"
  • by Auckerman ( 223266 ) on Thursday October 05, 2000 @08:10AM (#729324)
    Step 1: Show your boss NT's listing at Bugtraq.

    Step 2: Show your boss OpenBSD's listing at Bugtraq (read: "...This was fixed 5 months ago...")

    Step 3: Use OpenBSD for security (with lower costs)

  • by DragonWyatt ( 62035 ) on Thursday October 05, 2000 @08:10AM (#729325) Homepage
    If I were you, I'd get Dell (or the reseller/vendor, whoever) to drop off a couple of identical machines, configure one with RH6.2 and one with NT, and let you guys test them. Do not be afraid to tweak the Apache server based on your experience and knowledge. That will show management where your skillsets lie.

    Run one for a week or two, move the content, and run the other for a couple of weeks. Then bring up the MS licensing costs, the Apache statistics on web presence, etc., along with an "Oh, by the way, I'm a Unix guy," and see what they say.

    If Dell refuses to supply the test machines, make sure to bring that up to your management - explain that the vendor is basically unwilling to justify their claim. Then you might be able to pick another vendor such as IBM or Compaq. I understand that Compaq has a fairly liberal test/loan program for such things.
  • by bwt ( 68845 ) on Thursday October 05, 2000 @10:55AM (#729326)
    This [dell.com] page on Dell's site might also be of interest.

    Bingo. Dell itself reports July 25, 2000 SPECweb99 results. Compare items 2 and 7 to find a common platform score comparison.

    (rank, vendor, system, score, #CPUs, OS)

    2*  DELL  PowerEdge 6400/700  4200  4  Red Hat Linux 6.2
    7*  DELL  PowerEdge 6400/700  1598  4  Win 2000 Adv Server
  • by Shimbo ( 100005 ) on Thursday October 05, 2000 @08:24AM (#729327)
    Where can I get -credible- data to prove that Apache can outperform IIS?

    Go to the SPEC web site [spec.org]. Then search for the SPECweb99 results Dell submitted.

    Apparently, they are using Red Hat's Tux server, not Apache. I don't know whether the two are related, but the combination kicks IIS's ass. You can't get much more definitive than the manufacturer's own tests using the recognised industry benchmark.

    This [dell.com] page on Dell's site might also be of interest.

  • Yeah, I'm sure this is informative; it's really good to know, and I did not know this. But this seems like a typical response from a Linux enthusiast:

    "Oh oh, we don't have this feature yet, but it WILL be in the next version....."

    It just seems like, well, I don't know. I know Linux is progressing faster than it ever has before, but I could go do the same thing with FreeBSD, and I'm sure I could max out a switch.

    The main thing I want to say is that it doesn't seem fair, or rather helpful, to point this guy at a kernel that isn't even production level to do what he needs.

    I suppose you got modded up because you were showing facts about Linux, but Apache runs on many platforms, not just Linux.

    So I don't really know what the point of the post was, other than the fact that a Dell comes with Linux 6.2 => or is that RedHat 6.2? :) But the first thing I'd do is take Linux off and throw FreeBSD on there :)

    Jeremy
  • by itarget ( 168249 ) on Thursday October 05, 2000 @09:15AM (#729329)
    It's true, IIS has run in ring 0 since 4.0. I think they did this to get a leg-up on apache/zeus/netscape/etc...

    It provides a performance boost, but putting something complex like a webserver (or browser for that matter) into kernelspace is just asking for it.

    Having it take the OS down with it on a crash is the best thing you can hope for... what if it doesn't take the kernel down on a crash, but decides to trample all over memory and data instead? It could misbehave and mangle SQL queries from there on in, and it could do significant damage before it's noticed. I hope that database wasn't too important... :-P
    ---
    Where can the word be found, where can the word resound? Not here, there is not enough silence.
  • by lrhegeba ( 175526 ) on Thursday October 05, 2000 @08:23AM (#729330)
    c't (a German magazine) compared Apache vs. IIS in a quite exhaustive test and made some very interesting points. Though Win2k was not involved, the article may give you some hints, and it is even available in plain English here [heise.de].
  • by Wiggins ( 3161 ) on Thursday October 05, 2000 @08:05AM (#729331) Homepage
    I also found this bit interesting: if you look at the machine configured at the top of the ad, it has 256 MB and is quoted at 20K-100K hits, and directly below it is the exact same machine with only 64 MB, which is surprisingly also quoted at 20K-100K hits. So apparently quadrupling the amount of available memory does nothing for the performance of a machine... uh... er, something.

  • I test NICs, and no matter what I use, gigabit or fast Ethernet, I can't get a total throughput of more than 370 Mbps on the 2.2 kernel.

    I tried a similar test with kernel 2.4.0-test8 and nearly got fired for shouting.

    I was able to max out my switch, and there is no upper limit in sight with the Cisco hardware I currently have available to me.

    The 2.2.x TCP stack is NOT multi-threaded.
    The same benchmark on the 2.4.x kernel will take your breath away.
  • by MSG ( 12810 ) on Thursday October 05, 2000 @09:00AM (#729333)
    There have been a lot of good posts on this subject, but all that I've read missed the one obvious problem with Dell's claim (or the post):
    100k hits per day is just over ONE PER SECOND. A 486 could do better than that. ;)

    Even if you assume that peak time on the site is a 6 hour period, you're getting close to four and a half hits a second, which is no big deal.
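
    To make the arithmetic concrete, here's a trivial sketch; the hit counts come straight from the ad, and the 6-hour peak window is just the assumption from the previous paragraph:

        # Back-of-the-envelope check on the ad's numbers. The hit counts
        # come from the ad; the 6-hour peak window is an assumption.
        SECONDS_PER_DAY = 24 * 60 * 60      # 86,400

        for hits_per_day in (100000, 1000000):
            flat = hits_per_day / float(SECONDS_PER_DAY)   # evenly spread
            peak = hits_per_day / float(6 * 60 * 60)       # all in a 6h peak
            print("%8d hits/day = %6.2f/s flat, %6.2f/s peaked"
                  % (hits_per_day, flat, peak))

        # 100k/day is ~1.16/s flat and ~4.63/s peaked -- trivial loads.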

    For some good information on the state and progression of Linux, look at http://www.kegel.com/mindcraft_redux.html [kegel.com]
  • by Phrogz ( 43803 ) <!@phrogz.net> on Thursday October 05, 2000 @08:05AM (#729334) Homepage
    I know: love Open Source, hate MS. But are you open to the possibility that Apache may not perform as well?

    This is not a troll, but a serious point for all: make certain that your loves, hatreds, and desires don't overrule what may be a valid goal.

    If the goal for your project (as specified by the powers-that-be) is performance over ethical ideals, then be certain you know what you're really looking for before you go looking.
  • But then, you already know that, since you are a techie.

    One of the hardest jobs a techie has to do is convince a clueless boss there is more to a job than a rigged benchmark on a 4-color marketing sheet.

    Here are some points to bring up:

    Since there are more Apache installations than IIS installations, there are more engineers on the market who understand the technology. You can even throw in a few MCSE horror stories for good measure.

    IIS will tie you to the NT platform, and in 5 months (when M$ gets the appellate court to delay their case for a few years) the licensing fees are going to shoot way up. Every major consulting firm has given its largest clients exactly the same prediction: when M$ wins or delays its antitrust case, licensing fees will increase 2x to 10x, so reserve a major portion of your budget for it. If you choose Apache, you can later switch reasonably painlessly to Solaris, Linux, or any other system as costs or management changes dictate.

    If you are going to serve only static pages, then IIS wins slightly. If you start to generate dynamic content, Apache blows past IIS. Go search the web for some of the other comparisons.

    Apache installations are far more stable than IIS, and there is a lot of anecdotal evidence on the web to help you back this up.

    When it comes time to add a custom feature to your web servers, an OSS solution like Apache is likely to have it covered, but with IIS you are at the mercy of M$. If a new feature doesn't exist for IIS, you don't stand a chance of convincing M$ to add it. They have a long history of doing only what they want, not what their customers are demanding.

    Get creative, or you will be stuck with IIS, and it will be time to find a new job :-(

    the AC

  • by 1010011010 ( 53039 ) on Thursday October 05, 2000 @09:00AM (#729336) Homepage
    http://www.flyingbuttmonkeys.com was a 250MHz AMD K6-2 with 32MB of RAM and a 4GB IDE hard drive during the past month, when it was repeatedly slashdotted. Twice it served over 120,000 hits an hour, at an average packet rate of more than 1000/sec, without a problem. Lately it got unreliable because of a failing network card, but that's not exactly Apache's fault. 100k hits a day, right. If the slashdotting had been in effect for the whole day, my little Linux box would have taken and served 2.88 million hits that day, and the load stayed around 0.20. Imagine what a machine with some actual memory and CPU could do!

    ________________________________________
  • by NetJunkie ( 56134 ) <jason.nashNO@SPAMgmail.com> on Thursday October 05, 2000 @08:15AM (#729337)
    I'm SURE Dell has the documentation to confirm that ad. Call and ask to see it. Dell is very good about things like this and you should be able to look it over yourself.

    Also, ask to borrow two boxen for evaluation. They'll do it.
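
    If you do get the boxes, you can sanity-check the ad's numbers yourself. ApacheBench ("ab", which ships with Apache) is the usual quick tool; below is an even cruder, sequential sketch in Python. The hostname and request count are made up, and a single serial client only establishes a floor, not a real benchmark:

        # Crude sequential load timer. A real test needs many concurrent
        # clients and realistic content; see ab or SPECweb99.
        import time
        import urllib.request

        URL = "http://loaner-box.example.com/"   # hypothetical eval box
        N = 1000                                 # arbitrary request count

        start = time.time()
        for _ in range(N):
            urllib.request.urlopen(URL).read()
        elapsed = time.time() - start

        rate = N / elapsed
        print("%d requests in %.1fs = %.1f req/s (~%d hits/day)"
              % (N, elapsed, rate, rate * 86400))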

  • With Apache everything is running in user space as multiple processes - Speed Hit Here?

    Yup. The Apache processes are scheduled round-robin (as far as I know), which means the CPU cache gets flushed for each new connection as the memory space rotates. Note that this won't be noticeable under anything but the heaviest loads (since regularly scheduled OS context switches have the same effect). Additionally, unless you put in enough memory, you're going to swap to disk as the workers rotate; this is likely when you have large mod_perl code (I've gotten Apache up to 20 MB per worker process). Also, as far as I know, all I/O caching is redundant across the multiple processes. I THINK there was a shared memory segment that resolved this, but I don't know.

    An MT (multithreaded) version has no such problem, since the cache lines are valid for all threads, AND if you had 40 threads at 4-12 MB each, it's unlikely that you'd push yourself out to disk.

    We're assuming, of course, that nobody is using flat-out CGIs. If anyone were, then Apache would win outright. In a single-process, MT environment, you have to spin up an entirely new process space, and before that you have to configure your environment. In Apache, you have ready-to-go worker processes that can simply exec the CGI; a replacement worker process gets created later. In Windows, I believe there is less process-creation overhead than in UNIX (an assumption based on Windows' non-forking model), so the distinction may not be as great.

    I definitely agree with the added stability inherent in discrete processes. However, it can actually be harder to debug faulty embedded applications (mod_perl): since each worker has a separate workspace, you get a different answer on every web-page access. On the other hand, MT Perl is even less secure.

    With MT, you get all the fun of race conditions and hidden shared variables that clobber each other; debugging that can be a nightmare. IIS gives you its equivalent of Perl: Visual Basic (ASP). I don't really know much about it, but I speculate that it's more integral to IIS than Perl is to Apache, and thus more efficient. Not to mention, Perl wasn't designed for the web, whereas ASP was.

    Now, I know we're not focusing on Perl here; you could just as easily substitute PHP or SSI pages. I discount servlets, since most of your processing time is supposedly taken up in the servlet itself, and that's independent of the web server. I assume servlets are available (in one form or another) for IIS. Hell, if you really liked servlets, you'd use the servlet engine AS the web server.

    I've heard that Apache is coming out with an MT version, BUT here's the problem: on Windows you still have all the negatives of running a server on Windows. Not to mention, Apache uses text-file configuration while Windows is GUI-based; the two do not mix well. In the UNIX environment, however, our kernel's MT model leaves much to be desired. I vaguely recall reports showing how poorly Linux did at MT compared to other platforms (obviously Solaris, but I think we even lost to Windows). If you compare MT Apache on Linux to Windows, you might get a nasty surprise. I don't know whether the situation has improved in the 2.4 kernel. Hell, Linux threading isn't even POSIX-compliant.

    For simple static-page serving, MT Apache should excel: highly optimized C code in tight loops with a fixed number of worker threads could do magic, and it avoids all that VC++ MFC BS. But unfortunately, you can't build a sophisticated business around static pages (unless you're into pr0n, I guess).
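
    For what it's worth, here's a toy sketch of the two dispatch models, using Python's standard library as a stand-in; Apache itself is C, and Apache 1.3 actually pre-forks a pool of workers rather than forking per connection as this does:

        # Toy MP vs. MT dispatch -- not Apache's actual architecture.
        import socketserver

        class Hello(socketserver.StreamRequestHandler):
            def handle(self):
                self.rfile.readline()   # discard the request line
                self.wfile.write(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")

        # MP: one process per connection. Isolated address spaces contain
        # crashes, but each connection pays for a fork and cold caches.
        server = socketserver.ForkingTCPServer(("", 8080), Hello)

        # MT: one thread per connection. Shared address space keeps caches
        # warm, but one bad thread can scribble over the whole server.
        # server = socketserver.ThreadingTCPServer(("", 8080), Hello)

        server.serve_forever()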

    -Michael
  • by Talonius ( 97106 ) on Thursday October 05, 2000 @08:05AM (#729339)
    Is there such a thing? Really?

    As well, what's the difference between "supports" and "speed?" Not to mention cost. (I'm sorry, but I just put two Linux servers in place yesterday simply because Microsoft wanted over $5,000.00 for the software licenses alone; Secure Apache came with Red Hat for $170.00.)

    The server I have up seems to fit the bill quickly and correctly. What kind of bandwidth are you going to have? Can your BANDWIDTH support that many hits? Do you expect that many hits? Contrast this again with the advantages of Open Source as you see them.

    All in all, I thought Linus had to eat crow because IIS really *did* outperform Apache on Linux. Wasn't there a story about that on Slashdot about a month ago?

    -- Talonius
  • by earache ( 110979 ) on Thursday October 05, 2000 @08:32AM (#729340) Homepage
    I work for a global Internet services company, and we deploy huge multi-million dollar solutions on Solaris, *BSD, and NT alike.

    I have no idea what the hell you are talking about with NT crashing once a day on an IIS installation; most of our production servers have 4 to 5 month uptimes. I've never seen an NT server that is simply serving pages, dynamic or static, crash. Ever. I have seen boxes slow down, but that was not Microsoft's fault; it was Sun's shoddy JVM and Allaire's crappy JRun installations.

    I've witnessed and worked on deployments of sites that bang massive loads, and you know what? If your site is pulling a million hits a day and you're attempting to box that into one server, then you're a complete moron and deserve to suffer those crashes. No OS is going to save you from that.

    For huge sites, the network design is just as important as the application design, and if you fail to implement a solid design - regardless of what OS you are serving off of - you're going to run into problems.

  • by localman ( 111171 ) on Thursday October 05, 2000 @09:11AM (#729341) Homepage
    Apache is a general webserver, which is designed to be correct first, and fast second.

    This was taken from Apache's own Performance Notes Site [apache.org].

    IIS may be faster. I actually don't know, because I've never used it. But I will say this: I worked at LinkExchange when we were the number one company in Internet reach (52%). That's right, more eyeballs than AOL, Yahoo, and MSN combined. And we did this using Apache; both for our site, and for our banner network.

    Not to start a flame war, but there's also a chance that the performance bottleneck is Linux, and not Apache - LE was using FreeBSD. There's an excellent benchmark [innominate.org] of various Unices which may indicate as much. It's well done, but doesn't get going until page 8 or so...

    Anyway, be sure to take administrative costs and bandwidth constraints into account before making a decision.

  • by Chyeburashka ( 122715 ) on Thursday October 05, 2000 @08:22AM (#729342) Homepage
    This doesn't directly apply to Apache on a Linux 2.2 system (somebody remind DELL that there is no such thing as Linux 6.2), but for a glimpse into the future, check out the SPECweb99 2nd quarter [spec.org] and 3rd quarter [spec.org] results.

    This shows how a bleeding-edge web server, TUX 1.0, running on a tweaked 2.4.0-testX box, can outperform a virtually identical box running IIS 5.0. Curiously enough, these are DELL boxes, and the tests were performed by DELL.

    I understand that it is hoped that the advanced features of TUX 1.0 will eventually make their way into Apache.

  • by AliasTheRoot ( 171859 ) on Thursday October 05, 2000 @08:06AM (#729343)
    Apache has been designed for correctness and it *chugs* compared to IIS, Netscape/iPlanet or Zeus. But it's stable and works well.

    If you need a speed boost try putting a Squid proxy in front of it - it'll really help on static pages / images.
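
    Squid of that era (2.x) does this in "accelerator" mode with a few lines of squid.conf. The sketch below uses 2.x directive names and assumes Apache has been moved to port 81 on the same box, with Squid answering on 80; check your version's docs before trusting it:

        # squid.conf sketch: Squid 2.x accelerator mode (hypothetical ports)
        http_port 80                        # Squid answers on the web port
        httpd_accel_host 127.0.0.1          # origin Apache on the same box
        httpd_accel_port 81                 # Apache moved off port 80
        httpd_accel_single_host on
        httpd_accel_uses_host_header on     # needed for name-based vhosts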

    You may find that the Apache 2 Alpha/Beta software performs better than the 1.x line.
  • If your web server is down then it ain't going to be handling any traffic now, is it?

    Do you *really* believe the numbers and statistics handed to you on the glossies? Wouldn't it make you a rather credulous person if you did? These are *salespeople* who make these things up, you know.

  • by Riplakish ( 213391 ) on Thursday October 05, 2000 @08:50AM (#729345)
    A look through old Slashdot stories will give you ammunition.

    I would love to be there for that conversation :)

    Greg: Look, I've researched this and I've determined that Linux/Apache really is the way to go.

    CIO: Dell is an industry leader with a multitude of highly qualified people who can produce these benchmarks. Do you have anything you can show me to corroborate your findings?

    Greg: Sure. Here are some quotes I printed off Slashdot. Cmdr Taco says that...

    CIO: Hold on. Commander who?

    Greg: Commander Taco. He runs a website called Slashdot, which is owned by VA Linux, one of the largest Linux concerns.

    CIO: So let me get this straight. You want me to rely on some quotes from a man who goes by the name "Commander Taco", who is employed by a company whose business relies solely on the success of Linux, instead of an established industry leader like Dell. Let me think about this. No.


  • by JohnTheFisherman ( 225485 ) on Thursday October 05, 2000 @08:16AM (#729346)
    Where can I get -credible- data to prove that Apache can outperform IIS?

    I know I'm risking a troll rating here, but shouldn't the question be "Where can I get -credible- data to prove which is better?" If you don't have access to the information that Apache kills IIS, how do you know that it is, in fact, better? I'm not saying you should take Dell's advertising at face value, or that IIS is better, but you are presupposing the answer to the question that you (admittedly) don't have the answer to. I suspect Apache would be better on many, if not all fronts, but I don't have any data to back it up either.
