Red Hat Software Businesses

Red Hat Finishes Last

JTMatrix writes "RedHat takes last place [in an IDG Network Operating Systems showdown]." The information on how they benchmarked everything is readily available on the site. Go check it out. Update: 01/26 01:07 by H :Check out this link for more technical information.
This discussion has been archived. No new comments can be posted.

Red Hat Finishes Last

Comments Filter:
  • Does useradd work with NIS?

    That requires a script. The NIS tables themselves are easily updated with 'pushd /var/yp ; make ; popd'.
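    A minimal sketch of such a script, in Python for illustration. The `useradd` invocation and the `/var/yp` location are assumptions that vary by site; `dry_run` keeps the sketch safe to run anywhere:

```python
import subprocess

def nis_useradd(username, dry_run=True):
    """Add an account, then rebuild and push the NIS maps.

    Mirrors the manual 'pushd /var/yp ; make ; popd' step:
    'make -C /var/yp' runs make in that directory without
    changing our own working directory.  With dry_run=True the
    commands are returned instead of executed.
    """
    cmds = [
        ["useradd", username],      # create the account locally
        ["make", "-C", "/var/yp"],  # regenerate and push the NIS maps
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```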

  • Fight spam: insist on the source! Can you imagine eating something that didn't come with an ingredient list?

    Sure, I do it every time I eat at a restaurant...

    Maybe you need to find a better motto...
  • The review docked Linux because Samba can supposedly only be configured by editing "a cryptic text file." In truth, Samba is supported by linuxconf, and it ships with SWAT, which IMO is a pretty nice web-based config tool.

    just picking nits..

    chris
  • You may wish to re-read the part of the article dealing with file I/O. The article goes to lengths to state that both Novell and Linux beat out Windows 2000 for file transfers.
    Red Hat Linux followed NetWare in file performance overall and even outpaced the leader in file tests where the read/write loads were small.
    Windows 2000 demonstrated poor write performance across all our file tests.
    Under the TCP transaction tests, however, Windows 2000 comes out on top (attributed to its multi-threaded IP stack).

    The author of the previous comment was likely referring to this major shortcoming when he used the word "faster." Though, in all fairness, the TCP/IP performance tests were ignored.
  • I'd like to say that

    (!RedHat == Linux)

    If you don't understand that, then you just don't understand.

    LK
  • It looks, and sounds, like they gave major points for "ease of use" aka point and click configuration and wizards.

    And oddly enough, they don't seem to have factored ease of tuning into their comments about ease of use. In order to get the (comparatively poor) results they got out of Win2K, they had to hack the registry as part of their tuning procedure. They comment:

    • Tuning Microsoft Windows 2000 was fairly involved. Tuning included file system, network and some memory management modifications.
    • Registry hacking ranks right up there with kernel modifications, neither for the inexperienced system administrator.

    I can't speak for anyone else, but that hardly sounds like "ease of use" to me.

  • by Yarn ( 75 ) on Tuesday January 25, 2000 @09:31AM (#1337619) Homepage
    I liked that when they were investigating file-sharing performance and wondered if the write flag was being honoured, they could just grab the source to Samba and check. Why they didn't give RH points for that, I don't know.
  • MS Tech: Thank you for calling Microsoft, where do you want to go today?

    IDG: We are doing a benchmark report and Windows 2000 didn't score high enough, and so we'd like tips to increase your scores before we go public with such negative results.

    MS: Ah, it must be the ultra-reliable write-through flag. We here at MS do not condone other unscrupulous OSes that do not properly handle this flag, causing nothing but corrupted data and crashing your entire organization, not to mention knocking our second moon out of alignment as well.

    IDG: Ah, very good. Thank you for all your help.

    MS: You do also realize we have GUI admin tools, don't you? We would hate to see a report that doesn't cover this terribly important aspect. Shortages of further MS products have been known to occur, ya know.

    IDG: Okee dokey. We don't want anything that drastic to occur. Consider it done.

    MS: And give LinuxConf a try, it's the only config tool for Linux.

    IDG: Thanks for the tip. We should get back to "testing" (wink-wink) again. Goodbye.

    MS: And have a cheery day.
  • by Mickey Jameson ( 3209 ) on Tuesday January 25, 2000 @09:34AM (#1337622)
    Very informative article, yet when I voted for my favorite operating system, I had to pick Red Hat Linux because there was no other Linux option.
    It looks as if Red Hat and Linux are now synonymous, or at least to the media. Not that I have any (major) gripes against Red Hat, but Red Hat and Linux are NOT interchangeable.
  • by dsplat ( 73054 ) on Tuesday January 25, 2000 @11:26AM (#1337623)
    The text of the article mentioned that Linux provides the ability to use the standard Unix tools in scripts to automate tasks across a network. As far as getting consistent system administration done quickly across a large network, that is much more useful than running a GUI for each one.

    They mentioned scalability, and one important factor with scalability is how administration scales with the number of servers. I don't expect to see many benchmarks that do it, but I would like to see a real scalability test with 1, 10, and 100 server configurations. The ability to learn the details once, and then automate them out of your way is a big plus with a rich, mature CLI environment.

    I don't mean to say that there is no place for a set of GUI system administration tools. The single server in a small business will be easier to maintain that way. The file server at home serving my machine, my wife's and my kids' would be easier as well. It opens doors at the low end of the scale, which represents a larger number of sites. If you are a captive of the GUI for every configuration task, it slows you down significantly as the number of servers grows.
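    The point about administration scaling can be made concrete with a sketch. The host names and the remote command here are hypothetical; the idea is just that the same loop covers 1, 10, or 100 servers, where a GUI costs one interactive session per machine:

```python
import subprocess

def build_commands(hosts, remote_cmd):
    """One ssh invocation per server -- the same script then
    administers 1, 10, or 100 machines identically."""
    return [["ssh", host, remote_cmd] for host in hosts]

def run_everywhere(hosts, remote_cmd, dry_run=True):
    """Run remote_cmd on every host.  dry_run=True just returns
    the command lists so the sketch is safe to try anywhere."""
    cmds = build_commands(hosts, remote_cmd)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

    So `run_everywhere(["web%02d" % n for n in range(1, 11)], "uptime")` covers ten machines as easily as one.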
  • RedHat got the top spot by far in the "preferred NOS" section. Is it a popular but poor OS? I don't think so. It's not surprising to find faults in various bits of Linux, but what'll be interesting is how quickly such faults are fixed.
  • by FreshView ( 139455 ) on Tuesday January 25, 2000 @09:39AM (#1337628) Homepage
    I wouldn't say Red Hat came in "last", because they weren't really rating them apples to apples. They said Red Hat was best for some things, Windows 2000 was best for others, and so on.

    I actually thought the article was somewhat complimentary toward Red Hat. The benchmark they needed to run, however, was Quake3Arena servers on each. : )

  • by X ( 1235 ) <x@xman.org> on Tuesday January 25, 2000 @11:27AM (#1337629) Homepage Journal
    I'm surprised that nobody has pointed this out yet: abortive closes ARE supported on Linux. They just aren't enabled by default, and that's a good thing.
    Abortive closes are great if you're a client running a benchmark, but if you're a server, you could receive packets at a port from a previous connection that will now appear to be coming from a new connection! Not a good thing.
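    For the curious, the standard sockets-API knob for this is SO_LINGER with a zero timeout; this is a sketch of the mechanism, not a recommendation for servers, for exactly the reason above:

```python
import socket
import struct

def enable_abortive_close(sock):
    """Ask the kernel to make close() abortive on this TCP socket.

    SO_LINGER with l_onoff=1 and l_linger=0 (packed here as two C
    ints) makes close() send an RST instead of the normal FIN
    handshake, discarding unsent data and skipping TIME_WAIT.
    Fast for a benchmark client; on a server it invites the
    stale-packet confusion described above.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))

# After enable_abortive_close(cli); cli.close(), the peer's recv()
# fails with "connection reset" rather than seeing a clean EOF.
```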
  • The cluster would lose if the gross hardware specs were close (i.e. four K6-2 350s vs. one dual-processor Xeon-600). Clusters have coordination and communication overhead several orders of magnitude larger than SMP motherboards, so they are performance-lossy. But the great thing about clusters is that they have no ceiling. I could make a cluster out of 20 P3-600s; I don't think we'll ever see a dodeca-processor SMP board.
  • Interesting article, though I thought it somewhat flawed. NetWare even being in the running has to make you wonder. The poll results from the CNN applet on the page have Linux beating Win2K with close to double the votes, making it the far-and-away leader in the CNN poll.

    Also, according to the Fusion poll results, the order of final points was:

    Win2K 7.78
    Netware 5.1 7.61
    Red Hat Linux 6.35
    SCO Unix 6.10

    So, Red Hat didn't finish last as the title suggested. However, it did end up in next-to-last place. I guess it's good even to get on the list. To hedge the bets next time, maybe we can get them to also include Mac file and print services, Banyan, 3Com's old 3+Open and IBM's LAN Manager under OS/2. *Grin*.

    Never knock on Death's door:

  • Yeah, they just looked at the Samba source to see what it did in a certain situation.

    But when MS-Windows got poor scores on their first test they had to get an answer from Microsoft...and they never were able to find out how the MS driver for their SCSI interface was behaving.

  • Now if only everyone would take IBM's lead and start posting links to actual documented research. I was very impressed with IBM's Java/Kernel benchmarking and research.
  • Hmm. I suppose with the load you described, and taking an average that includes downtime, you might be able to arrive at 4 days of average uptime, although that number still seems pretty small.

    You can see some slightly old statistics for wonko.com over here [wonko.com]. Those stats are only for a few hours, but I'll update them again when I get home this evening (I'm at work now). You can see stats for the front page here [209.185.154.35] (only the front page though, these stats aren't server-wide). That's just the WWW service. The FTP service isn't very active. The machine has never given me an illegal operation, or for that matter any serious error at all (/me knocks on wood). I've probably rebooted about 5 times since the initial installation of Windows 2000 Server, but only to install new drivers or software...never due to a crash or error (although one time was because the power went out, but that's hardly any fault of Windows).

    The only thing I can think of that you might have done to cause your box to crash every 4 days is if you went through the system services and set them all to start on system startup...or perhaps your swap file is insanely small and you keep running out of memory...and of course, there's always the chance that you're running a third-party app that's leaking memory. What types of calls do you use to talk to the SQL server? It's possible that something may be opening database connections and forgetting to close them, though SQL Server tends to notice that and deal with it in most cases.

    And of course, it's entirely possible that you just have bad karma. :) Slashdot reported months and months ago on a scientific study in which researchers found that some computer users just simply encounter more bugs than others, even when using the same unmodified systems. They weren't able to explain it any other way than saying that certain users are just more bug-prone than others. I use this to explain why I generally tend to have excellent luck with Microsoft software, yet the Linux kernel often coredumps in the middle of an install when I'm sitting in front of the machine it's installing on (yes, it's true...if you doubt me, I'll be happy to demonstrate...my aura must be anti-Linux or something). ;)

    --

  • It all has to do with how they weighted the various categories. Looking straight up between Win2k and Novell where there was an edge to one or the other, here's how they biased it:
    1. Rate File services as only 15%, and the network benchmark at only 10%. Bias: against Novell which whipped Win2K
    2. Rate the areas of Scalability (20%), Security (10%) and fault tolerance (10%), which were only discussed as a theoretical construct (not fully thrash-tested by security experts, etc.), where NetWare has been. Bias: toward Win2K
    3. Install (5%) bias towards Win2K.
    Okay follow me here:
    1. I only install a Novell network once, and perhaps it's not quite as peachy-keen easy as Win2k. But it rates 50% better in performance (9.3 vs. 5.6 rating). So why the hell is the install given a value equal to 1/2 of the performance?
    2. Scalability doesn't necessarily come into play until extremely high loads are encountered. You can buy and install a second NetWare server and still have a lower overall cost than the top-end MS servers (you wouldn't believe the license prices for Enterprise NT, required for "highly available" systems with RAID disks, etc.), or half a dozen midrange RH Linux servers.
    3. The security, stability and fault tolerance figures are only from the lab, not from the real world where Novell, Linux, and SCO have already proven themselves.
    So there you have it. Win2k does come out on top. But only if you cheat.

    Measure real world performance, proven stability, security, etc., and the score comes out more like #1 Novell for the biggest installations, #2 RH Linux for small and midrange, and a tie for #3 between SCO and Win2k. SCO for people who know better, and Win2k for PHB's who don't.

    P.S. I changed the scalability figure to 10% and upped the value of the file/network portion of the test...Novell wins in a Landslide. RH still third, but Win2K is not much better, and the difference is 90% in the docs and utilities. ;)
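    The arithmetic behind re-weighting is easy to reproduce. The per-category ratings below are hypothetical stand-ins, except for the 9.3-vs-5.6 performance pair quoted above; the point is only how strongly the weights, rather than the measurements, drive the final order:

```python
# Hypothetical ratings (0-10); only the 9.3 vs 5.6 file-performance
# pair comes from the article itself.
SCORES = {
    "NetWare": {"file": 9.3, "scalability": 6.0, "install": 5.0},
    "Win2K":   {"file": 5.6, "scalability": 9.0, "install": 9.0},
}

def total(weights, scores):
    """Weighted sum over whatever categories the weights name."""
    return sum(weights[cat] * scores[cat] for cat in weights)

# The article's weights vs. a file-performance-heavy alternative.
published  = {"file": 0.15, "scalability": 0.20, "install": 0.05}
reweighted = {"file": 0.30, "scalability": 0.10, "install": 0.05}

for name, scores in SCORES.items():
    print(name, round(total(published, scores), 3),
          round(total(reweighted, scores), 3))
```

    With these stand-in numbers, Win2K leads under the published weights and NetWare leads once file performance is weighted up, which is the whole argument in miniature.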

  • The first half of the article was about how NT had better I/O throughput, both for large transfers and while under load, than Linux. You can't say Linux is faster in this particular comparison.

    Also, looks like cost wasn't a factor in this article. So you're left with "more reliable".

    Now, if we add up the points like IDG did, NT comes out on top. Of course, you've already made up your mind in favour of Linux.
  • Another problem is that they just weren't familiar with Red Hat Linux.
    Among other things, they claim samba can be configured only through a "cryptic configuration file".
    First of all, I wouldn't call smb.conf cryptic - second, Red Hat Linux includes both linuxconf and swat (samba web administration tool), both of which can be used as simple frontends for editing samba configuration.

    Also, if samba is their *only* idea about filesharing (IMO both nfs and coda are superior), they must be joking.

    And that's not the only inaccuracy...

    Also, did anyone else notice when Windows 2000 failed miserably, they contacted Microsoft and tried to get it fixed while they did not bother to ask Red Hat or anyone else if there's really no frontend for samba configuration?

    I wonder how much Microsoft paid for this.
  • Ummmm... folks, as far as I know, Win2k hasn't been released yet, so those votes are bogus, yes?? So other than SCO, Windows NOS's are in last place in the poll.

    Er, wrong. Windows 2000 has been widely available in various prerelease and/or beta stages for over two years. I've been running it as a server OS ever since it was called NT 5.0 beta 2 (a little over a year ago). Plenty of people have used Windows 2000. In fact, I'm typing this on a final release version Win2000 Pro machine.

    --

  • by DonkPunch ( 30957 ) on Tuesday January 25, 2000 @12:51PM (#1337675) Homepage Journal
    (Big, frustrated rant ahead)

    It really burns me when I see technical pundits talking about "enterprise-class" systems when they clearly have no idea what an "enterprise" is.

    Here's a big, fat, spelled-out clue for them: "Enterprise" means more than just "really big". "Enterprise" means more than "lots of bundled pretty lights".

    1. Enterprise systems have to be able to handle a tremendous load without sweating. This means lots of processes and lots of threads running smoothly at the same time. When overloaded, enterprise systems degrade gracefully.

    Let me spell that out in caps -- ENTERPRISE SYSTEMS DEGRADE GRACEFULLY. They may refuse additional client connections, they may log error messages, but they may not EVER collapse under pressure. Anything less is not an enterprise system. It is a toy. Period.

    2. Enterprise systems integrate with existing systems. A REAL enterprise often has legacy systems -- some of which have been running since before the new web developer was out of diapers. Companies offering enterprise solutions like to talk about how well their products work with your existing systems. Companies selling toys also want to help you with "updating", "migrating", or "replacing" your existing systems (which were working just fine before you strolled into my office, twerp). Consider that a red flag.

    3. Enterprise systems stay up. In a real enterprise, rebooting costs money. Usually it costs BIG money. A company who doesn't understand that doesn't understand what an enterprise is. Beware -- toy-makers will try to sell you aftermarket add-ons for clustering, failover, or maintaining your "quality of service". Don't be fooled. You will pay more to maintain "quality of service" than you would pay to get a solid system in the first place.

    I am so tired of magazines pandering to managers who think that they're running an "enterprise". Real enterprises already HAVE professionals to do these comparisons. They have no choice. In the long run, having a professional who is accountable is whole lot cheaper than trusting some twit at IDG, CNN, or ZDNet.

    Now, with that perspective, I ask: do any of these NOS qualify as "enterprise-class"? If not, which ones come closest?
  • The only reason I pointed this out is because we recently installed a P-90 as a NAT box in our environment. An intern in the building assembled the pieces and installed the OS in 2 hours, and I secured it and got it NATing the entire network in another hour.

    We were going to purchase an Ultra 1 running Netra or FW-1, which would have cost us around 3k-20k. How much of a price difference do you need? It's a deadweight PC (deadweight in our environment, since we run Win2K) versus a very useful workstation being wasted on NAT.

    And the Linux community knows where our shortcomings are (multi-threaded TCP stack, larger fd lists, better SMP support), and they are being worked on. But it's good to know that at the low end of the spectrum, Linux has the cheap and easy solution. Anything you throw a lot of money at is going to work better than something that receives almost no money, on average anyway.

    --
    Gonzo Granzeau

  • I too was surprised by the lack of monitoring tools comment. I went looking. Specifically for graphical tools.

    So far I've found

    • xload (standard CPU load graph)
    • xcpustate (horizontal bar graph showing current CPU load.)
    • xsysinfo (similar to xcpustate but includes memory and swap)

    I'm forced to conclude that they have a point on the monitoring tools.

  • Because the pre-compiled Red Hat is compiled for the lowest-common-denominator CPU, you can't fairly compare out-of-the-box speeds to something like, say, Mandrake, which is compiled for Pentiums.

    Not quite true.
    The speed of the TCP/IP stack, which was the limiting factor here, is exclusively a kernel issue. Red Hat does ship i586 and i686 versions of the kernel.
  • That's all well and good but what are the average uptimes with Windows 2000. I use the latest beta release on one machine and have an average uptime of about 4 days. This is on a lightly hit server with ftp and IIS running. My Linux box has been up for 185 days and that's only because i needed to add a new NIC.

    4 days?!???!? You've got to be kidding me. Either that or you're the most unlucky Win2K user on the planet.

    As I've stated here many times before, I run wonko.com [wonko.com] on a Windows 2000 Server. The machine is a Pentium 166 with 64 megs of RAM and a 6-gig hard drive. I use IIS5's www and ftp services, and SQL Server 7.0 for my database backend. That sucker has been running nonstop and without trouble since the day I booted it up, about 80 days ago. A friend of mine has been running his Win2000 server for nearly 200 days now, with no problems.

    The stability of Windows 2000 is very much improved over that of NT4. If your server only lasts 4 days, you must've done something horribly wrong to it when you set it up.

    --

  • by yugami ( 102988 ) on Tuesday January 25, 2000 @11:40AM (#1337688)
    You doubtless wonder if Microsoft had anything to do with it whenever you stub your big toe.

    Well, maybe not when I stub my big toe, but I will sue them if I get cancer, since I go out and smoke every time I reboot my machine/server.

  • Learn how to use vipw to add new users, and you'll be able to do it in any Linux and all the BSDs as well.

    Actually, I'm quite familiar with that, and whenever useradd is not there, that's just exactly what I do. And it works on any UNIX period, even Unicos :-)

    I just figured that since the topic was more or less goof proof ways (compared to the various gooey interfaces) useradd might be safer.

  • One part of the article I don't agree with was that you need graphs and chart wiz-bangs to figure out what your system is doing.

    I was writing a PHP3 script on my machine and I accidentally created an infinite loop. Not realizing this, I hit the refresh button in Netscape, and after a few seconds I heard the disk start spinning. The page should have been up by then, so I switched to a text console and typed top; Apache had gobbled up about 110M of memory.

    I find that top displays information in a much easier to read format than NT4's task viewer. Other tools such as vmstat give quick access to any information I want. I've played with NT4's system monitor (??) that displays graph and other histories of various stats and find it more difficult to configure than poking through a man page for which cryptic letter combination from vmstat will tell me how many page faults I've had.

  • Which network operating system do you prefer?
    • Windows 2000 17% 1462 votes
    • Windows NT 10% 875 votes
    • Netware 13% 1106 votes
    • Red Hat Linux 46% 3948 votes
    • SCO UnixWare 2% 170 votes
    • Other 12% 1040 votes

    Um...think about it. 4,000 Slashdotters go visit the site and take the poll. Do you think they're going to vote for Windows? Bah. The poll is invalid until 4,000 Microsoft employees get their chance to spam it as well.

    --

  • They did everything they could to beef up W2K's performance: reran tests, reset parameters, etc., and then they simply wrapped its performance numbers in puff descriptions.
    Did they recompile Linux to make it i686? No.
    Did they retune it and Samba? No.
    Did they compare W2K with SuSE, the Linux distro with the largest sales volume, worldwide? No.
    A W2K puff piece?
    Beyond a shadow of a doubt.
  • I saw NOS in the article and immediately wondered why they were running tests using obsolete CDC Cyber mainframes. Just shoot me now.

    This sort of test is just one more argument in favor of distros, particularly big commercial ones like Red Hat, shipping different editions tuned for different uses. Kind of like NT Workstation vs. NT Server. A home user who wants his machine mostly for word processing and web surfing on a dial-up account doesn't have the same needs as someone setting up a file server on a LAN. Of course the entire distro should ship in each package, but the default configurations should be different, and maybe the kernel could be tuned differently in each case. And the home user especially does not want to have to read a book to learn about how to set up his system - he wants to use it right out of the box.

    At least with this scheme, when Linux performs poorly on benchmark tests we could always say they were using the wrong version.

  • by pb ( 1020 ) on Tuesday January 25, 2000 @11:46AM (#1337724)
    I think what our friends at CNN meant to say was:

    1) There are *too many* graphical monitoring tools for Linux [152.7.41.11].

    Therefore,

    2) It's too confusing. I bet there are text tools. No one on Unix would use graphical tools...

    3) If we told the truth, we'd lose our "Microsoft Journalistic Objectivity", and get shunned by the other trade rags. Oh no!

    4) We're really incompetent to review anything but Windows, but we'll pretend we can do it to sound smarter... And we wouldn't want to actually *ask* anyone else for help. ...except Microsoft. They're okay. They provide support...

    Finally, for those curious about the link / screenshot, I'm running a modified Redhat 6.0. That is, it's somewhere between RH6.0 and RH6.1, and it also supports the freaky stuff my university (NCSU) uses for networking. It's neat. And I was running DOSEmu (Fire demo) for the CPU cycles, and a Scheme interpreter (essentially doing 6^6^6^6, for the swap). Gtop is a pig; I like xosview and xsysinfo.
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Please name a "big player" who hasn't got involved in Samba :-).

    Ask Terry Lambert why IBM bought Whistle instead of a Linux company. Or how it treats the GPL even now.

    The reason why IBM is holding Linux at arm's length -- and so many other companies give it lip service but are hesitant to integrate it into products -- is the GPL. It's a license motivated by spite, and its entire raison d'être is to put companies out of business and destroy programmers' livelihoods.

    Folks from at least one of the companies you mention above have told me frankly that they tolerate the GPL because they perceive jumping on the Linux bandwagon as important to their short-term business opportunities. But at the same time, they believe it's necessary (and I think they're right!) to "firewall" their IP against the GPL.

    By adopting the GPL, you're making yourself an enemy rather than a true ally. Some of these guys will sleep with the enemy if they must. (That's why they've gotten involved with Microsoft -- to their peril!) But if you really want their help, it is best not to do that. Don't adopt a license whose purpose is to stab them in the back, and you'll get their full support. And the support of folks like me, who won't touch GPLed code both as a matter of principle and as a practical matter. If there's any alternative, we won't run GPLed code.... And we certainly won't contribute to it. We believe, very strongly, that it would be unethical to do so.

    --Brett Glass

  • FreeBSD *was* tested with Samba (this was in the FreeBSD 2.x timeframe just after the Mindcraft benchmarks - not with 3.x).

    It did about the same for SMB fileserving as Linux did. I was disappointed, as I would have loved to get an Intel Open Source based rebuttal to the Mindcraft benchmarks, and I didn't care if it was FreeBSD or Linux. I was pushing to get the tests done, and had FreeBSD done much better I would have pushed PC Week to do a Samba+FreeBSD test. Remember, I'm promoting Samba, not an OS :-).

    Unfortunately, at least with FreeBSD 2.x the TCP stack was also very single threaded in the kernel.

    Things look *much* better with the 2.3.x Linux kernels, and I hope they improve in the same way for the *BSD's also !

    Regards,

    Jeremy Allison,
    Samba Team.
  • 1) How long can it stay up without rebooting?
    2) How soon can a technical problem be fixed?
    3) What software will it run?
    4) Will it be around for long?
    5) Can you purchase it?

    For more notes on #4, check out this article [zdnet.com] on ZDnet.

    -----
    Want to reply? Don't know HTML? No problem. [virtualsurreality.com]

  • IMO both nfs and coda are superior

    How did you arrive at that conclusion? The general consensus seems to be that NFS on Linux still sucks badly (security problems). OTOH, in my own experience Samba is pretty reliable. And all the commercial sites I've seen that use Linux as a file server use Samba rather than NFS. There must be a reason for that.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • by M-2 ( 41459 ) on Tuesday January 25, 2000 @09:44AM (#1337745) Homepage

    Look at the report summary:

    If you want a good, general purpose NOS that can deliver enterprise-class services with all the bells and whistles imaginable, then Windows 2000 is the strongest contender. However, for high performance, enterprise file and print services, our tests show that Novell leads the pack. If you're willing to pay a higher price for scalability and reliability, SCO UnixWare would be a safe bet. But if you need an inexpensive alternative that will give you bare-bones network services with decent performance, Red Hat Linux can certainly fit the bill.

    So, if you're looking to drop a bunch of cash on bells and whistles, get Win2K. I don't think we can really hate Novell, and SCO UnixWare is sort of a cousin.

    What this review points out is, once again, what the Mindcraft Review pointed out: Linux is not 100% ready for high-power, high-speed, prime-time major network server use. It IS getting there - look at the stuff that's popped up since then! - but more work is needed.

    The bright side is: how many people are going to look at this, grr, and get to work on fixing it? That's the good side of FUD reports - they get people off their butts and trying to make things better.

  • What's even more funny is that according to Novell, Microsoft demands disabling all disk caching if you are running Active Directory.

    Sorry, but you and Novell are spreading bad information. Disk caching is only disabled for those drives which store the Active Directory information and log files. Since any sane administrator would put the files being served on a drive which is separate from log files and operating system files, this is a complete non-issue.

    Cheers,
    ZicoKnows@hotmail.com

  • > I wouldn't say Red Hat came in "last",
    > because they weren't really rating them
    > apples to apples. They said Red Hat was best
    > for some things, Windows 2000 was best for
    > others, and so on.

    The summary was definitely NOT a ranking. It was pretty clear to me that they felt that 2000 was the clear winner, with Novell probably edging out RH, and SCO being the weakest contender.

    The summary was really an evaluation of each NOS's strong points and the people/organizations they would best suit.

    It's kind of interesting, however, how much these NOS tests (specifically this one) depend on Windows clients and Samba. They should have an article with benchmarks of people trying to get Macintosh or Linux networks served up by a 2000 server. Turn the tables and have Microsoft be the one that has to adapt to the client!

    I mean, we're talking about Windows clients being served by non-Windows machines that were not really designed (with the exception of Novell) for the sole purpose of serving up Windows services. With netatalk supporting AppleTalk over IP, a Linux machine will beat down any non-AppleTalk-IP or MacOS X server any day of the week. I don't believe that Windows 2000 serves up AppleTalk over IP, but I could be wrong. Even if it does, I am willing to bet that a RH box with netatalk will beat it. -k
  • by Anonymous Coward
    This review seems to be directed at those who need a "quick and dirty" networking solution, which doesn't really give enough credit for things like Linux's rapid improvement/development cycle and the variety of useful freely available apps, as well as the availability of source code.
    Cases in point: it sounds like they were just using RH 6.1 out of the box, using the pre-compiled binaries. The results may have been different if (a) they simply upgraded all of the packages via RPM or (b) they grabbed the most current (stable) source code for the kernel and the apps they were using and compiled them themselves. Secondly, they said that RH doesn't come with many monitoring tools. True, but many are available simply by going to Freshmeat and searching for "monitor" or similar. But they did mention Linux was customizable, so at least they got that right :)
  • They mentioned "no GUI" for Samba. I guess their web based management tool isn't enough?

    Actually, the deal is that they were only evaluating tools that the vendor included with their products. To most people, 3rd-party tools aren't valid. You've got to get everything from Microsoft, Baby!! That's what Microsoft has been heralding for a long time, and people have been buying it hook, line and sinker. This is why IE, AD, Visual Studio, the registry, SQL Server, Exchange, and others are "better". If it's in the MMC, then it's "good"; otherwise it's a crappy third-party tool.

    Not surprisingly, Linux has the opposite methodology. Everyone is not only welcome to innovate, everyone is *encouraged* to innovate. When everyone chips in parts to the complete system, then everything is "third-party", and people who bought into MS marketing think that most everything is not "part" of Linux, and that all Linux provides is "file" based interfaces. Yeah, the VI interface, Baby!!

    -Brent
  • Funny how W2K comes in third in file performance, short TCP tests, doesn't interoperate with anyone else, and has no written documentation, yet still seems to come in first and get the award.

    By their own admission Netware either beat Microsoft hands down or tied with Microsoft in all but the long TCP test. And funny how they only looked at optimizing _W2K_ when it did unexpectedly badly on a test. They probably could have modified Netware to do much better on the long TCP test if they had cared to. Can anyone say Netware got screwed?

    And as far as Linux coming in last, what a crock. Linux came in first on short file writes and short TCP writes. Linuxconf was given short shrift, considering that the underlying text mode of configuration is still available remotely from anything that supports telnet. And if they wanted a little more performance they could always just turn off X and get a few extra cycles that way.

    For system monitoring under Linux try typing 'xosview'. This has been a standard tool for as long as I can remember and is much more responsive than anyone else's monitors. Not to mention the hundreds of other tools that are available. Not to mention that all of this information is also available through the /proc filesystem for use in automatic scripts.
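
    As a sketch of that last point, here's roughly what pulling stats out of /proc looks like (file layouts shown are the usual Linux ones; the exact numbers will obviously vary per machine):

```python
# Read the 1-minute load average and free memory straight from /proc --
# the same data xosview and friends display, available to any script.
with open("/proc/loadavg") as f:
    load1 = f.read().split()[0]            # first field: 1-minute load

with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemFree:"):
            free_kb = int(line.split()[1])  # value is reported in kB
            break

print("load: %s  free: %d kB" % (load1, free_kb))
```

    No GUI required, and the same reads drop straight into a cron job or alerting script.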

    And top shouldn't be overlooked either. With its fine-grained ability to monitor and modify process priority on the fly, it is the best tool in my book. Under top I can see when Netscape is growing out of control and kill it. I can find the processes which are taking all the processor and turn down their priority so the rest of the system gets its fair share. You just can't do this on non-*NIX systems.

    For managing users across machines, Redhat comes with NIS. Sure, it has its problems, but once you get it set up and running, it works like a charm. And *NIX boxes are the only ones that will map in the user's home, utility and share directories on any *NIX box that the user logs into. This has the effect of the user getting their own environment and files no matter what workstation they log into. This just isn't possible under Netware or W2K. Not that the testers are even aware of this capability, as they have obviously never run *NIX before.


    Linux has file and print services for Netware, Windows, Mac, and *NIX. It is easy to configure a single Linux box to act as a file and print server for all four networks at the same time, while not having to load four clients on each of the users workstations. We all know how well loading competing network clients on a single workstation works (In case you have never tried it, not very well).

    Stability and fault tolerance? I like how they talk up Microsoft, even though none of W2K's features are actually being used by anyone. Including Microsoft itself. For anything mission critical to Microsoft they use *NIX or mainframes. Hotmail runs Solaris and *BSD, not W2K. Because W2K doesn't scale anything like *NIX.

    As far as documentation, they neglect to mention the fact that you also get a /usr/doc directory with notes on every package in your system, man pages, info pages and a complete set of html based HOWTO's that describe in easy to understand terms how things work and how to configure them to work the way you want.

    I am sick and tired of these people rating software and saying that something is bad because it is a command line utility. This, in and of itself, doesn't make a utility bad. In fact, it is relatively easy for command line utilities to have a graphical shell written for them (xcdroast, anyone?), which still allows you to use the underlying command line tool in scripts and the like. I don't care how easy a graphical tool is to use, it is not as flexible as a command line utility, and _someone_ has to be there to fill in the blanks and press OK.

    Command line utilites are only cryptic if you haven't read the man page. If you haven't read the man page you shouldn't be playing with a tool on a production box, graphical or command line.

    And they didn't even have categories for the areas in which Linux truly shines: development tools, scripting languages, shell environments, programming language support, and Internet services.
  • While we did not probe these NOSes extensively to expose any security weaknesses, we did look at what they offered in security features.

    "Offered" as in "claimed." Well, that's a relief - we all know how forthcoming M$ is about it's OS security. Can you say, "whitewash" boys and girls? No one cares about your opinions on security, CNN - test it or shut up and leave it for the experts.

    They very carefully didn't mention how long you can keep alive a Win2k box, compared with a *NIX or Novell box. They didn't mention how long it takes M$ to patch security flaws. They didn't mention tech support ($500/minute at 1-900-micro$oft, or free on usenet). They didn't mention software cost, or Microsoft's gouging with its new license structure. They didn't mention what hardware they tested on. They didn't mention what hardware you need for an acceptable install of each OS (e.g. 3x more power just to run the oh-so-pretty Win GUI).

    I guess you can't expect too much from CNN, eh? Sad how many people will read this and not think about any of the unmentioned issues.
  • by Uruk ( 4907 ) on Tuesday January 25, 2000 @01:26PM (#1337754)
    Why no overall "plays nice with others" score? Well, it's because this isn't a benchmark that's intended to be the end-all, be-all of all benchmark tests. Of course they have to leave a lot of stuff out since operating systems are so complex, that if you were to test every single aspect of them, you would need an entire site, not just one published article, on how they work.

    Besides, different people have different priorities - you'll notice how Redhat got slagged on the fact that some of their tools don't have graphical front ends, and some of them like linuxconf do "evil" things like resize to be larger than the size of your display. Horrors! :) But seriously, those things probably wouldn't be an issue if they had a UNIX admin do the test, somebody who was used to not having a graphical front end. In that case, maybe Win2000 would have ended up on the bottom.

    The point is, that when you release a benchmark on something as complicated as an OS, you're going to miss a lot since there's too much to cover, and you're also going to be a bit biased by nature of the fact that the guy doing the reviewing probably isn't a seasoned professional on all of the OS's simultaneously. From the sound of the article in fact, he's probably a windows munkey. :)

  • by NullGrey ( 46215 ) on Tuesday January 25, 2000 @11:49AM (#1337755)
    You seem to have fallen for one of M$'s greatest lies. They would like the general public to believe that an OS consists of a system for running software, as well as applications that run on it.

    The function of an OS is as follows (as defined in a CS OS class):
    • Provide memory management
    • Provide process management
    • Provide access and management to disks and static storage

    All of these functions are provided in the kernel, although it may be argued that a small set of utilities is also necessary (insmod, ps, mount, etc.). However, I seriously doubt the kernel and the few required utilities change from distro to distro. RedHat is a distribution of the Linux OS. It is simply a set of utilities and applications that is packaged with the OS. RedHat itself is not an OS.

    Everyone clear?

  • They only reviewed Intel OSes and complained if tasks x,y, and z weren't accomplished with help from a GUI.

    Real enterprise OSes such as OS/390, OS/400, HP-UX and Solaris - non-PC, non-toy OSes - were notably absent from the list. I guess these fall into the "other" category I voted for in the poll. Still, I never saw one real enterprise OS even so much as mentioned.

    Strangely, an as-yet unreleased OS with a nice GUI was mentioned: Win2K.

    "clickety-clickety-click. Ooooo! See? I can run an enterprise!!!"

    -M
  • We definitely need graphical network monitoring tools. I just can't hack it with the command line stuff because I can't get my head around the documentation for it all - you need to be a TCP/IP expert to get anywhere with it.

    For example, it would be nice to have a graphical tool which would dynamically display the traffic on an interface, including any combination of fields in each packet as specified by the user in an onscreen dialogue. It would be nice to be able to monitor DNS requests and see the address and host name returned. It would be nice to be able to graph things like socket usage.

    It occurs to me that maybe the TCP/IP stuff in Linux hasn't received a lot of attention because most of the mindshare involved doesn't really exist in the Linux community. The whole thing was brought over wholesale from NetBSD. That would probably also explain why we're still waiting for a multithreaded IP stack.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • by maynard ( 3337 ) on Tuesday January 25, 2000 @09:45AM (#1337761) Journal
    ...with over 40% of the vote. And interestingly, many of the statements in this article are pretty subjective opinion. For example:
    Microsoft's Windows 2000 edges out NetWare for the Network World Blue Ribbon Award. Windows 2000 tops the field with its management interface, server monitoring tools, storage management facilities and security measures.
    Yet they admit several paragraphs down:
    Windows 2000 demonstrated poor write performance across all our file tests. [...]
    And even after turning off forced syncs after writes:
    This second round of file testing proves that Windows 2000 is dependent on its file system cache to optimize write performance. The results of the testing with the write-through flag off were much higher - as much as 20 times faster. However, Windows 2000 still fell behind both NetWare and RedHat Linux in the file write tests when the write-through flag was off.
    So, even though Win2000 is the slowest of the bunch (even slower than SCO's UnixWare, according to this article), it "tops the field" - but the benchmarks tell the true story. So, if you just skim the first few paragraphs of this article you'll walk away thinking Win2000 is the OS to beat. But by actually reading the article, you'll see the whole picture. Why do I think this is more of an advertisement for Win2000 than a serious article?
  • So, even though Win2000 is the slowest of the bunch (even slower than SCO's UnixWare, according to this article), it "tops the field" - but the benchmarks tell the true story. So, if you just skim the first few paragraphs of this article you'll walk away thinking Win2000 is the OS to beat. But by actually reading the article, you'll see the whole picture. Why do I think this is more of an advertisement for Win2000, than a serious article?

    Because you don't seem to be willing to listen to the many weaknesses currently present in Linux that must be addressed if you want to claim superior technology. Lots of these are being worked on, but they aren't shipping yet. The GUI management interfaces in Linux aren't as good, neither are the performance monitoring tools which are woefully obscure. Red Hat Linux lacks ACLs which can accept/deny permissions to files on a per-user basis. Kerberos is a pain to learn about and install. Scalability still isn't great on the 2.2-based kernel series. The IP stack on the 2.2 kernel still isn't multithreaded. Storage management isn't that great without journalling filesystems which have been in the works on Linux for at least a year and have long been on NT. Distributions like RedHat aren't shipping ReiserFS as supported software AFAIK. The article and the associated scorecard [nwfusion.com] show you the criteria and weightings.

    (Is anyone doing GUI performance management tools, BTW? I've seen at least alpha code and efforts for everything else except that one...)

    Would you trade all these things away to get 10-15% better file performance? I would.

    There *are* things that Microsoft can't match (freedom, etc.) and we have to keep improving our ability to articulate those benefits, but technologically, Windows 2000 does raise the bar for Linux to beat. We don't want to end up like Netscape: vaguely cooler but not as strong technically.

    Less talk, better thinking, more good code.

    --LinuxParanoid, paranoid for Linux's sake

  • First off, what is the deal with this test claiming that Red Hat Is Linux?

    Red Hat != Linux, and we all know this. Red Hat is definitely not the best Linux distro, and it is unfair to peg Linux in general with Red Hat's faults.

    Now despite this, Red Hat was given an unfair shake. First off, this article gives little detail on how Red Hat was set up. I seriously doubt they installed the latest Kernel or did much with any of the configurations.

    Now, it seems from the article that in raw network performance, Novell and Red Hat did much better than Windows 2K.

    But then they bring in all of this information about "interface" and suddenly Windows 2K is made the winner because the editors liked the way Windows 2K's administration was set up. Quite a subjective thing to base a claim that W2K is the best Network OS, if you ask me.

    I don't even see from the editor's comments how Windows 2K was the clear winner. It seems to me that if there was any clear winner from this it was Novell. I wonder if Microsoft sponsored this little test in any way?

    "You ever have that feeling where you're not sure if you're dreaming or awake?"

  • Just a small point, but... does anyone else get rather annoyed with the way RedHat gets lumped in with SCO? Linux is a Unix-based OS, so is SCO (loosely-speaking: you know what I mean, to hell with the UNIX(TM) crap); RedHat is NOT an OS, just one way of distributing Linux. Full kudos to RedHat; but it is getting more and more common to use the distribution name as though it meant an OS, which it doesn't. SCO and Linux can be compared; SCO and RedHat can't: it's apples and oranges.
  • Microsoft's Windows 2000 edges out NetWare for the Network World Blue Ribbon Award. Windows 2000 tops the field with its management interface, server monitoring tools, storage management facilities and security measures.
    I was surprised to see the remote-client based admin tools for Netware referred to as simple and basic - the server-based tools for NW have always lagged behind the remote ones, and nwadmin is still the easiest way to administer netware boxes (even if it is being depreciated now in favour of a java-based client). Unixware and Linux likewise administer well remotely; unless W2K has massive advances over NT, it will expect you to be at the server console to do almost anything, which was one of my main dislikes of NT.
    --
  • The general consensus seems to be that NFS on Linux still sucks badly (security problems).

    s/on Linux //. SMB may have its problems, but it's better on security than NFS. Trusting clients for security is a bad idea.
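
    To illustrate the "trusting clients" point, a classic NFS export looks something like this (hostname and path here are made up) - the server simply believes whatever UID the client presents:

```
# /etc/exports -- the server trusts the client's notion of who the user is.
# With no_root_squash, root on the listed client is root on these files.
/home   client.example.com(rw,no_root_squash)
```

    Anyone who can answer as that hostname, or spoof a UID on it, owns the export. SMB at least makes the server do its own authentication.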
  • The Linux kernel, standing alone, does not constitute an OS. It's near-useless on its own. And historically, the term "OS" was never used in this way (except perhaps those that claimed that Windows 3.1 was not an OS because it runs on top of DOS).

    At minimum, the basic set of services, like the C library and those daemons that the system can't really live without (pretty much all required features that would appear in the LSB document) have to be considered part of the OS. Thus most of, say, the Debian base packages or the Red Hat required packages has to be included in what you call "OS". And generally speaking, these are not exactly the same for different distributions.

  • by BJH ( 11355 ) on Tuesday January 25, 2000 @09:48AM (#1337781)

    Where in that article (which I read a couple of hours before it was posted on /.) does it say that RedHat finished last?

    I'm going to rant a bit here - Could the posters please make sure that the comments they post (either their own or those the submitter put in) are at least vaguely accurate and not likely to cause a goddamn flame war? This comment was completely gratuitous.

    Back on topic: I actually found the article to be reasonably fair (if a bit clueless in places - the "RedHat only" poll comes to mind), but it covered some pretty deep material for CNN; stuff about Winblows NT's multi-threaded TCP stack, the stuff about Samba, etc.

    Can we do without the endless flames of CNN now? Please?
  • Well, it sounds like they don't know what the fuck they are talking about; but that's hardly news.
  • by rjamestaylor ( 117847 ) <rjamestaylor@gmail.com> on Tuesday January 25, 2000 @09:48AM (#1337789) Journal
    When we examined the Samba file system code, we found that it too honors the write-through flag. The Samba code then finds an optimum time during the read/write sequence to write to disk.

    This jumped out at me because it so obviously points to a (if not the) significant benefit of Open Source (or, of at least having the source code, open or otherwise): not guessing and inferring about a black box. Windows 2000 "appeared" to be the only system "honoring" the performance-hitting flag of the benchmark suite. That was the argument MS gave for why Win2K's write performance was 10% of read. But the testers could infer Netware honored the flag by running the suite without it and noticing the performance increase. Nice to know. Great to be able to change inputs, observe outputs and infer process.

    But with RedHat (Samba, specifically) no guessing was needed. Just look at the code! There it is. No mystery.

    This suggests to me that the real SPAM threat has nothing to do with email. It has to do with closed-source systems: mystery meat. Usually nasty things are contained in mystery meat (no offense to Hormel...I'm sure Spam is fine and I remember my bachelor-days of fried Spam with Mustard on Toast fondly).

    Fight spam: insist on the source! Can you imagine eating something that didn't come with an ingredient list? Why use an OS that isn't OS?

    Sorry... got carried away, but the point is clear: those who care about their systems will demand access to the source.

    :-only kona in my cup-:

    :-robert taylor-:
    When was Linux ever a bare kernel, aside from the early days? Every Linux dist. uses the GNU C lib., so no difference there. There has been a generally-agreed standard of what constitutes a working Linux OS since '93, and if you examine the competing products, you'll see they don't differ much from this baseline -- Slackware vs RedHat /etc/rc.d schemes is the biggest difference I can think of. I was rather vague in my initial comment (exasperation can do that); I hope this makes it clearer.
  • Hmm, yes, very true. But how many pointy-haired managers does anyone know that do anything more than skim the first few paragraphs? Sad but true. But, yes, of course it's "advertorial": most of the computer press is, unfortunately. Not deliberately, I suspect; sustained exposure to PR releases maybe has a deleterious effect on all but the strong-minded few (Chris Bidmead is a fine example of the latter, though he's a Brit so you might not have heard of him).
  • Version 4.0 is about to go into code freeze, and several people have been working on TCP/IP optimizations for 3.x-STABLE and 4.0.

    I'd like to see Samba better integrated with FreeBSD. I also think that the Samba team should consider Apache-like licensing, as this would get some more of the big players interested in assisting the Samba development effort as they have Apache.

    --Brett Glass

  • I'm surprised no mention was made of the limited time for evaluation of the products in these experiments.
    It is obvious to me that many of the things the IDG.com reviewers were considering as highlights of the MS operating system would get really annoying after a long time, but allow people to get started quickly. (Here I refer to the "Moving into Management" section of the article.) Although for something like a personal operating system this may make a lot of sense, for a network operating system I would expect them to realize the point and click interface in the MS OS is going to become a pain in the rear after a few months, and they will be wishing fondly for the text based tools they mention once (and don't comment on further) in Red Hat.
    Again, in the "Handling the Staples: File and Print" section, they fall into a similar trap. How many times will the average network admin want to start the "print administration wizard" and start clicking things before they start wishing for a faster, more precise technique? Below, the reviewers call the ASCII file configuration "a serious drawback" to Linux, and I agree it is a drawback - at the beginning when one is learning - but an advantage later.
    The most obvious quality of a network OS from a user's point of view is stability ("is my email available to me, or not?"), and this is not mentioned in the review. The experiments needed to address this question correctly likely exceed the time scale this review covers. (Perhaps one can simulate a month's activity by applying a large load on the network for a short span of time, but this seems very speculative at best.) The results of such an experiment seem necessary to me to conclude which is the "King of the network operating systems".
    (Well, since one of these American beers (Coors?) is the King of Beers perhaps the title is somewhat in jest?)
  • by the way ( 22503 ) on Tuesday January 25, 2000 @01:39PM (#1337804)
    It seems that the test results completely belie the conclusions drawn

    No, they didn't. Although RH had better write performance (although only slightly better with cached writes), on many of the more qualitative tests RH came out behind. These qualitative tests were based on the appropriateness of the system for serving up files and printers.

    When you're looking after the file server for a hundred people, do you really want the flexibility of scripts and configuration files for the simple tasks you do every day? Have you even looked at Win2k? Using both RH and Win2k every day, I can support the authors' conclusions that the MMC tools in W2k are both fast and powerful for common day to day tasks.

    Those who suggest that W2k doesn't have good scripting capabilities are also on the wrong track, IMHO. Windows Scripting Host provides access to much of the administrative interface, and Perl for Win32 can be used to automate pretty much everything (since it can access the COM objects that run the show). You'll also find almost all the GUI tools also have an associated text tool (e.g. try typing 'routemon' into a W2k machine sometime to see how to configure routing and tunnelling from the command line). If you want powerful shell scripting, grab the Cygwin tools [cygnus.com] which include bash, make, gcc, etc.

    The article is also right that W2k's documentation is fantastic. Commonly used tasks get dozens of examples and step by step instructions (e.g. look up 'routing and remote access' in the help) and more arcane commands and options still contain a thorough explanation (e.g. look up 'routemon' in the help).

    Before people here start making judgement on Win2k, please use it. And that means try it on a machine you actually use, for a few months--give it the same air time that you'd ask somebody trying out Linux for the first time to give.

    Having said all that, I should balance this by mentioning what a great OS Linux is too (really - I like both Linux and W2k!). For serving up web pages it's got the wonderful Apache (which on Win32 is still immature), and the benefits of Open Source cannot be overstated. The mass of information in the HOWTOs makes complex tasks tractable, whereas with W2k if you go past the scope of the documentation you are often SOL.
  • Is this the launch of a new line of books,

    "Benchmarks from Dummies" :-).

    Actually reading the article, I would say they gave things a fair shake, and did highlight where their testing may not have been accurate. Conclusions are always subjective, based on each of our requirements. They are trying to fit theirs to the bias of an IT manager.
  • Well yes, but at least processes are fairly easy to understand. Network traffic however is a bit more complicated. We could definitely do with some more visual tools to deal with the network side.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • by Anonymous Coward
    MMC - the Microsoft Management Console; a system management framework that supports snap-in modules for configuration of services, etc. on an NT/W2K server. The modules can manage remote machines.

    Telnet - a network terminal emulator; in the Windows world, this is a client application. There are some (fairly) clunky servers available.

    Terminal Services - a service, largely based on licensed Citrix technology, which allows clients on machines with an installed client to run Windows and application software on the server machine, with output directed to the client; similar in some respects to X Windows.
  • > IMO both nfs and coda are superior
    How did you arrive at that conclusion?

    I'm aware of the fact that there are problems with NFS - but the SMB protocol (as used by samba) can't even handle something simple like file permissions.
  • Except for Windows NT, they didn't list _any_ NOS which was not reviewed in the article. They didn't list SUSE, TurboLinux, Debian, Mandrake, CorelLinux, OpenLinux, NetBSD, FreeBSD, OpenBSD, Solaris, AIX, HP-UX, DG-UX, OpenVMS, ... because they weren't reviewed.
  • This is the summary of the test:

    "
    Wrapping up
    The bottom line is that these NOSes offer a wide range of characteristics and provide enterprise customers with a great deal of choice regarding how each can be used in any given corporate network.
    If you want a good, general purpose NOS that can deliver enterprise-class services with all the bells and whistles imaginable, then Windows 2000 is the strongest contender. However, for high performance, enterprise file and print services, our tests show that Novell leads the pack. If you're willing to pay a higher price for scalability and reliability, SCO UnixWare would be a safe bet. But if you need an inexpensive alternative that will give you bare-bones network services with decent performance, Red Hat Linux can certainly fit the bill."

    From what you posted on /. you might think that this is an "anti-Linux" article. Please, keep cool - they say many warm words about Linux, and I think that they are quite fair.

    Regards,

    January

  • They say the only way to configure Samba is through the "cryptic" configuration file.

    Boy, don't these guys actually *read* the documentation? Swat is included in Samba in all of the distributions (I don't use Red Hat, but I imagine it has it too).

    Swat rules, I use it all of the time. It is one the very few configuration tools that doesn't fsck up when you play with the file directly.

    I'll file this one under "misinformed."
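
    For the record, the "cryptic" file in question is smb.conf, and a working share isn't exactly rocket science - SWAT edits this very same file through your browser. A minimal example (the share name and path here are just illustrative):

```
[global]
   workgroup = MYGROUP
   security = user

[public]
   path = /home/public
   read only = no
```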
  • I don't run Red Hat, so I chose "Other"

    =]

    I expect more than a few /.ers to do the same.

  • "Red Hat Linux offers no graphical RAID configuration tools, but its command line tools made RAID configuration easy."
  • I think you were looking for:

    (RedHat != Linux)

    because using your math, we could prove that anything that's not RedHat is Linux....

    (Cisco-IOS != RedHat)
    (Cisco-IOS == !RedHat)
    (Cisco-IOS == (!RedHat == Linux))
    (Cisco-IOS == Linux)

    QED.


    --

  • RedHat got the top spot by far in the "preferred NOS" section. Is it a popular but poor OS? I don't think so. It's not surprising to find faults in various bits of Linux, but what'll be interesting is how quickly such faults are fixed.

    While I agree with you on the second part, I wouldn't rely on the poll to say that Redhat (or Linux in general, for that matter) is that popular. Not that it isn't, but what proves that everyone who voted (for Redhat or for another NOS) actually uses a NOS every day for their job? The poll is probably biased due to the /. effect.
  • Time for a little critical reading:


    Microsoft's Windows 2000 edges out NetWare for the Network World Blue Ribbon Award. Windows 2000 tops the field with its management interface, server monitoring tools, storage management facilities and security measures.


    now. Where in the above does it mention Windows 2000 being fast? Nowhere? That's right, nowhere. Apparently, and.. let's try to take this like calm adults, SPEED ISN'T THE ONLY THING THESE PEOPLE CARE ABOUT!



    Damned conspiracy theorists.

  • It's time-tested, heavily optimized, and built like an embedded operating system rather than a general-purpose operating system. It darn well ought to be better at file and print service, since -- to paraphrase the movie The Terminator -- "That's all it does." (While you can add NLMs to make it do a few other things, they rarely function as well as a separate server.)

    One reason why Novell did so well is that general-purpose operating systems use preemptive multitasking, while Netware uses cooperative multitasking. The latter is several times more efficient, because processes are not interrupted at "inconvenient" times and context switches can be made inexpensive. But cooperative multitasking requires very careful tuning and debugging. Novell has taken the time to do this.
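
    For anyone who hasn't seen the difference, here's a toy sketch of cooperative scheduling (in Python, purely illustrative - NetWare is obviously not written this way): tasks only give up the CPU at points they choose, so the scheduler never interrupts them at an "inconvenient" time and the switches stay cheap.

```python
def task(name, steps):
    """A cooperative task: runs until it *chooses* to yield the CPU."""
    for i in range(steps):
        print("%s step %d" % (name, i))
        yield  # voluntary context switch -- never mid-operation

def scheduler(tasks):
    """Round-robin: each task runs until its next voluntary yield."""
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            next(t)           # resume until the task yields again
            queue.append(t)   # still alive: back of the line
        except StopIteration:
            pass              # task finished, drop it

scheduler([task("A", 2), task("B", 2)])
# prints A step 0, B step 0, A step 1, B step 1 -- cleanly interleaved
```

    The flip side, of course, is that one misbehaving task that never yields starves everyone else - which is exactly why it takes the careful tuning and debugging the parent describes.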

    Linux's degradation under very heavy loads was reported by The Gartner Group more than a year ago in a carefully documented study.

    I'm surprised and disappointed that IDG did not test FreeBSD, which was recommended by The Gartner Group after its evaluation. Gartner recommended it specifically because it handled high loads better than Linux.

    I wonder if this bespeaks an anti-BSD bias on the part of IDG. The company does, after all, publish LinuxWorld and run LinuxWorld Expo. I certainly hope that they did not cut BSD out of the running for this reason.

    --Brett Glass

  • The moronic masses insist on saying that the millennium started 1/1/00, and I'm sorry to say /. posters have by and large agreed with the drivel. So, for the purposes of my post, 2100 AD signifies the next century.
  • by rogerbo ( 74443 ) on Tuesday January 25, 2000 @09:58AM (#1337864)
    Why do these reviews never include an "interoperability" or a "plays nicely with others" score?

    They always seem to test with all Win98 clients or all NT clients. OK, I don't know - maybe most places are all Microsoft nowadays - but in my environment we have Mac, Unix and NT clients, and that's not going to change anytime soon. We have applications that we need access to on all platforms.

    Then all these issues like microsoft's "enhancement" of DNS in Windows 2000, their deliberate breaking of samba authentication in NT SP3 and all sorts of other cases where MS toys "do not play nicely with others" would get mentioned.

    And MS would get dead last in this category every time.

    These are factors real sys and network admins need to know about.
  • I do not think that the testers were biased. I just think that they were bad testers. They didn't understand Linux and thus rated it lower. They seemed to ignore Linux's typical strengths and gave too much credit for "prettiness". This isn't a MS FUD campaign, though. It is just a bunch of unknowledgeable testers.
  • No, no, no, no.

    • Linux -> Operating System
    • RedHat -> Distribution
    • GNU Software -> Applications/Utilities
    • Stallman -> Attention Hungry

    Were 'GNU' to be in the title at all, it'd be GNU/Redhat Linux, not Redhat GNU/Linux -- after all, Linux is an operating system independent of GNU software, but Redhat is not a distribution independent of GNU. To say GNU is the operating system is ridiculous, stupid, ignorant, and a ton of other insulting words.

    Of course, that's just my "opinion"...

  • by DocJohn ( 81319 ) on Tuesday January 25, 2000 @09:59AM (#1337872) Homepage

    You don't buy or judge an NOS based upon a single benchmark result. Read the whole story and you'll see why RH Linux didn't quite make it to the top, mainly because of its poor user management abilities, monitoring tools, and lack of other niceties expected from an enterprise NOS these days.
  • Sorry about the formatting, hopefully this will look better.

    Windows 2000-----------6.72
    Netware 5.1------------9.42
    RedHat Linux 6.1-------6.98
    SCO Unixware 7.1.1-----4.98

  • To: john_bass@ncsu.edu,james_robinson@ncsu.edu

    Your article contains two grave factual errors regarding Redhat Linux. I trust that these corrections can be verified and submitted to the various parties affected, so that as few people as possible need be the victim of misinformation.

    "Red Hat offers the standard Linux command-line tools for monitoring the server, such as iostat and vmstat. It has no graphical monitoring tools."

    Wrong. A quick trip through the default X Window System setup on any of the recent Red Hat versions will reveal a panoply of graphical performance monitoring tools, on a par with, and in many cases superior to, the comparable Windows offerings (the most notable of these is gtop). So, to restate: Windows has several built-in tools and few other alternatives, while Linux offers a multitude of competing monitoring programs to choose from.

    "Linux has a set of command-line file system configuration tools for mounting and unmounting partitions. Samba ships with the product and provides some integration for Windows clients. You can configure Samba only through a cryptic configuration ASCII file - a serious drawback."

    Wrong. See the Samba Web Administration Tool. It is totally functional and has been included in Samba for well over a year.
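For readers who have never seen it, the "cryptic" file itself is also short and readable. As a rough sketch (the share name and path here are made up for illustration, not taken from any default install), a minimal smb.conf looks something like:

```ini
; Minimal smb.conf sketch -- [public] and its path are hypothetical examples
[global]
   workgroup = WORKGROUP
   security = user

[public]
   path = /srv/samba/public
   read only = no
   guest ok = yes
```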

    Also...

    "Red Hat Linux offers no graphical RAID configuration tools, but its command line tools made RAID configuration easy."

    True, it offers no _software_ RAID graphical configuration tools. A common misconception is that hardware RAID vendors do not support Linux, or support it minimally. ICP-Vortex, which makes superb mid-range SCSI RAID controllers, has had full (text-menu-based) configuration support for Linux for some time.

    "Red Hat offers a basic Kerberos authentication mechanism. With Red Hat Linux, as with most Unix operating systems, the network services can be individually controlled to increase security. Red Hat offers Pluggable Authentication Modules as a way of allowing you to set authentication policies across programs running on the server. Passwords are protected with a shadow file. Red Hat also bundles firewall and VPN services."

    I can't find much fault with what you say regarding security on its face, and I can understand not wanting to make hard-to-qualify statements about the security of one operating system over another. This only makes it ironic that you do not mention that the entire Windows family is a security nightmare, the evidence of which has been exposed repeatedly, and I mean time and time again, under the scrutiny of technical and lay journalists alike.

    Similarly, your comments about "Stability and fault tolerance" bear an equal lack of judgement for an article titled "King of the network operating systems." That Windows NT has significant stability problems (which make its spate of reliability "features" entirely amusing, in a cart-before-the-horse kind of way) is beyond doubt.

    But nitpicking aside, best of luck to you both.

    Regards,
    David
    OK, somebody needs to beat many Linux users with the clue stick. I've seen posts whining about how Win2k got rated higher because of graphical utilities that the reviewer says Linux does not have, when in reality Linux does have them, and they are at least comparable to, if not better than, the Win2k utils. Perhaps the reviewer reached this conclusion because he could not find them! Did it ever occur to you that some of these tools could be just a tad easier to find? Yes, they may be on the Gnome menu. Yes, they may be on the KDE menu. But are they in a central location like the Control Panel in Windows? Yes, there is Linuxconf, but as a poster pointed out recently, Linuxconf has a long way to go, and doesn't include basic utilities such as top or a network monitor. I'm not saying that it should, though. Linuxconf is for configuration, not statistics. But perhaps there should be a better control panel than the one included with Red Hat, because while Red Hat's control panel puts many basic utilities in one location, its interface is still worse than the long-forgotten days of Win 3.1.

  • by DrCode ( 95839 ) on Tuesday January 25, 2000 @10:03AM (#1337908)
    Let's see, the RedHat car is:
    More reliable (1 point)
    Faster (1 point)
    Cheaper (1 point)

    The MicroMobile car has:
    A better radio (1 point)
    White lettering on the tires (1 point)
    A bigger speedometer (1 point)
    Corinthian leather shift knob (1 point)

    Therefore, we conclude that the MicroMobile is the clear winner, with a 33% higher rating than the RedMobile.

  • >>I wouldn't use redrat if it was the last OS on earth.

    RedHat isn't an OS, it's a distribution. But I'm sure you knew that.
  • Ok, I like Linux, and I would like to see articles like this tout its speed and power. However, if this article had been well-written, I would have accepted their put-downs.

    This article is not well-written, as it turns out. They tout numbers, but you would think that a benchmark result would be presented in tabular format... Nope, that would encourage people to make quantitative comparisons, and that would not reflect favorably on W2K. Also, they peg Linux as a sort of "almost as good as Netware" OS, but in every actual feature comparison it comes out ahead (except for performance).

    Basically, things that Linux does and none of the commercial products do are "extras". Things that W2K does and no one else does are "missing features" in the other OSes.

    Someone please start doing real feature-to-feature comparisons; real benchmarks; real TCO comparisons; etc. I'm getting sick of this kind of "I think W2K is a good product because we have advertisers who want to hear us say that" crap.
  • Yup...this pretty much says it all. From the article, "The choice is yours."
    I think it is fantastic that CNN posted this story. It is pretty deep for them, although it appears IDG did all of the work on the analysis. They seem to be very impartial and leave it to the reader in the end. I like this: state some information and let the reader decide. Don't decide for the reader.

    ----------------

    "Great spirits have always encountered violent opposition from mediocre minds." - Albert Einstein
  • As usual, the Slashdot summary of this story had no connection with reality. The story itself made no attempt to rank the operating systems whatsoever; it in fact seemed to be incredibly wishy-washy and careful in saying that every one of the systems had its good points. Red Hat was praised as the "most flexible" in the intro, and a good value in the conclusion.

    Meanwhile, attached to the article are the actual rankings at:

    http://www.nwfusion.com/reviews/2000/0124revs.html

    ... where SCO comes in last.
    While some may argue that "XYZZY wasn't configured right" or "the kernel wasn't recompiled with -O3" or some other complaint, I think this is one of the few reviews/benchmarks that seems somewhat unbiased. This is obviously better than the recent Gartner Group statements (the same guys who say "Unix is dying, Unix is important, Unix is dead, Unix is here to stay" and so on) and not centered on one vendor.

    They do mention the typical "Linux problems," such as configuration and a lack of "graphical process reporting," but they (gasp) mention some of Linux's strong points, such as fast disk access, easy RAID configuration, and free, scalable clustering. It is nice to see something positive for once.

    But I still have my reservations. They mention configuring the Red Hat system through the command line and/or a graphical interface. That much doesn't bother me. What bothers me is the fact that X takes up many of the system's resources in its current state. Let's face it: until 4.0 is out, the X Window System is not quite as "lean and mean" as other solutions claim to be. Even then, it may not be too light on the RAM. If they were running X on a server, I cannot stress enough that that would be a cardinal sin. We run a departmental server that sits and compiles code, keeps up with web requests, runs a database, keeps our proxy going, and handles about 5 other random tasks without falling down. We have no monitor on it, and no X. This saves us a bit of money in the hardware department. If given the choice, it would be just a motherboard, 2 network cards and some disk space. This is one of the strengths of Linux and Unices in general. You couldn't survive in Windows 2000 without a monitor. I consider this a strength.
  • look at the table [nwfusion.com]: RedHat was ranked second to last by the meaningless aggregate score.

    Think about it this way: people who say NOS must believe that the alternative is DOS... both wrong.

  • Obviously! Any time a MS product performs better than the competition in a benchmark, it can ONLY be because MS paid somebody off. Excellent analysis!
  • I have a CD sitting on my desk with a nice white label, marked Microsoft, Windows 2000 and Service Pack One, also marked CONFIDENTIAL.

    I also have a close involvement with the company that Microsoft has contracted to do Windows 2000 compatibility testing in .AU. The only place in the world outside Redmond to be doing so, as yet...

    Hardly Linux FUD, I have Win2000 on 2 PCs at home, Linux Mandrake 6.1 and OpenBSD 2.6, I couldn't be described as a zealot for any one of these.

    Yes, there are ingredients printed on the can. While I can't quote them word-for-word, the gist is that it contains a multi-species mix of animals and animal byproducts, salt, sodium nitrate, and enough preservatives to ensure a shelf life well into the next century.

    It is kinda tasty panfried on an onion roll with coarse mustard, so I won't slam on it.
  • Rather true. SP1 is ready to go for the public.
  • by jht ( 5006 ) on Tuesday January 25, 2000 @10:15AM (#1337976) Homepage Journal
    SCO UnixWare came in last, not Red Hat. And file server performance isn't the top reason I'd run Red Hat, anyhow. Basically, the way I see the choices are:

    If you run a pretty much homogeneous network of Windows (95, NT, and/or 2000) clients, then Windows 2000 isn't a bad server, really. Where Windows 2000 starts to suck hard is if you have to support other platforms in either the desktop or the server space. But it's actually a pretty solid OS, and a "safe" pick for a Windows shop.

    If speed is what matters, you run mostly Windows at the desktops, and you're not looking for an application server (because nobody in their right mind develops NLMs), NetWare is fast, efficient, and has the most robust and complete directory services out there. Not to mention that there's a tremendous number of trained, experienced NetWare CNEs to draw upon. It's fast, it's stable, and it's not Microsoft.

    But if you want to run the most stable platform of all, and you want the power of Unix's tools and services, then Red Hat is ideal. It's easy on the wallet, too. Combine Red Hat and solid hardware that has multiple power supplies and ECC RAM and you'll probably never have to reboot it. And Linux is a lot easier for a network administrator to handle than it used to be.

    And if you're on crack, you'll pick UnixWare - which sucked when Novell had it and still sucked the last time I got a look at it (a year or so ago). Some of the features of Red Hat, a much higher price, and closed source. Yum.

    - -Josh Turiel
  • by GoNINzo ( 32266 ) <GoNINzo.yahoo@com> on Tuesday January 25, 2000 @10:20AM (#1337998) Journal
    After we finished installing the systems on dual 650MHz Pentiums, we installed them on a Pentium 75 with 16 megs of RAM and a 500 meg hard drive.

    NetWare 5.1 struggled, but was able to do simple file and print sharing. However, it had difficulty doing anything else.

    SCO hung on the install, seemingly unhappy with its non-compliant hardware.

    RedHat installed easily, and made a fine NAT, file server, web server, and whatever else we wanted to use it for.

    Windows 2000 laughed at us. It was a humbling experience.

    --
    Gonzo Granzeau

  • by aheitner ( 3273 ) on Tuesday January 25, 2000 @10:23AM (#1338005)
    Because it's something we always need to remember.

    It's nice to know Linux is fast (and it's no shame to get beaten out by Novell; they have a lot of experience in the area).

    But for 99% of the server tasks people have in this area (the interoffice server: sharing files and providing print and mail services), raw speed hardly matters; you could always buy a meatier machine if you needed one. The real issue is reliability and ease of management. You need the thing up, period, because the whole office stops if it's down. And you probably prefer not to need a tech for your department just to babysit that one machine. Ideally, your central tech support for all departments (or your part-time tech support guy, if you're small) should be able to keep it running with minimal effort. We are, after all, looking for core services here, not cutting-edge stuff.

    IDG gave Linux the props it's due: Linux will beat out NetWare when it comes to building funky custom solutions. NetWare is very good at what it does. But you have to pay for every server module you want, and they're of course not open and flexible like the Linux ones are. NetWare would make it much harder for you to have that central office machine also be the web development machine for the office -- i.e. not only serve the files, but allow you to update them. And I don't know anything at all about adding database functionality to NetWare to drive a fancier website -- all very easy in Linux, and all there as soon as you want it.

    This is one of the most balanced reviews I've seen. I may not agree with their choice of winner, but I can't criticize IDG's fundamental strategy of "choose the best NOS for your capabilities and your needs".

    Of course SCO is worthless; and Solaris must be considered for its impressive scalability. Linux is fine for most scalability tasks, with the exception, it seems, of multiple NICs (which is a weird case anyhow; rarely does a server need more than a single 100Mbit link, and a quad-Xeon Linux box will chew up heavy-duty database stuff very sweetly :).
  • by YAH00 ( 132835 ) on Tuesday January 25, 2000 @10:27AM (#1338020)
    After reading the article I got a really uneasy feeling, as if the author did not want to get flamed for not doing the tests properly, but still *wanted* to show MS Windows 2000 as the best and Red Hat as the worst.

    It seems that the test results completely belie the conclusions drawn... Here are some things I noticed...

    File system performance
    -----------------------
    Test Conclusion : Windows 2000 sucks, RedHat is pretty good

    Overall conclusion : Looks like these results were given no weight in the overall conclusion

    TCP Performance
    ---------------
    Test Conclusion : W2k seems to be the best, RedHat sucks but will get better

    Overall conclusion : AHA... Something in which w2k is the best. Now I will be vindicated in calling w2k the best NOS and Redhat the worst

    Management Tools
    ----------------
    Test Conclusions : W2k tools are really polished and provide access to many system settings. No remote management, though. Red Hat is definitely clunkier, but lets you do pretty much whatever you want to, from wherever you want to.

    Overall Conclusions : Since w2k tools look so much better, they must be better

    Monitoring tools
    ----------------
    Test results : w2k has graphical clients to monitor CPU/Memory usage. RedHat does not

    Overall conclusion : We used the w2k graphical clients to monitor system resource usage while our tests were being conducted. We did not even bother to look at Linux tools like gtop, which do exactly the same thing, and more (like process-level control). Since we did not use these tools, they do not exist. Ergo w2k is better. So there. Also, although Unix utilities are a lot more flexible and can be scripted, since they are too complex for us, they are pretty much useless for everyone else as well
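On the scripting point: the same numbers the graphical monitors display are a couple of lines away in any script. A minimal sketch using only Python's standard library (os.getloadavg() is a real stdlib call on Unix systems):

```python
import os

# The same data a GUI monitor graphs: 1-, 5-, and 15-minute load averages.
one, five, fifteen = os.getloadavg()
print(f"load averages: {one:.2f} {five:.2f} {fifteen:.2f}")
```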

    Client Management
    -----------------
    Test Conclusion : w2k provides active directory and users and groups and oh my!!! Redhat only provides unix level control. Ha that old security model can't do users and groups and... well it can't do active directory so there!!!

    Overall conclusion : w2k does active directory!! NAH NAH NAH NAH

    And so on and so forth!!!!

    It looks like they did put some effort into doing the tests right. But it looks like they also put a lot of effort into making the results fit their apparently pre-drawn conclusions
