Windows Servers Beat Linux Servers

RobbeR49 writes "Windows Server 2003 was recently compared against Linux and Unix variants in a survey by the Yankee Group, with Windows having a higher annual uptime than Linux. Unix was the big winner, however, beating both Windows and Linux in annual uptime. From the article: 'Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.' Yankee Group is claiming no bias in the survey as they were not sponsored by any particular OS vendor."
  • 20% more UPTIME? (Score:3, Informative)

    by Anonymous Coward on Wednesday June 07, 2006 @12:11PM (#15487929)
    From the article:
    Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.
    That means that Red Hat Linux has to have at least 1,461 hours of annual downtime, which is about 61 days. (This is so that it would have no more than 7,305 hours of annual uptime, in order to allow 20% more than that to fit into one year of 365.25 days.)
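    A quick sanity check of that arithmetic (an awk sketch, using a 365.25-day year):

    awk 'BEGIN {
      year = 365.25 * 24              # 8,766 hours in a year
      max_up = year / 1.2             # max RHEL uptime if Windows has 20% more
      printf "min RHEL downtime: %.0f h (%.1f days)\n", year - max_up, (year - max_up) / 24
    }'
    # prints: min RHEL downtime: 1461 h (60.9 days)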

    I don't think so.

    I hate writers who don't understand math.
  • by waif69 ( 322360 ) on Wednesday June 07, 2006 @12:11PM (#15487932) Journal
    I have run both Windows servers and Linux servers over the last 10 years, and in my experience Linux servers have higher uptime. Windows machines deal poorly with memory-leaking apps and need rebooting for every service pack or required update. With Linux I only need to restart specific processes, and only when there is a justified upgrade.
  • by yagu ( 721525 ) * <{yayagu} {at} {gmail.com}> on Wednesday June 07, 2006 @12:13PM (#15487956) Journal

    Another article claiming my OS is better than yours, another article with virtually no information, and the information therein is off-the-scale incomprehensible and inconsistent.

    Here's a casual observation. The article says: "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime." Later in the article, this: "... On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server."

    Let's just say a Linux server has 24 hours of downtime a year (higher than the "survey" says). That leaves 364 days of uptime in a year, 365 days in a leap year.

    Implied in the article then, a Windows 2003 server would have to be "up" approximately 20% more to satisfy the "claim". Now, I am not a calendar "expert", but I'm having a difficult time believing that Windows 2003 server is up an average of 364 * 1.2, or 436.8 days a year. If it is, I'm buying.

    Also from the article: "..., But standard Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation...."

    First, this is a survey, and the article points to no data supporting it; in my book that's a no-no when trying to prove a point. Second, assuming there's truthiness in it, my inference from the previous paragraph is: "Red Hat would be a little easier to set up and use if it had better documentation..."

  • Uptime vs. downtime (Score:5, Informative)

    by Martin Blank ( 154261 ) on Wednesday June 07, 2006 @12:16PM (#15487985) Homepage Journal
    Is it 20% more uptime? Or is it 20% less downtime? There's a very, very big difference there -- two months of downtime is pretty severe, and if you have that, you have some serious problems. From the reverse perspective, three nines of uptime allows for nearly nine hours of downtime per year. If that downtime is reduced by 20%, that's nice, but not really noticeable for most users.
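    To put numbers on that (an awk sketch of the three-nines budget):

    awk 'BEGIN {
      down = 365 * 24 * 0.001         # downtime allowed at 99.9% uptime
      printf "three nines allows %.2f h/yr of downtime; 20%% less is %.2f h/yr\n", down, down * 0.8
    }'
    # prints: three nines allows 8.76 h/yr of downtime; 20% less is 7.01 h/yr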
  • by neonprimetime ( 528653 ) on Wednesday June 07, 2006 @12:17PM (#15487992)
    A more informative Summary of the 2006 Survey [iaps.com]
  • WxP Pro (Score:3, Informative)

    by robpoe ( 578975 ) on Wednesday June 07, 2006 @12:20PM (#15488021)
    We have a WinXP Pro box that's been up over a year ...

    Another box that's Win2k pro that's been up almost 2...

    The one app they run is heavily used (dispatch for a 911 center).

  • Yankee (Score:5, Informative)

    by Elektroschock ( 659467 ) on Wednesday June 07, 2006 @12:21PM (#15488027)
    http://www.businessweek.com/the_thread/techbeat/archives/2005/04/the_truth_about_1.html [businessweek.com]
    http://www.computerworld.com/softwaretopics/os/linux/story/0,10801,82070,00.html [computerworld.com]
    Laura DiDio, an analyst at The Yankee Group in Boston, said she was shown two or three samples of the allegedly copied Linux code, and it appeared to her that the sections were a "copy and paste" match of the SCO Unix code that she was shown in comparison.
    DiDio and the other analysts were able to view the code only under a nondisclosure agreement, ... "The courts are going to ultimately have to prove this, but based on what I'm seeing ... I think there is a basis that SCO has a credible case," DiDio said. "This is not a nuisance case."

    Watch the "expert" Laura Didio on video from a credible source:
    http://www.microsoft.com/windowsserversystem/facts/videos/didio_video.wvx [microsoft.com]

    Enjoy her!

    *lol*
  • by Ritz_Just_Ritz ( 883997 ) on Wednesday June 07, 2006 @12:21PM (#15488030)
    How come I never get any of these "impartial surveys"? I have racks and racks of RHEL Linux servers that I only reboot when:

    a. a machine suffers a hardware failure (fairly rare) or
    b. there's a kernel update that impacts security

    In the case of (b), I apply the updated rpms and reboot which normally results in a downtime of approximately 60 seconds for that server. This might happen a few times a year (single digits).
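    A sketch of that cycle (the package version here is hypothetical; kernels are installed side by side with -i rather than upgraded with -U, so the old kernel stays bootable as a fallback):

    rpm -ivh kernel-2.6.9-34.EL.i686.rpm   # install the errata kernel alongside the old one
    reboot                                 # the ~60-second outage mentioned above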

    For our small number of Windows 2003 server boxes, it seems that each "windows update" cycle recommends a restart. We'll call that a once-a-month reboot, whenever Microsoft gets around to releasing its monthly cleanup. Total server downtime is maybe 2-3 minutes (Windows takes a bit longer to reboot on hardware identical to our Linux machines).

    So while I *could* say that our windows servers are down XYZ percent more than our Linux servers, in terms of actual downtime, both platforms are about the same, with Linux seemingly holding a small edge in my experience.

    Cheers,
  • Re:Defensiveness (Score:3, Informative)

    by vidarlo ( 134906 ) <vidarlo@bitsex.net> on Wednesday June 07, 2006 @12:22PM (#15488040) Homepage
    We'll see lots of defensiveness over this study in the comments, although if the conclusions were different, it would be cheered. Why not accept it and fix the documentation issue?

    Because there is no documentation problem. Can you find an OS with a better-documented API than Linux? More documentation than Gentoo has? The problem is that they have not studied what I'd dare say are the serious users; they've studied those without in-house competence on Linux.

    *NIX admins are probably more expensive than Windows admins, since there are fewer of them. The organizations running old Unixes typically have quite competent admins in-house, and quite different hardware. Windows and Linux often run on off-the-shelf hardware, which I'd guess explains why Unix beats both of them.

    With Linux, the effect is doubled. A lot of companies have Windows admins with some level of degree, but those who know Unix work in the serious business with big Unix machines. Those who adopt Linux, I'd guesstimate, have typically not used Unix before.

    What would be interesting would be a study comparing HP's [hp.com] Windows and Linux servers, since HP provides the hardware itself and should have in-house competence on both OSes.

    Compare like with like; do not compare different things. Anyone remember Microsoft UK's ad? I think it compared an off-the-shelf x86 box running MSSQL and Win2k against an IBM POWER machine. Of course, the ad proved that Linux was more expensive.... This reminds me of that.

  • Math Nitpick (Score:4, Informative)

    by colinrichardday ( 768814 ) <colin.day.6@hotmail.com> on Wednesday June 07, 2006 @12:23PM (#15488053)
    That would be about 304 days, as 20% of 304 is 60.8 (304+60.8=364.8). The 20% must be taken as 20% of the RedHat uptime, not the Windows.

    But yeah, that's way too low for RedHat.
  • Re:Defensiveness (Score:5, Informative)

    by morgan_greywolf ( 835522 ) on Wednesday June 07, 2006 @12:28PM (#15488102) Homepage Journal
    What documentation issue?

    There are boatloads of documentation available. Ever hear of The Linux Documentation Project [tldp.org]? Plus, most distributions offer lots of very good documentation. Why, there was a Slashdot story [slashdot.org] just two days ago about the excellent Ubuntu documentation. There are no fewer than 600 books about Red Hat distros [amazon.com] available for sale on Amazon. Not to mention that Red Hat Enterprise Linux itself includes lots and lots of documentation, most of it available on the Web gratis [redhat.com]. Plus the hundreds of open source apps that include very good documentation with their packages. Have you actually read the documentation and free books available on the Samba website [samba.org]? It's darned good!

    Any perceived documentation issue is in Laura DiDiot's head.

  • by olddoc ( 152678 ) on Wednesday June 07, 2006 @12:29PM (#15488115)
    According to Netcraft, they have a whopping 4 days since last reboot: http://uptime.netcraft.com/up/graph?site=www.yankeegroup.com/ [netcraft.com] They also go with the bulletproof reliability of MS IIS.
  • by MarkLewis ( 593646 ) on Wednesday June 07, 2006 @12:35PM (#15488176)
    Your math is wrong. 20% more downtime means 1.2 times as much downtime as the Windows box, not 20% of the year.

    So if the Windows box is down for 10 hours per year, the Linux box is down for 12 according to the study.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday June 07, 2006 @12:39PM (#15488203)
    I just switched a box from fedora core 4 to core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache. I also can't count the times I have looked for things on the LDP or the HOWTO's and found yes this is a very good howto but the distribution is entirely freaking different.
    100% agreement. Which is why I prefer Debian (although I'm migrating to Ubuntu).

    I can easily clone a production server and walk it through the upgrade process ... over and over and over and over ... and submit bug reports for any and all problems. All during the "beta" phase of the next distribution. I did that prior to migrating my servers to Sarge last year.

    apt-get dist-upgrade

    It is truly awesome. You can test and re-test the entire process every time they release a bug fix for any of the packages you'll be using. (Yeah, you can do it with gentoo, also.)
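    Roughly, the rehearsal looks like this (a sketch; the hostname and paths are illustrative):

    rsync -aHx --numeric-ids root@prod:/ /srv/clone/   # clone the production root
    chroot /srv/clone /bin/sh                          # step into the copy
    apt-get update
    apt-get dist-upgrade                               # rehearse; file bugs for whatever breaks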
  • Let me quote the article:

    "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."

    Which part of the sentence is unclear? 20% MORE ANNUAL UPTIME.

    To achieve this claim, what would your numbers be?

    Note that it DOESN'T say "20% more downtime". It is very clear: "20% MORE ANNUAL UPTIME". The MINIMUM requirement to achieve this is about 61 days of downtime on the RHEL box.

    Note that we ARE being "relative": the ~61 downtime days are the MINIMUM, assuming 100% uptime on the Windows server. The downtime can only grow.

    Conclusions?

    Maybe the Yankee Group has some math problems, or writing problems, or both. That implies they shouldn't be in the business of publishing statistics, which needs both skills.

    Maybe the RHEL hardware is very defective. That implies the Yankee Group is not TECHNICAL enough to do this report.

    Maybe RHEL is defective. With that much downtime, I would have called Red Hat. Or maybe the Yankee Group has a horribly misconfigured system and is trying to sort it out themselves. That implies they are not smart enough to do this report.

    Every way you slice it, the Yankee Group doesn't have the skills needed. The report should be ignored.

    Ratboy
  • flawed logic (Score:2, Informative)

    by rs232 ( 849320 ) on Wednesday June 07, 2006 @01:04PM (#15488418)
    "Linux distributions .. are offline more [because of] the scarcity of Linux and open source documentation"

    Even assuming that were true, how does a lack of paper cause an OS to bork [urbandictionary.com]?
    And how does a lack of paper cause the OS to stay offline longer?
    Is it because it is sulking?

    How was the data collected?
    What was the methodology used?
    Where are the figures?
    What criteria were used in selecting the participants?

    How is a marketing firm, or Ms. DiDio for that matter, suitably qualified to comment on OS security? Finally, are these people really the unbiased commentators they claim to be?

    "Windows servers recover 30% faster from security attacks than Linux servers" Laura DiDio July 2005

    "Indemnification is a serious, potentially costly issue for enterprises" - Laura DiDio Oct 2004

    "For the time being, she said, Linux has an apparent advantage simply by virtue of a lower level of connectivity" - Laura DiDio April 2004

    "hype notwithstanding, Linux's technical merits while first-rate, are equivalent but not superior to Unix and Windows Server 2003," - Laura DiDio Mar 2004

    "This has the potential to turn into a twentieth century witch hunt," "There is a visceral anti-Microsoft sentiment in Europe." - Laura DiDio Sep 2003

    "The entire Linux community is saying to customers, 'You're on your own,'" .. "That's not a place I want to be." - Laura DiDio Aug 2003
  • by PFI_Optix ( 936301 ) on Wednesday June 07, 2006 @01:09PM (#15488461) Journal
    (troll) Silence. Your sensibilities offend the Slashdot drones. (/troll)

    I'm a Windows admin. It's what I know, and the only OS I have significant experience with. At my last job, the server with the most uptime was a RHEL3 box that only got rebooted when the ERP database performed its semi-annual crash ritual. Compare that to the four W2k3 boxes that were down about five or six days a year on average for various OS maintenance issues. (In Microsoft's defense, we were *doing* a lot more with the Win servers; the Linux server had only one function.)

    Linux is a hard OS to administer without training. It's not something you can just dive into, and a lot of admins get it shoved on them because upper management decides on a software package that requires it. The result? Downtime because the admin is unfamiliar with Linux and doesn't know where to find the answers. So in that sense, this report is spot-on.

    I do question the validity of the data, though. It seems like they picked a sample set that would yield the results they wanted. A better survey would review servers with similar functions, regardless of whether the users have both OSes installed. It's no secret that Windows admins have a harder time with Linux, and I agree something needs to be done to help them (us) take the plunge with confidence... but this study isn't going to have any impact on anything and was just a waste of someone's money. If they're looking to throw cash away, they should be throwing it at me, not studies.
  • by segfaultcoredump ( 226031 ) on Wednesday June 07, 2006 @01:13PM (#15488498)
    This is the postfix gateway, external web server, dns server, etc for our little (1000 employee) company:

    root[loki:/]# w
      10:57am up 1030 day(s), 21:27, 1 user, load average: 0.05, 0.02, 0.04

    This happens to be a Solaris 9 system. It has never crashed. Actually, over the past 5 years we have had one software-related bug take down one of our Solaris systems (a multipathing bug in the FC drivers when used with active/passive disk arrays). This is based on an environment with 40+ Solaris-based servers running a wide variety of services; this is not a "40 identical servers" shop.

    The best our Windows boxes can manage is 6 months (and that is if we skip a few of the security patches).

    I can guarantee that during the past 3 years, every single one of our Windows systems (60+ servers) has had an issue that was core-OS-software related (not counting the security-related ones). Kernel memory leaks are the most popular (our file server must be rebooted every 115 days or it will freeze up). Security worms are another fun one, but kinda rare today compared to the good old days.
  • by Ryan Amos ( 16972 ) on Wednesday June 07, 2006 @01:33PM (#15488633)
    I administer both Linux and Windows servers as well. Windows servers (2003 here, specifically, but the same applies to other versions as well) actually work ok and are probably as stable as Linux as long as you don't change anything meaningful on them. Adding users, changing settings, etc is all ok, but don't you dare install anything on a working Windows server without a full, bootable drive copy or a SAN snapshot. That's where Windows servers lose their reliability in my book.

    Blanket statements like "Windows is more/less reliable than Linux!" are flat-out wrong (or at the very least, misguided) anyway. What were these machines doing? Were they sitting there just passing packets, never reconfigured, or were they being constantly tweaked and redeployed? How many people were using them?

    Uptime is also usually measured in percentages in the business world. I'm willing to bet the author of this FUD saw "99% uptime for Linux, 99.2% uptime for Windows... That's 20% more!"
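    If so, that would actually be 20% less downtime, not 20% more uptime:

    awk 'BEGIN {
      linux_down = (100 - 99.0) / 100 * 8760   # 87.6 h/yr down at 99.0% uptime
      win_down   = (100 - 99.2) / 100 * 8760   # 70.1 h/yr down at 99.2% uptime
      printf "Windows: %.0f%% less downtime, but only 0.2%% more uptime\n", (1 - win_down / linux_down) * 100
    }'
    # prints: Windows: 20% less downtime, but only 0.2% more uptime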
  • by marcello_dl ( 667940 ) on Wednesday June 07, 2006 @01:33PM (#15488637) Homepage Journal
    In Debian-based-distro land, you upgrade everything on the fly. You need not reboot. You need not even disconnect from an ssh session to update ssh. Only security problems in the kernel need a reboot... so Linux's potential uptime = time since the latest kernel hole required a reboot, and actual uptime IMHO is 99% or more of that.
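    For instance (a Debian-era sketch; restarting sshd doesn't drop existing sessions, since each one has its own forked handler):

    apt-get update && apt-get upgrade   # userland updates apply in place, no reboot
    /etc/init.d/ssh restart             # safe to run from inside an ssh session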

    In acronym land, you call BS on TFA, BTW.
  • by gonk ( 20202 ) on Wednesday June 07, 2006 @01:45PM (#15488738) Homepage
    Red Hat released a purely security related kernel update on 2006/05/24:

          https://rhn.redhat.com/errata/RHSA-2006-0493.html [redhat.com]

    I would be very surprised if your kernel did not have known security issues that you are unaware of. Whether or not the various security issues apply to your environment is another question.

    robert
  • by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Wednesday June 07, 2006 @01:50PM (#15488778) Journal

    10 to 20 hours of downtime a year for a server? That's awful! Heck, at the last place I worked, the Linux box (Red Hat 9.0) had only 2 downtime incidents in over a year once it was hooked to a UPS. One was caused by a 6-hour power outage (the power co was installing new trunk lines, transformers, etc. all along the highway as part of an upgrade to the provincial grid), and the other by a lightning strike that, again, killed the power for longer than the UPS's hour of runtime.

    Of course, AFTER I left, it started going down on a regular basis - but what do you expect when you let a Windows Weenie try to admin a linux box?

  • A technical note (Score:3, Informative)

    by cecom ( 698048 ) on Wednesday June 07, 2006 @02:19PM (#15488975) Journal
    On Windows it is impossible to delete or replace a file which is in use (e.g. a shared library). The same applies to directories. Thus for any meaningful upgrade you need to restart the applications, and often the OS, _before_ you can do anything with their files. There are complicated mechanisms for keeping track of files that need to be deleted or replaced after a reboot. It appears that recently they have added yet another, even more complicated, feature to avoid reboots: http://www.eweek.com/article2/0,1895,1895276,00.asp [eweek.com]

    Such complicated techniques for a basic thing like an upgrade make me very nervous. What happens if something goes wrong with the extensive bookkeeping in the middle of the upgrade?
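    For contrast, a sketch of the Unix semantics that make in-place upgrades painless (paths are illustrative):

    cp /bin/sleep /tmp/demo
    /tmp/demo 600 &          # keep the file "in use"
    rm /tmp/demo             # succeeds anyway: the inode lives until the last open handle closes
    cp /bin/sleep /tmp/demo  # a new file can immediately take over the old name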

  • Re:WxP Pro (Score:1, Informative)

    by Anonymous Coward on Wednesday June 07, 2006 @02:19PM (#15488977)
    You obviously don't know much. Nearly all the 911 dispatch and call-handling systems run on Windows. I know; I work for the largest supplier of such things.

    Security is not an issue - effectively a mission-critical dispatching system is one big tracking app - how many units, how many events, who's where, who's doing what. That kind of thing. It doesn't need to be the same kind of app that runs airplane avionics or nuclear shutdown. If it goes wrong, you reboot and continue, or fix the damn Oracle problems and continue :-)

    The entire thing is a highly redundant and fault-tolerant design, but even so, downtime is not a problem. Everything is planned, so even in an unplanned outage the dispatchers simply go to paper and everything still works. That said, it's never been an issue for our software.
  • In my shop..... (Score:4, Informative)

    by fatboy ( 6851 ) on Wednesday June 07, 2006 @03:03PM (#15489354)
    I have to reboot Windows 2K3 just about every time an update is available from Microsoft. I started using the system only a few months ago.
    I have not had to reboot the Linux system we use here in well over a year (448 days, to be exact), even though I have upgraded applications and applied many patches.
  • by a_nonamiss ( 743253 ) on Wednesday June 07, 2006 @03:05PM (#15489374)
    Boy, the maths in this post seem to be getting screwed up pretty bad, but I'll put in my 2 cents to see if that sheds any more light on things.

    Let's use hours. There are 8760 hours in a typical year. (365 x 24)

    Let's say your windows server is down for 30 hours in a particular year. That means it has an uptime of 8730/8760 or 99.66%. Your Linux server has 20% more downtime. That's 36 hours per year. (30 x 1.2) and therefore 99.59% uptime. Is anyone really going to notice a 6 hour per year or 0.07% difference in uptime? (remember, we're not talking specific outages here, just a mathematical statistic - not like "Yeah, if that 6 hours was during our peak time")

    Maybe I got that all wrong, but that's how I read the statistic.
  • Re:WxP Pro (Score:3, Informative)

    by robpoe ( 578975 ) on Wednesday June 07, 2006 @03:08PM (#15489400)
    Oh, you have NO idea what you're talking about.

    "Back in the day" (TM), 911 dispatch was an old green screen with a serial connection to Ma Bell for the ANI/ALI information. Radios were 20 year old cards in a rack of radio equipment. Stuff gets hard to find replacements for, it gets upgraded.

    Enter new systems:

    The phone switch? Windows controls the user accounts. The phones are Windows interfaces to hardware. Controls which line gets switched to the dispatcher's headset. Completely out of my control on those boxes. They're vendor provided, vendor controlled.

    The other box? The 2k one? Radio interface control. Controls what channel the dispatcher's headset talks on.

    None of these boxes are on the Internet. Security? Uhhm. Ok. These boxes have no external drives and run as a non-admin user; the phones are on a hub network with each other but nothing else (no inbound or outbound Internet connection), and are not on our main network. The radio computers aren't even networked. The computers are locked behind two wooden doors, inside an underground secure facility that is monitored 24x7 by people with guns.

    Now, as far as the dispatch software? The thing that keeps track of calls? That runs on Netware (though soon to be converted to MS SQL). That app runs on Win2k or WinXP (the vendor suggests NOT running SP2). Those computers DO have outbound internet connections to about 15 hand-selected sites (all state or government agencies, or public service sites, like the hospital tracking system and crime-victim notification sites). They do have email installed, but the email goes through a server running MailScanner, set to disallow every kind of executable file (and vbs, etc., etc.). They are NOT configured to run any Windows Media, no IM. They're locked down tight.

    We've had no virus problems here .. for 3 years now. Malware on some computers? Occasionally, but our antivirus detects most Java dropper programs and kills them.

    Should I be fired? Doubt it. ;)
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday June 07, 2006 @03:25PM (#15489505)
    Probably better in most cases to do a fresh install, though. At least you'll get the opportunity to weed out the redundancies in your files.
    Ah, I can see that you haven't experienced the Love of Debian yet.

    With Debian, grab deborphan and debfoster and you can weed out un-needed packages quickly and easily.

    "deborphan" compares the dependencies of each package so you can see packages that are installed that nothing else needs. Delete the ones that you don't need.

    "debfoster" shows what all the dependencies are for a particular app. For example, Apache can have all kinds of packages it is dependent upon. If you want to get rid of that app, you can also quickly purge all the packages that were installed as dependencies for that app.

    Once you've got the machine stripped down to the basics, just check all the files in the non-home/non-data/non-log directories to make sure that they each belong to a package. Or that you know why you put them there.

    It runs sweet.
    It runs clean.
    It runs exactly what you want.
    Nothing more/nothing less.

    Which makes patching the box soooooooooo much easier. And it means that you have fewer potential security holes because you're running fewer apps.
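    A typical pass looks something like this (a sketch; review the list before purging anything):

    deborphan                                     # list packages nothing else depends on
    deborphan | xargs apt-get -y remove --purge   # purge them
    debfoster                                     # interactively walk the installed-package dependency tree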
  • by makemineagrande ( 977054 ) on Wednesday June 07, 2006 @04:08PM (#15489780)
    Here is the note I sent to Laura DiDio - and their PR manager:

    You probably should not read the DiDio-bashing going on over at Slashdot today, but I do see what I believe is an error in the presentation of the data in the press release http://www.yankeegroup.com/public/news_releases/news_release_detail.jsp?ID=PressReleases/news.serverreliabilitysurvey.DiDio.htm [yankeegroup.com].

    The specific statement, "with nearly 20% more annual uptime," is, I believe, factually not supported by your numbers. Do you mean that Windows has 20% LESS DOWNTIME than RHEL?

    "on average, individual corporate Linux, Windows and Unix servers experience three to five failures per server per year, resulting in 10.0 to 19.5 hours of annual downtime for each server."

    If RHEL had 19.5 hours of downtime and Windows had 15.6 hours (19.5 x 0.8), that would be 20% less downtime. Roughly 4 hours less downtime per year is actual data and would be useful in the press release.

    On the other hand, 20% more annual uptime would actually result in RHEL being down nearly 61 DAYS per year, assuming Windows is up 100.000%. (Note: 60.8333 days = 365 - 365/1.2.)

    ----------- The report may be correct. The press release is most certainly in error.

  • by Mateo_LeFou ( 859634 ) on Wednesday June 07, 2006 @05:01PM (#15490197) Homepage
    "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."
  • by waveclaw ( 43274 ) on Wednesday June 07, 2006 @05:54PM (#15490556) Homepage Journal
    Like every other system administrator, I have to write and read reports and run tests on hardware and software. To shortcut a lot of problems, I start by criticizing the (far too often flawed) methodology of any study before I base a decision on it. This is not meant as a personal attack, but (maybe because of marketing mangling) I saw real flaws and a lot of bias in the case study that was originally used in Get The Facts. The biases I claim to have seen were subtle and very nasty, but of a completely different nature than the one in TFA[1].

    I wrote a Microsoft-funded white paper last year with the assistance of two subject matter experts - a Microsoft expert and a linux expert, both certified veterans of their fields.

    Case studies are an important part of understanding a wide variety of phenomena; however, textbooks containing them often carry a disclaimer: those were particular people, with particular skillsets, in a particular situation[2]. Neither I nor anyone else (say, Microsoft's marketing department) is justified in generalizing that situation to anyone else. Hence the demand for surveys such as this one. You can translate the metrics used in the Get The Facts paper into variables and then show that many others, in very different situations, still get these results. Unfortunately, this article does no such thing. There is no specification of what kinds of servers, the platform configurations, or even the application loads.

    We compared many factors including user management, authentication, "ghosting" new machines remotely, remote application installs, file sharing, delegating authority to subordinate administrators, and much much more. ...
    We wrote about all these factors and rated them on 10-point scales per lab, and condensed those into one comprehensive graph showing overall ease-of-use of each NOS.


    I would hope that, given an expert on any topic, I'd get good ease-of-use marks for that topic. At that level of operator skill and performance, which I have tried to stress is very atypical, I would be surprised if the huge resources of Microsoft had produced a failure. Was there something the Microsoft product could do that the Linux one could not[3]? What features were missing? Why were those features missing? That was then, this is now: how do they compare today? The pace of change in Linux features is not determined by a single vendor[4].

    Long story short, Windows came out on top by a huge margin in every field - ease, usability, intuitiveness, support, everything.

    The reason systems administrators exist is their skill at doing things that are not easy. Otherwise they don't keep their jobs very long (but that is the same for any job). I really can't argue for or against ease as a metric.

    I would hope that, with its huge desktop penetration, Microsoft's OS leads in intuitiveness. Now, if your Windows admin had grown up in a Macintosh home, used a Mac at home and on his workstation at work, I'd be inclined to consider the intuitiveness argument. Twenty years ago, that Linux admin would probably have come from a Unix desktop, a Unix workstation, and a Unix or mainframe server environment. How can we be sure that 20 years from now it will be Linux or OS XXX on the desktop? (On the other hand, the byzantine way OSS is developed does encourage developer-only-friendly interfaces.)

    MS soon compiled our white paper into marketing materials and stuck them on http://www.microsoft.com/getthefacts [microsoft.com] (but it's been replaced by more recent studies).

    I believe that Novell, one of those "niche players" in the Linux world (11% Linux web-server share vs. Red Hat's 49%, Netcraft 2004), released a much better take on those marketing materials with its Get the Truth [novell.com] campaign.

    I personally was funded by MS to spearhead an impartial study, and MS management had a genui
  • Re:Math Nitpick (Score:1, Informative)

    by prtsoft ( 702850 ) on Wednesday June 07, 2006 @05:55PM (#15490562) Homepage
    The fact that you changed OSes negates your argument. On a soft reboot, all memory is cleared; you are starting with a clean slate again. In any event, uptime is measured by the amount of time the OS has been running, not the computer.
  • Re:Math Nitpick (Score:3, Informative)

    by knifey ( 976510 ) on Wednesday June 07, 2006 @11:57PM (#15492314)
    Well, the Windows machine would report 432,000 seconds. Which is a much bigger number than 5 days. Maybe the people doing the report just have maths issues, as is already suggested by the "20% more uptime".

    PS: as someone who administers both Win and Linux servers, I gotta say the report is so full of sh!t it's scary. Our 233MHz half-dead Fedora Core 3 machine has about 99.95% uptime. The Win2K3 machine with the latest hardware: ~99.2%. Um, lemme think about this.

  • by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Thursday June 08, 2006 @05:37AM (#15493218) Homepage
    In the case of Red Hat, you can use the standard mail systems shipped with the OS... In fact, you should never install things manually, because then you won't be able to update them using the system package-management system.

    The same problem can occur with windows, people could be running any one of many mail servers on it, and they won't all be centrally updated.

    I have encountered the same problems you describe on multiple systems: a consultant sets up the machine and then leaves. It happens with Windows too, but less often, and there it's much harder to fix when they've made all kinds of weird registry tweaks; usually the fix is to reinstall, leaving the same problems for someone else in the future.

    There really is no excuse for leaving multiple copies of sendmail installed, some from source and some from rpm... But quite often it's necessary to do manual tweaks to any system to make it behave in the way you want... There's also no excuse for not installing your packages through whatever package management system exists, so you can keep track of them and update them more easily.
