Exchange vs. Linux/390 Comparison

eclarkso writes: " The Consulting Times has done a quite even-handed study of the TCO for each platform in a fairly large (5000+) enterprise environment. The article is as much a commentary on the mainframe architecture as it is on Exchange vs. Linux groupware."

  • This article did not even touch on the issue of downtime. For most businesses some of these solutions are overkill. But it is nice to see people give a decent overview of the costs for high-end equipment. I would like to see IBM come out with some stuff for the little guy for Linux.
    • Where are you going to put all these users? I doubt many companies have this many users close enough to the server where bandwidth costs won't be prohibitive. My company (PACCAR) employs well over 20,000, but we are all spread out across the nation, the majority being in the Seattle area. The powers that be put Exchange servers at or close to each office with the users' mailboxes on them. This makes much more sense because the offices mail within the group far more than they do to outside offices, reducing bandwidth requirements.

      Putting all your mailboxes on one big box is going to be far too slow unless the users work within Ethernet distance of the server, and even that will be problematic.

      • Where are you going to put all these users? I doubt many companies have this many users close enough to the server where bandwidth costs won't be prohibitive. I live in SoCal. My Notes mail server is in Boulder, CO. VPN over DSL is _fast_ (~1,250Kb/s). I easily work from my home office much of the time.
  • Am I the only one that noticed how skewed the results were with regards to support costs?
    The first set of figures are skewed in support of the MS solution. The second are skewed in support of the Linux version.
    I am wondering what the bases for those numbers were.

    • The assumption in the second figure was that they already had the main hardware and support capability, and were just adding to it. Though I agree that the $0 support delta was stretching Gray's Law of Programming a little too far...

    • Re:Skewed Results (Score:5, Interesting)

      by Zwack ( 27039 ) on Thursday September 13, 2001 @06:49PM (#2295081) Homepage Journal
      Not entirely...

      If this is a BIG IBM mainframe then it will take more floor space than a single rack of twin-processor CPUs.

      I assume that VM programmers are in short enough supply that paying one $90k p.a. is reasonable. It's not far from what I get paid as a Unix SA.

      And the hardware is WAY more expensive.

      The second set of figures shows how much less you will be paying if you already have an IBM mainframe (for some other purpose) that you can use for a Linux partition (virtual machine, whatever you want to call it) compared to bringing in an NT server farm with exchange. You've already paid for the hardware, the floor space, the support staff. You're just pushing your hardware a little harder.

      Does it make sense now?

      Z.
      • by denshi ( 173594 ) <toddg@math.utexas.edu> on Thursday September 13, 2001 @07:09PM (#2295169) Homepage Journal
        In this case, the S/390 series. The zSeries mainframes (12 CPUs) are about half the height of a rack and a bit wider. And the power requirements are way lower.

        Why do you need VM programmers? The port is already done, the logic for running Linux as a guest OS is there, and it's stable. Henceforth you should be coding on the Linux level, not the VM level.

  • Comparison (Score:2, Insightful)

    by 1alpha7 ( 192745 )

    Nice number crunching, but in my dealings with mainframes, I've found the best advantage is that, when overloaded, they just slow down, as opposed to crashing. That wasn't considered in the article.

    1Alpha7

    • Re:Comparison (Score:4, Interesting)

      by swordgeek ( 112599 ) on Thursday September 13, 2001 @07:13PM (#2295189) Journal
      Now that's an interesting point! My experience has been that when overloaded...

      1) mainframes and real Unix servers (Sun, HP, etc.) slow down instead of crashing.
      2) Linux (and NT) crashes hard.

      So the question is, does the OS crash on a given platform because of the hardware, the software, or a combination of the two? What will Linux on a Mainframe do when hit with an enormous load?

      Unless the kernel has been rewritten extensively to deal with the hardware, I suspect it would crash just as effectively on an S/390 as on a stack of Pentiums. I'd love to find out for sure, though.
      • Damn near every NT crash these days is driver related. You can make a pretty bullet proof OS when you control the hardware; see MacOS for another example.
        • Damn near every NT crash these days is driver related. You can make a pretty bullet proof OS when you control the hardware; see MacOS for another example.

          I don't think we are talking about the good ol' BSOD. I think we are talking about application and system lockups. Not quite the same thing. This is a good question, though: would a mainframe, which has been designed to be under heavy load all the time and perform as multiple servers, suffer the same fate as a Pentium? I don't know. Does this happen when you overload Slowaris (Solaris x86)?
      • Now that's an interesting point! My experience has been that when overloaded...
        1. mainframes and real Unix servers (Sun, HP, etc.) slow down instead of crashing.
        2. Linux (and NT) crashes hard.

        I don't know what you're running, mate. I've been running Linux on Intel, BSD on ARM, Solaris on SPARC, AIX on RS6000, UnixWare on Intel, and NCR and Data General badged System V.4 on Intel and Aviion hardware for fifteen years. All of them slow up under load. If you get your swap badly wrong, and you run out of memory hard, all of them will fall over in a heap. Linux is in my experience just as robust under very heavy load as any other UN*X. The quality of the hardware matters, of course; if you buy cheap hardware, you will get reliability problems.

        I've never run Windows, so I can't comment on that.

  • Linux/390 manual (Score:4, Interesting)

    by alewando ( 854 ) on Thursday September 13, 2001 @06:33PM (#2295010)
    Some of the academic folks I know have had a bit of trouble installing Linux/390 [ibm.com] (IBM's Linux/390 developer page), but linux390.marist.edu/ [marist.edu] has a decent manual [marist.edu] they've found helpful.

    Of course, it'll be long obsolete before I ever get my hands on one of these beasts. *sigh*
  • by OSgod ( 323974 ) on Thursday September 13, 2001 @06:38PM (#2295033)
    Which traditionally are high for IBM systems, and which, when you count up the servers on the Intel side, also add up there?

    Did we count the difference in functionality? Exchange vs. what on Linux?

    The mainframe may be back -- but make no mistake, it is still the domain of the priesthood. The priesthood that the server architecture was supposed to break up. Do Linux users really want that? A handful of well-paid techs (the business people are cheering), no need for the thousands of SAs, and small shops can just buy time on a 390.

    • I guess you must have... Although the article is confusing...
      There are basically three sets of figures...

      One for a bunch of dual pentium servers running exchange.

      One for a brand new IBM zSeries with four support staff.

      One assuming that you already had the IBM and support staff and were just adding another partition with Linux running on it.

      Of course if you aren't paying for any more staff or hardware you can guess which one is cheaper... Not earth shattering news.

      Z.
    • Lotus Notes runs on Linux. If I was doing the comparison to Exchange that's what I'd use.

    • The obvious comparisons to make were Exchange on N PCs vs. Linux on N PCs vs. Linux on Mainframe, and perhaps the comparison of the dedicated mainframe vs. adding a virtual OS on the existing one. Obvious choices of mail software for the Linux boxes include Sendmail (which also runs on 390 mainframes), several Netscape-or-its-descendants products, Postfix, etc., if you want to use commercial products and not just Built In Unix Mail.

      My experience as a user of Exchange is that if you let the administrator do a traditional Microsoft Office closed-system implementation, you're forcing all of your users into using an appallingly bad piece of software which leads to horrendous support problems down the road. It's not just the Virus Of The Week problem - Outlook Mail, while much much better than some of the previous MSMail products, fundamentally doesn't get it, and it keeps the user's mail in one big honking file that's increasingly fragile and bloated, and has an undocumented and unrepairable format - if it croaks beyond your client program's self-repair capabilities, you're hosed. It also Encourages Users To Mail Around Attached MSWord Documents or several other proprietary formats instead of just sending the message as real plaintext - leads to extra work for the reader (and usually sender), and bloats mail substantially, so your system has to carry a factor of 3-10 more traffic.

      Exchange also encourages the users to send mail around with Internal Email Addresses - messages appear to come from "Joe User, Marketing" instead of "juser@foo.com", which looks pretty but fails badly whenever mail gets forwarded out of the system - if you send mail to Joe, Jane, and Fred@customer.com, Fred can reply to you@foo.com, but doesn't have a way to reply to "Joe User, Marketing" or whatever Jane's fictitious title is.

      It's not like Sendmail doesn't have a long history of evil on its own, or like you can't build Turing Machines out of sendmail.cf files. But at least it's open, documented, and transparent, and runs on real operating systems.

      • "Fred can reply to you@foo.com, but doesn't have a way to reply to "Joe User, Marketing" or whatever Jane's fictitious title is. "

        Huh?

        Exchange automatically does the conversion when it goes out the SMTP layer.

  • I like how the last comparison of Linux/IFL vs Exchange/Intel has no cost of support for the Linux part. Come on, last time I checked Linux admins weren't working for free. Sure, the tech support for the software is pretty much free due to the community, but someone has to set that stuff up and make sure it keeps working. Unless Linux now supports itself through magic!! Oooh, I just think about adding 500 more mailboxes and they just appear... Yeah right. This article looks suspicious.
    • They are assuming you are paying the support costs already since you already have the big iron, and would already have needed a support contract.
  • And even if I made some similar proposal, the common response would be that our sales staff would work 'better' with Exchange compared to a groupware solution on Linux. The 2 million dollars gained by the now improved sales staff would recoup the costs.

    Total bullshit, but that would be the common company response.
  • Another thing to note was the fact that they figured they needed 10 (1 backup) Exchange servers?
    Where did they come up with that one?
    One Compaq ProLiant 6450 server with 4 x 550MHz Pentium III Xeon processors, each with 2MB of L2 cache, 4GB of RAM, and a 100GB external Fibre Channel disk array can easily handle 50k users with Exchange 2000...and if that is not enough storage for you it is easy to continue adding more disk arrays as space is needed.

    That being said I wonder how the TCO would come out over 3 years between the above solution on a Win 2k platform vs a Linux platform with the same hardware and functionality. Can anyone help on this one?

    • Can anyone help on this one?

      Answer: You're lying. You're talking out of your ass based on specs and dead reckonin'. They did the numbers, you did the empty speculation, and guess what? Reality wins, hands down.
      • Umm... before you accuse, check your facts. The numbers above are the result of some scalability tests actually done by MS/Compaq. And yes, you could come close to them in a properly engineered environment. Much the same as a 390 has to be properly engineered.
    • Jimmy went out of his way to be fair to the Exchange/PC solution, since the industry average is 350 mailboxes per server...

      Apparently industry does not like to bog down their exchange servers.

      Seriously, imagine what would happen if even 10k users tried to simultaneously use the quad Xeon you mention. The machine would probably self-destruct, let alone function at a reasonable pace.

      Granted, saying that the industry average of users/server is 350 is a rather meaningless number. What kind of systems does the average refer to?

      Regardless, there is no way in hell that one quad Xeon would be able to handle such a huge load.
      • Granted, saying that the industry average of users/server is 350 is a rather meaningless number. What kind of systems does the average refer to?


        More than likely, the average is skewed because of smaller shops running Exchange. Just because your server/OS/groupware can host 2000 mailboxes doesn't mean you have 2000 users.

    • by SClitheroe ( 132403 ) on Thursday September 13, 2001 @07:27PM (#2295247) Homepage
      There's no way Exchange2K could handle 50K users on a single box.

      First, you've obviously never worked with Fibre Channel on the kind of scale that 50K users would require (i.e. a big EMC box)... that many users pounding the same box will easily chew up 50% or more of your CPU power on I/O alone. Fibre is fast, but it is so fast that it can easily swamp Xeon CPUs. I know, because I did the benchmarking at my company.

      Second, connection limitations in Win2K and Exchange alone mean that you are running very close to the theoretical maximum the OS supports... not a good idea.

      Third, running that many users off of a single box is suicide. And if you've ever watched Exchange2K failover on a Win2K cluster, you'd know that it can take several minutes for everything to come up on the second node, if you've got a lot of users.

      Finally, a 100GB array for 50K users results in a 2 megabyte mailbox... that's freaking ridiculous!

      In short, you're either running a 50-user shop, or you have no idea what you are talking about.
      • And if you've ever watched Exchange2K failover on a Win2K cluster, you'd know that it can take several minutes for everything to come up on the second node, if you've got a lot of users.

        If you've ever watched Exchange2K failover on a Win2K cluster, I pity you, for you've surely suffered through the same hell (especially pre-sp1) that so few of us have.

        And if you have a lot of users, it certainly takes a long time. The cluster that I currently manage (until tomorrow, as I resigned) has a mere 600 users, a large percentage of which check their mail only twice a day, and the failover can take just as long as it takes to reboot the server, depending on how it's being used at the time of failover. In fact, I'd bet that the SQL cluster, used much more intensively, could failover and back again, several times, in the same amount of time.

        -Tommy (who doesn't understand what use a cluster is, if you need to have a single point of failure front-end server to access it (yes I know there are ways around this, please don't flame me))

    • My first boss bought into the "The 486 will be as powerful as a mainframe" hype, too. So we deployed one on a customer site with a proprietary multi-user OS and tried to run 11 users off it. They ended up removing most of the users because the system was so pathetically slow the solution was unworkable. The IBM mainframe at my college a couple of years before had no problem handling upward of 5,000 users at a time. One 486 with 20 MB of RAM would have been more than enough to handle our problem. Yeah. Right.
  • Well, not really, but mostly they are designed for MOVEMENT of data as fast as possible. There is very little logic and optimization for mathematical calculations. There are extra modules you can get for doing ultrafast math, but the fact remains: when you buy a mainframe you pay for technology that was made to move data, not to do calculations. Supercomputers are mainframes that do computation very well. Compilers for those things resemble giant hairballs due to setups for specific pipelines and preparations to do some parts of code very, very fast. That's why Beowulf is such a great thing: mass-market CPUs are dirt cheap, they do math very fast, and the scalability potential is enormous.

    http://titan.puj.edu.co/~pdelgado/pics/rebuilding_the_wtc.jpg
    • No, I'm sorry, it was a good attempt, but just too opaque for most readers. Try something like this:

      "Man, wouldn't it be cool if you could set up a Beowulf cluster of those OS/390 boxes!"
  • once again we see something evolve out of a community and then get profited upon by a large multinational. it's going to be ironic to see VA Linux (and a bunch of others) fail but IBM come up smelling like a rose.

    VA and redhat and a lot of others sucked in a lot of investment for development and polishing up linux to bring it to where it is. now they are gone (some are almost gone and some are hinting towards it.) IBM jumps in and saves the day. interesting. nothing against IBM of course. after all, linux isn't just for the underprivileged or any one particular group for that matter. but a very 'interestingly' timed move on IBM's part.

    and for all those naysayers: i guess linux is ready for prime-time now that IBM's in the game?
  • by isj ( 453011 ) on Thursday September 13, 2001 @06:56PM (#2295112) Homepage
    I jumped at IBM's offer for developers to try out Linux running on a z/Architecture thingy. My experience so far has been pleasant and "boring" (in a positive way). Pleasant because everything ported easily. Boring because I was expecting challenging porting problems. Everything worked.

    Now for the article about TCO and stuff... I believe that it is correct. For small installations use cheap hardware to bring down initial costs. But don't be afraid of mainframes when your business grows - they are not that different.
  • The article goes out of its way to be seen to be 'fair' to NT, and then ends with a comparison that shows the support cost for adding a 5000-mailbox solution to an existing mainframe is exactly $0. Presumably this is because it is assumed the site already has mainframe support resources?

    But by the same token a site that uses NT for file & print servers (and therefore has an existing NT support team) should be able to use the same support resources for looking after their Exchange servers.

    I'm not saying that there would be NO increase in support demands going from NT file & print to NT file & print plus Exchange, but then I don't believe that adding 5,000 whinging email users won't affect the workload of a mainframe support team either.

    So to do a comparison, you should either add support costs to both NT and Mainframe, or neither. Doing it to just one is very misleading.
    • But by the same token a site that uses NT for file & print servers (and therefore has an existing NT support team) should be able to use the same support resources for looking after their Exchange servers


      Do you actually know anyone with more than say 100 users who has anything but Exchange running on their Exchange servers?

      .

      .

      .

      .

      .

      Yah, that's what I thought. If you're doing much more than file and print serving for a small shop, you're going to want separate hardware for separate functions. If for no other reason than you won't bring your mail down to reboot for your IIS patch of the week, and vice versa.

  • by Bass Clarinet ( 521556 ) on Thursday September 13, 2001 @06:57PM (#2295118)
    What email/groupware software are they using on this Linux/390 machine? Is it some port of Lotus Domino server? I am only aware of Domino running natively on S/390 and on Intel x86 Linux, not on 390 mainframe Linux. Also bear in mind that there is no antivirus vendor supporting Domino on any Linux platform, so that makes Domino on any Linux platform rather useless, doesn't it?
    • by ninjaz ( 1202 ) on Thursday September 13, 2001 @11:29PM (#2296481)
      It was likely Bynari's Insight Server - shown at Bynari's site [bynari.net]. It's designed to be feature-complete for Outlook clients and also work with standards-based clients. That, of course, makes it especially plausible that Bynari was the software in question. Also, while the 5000-user license isn't mentioned in plain view on Bynari's site, it's $19,449 for 1000 users, which would put it in line with the $71,000 for 5000 users mentioned in the article (some quick per-seat arithmetic follows below).

      Of course, Bynari also runs on Linux/x86 and Solaris/sparc, for folks with a more typical environment.
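
      Per-seat arithmetic behind that "in line with" estimate (a minimal sketch; the 1000-user price is Bynari's published figure quoted above, while the 5000-user figure is the article's, so the implied volume discount is an inference, not a published price):

```python
# Rough per-seat cost at the two price points mentioned above.
price_1000 = 19_449 / 1000    # about $19.45 per user at 1,000 users (Bynari list)
price_5000 = 71_000 / 5000    # about $14.20 per user at 5,000 users (article figure)
print(round(price_1000, 2), round(price_5000, 2))   # 19.45 14.2
```

      The per-seat price dropping by roughly a quarter at five times the volume is what makes the two figures plausible together.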

  • Management Overhead. (Score:5, Informative)

    by Rimbo ( 139781 ) <rimbosity@sbcgDE ... net minus distro> on Thursday September 13, 2001 @07:01PM (#2295130) Homepage Journal
    "For the sake of simplicity, certain items such as depreciation and management overhead were excluded from the comparisons."

    It's kind of interesting, since management overhead is widely regarded as the main reason why people prefer Windoze systems to Linux systems. People believe that it costs less money to perform essential administration tasks in Windows than it does in Linux.

    I'm not stating that the costs actually are lower, but it's not a terribly informative article if they're going to eliminate that important bit of information.
    • by isj ( 453011 ) on Thursday September 13, 2001 @07:08PM (#2295164) Homepage
      That is my experience too. Some uninformed managers think that because Windows NT/2000 has a familiar user interface it is easier to manage and can be done by less competent administrators (or even themselves :-)

      Let's face it: The major factor is the system administrator. If he/she is competent the system TCO will go down. If he/she is incompetent the TCO will go up.
      Good system administrators are lazy and try to automate everything so they don't have to work. *nix systems are better at that than Windows (or OS/2, or DOS for that matter).
      • Comment removed (Score:5, Interesting)

        by account_deleted ( 4530225 ) on Thursday September 13, 2001 @07:44PM (#2295315)
        Comment removed based on user account deletion
        • Anyways, you are precisely right - the best admin is at heart a lazy, worthless bastard who will do anything, script anything, to get out of work.

          And you are either a liar, or just completely clueless. Good admins are lazy, worthless bastards who will do anything, script anything, to get back to reading /. and playing the 3D game du jour. It's not that we dislike work. It's just that it distracts us from what is truly important, our FRAG ratio.

    • Management overhead would be pretty specific to a company.

      I would think that accounting for depreciation would tilt things to IBM a little more. A mainframe's value usually stays higher than a PC's, which within a year has dropped 25-50%.
    • In addition, for 11 Intel servers he has Networking 2x3,000=6,000 and for the one IBM mainframe he has 1x3,000... Hunh?? Did Al Gore do the math?? Wiring 11 servers (in parallel) is far more expensive than wiring one mainframe.

      He also has a listing of 4 people supporting each platform. Even for 24x7 operation, you would not need 4 people to manage one Linux mainframe. This also neglects the ease of remote administration enjoyed by Unix.

      Simple would be the word I would use to describe this "study"...

      ~Hammy
      • He also has a listing of 4 people supporting each platform. Even for 24x7 operation, you would not need 4 people to manage one Linux mainframe.


        Looks like he was assuming there would be someone available on site 24x7. In which case you actually need more than 4 people. Do the math (a quick sketch of it follows below):

        (168 hours a week) / (40-hour work week) = 4.2 people needed per week. It actually works out to even more people once you start adding in cover for vacation (4 weeks per year = 640 hours), stat holidays (varies, say 10 days per year = 80 hours), plus sick days, family days, funeral days, moving days, professional development days, out-of-town meetings, etc.

        Even something as mundane as a 24-hr rent-a-cop job requires at least 4 full-time salaries and 1 part-timer.
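
        A minimal sketch of that coverage math (assuming a 40-hour work week; here the vacation and holiday hours are counted per person, a simplifying assumption on my part rather than the team totals quoted above):

```python
# Rough estimate of full-time staff needed for 24x7 on-site coverage.
HOURS_TO_COVER = 24 * 7          # 168 hours per week
WORK_WEEK = 40                   # hours one person works per week

base_fte = HOURS_TO_COVER / WORK_WEEK   # 4.2 people just for raw coverage

# Hours per person per year that are NOT available for coverage
# (vacation, stat holidays, misc. leave -- illustrative values).
vacation = 4 * WORK_WEEK         # 160 hours
stat_holidays = 10 * 8           # 80 hours
other_leave = 40                 # sick/family/training days, a guess

nominal_year = 52 * WORK_WEEK    # 2080 hours per person per year
effective_year = nominal_year - (vacation + stat_holidays + other_leave)

fte_needed = (HOURS_TO_COVER * 52) / effective_year
print(f"raw: {base_fte:.1f} FTE, with leave: {fte_needed:.1f} FTE")
# -> raw: 4.2 FTE, with leave: 4.9 FTE
```

        So even before the rent-a-cop comparison, something like five full-time bodies is the realistic floor for true 24x7 on-site coverage.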

      • This also neglects the ease of remote administration enjoyed by Unix.
        Which are pretty much matched by Windows 2000.
    • "So you see, with Automatic Volume Recognition your operators can pre-mount labelled tapes on any online tape drive and they'll be allocated to the correct jobs. But this doesn't mean you can hire CHIMPANZEES to run your systems!..."
      - IBM Instructor, "Introduction to System/360," Circa 2Q 1966
    • > It's kind of interesting, since management overhead is widely regarded
      > as the main reason why people prefer Windoze systems to Linux systems.


      no, no, no. That's "depreciated managers' heads" that leads to such solutions. :)


      hawk

  • the commercial they have linked is rather humorous tho.

    Cry havoc and let slip the dogs of war!
  • Consider doing this analysis for a company that already owns a 390. How appealing would it be to keep your existing hardware and just switch the OS and apps? It'd be easier and more cost effective than buying a whole bunch of new servers and hiring new people. It's this customer that IBM should be targeting.

    +tl

  • by BrookHarty ( 9119 ) on Thursday September 13, 2001 @07:13PM (#2295187) Journal
    Has anyone tested open source freeware for 25K users? You could save money on different designs, but this really isn't the meat of the article.

    I know it can be done cheaper; we have designed email/groupware for millions of subscribers for less than that, all web-based with Oracle/LDAP on Sun equipment, with network load balancers.

    I like how Exchange looks like a cheap solution, until you grow past your user base; then costs skyrocket.

  • by aralin ( 107264 ) on Thursday September 13, 2001 @07:24PM (#2295230)
    If I understood it right, then 70% of the TCO was always the support personnel cost. So if there is no need for support personnel for IFL, it's clear that it rocks. The thing that I didn't read in the article is WHY it does not need support.

  • Note from the comparison that the mainframe hardware is always more expensive than the PC hardware for a given number of users. The only reason the PC _solution_ ends up being more expensive is because of the price of MS Exchange ($50 per seat, or $2.5 million for 50,000 users). In the PC solution, it is the cost of MS Exchange, not the hardware, that costs all the money.

    If you remove the cost of licensing NT and Exchange, the Mainframe solution is more expensive in all circumstances, except with more than 50,000 users.

    This article only demonstrates that Exchange is overpriced, not that mainframes make good mail servers.
    • Might I suggest that your licensing for Exchange would be considerably less than list at the 50,000-user level? Perhaps not as much as the rumored discounts given by Oracle (90% or more!) to huge customers, but a definite published sliding scale does exist. The more you buy, the less it costs, and the pricing model needs to use the volume pricing, not the relative list cost.
  • by loony ( 37622 ) on Thursday September 13, 2001 @07:36PM (#2295286)
    Somehow those numbers look pretty high - especially if you look at the solutions other companies run...

    3 6xXeon systems, 2-to-1 failover: $80k
    1 Linux retail box: $75
    2 Admins @ $75k/year: $150k
    -----
    $230,075 (summed in the sketch below)

    Well, I don't know, but somehow this whole Linux-on-mainframe thing seems like overkill to me - especially since the mainframe CPUs aren't all that impressive and the Linux VMs don't benefit all that much from the data transfer rates a mainframe offers...
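
    For what it's worth, the line items in the parent's alternative do add up as quoted (a quick sketch; the figures are the parent poster's own estimates, not numbers from the article):

```python
# Summing the parent poster's alternative solution (one year of admin salaries).
xeon_systems = 80_000     # 3 six-way Xeon systems, 2-to-1 failover
linux_retail = 75         # 1 boxed Linux distribution
admins = 2 * 75_000       # 2 admins at $75k/year

print(xeon_systems + linux_retail + admins)   # 230075, i.e. the $230,075 above
```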
  • Non-technical users who come on board with Exchange skills may not be productive on the "cheaper" Linux solution. How do you factor in productivity?
  • by mindstrm ( 20013 ) on Thursday September 13, 2001 @07:46PM (#2295318)
    Aside from the fact that the first IBM system he describes is vastly overpowered, and a ridiculous 'solution' for supporting 5000 users... It doesn't matter if it can scale to 50,000; that's not what it's there for.

    Why compare it on big iron? Why not compare it solely on the same hardware?
    I can support 50,000 users doing all kinds of neat things on the same hardware, running linux, for a LOT less money.

    Notice the Exchange licensing costs? a quarter million bucks?

    Keep in mind: most companies do NOT use Exchange for what it is good at... they use it for pure email, though they may purchase it thinking they will use all the groupware features.
    • by sheldon ( 2322 ) on Thursday September 13, 2001 @10:22PM (#2296050)
      That's incorrect.

      At a minimum every company I have ever encountered with Exchange, Lotus Notes etc has used it for email and scheduling. Most critical is the scheduling of conference rooms and other resources.

      I agree that there are a great many features that are not used routinely, but in the companies where they are used they are absolutely critical.

      Many companies have built solutions for ordering office supplies, computers, move/add/change requests, etc. using automated message forms. I've seen these in both Exchange and Notes.

      I think you would have a hard time walking into any major corporation and telling them, "Look, we know you use groupware. But we are a lot smarter than you and we know that all you really need is just simple email."
      • Let me clarify then...
        I've seen a lot of medium-sized operations switch to Exchange because of all the 'features'... and basically never use them. They end up with, primarily, a mail server.

        I saw one company switch to it just so they could use shared email accounts, for doing tech support.

        I never said I would walk into a 'major corporation' and tell them they only need email. Many corporations have big infrastructure that you couldn't change with an army.

  • this shows up on /. because it's favorable to *nix. Bet you $100 that if this article favored Exchange it would never show up here.
  • You can stop reading it as soon as you reach the line

    Linux License (1 x $250 + 3 x $35K) $105,250

    Also, there was no mention of what "groupware" they were using under Linux. The only piece of information in this article is that Exchange is insanely expensive and requires a lot of hardware, but we kinda knew this already.

  • Support (4 @ $55K x 1.5 for benefits) $990,000

    4 * 55000 * 1.5 = 330000
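
    One possible explanation for the factor-of-three gap (a guess: the article's line item may be a 3-year TCO total rather than an annual figure; that horizon is an assumption on my part, not something stated here):

```python
# Annual support cost vs. a 3-year total.
per_year = int(4 * 55_000 * 1.5)   # 330,000 -- the parent's calculation
three_years = per_year * 3         # 990,000 -- matches the article's line item
print(per_year, three_years)       # 330000 990000
```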
  • by ostiguy ( 63618 ) on Thursday September 13, 2001 @08:19PM (#2295476)
    I am a MCSE who has a love/hate relationship with MS and their products. I really like Exchange.

    A lot of Exchange shops do stick to a 300-350 user limit per box for Exchange 5.5, but that is with the following conditions:

    No real company has 10 meg mailbox limits.

    Until the current generation of tape backup (Ultrium, SuperDLT) came out, having a mail database (priv.edb, the "priv" in Exchange speak) much bigger than 20 gig really alarmed people due to SLAs for restoration of service in a server corruption/failure scenario.

    So, if you assume for their scenario that they were running E5.5, I would have put at least 1000 mailboxes per server, probably 1500, allowing me to max out at a 15 gig priv. This would cut down the hardware costs considerably (a rough sizing sketch follows below).

    With Exchange 2000, clustering is a lot more viable, and e2k also allows a lot more (up to 16, instead of 1) private stores (databases of email) per server. MS has had some issues with MAPI clients and clusters, so I am really hesitant to say how many more users I would put per box.

    Overall though, I think it's clear that if you have tons of users, Linux on big iron can make a ton of sense. Comparing qmail/sendmail to Exchange is somewhat unrealistic from a features standpoint, but for the major league web email providers, big iron must be worth looking into.

    I really think the 10 meg per user mail limit somewhat discredits the whole analysis though. Sounds way more like webmail than corporate mail.

    ostiguy
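
    A rough sizing sketch of that reasoning (assuming the 16GB standard-edition store cap mentioned further down this thread and the article's 10MB mailbox quota; the 1GB of headroom is my own illustrative choice):

```python
# How many 10MB mailboxes fit in one Exchange 5.5 priv.edb store?
PRIV_CAP_GB = 16       # standard edition store limit, per the reply below
MAILBOX_MB = 10        # per-user quota assumed in the article
headroom_gb = 1        # slack, to stay around the 15 gig priv suggested above

usable_gb = PRIV_CAP_GB - headroom_gb
mailboxes_per_server = (usable_gb * 1024) // MAILBOX_MB
print(mailboxes_per_server)   # 1536 -- roughly the 1000-1500 per server above
```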
    • and e2k also allows a lot more (up to 16, instead of 1) private stores (databases of email) per server
      Four. And you need Enterprise version.
    • In Exchange 5.5, you can't have a 20GB priv.edb. You have to buy the "Enterprise Edition" for stores, priv or pub, larger than 16GB. It does say, or did a few years back when we bought Exchange, that starting with version 5.5, the limit was done away with. But, if you read the start-up messages for Exchange in the Application log, it states that it is limited to 16GB.

      I can't comment on Exchange 2000, since we have no plans to update anytime soon.
  • by scooterbooter ( 521625 ) <sdavis&royallepage,com> on Friday September 14, 2001 @12:31AM (#2296790)
    Folks,

    You're missing the entire point of deploying a messaging system in a corporate environment. This is what messes Linux up. It's nice that you can run Sendmail, a POP daemon, and whatever on the big hardware... but... for my corporate end users, this isn't adequate.

    Here are my criteria, sorted in no particular order, for a system that I would be happy to deploy to my 700+ users:

    1) Reliable: No loss of data (no PC storage, backups are centralized). [admittedly, tough to maintain with exchange, in the field]

    2) Usability: (l)users can find their info quickly and easily (search via header, sender, date, text in body, text in attachments, etc.).

    3) Manageable costs associated with the above two criteria. I'm not claiming $0 cost -- but predictable and manageable costs.

    That's it. Exchange rules at meeting those criteria. I don't want to back up 700+ PCs -- I don't run an ISP! In the corporate world, you have to be able to do things such as "recover" a significant (l)user-deleted email. If the CEO says "whoops, I poo-poo canned it accidentally", you're expected to fix the situation.

    Which is quite common in the market that Linux is "trying" to target -- except that most implementers assume there is a *nice* SLA in place... the small/medium-size market is not ready for the lack of end-user features in the *VAST* majority of the distributions.

    Gimme M$ Small Business Server vs. a Linux/POP3/IMAP solution and, as a sysadmin, I only have to wait until the first end-user "OOPSIE" before I toss Linux out the window...

    Cheers,
    Scoots.

  • ...if I had more than 5000 employees, they would be unlikely to be on one site, and I'd like my computing and email facilities to be distributed so that a disaster (e.g. fire, flood, planes crashing into buildings) was not fatal to the organisation's operation. In that light, having a number of boxes spread over the country or site, rather than a big lump of metal in one place, looks very attractive.

    Even if I did have one massive site, I would like some ability to continue operations if one building was out of action for any reason. In that light, even as a Linux junkie I wouldn't support the idea of buying a single big IBM system. The words 'putting all my eggs into one basket' seem to come to mind.

  • How can the end numbers be so far apart from each other, you ask? Ah, the Linux solution doesn't need support. So users can add/configure/remove/backup/restore their own GROUPWARE data and do their OWN support! How neat. (While on Exchange they need a $990,000 support team. Huh?) The mainframe also doesn't need a UPS; the Exchange servers need it. I wonder, does the mainframe, costing $125,000, come with a $135,000 UPS? If so, why not buy a mainframe just for the UPS in the Exchange situation! Saves you $10,000, plus you have a mainframe for free!

    The more poop like this is spread, the more credibility Linux is losing. Exchange is a resource hog, but that has a reason: it stores the data on the server, to make sharing data easier. (That's the point of groupware, in case you wonder what the difference is between plain 1KB-a-pop email and groupware with lots of documents.)
  • 0) The most common way to run Linux/390 is to run it as a guest under VM, not natively and not in an LPAR. There continue to be some problems with the kernel vis-a-vis task scheduling and interrupts, where the kernel expects to have uninterrupted access to the hardware. This is what limits the use of complex firewall rules running in gated if you try to run virtual routers/firewalls. So the work tuning Linux on 390 isn't done yet.

    1) Nobody has a 10MB mailbox. We have a corporate diktat to keep them under 250MB, and most people complain about that. So you have to use some realistic number. You also have to consider that email/groupware is the poor man's FTP in the corporate world, so you have to be able to bulk-move all kinds of very large attachments.

    2) The support costs for Linux/390 are essentially the same as for any other kind of Linux, because IT IS Linux. 99% of a sysadmin's job would never touch VM, even if running as a guest of VM.

    3) Do you really want to manage all the security problems, viruses and macro hacks for 50,000 Exchange users?
