Intel and BlueArc Set New Mail Server Record

louismg writes "With e-mail traffic continuing to explode, Intel and BlueArc announced this morning that the two companies have set a new SPECmail benchmark record in cooperation with CommuniGate Pro, offering a solution that can serve 30 million messages per day - 67% ahead of the previous record, owned by Sun Microsystems. Rather than clustering a lot of smaller servers together, large ISPs can now use fewer systems to handle massive traffic load."
  • by Anonymous Coward
    And if there was no spam, I doubt there would be such a need for these machines.
    • Perhaps the next generation of mail servers will devote 90% of their resources to filtering out spam and 10% to storing the rest (the legitimate email).

      Yes, the future is not how much mail you can store/process, it's how good your spam processing is, to save those 2 legit emails from grandma.
  • There will be partying in the care homes when this news spreads (via email of course).
  • Now that's what I call a server consolidation booster!
  • by mikeophile ( 647318 ) on Wednesday August 17, 2005 @08:04AM (#13337915)
    Turns out it's really spammers.

  • Correction (Score:1, Funny)

    by lbmouse ( 473316 )
    "Rather than clustering a lot of smaller servers together, large ISPs can now use fewer systems to handle massive spam load."
  • Of course, you can always opt to include some stronger transport filter rules. Since most email is spam, this could make your MTA over 50% faster...
    • Re:Spam filtering? (Score:1, Interesting)

      by Anonymous Coward
      Given that it takes more CPU and memory to filter mail than it does to deliver it, chances are you could make your MTA deliver 50% slower.
  • Sun Solaris 10 x86 (Score:3, Interesting)

    by Alex ( 342 ) on Wednesday August 17, 2005 @08:17AM (#13337970)
    CommuniGate Pro Dynamic Cluster - Backend Servers (4 systems)
    CommuniGate Pro: CommuniGate Pro v4.3.6 Operating System: Sun Solaris 10 x86

    CommuniGate Pro Dynamic Cluster - Frontend Servers (5 systems)
    CommuniGate Pro: CommuniGate Pro v4.3.6 Operating System: Sun Solaris 10 x86
  • Redundant? (Score:5, Insightful)

    by Puls4r ( 724907 ) on Wednesday August 17, 2005 @08:17AM (#13337973)
    No, not the post.

    Isn't part of the allure of smaller systems specifically to get away from large dedicated systems that aren't nearly as reliable?

    By now, Google should have taught the world something - distributed computing with small, cheaply specced systems that can each be swapped out, with multiple redundancy, is the way to offer uptime, speed, and cost effectiveness all at once.

    It's nearly identical to the "lean" manufacturing techniques pioneered by the Japanese. Small cells that can increase or decrease output based on the amount of workers (systems) that are working that day. Very flexible.

    After all, it's a COMPUTER.... do you really want it dedicated to just email, or can we use it for other tasks in the downtime.
    • Re:Redundant? (Score:5, Insightful)

      by Jeff DeMaagd ( 2015 ) on Wednesday August 17, 2005 @09:15AM (#13338268) Homepage Journal
      Isn't part of the allure of smaller systems specifically to get away from large dedicated systems that aren't nearly as reliable?

      The allure of small systems is because of COST, not reliability. Large systems are and can be very reliable. Using consumer commodity computer parts by themselves is more likely to be less reliable, but if you set up failover clusters, then you get a cheaper overall system that is as reliable.
      • Re:Redundant? (Score:5, Informative)

        by saider ( 177166 ) on Wednesday August 17, 2005 @10:34AM (#13338958)
        I did a little "reliability analysis" for some telecom equipment a while ago, so I am semi-informed (not a guru).

        There are two terms that often get interchanged, when they shouldn't. Reliability is the ability of the system to run without repairs. Availability is the ability of the system to do its job.

        So the large monolithic system can be built out of very good (and expensive) components that do not fail as much as commodity hardware. This will lead to fewer failures and better reliability.

        The commodity hardware can be arranged so that redundancy ensures that if one component fails, another will take its load. Since the damaged component needs to be serviced, the reliability is lower, although the availability of the system is the same.

        Reliability is used by planners to determine the labor costs of keeping a system running. Availability is used by planners to make uptime predictions and to take measures to provide a certain level of service.

        Two similar numbers that are used for different purposes.
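The reliability/availability distinction above can be made concrete with a small sketch. The MTBF/MTTR figures here are illustrative assumptions, not vendor data; the formulas (A = MTBF / (MTBF + MTTR), and 1 - (1 - A)^n for n-way redundancy with independent failures) are the standard ones.

```python
# Availability of a monolithic server vs. an N-way redundant cluster.
# MTBF/MTTR numbers below are made-up illustrations, not measurements.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a single unit is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def cluster_availability(unit_avail: float, n: int) -> float:
    """Probability that at least one of n independent units is up."""
    return 1 - (1 - unit_avail) ** n

# One expensive, reliable box: fails rarely, but when it's down, it's down.
big_iron = availability(mtbf_hours=50_000, mttr_hours=8)

# A cheap commodity box fails ~10x as often (worse reliability)...
commodity = availability(mtbf_hours=5_000, mttr_hours=8)

# ...but a 3-way failover cluster of them has *better* availability,
# even though someone has to go service the failed units more often.
cluster = cluster_availability(commodity, n=3)

print(f"big iron:     {big_iron:.6f}")
print(f"one cheap:    {commodity:.6f}")
print(f"3-way cheap:  {cluster:.9f}")
```

The cluster's availability beats the big box while each individual node remains less reliable, which is exactly the trade-off the comment describes.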
  • by Anonymous Coward on Wednesday August 17, 2005 @08:18AM (#13337977)
    Using massive systems for handling mail invites a single point of failure (SPF), whereas using clusters of smaller systems for the same amount of money gives failover capability.

    Of course, ISPs won't realize this.
  • Not that great (Score:5, Interesting)

    by Matts ( 1628 ) on Wednesday August 17, 2005 @08:20AM (#13337992) Homepage
    I didn't even know there was a SPECmail, but this figure doesn't seem too outstanding to me.

    Firstly I assume this is just a raw delivery setup - no spam or virus filtering. You'd be amazed how much of a difference this makes to any real world setup.

    Secondly, does over 2 million mails a day on a dual 2.4Ghz Xeon using an SMTP server written in Perl []. And that's with full anti-virus (clamav) and lots of different anti-spam measures including SpamAssassin (which is known to be slow - I know because I used to be one of the developers).

    I also know of commercial setups doing over 50m (legit, well - mostly) mails a day. Using an SMTP Server designed with performance in mind []. Perhaps they should submit for SPECmail ;-)

    So 30 million doesn't seem terribly amazing to me. Perhaps Communigate Pro isn't a very fast mail server.
      Well, that's nice for Apache, but this was 12,500 a minute. And if I can use a calculator correctly, that's 60*24*12,500 = 18,000,000 a day. As for your commercial place that does 50M, you might be exaggerating, but anything 2M and over is more than good enough for most businesses. I really wonder how much Exchange can handle now, since there are no MS or Exchange setups on that list.
      • As I said, their 12,500 a minute is a pure delivery figure - it does not include spam and virus filtering. These things make a HUGE difference to throughput on a mail server.
        • Re:Not that great (Score:3, Insightful)

          by Matts ( 1628 )
          Replying to myself...

          So reading the full disclosure they used 4 quad xeons. That's 16 CPUs. Compared to's 2.

          So using a pure perl SMTPD [] you can have this kind of throughput (~1m mails per day per CPU) with spam and virus filtering enabled.

          No, I am not impressed with this benchmark.
    • Re:Not that great (Score:5, Interesting)

      by antic ( 29198 ) on Wednesday August 17, 2005 @09:01AM (#13338169)

      Probably worth noting that the Louis Gray who submitted this story is BlueArc's Corporate Communications Manager.

      I thought that the blurb seemed a bit too slick to have come from anywhere but the company themselves. I hope there's no dodgy reason that Louis used a email address to submit the story instead of their work account.

      • Very interesting... The wording leaves one thinking Intel & BlueArc are announcing this. In fact the SPECmail report shows CommuniGate submitted the results. Why would Intel want to push a disk array that uses a Freescale PPC chip?

    • > Firstly I assume this is just a raw delivery setup

      It is not. In addition to the SMTP service, the benchmark models POP3 service as well. From the FAQ, [] :

      "SPECmail2001 is an industry standard benchmark designed to measure a system's ability to act as a mail server compliant with the Internet standards Simple Mail Transfer Protocol (SMTP) and Post Office Protocol -Version 3 (POP3). The benchmark models consumer users of an Internet Service Provider (ISP) by simul
    • Re:Not that great (Score:1, Informative)

      by Anonymous Coward
      With SPECmail, the issue isn't how many messages per second your kick-ass SMTP server can handle; it's how many (simulated) concurrent, slow dialup POP connections (many), IMAP connections (some), and other dross the overall system can handle. Again, concurrently. And SPECmail does lean very heavily towards simulating a world where the overwhelming majority of users look like POP users on dialup modem connections.

      So, while being able to handle lots of inbound messages per second and delivering them
    • This isn't just a mail relay, this is ( from spec's site []):

      A standardized mail server benchmark designed to measure a system's ability to act as a mail server servicing email requests, based on the Internet standard protocols SMTP and POP3. The benchmark characterizes throughput and response time of a mailserver system under test with realistic network connections, disk storage, and client workloads.

      So that includes users connecting, picking up email, deleting from their data store etc etc etc.


    • We handle over a million messages a minute on our inbound MTA farm (25 boxes). This does not include mail being sent by our users, webmail, pop3 or imap traffic.

      FWIW, mail is primarily a disk function. Use lots of fast SCSI disks in RAID 0 for maximum speed (or battery backed RAMdisk for the queue, incredible performance gain).
  • Sadly, the need for that much capacity is probably driven by spam, which means ISPs are spending a boat load of money to facilitate spam that neither they, nor their customers want, and the ISPs have to turn around and charge higher prices because of it.

    Certainly, any ISP worth its salt is going to be filtering spam, but there aren't a lot of anti-spam systems out there that are effective over the long term or aren't annoying as all hell.

    Jerry []
      The whole industry is full of situations where you're forced into spending lots of money to solve problems that shouldn't exist in the first place.
  • by thc69 ( 98798 ) on Wednesday August 17, 2005 @08:36AM (#13338046) Homepage Journal
    ...imagine a beowulf cluster of these...

  • 67% more mail, 250% higher cost! Well, maybe not, but they're still only telling half of the important story.

    Really though - is this actually a useful metric, x million emails per day? Anyone needing this sort of mail throughput is going to be either a megacorp with a centralised mail server (do any actually do this?) or an ISP. If you're talking about this level of mail, you're already using some multi-box mail system. In that case individual box performance isn't so important; cost is.

    In essence - for any
      Not sure about that 250% higher cost figure - we are in the process of switching to CommuniGate Pro where I work because it costs a fraction of what upgrading to the new version of Sun One would have.
  • by Temkin ( 112574 ) on Wednesday August 17, 2005 @08:55AM (#13338138)

    Interestingly enough, they set their record using a message store mounted on NFS. It had 140 FC/AL attached disks and 14 GB of RAM.

    Virtually every file handle an MTA writes to is opened "O_SYNC". One of the quickest ways to make Sendmail or other common MTAs go fast is to mount their delivery queues on a solid state disk. I'm betting this disk array is turning around the queues without ever committing the data to the platters. (Not that there's anything wrong with this...) I am left wondering if there isn't some bit of NFS trickery not reported in the config.

    But looking at the Sun entry, the old record was set using 2-year-old software and a much smaller disk configuration. Sun will probably update their entry in the near future, just to reclaim the crown. Email is much more an I/O problem than a CPU problem. Sun used to push their mail server on much larger hardware, but most ISPs don't want to buy big boxes these days. Small to medium-sized boxes connected to a SAN are more cost effective, permit redundancy, and are easier to maintain.

    • by Tet ( 2721 )
      But looking at the Sun entry, the old record was set using 2 year old software, and a much smaller disk configuration.

      Indeed. It doesn't strike me as being a particularly impressive record when there are only a total of 18 entries submitted, and most of them are 3 or more years old. I'm sure that I could quite easily come up with a system capable of beating the previous record for a reasonable cost simply by using modern hardware and minimal configuration tweaking.

    • We evaluated BlueArc. . . we didn't choose it because it was "too new" in the market, but its network filer has unbelievable performance (Note: I don't work for BlueArc and it makes me a little queasy to support a commercial product).

      BlueArc does all of their fileserving via microcoded hardware. Instead of using a plain old OS to build the fileserver, they do it in hardware modules dedicated to the task. This means that there is no OS overhead and the different modules (TCP/IP, CIFS, NFS, iSCSI, etc) have
      • >(Think one Bluearc machine is 3 times
        >faster than a cluster of the highest end

        Their controller throughput seems to be 20Gbps - I'm too lazy to check, but I think I've seen one or two vendors with better performance. []

        >Bluearc actually treats all of these raid-5
        >shelves as disks that it does raid-1
        >across. This gives you redundancy and
        >double-striping in hardware from end to

        Don't all high end disk arrays work the same way? You
  • It says a lot about the state of things when something like the fact that an e-mail server can handle millions of messages per hour makes the news.

    Wake me up when there is a news item about how all the pornography on the Internet has suddenly disappeared.
    • Wake me up when there is a news item about how all the pornography on the Internet has suddenly disappeared.
      If all the pornography on the Internet disappears, I don't want to be woken up :-(
  • What we need to do is build LESS capable servers than we have now so spammers can't get as much through, and use that slow time to both track down the persistent ones and develop better filters, not faster servers. Better technology = better abuse of technology.
  • by IGnatius T Foobar ( 4328 ) on Wednesday August 17, 2005 @09:51AM (#13338551) Homepage Journal
    The speed of any MTA is going to be determined largely by how much work is being performed when each message is submitted. The fastest MTA, therefore, is going to be the one that does the least amount of processing.
    • How about that spiffy big Postfix or Sendmail box you've got sitting out there, whose sole purpose in life is to act as a relay? Sure, it'll process millions of messages per day. It doesn't have to do much.
    • What if you're running a virus checker like ClamAV [] or a spam checker like SpamAssassin []? Those take up CPU cycles. Sure, delivery is slower, but value was added.
    • What if your back end mail system is something like the Citadel [] groupware platform, where MIME content drives an event handler system? Again, delivery is impacted, but the functionality of the system depends on it.
    • What if your org has a global directory [] and your mail hub is responsible for making complex routing decisions for each message? Again, delivery is impacted; it'll be slower than the mega-fast-box, but mail won't be delivered correctly otherwise!
    So you see, it's not always just about raw speed. And besides, next year's hardware will be faster anyway :)
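The trade-off in the list above is easy to put numbers on: throughput is roughly the inverse of per-message work. The costs below are illustrative assumptions (milliseconds of CPU per message), not measurements of any real MTA or filter.

```python
# Toy model: an MTA's capacity is inversely proportional to the work it
# does per message. All per-stage costs are made-up illustrative figures.

PER_MESSAGE_COST_MS = {
    "bare relay": 2,            # queue in, queue out, nothing else
    "+ antivirus scan": 25,     # e.g. a ClamAV-style content scan
    "+ spam scoring": 120,      # e.g. a SpamAssassin-style ruleset
    "+ directory routing": 10,  # an LDAP-style lookup per recipient
}

def daily_capacity(stages) -> int:
    """Messages/day for one worker, given cumulative per-message cost."""
    total_ms = sum(PER_MESSAGE_COST_MS[s] for s in stages)
    return int(86_400_000 / total_ms)   # ms in a day / ms per message

relay_only = daily_capacity(["bare relay"])
full_stack = daily_capacity(list(PER_MESSAGE_COST_MS))
print(f"relay only: {relay_only:,}/day; full stack: {full_stack:,}/day")
```

Even with invented numbers, the shape of the result holds: a bare relay posts headline figures tens of times higher than the same box doing the filtering and routing that actually adds value.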
  • That's all I want to know
  • So how the hell is this a record?
    The company I used to work for (a spammer - hey, I had to eat!) was sending on AVERAGE about 100 MILLION emails a day using a cluster of 100 small (1U) racked machines.
    And we just used Postfix, running mostly K6-2 CPUs - yeah, real old stuff.

    So how the hell is this a new record?
    Oh, I guess because spammers don't do the SPECmail tests... hehehe

    Well, they're 1/3 of the way to a simple Postfix config. Keep working on it, guys!
    • they were sending on AVERAGE about 100 MILLION emails a day by using a cluster of 100 small (1U) racked machines.

      From the blurb:
      Rather than clustering a lot of smaller servers together, large ISPs can now use fewer systems to handle massive traffic load."

      Reading TFA, it sounds like this is one server - not 100. Unless the system in the article is using 30 servers, they beat your evil-spammer-boss's system on a per-server basis.
  • My old web firm used Communigate Pro [] and we always found it to be a highly reliable and stable system. Their support used to be handled by one of their lead programmers and was top notch.
  • This can't be right. Right now I manage a system that does 500,000 messages/day running on 8 IBM 306 boxes with FC3 & Exim and an NFS backend. It's pretty small; most of the companies I know here in Japan are 100 times bigger than us and must easily break 50M messages/day ... DoCoMo alone must do 500M/day ...

    And the idea that one would need commercial software to do this is laughable ...
  • The world population is about 6,000 million people. Considering that not all of them read mail (e.g. infants), it's not too far off to assume the average number of mails a person can cope with is around 10 per day.

    If this box can deliver 30 million messages a day, then 2,000 of them would saturate the email limit human civilization can handle (6,000 million x 10 = 60,000 million messages a day).

    But there is a catch: this is under the assumption that all those 60 billion mails are not spam.

    In other words, this record breaking "system" has
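The estimate above, with the units kept explicit (all figures are the commenter's assumptions, not data):

```python
# Back-of-the-envelope capacity estimate with explicit units.
people = 6_000_000_000              # ~6,000 million people (assumption)
msgs_per_person_per_day = 10        # mail one person can cope with (assumption)
world_capacity = people * msgs_per_person_per_day   # = 60 billion msgs/day

box_throughput = 30_000_000         # msgs/day, from the record in TFA
boxes_to_saturate = world_capacity // box_throughput
print(boxes_to_saturate)
```

So on these assumptions, about 2,000 such systems would saturate all the legitimate mail humanity could read.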

  • Considering that the main cybersecurity (sorry) threats today are viruses and phishing attacks, how long might it be before super-high-volume email systems will be classified as munitions that require licensing and export restrictions?

    Imagine if Nigeria got the e-bomb...
  • Funny math. (Score:3, Insightful)

    by Spazmania ( 174582 ) on Wednesday August 17, 2005 @11:17AM (#13339378) Homepage
    BlueArc's Titan sustained a performance level of 12,500 SPECmail messages per minute, or the equivalent of two and a half million SPECmail users, sending 30 million e-mail messages per day.

    The math seems a little off...

    12,500 messages/minute * 60 minutes/hour * 24 hours/day = 18M messages/day, not 30M.

    That having been said, CGPro is fast as all heck so I can believe it topped the previous record.
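The arithmetic in the comment above checks out:

```python
# Re-running the press release's numbers: a sustained 12,500 messages per
# minute does not add up to the claimed 30 million messages per day.
msgs_per_minute = 12_500
per_day = msgs_per_minute * 60 * 24   # minutes/hour * hours/day
print(per_day)
```

That yields 18 million per day, so the 30 million figure presumably counts something other than sustained benchmark throughput (peak rate, or message copies to multiple recipients, for example).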
  • by sysadmn ( 29788 )
    So you replace 5 Sun 480s with 4x 1.2 GHz processors each with 9 servers with 4x 2.4 GHz Xeons each, and deliver mail to 2 million users instead of 1.5 million? Excuse me for not being too excited. Yes, 480s are expensive - I wonder what a 4-way dual-core V40z would do.
  • by thomtunes ( 908316 ) on Wednesday August 17, 2005 @01:15PM (#13340604)
    Just thought I'd add a few details and address some of the questions here. My name is Thom O'Connor; I work for CommuniGate Systems (CGS) and was the one who put together and ran these tests - you can (mostly) verify this by looking at the comments in the source on the results page [].

    First off, on the SPECmail test itself. SPECmail is a standardized test (the only one I'm aware of for email) that attempts to closely regulate a level playing field for measuring email performance. It is critical to understand that this is not just measuring SMTP. The 30 million messages a day [] text is a little vague, but it is important that this includes a distribution of delivered, relayed, and retrieved email. Sure, anyone can just relay many millions of messages an hour.

    SPECmail does POP and SMTP, so the test measures not just MTA behaviour but also local delivery and then retrieval of the messages. The SPECmail test also uses Quality of Service (QOS) measurements such that a message injected via SMTP to the system MTAs (the CommuniGate Pro Frontend servers in this diagram []) must then be delivered locally into the users' account, then be retrieved within 60 seconds. Satisfying the QOS criteria during the benchmark is often the most difficult part.

    So, SPECmail itself just does POP and SMTP, which is a little 1990s I agree, but SPEC is coming out with a SPECimap [] test in the near future, and CGS is also very interested in seeing a SPEC VoIP/SIP test for measuring CommuniGate Pro's Real-Time [] capabilities.

    A few other questions I've seen raised here:

    1. The CommuniGate Pro Dynamic Cluster described in this test is fully appropriate for production use in all aspects. In fact, if you're running a 2+ million user ISP on a CommuniGate Pro Dynamic Cluster, we'd recommend you use these results as a guide for your architecture (although load balancers should be added at the gateway point for all inbound connections). CGS has ISP customers running architectures which match the layout of the described system almost exactly. All systems in the Cluster service all accounts - you could lose 4 Frontend Servers and 3 Backend Servers, and all users could still access their email (albeit with decreased capacity).

    2. HyperThreading was disabled in the BIOS because the downloadable Solaris 10 x86 [] operating system would not (yet?) support the Intel x86_64 Potomac chipset properly. That said, on top of the recent security vulnerabilities [] on the topic, we have also discovered miscellaneous threading and even NFS issues related to having HyperThreading enabled on Linux 2.6, FreeBSD 5.4, and Solaris 10 x86 systems.

    3. On NFS... NFS is used safely and securely in this test. The integrity of data storage is one of the major criteria that the SPEC organization closely evaluates when reviewing a SPECmail submittal. Obviously, there are many ways to cheat and/or cut corners using solid state disks, unsafe RAM for message queueing, and other techniques that you would never want to use on your production message system. However, the test described here was performed using a standard (albeit excellent) BlueArc Titan [] Storage System with write caching only in NVRAM, and using proper mount options and layout for security, redundancy, and data integrity.

    Hope this clears up any misconceptions. Obviously I'm biased about the work here, but assembling and then passing a SPECmail test of this size is a gigantic effort. If anyone thinks
      PLEASE report the HyperThreading issue to your contact at Sun. If we don't know about it, we cannot help. Problems with hyperthreaded CPU detection are usually a BIOS problem; they have nothing to do with "chipset support" -- the Potomac is just another CPU, and there is a standard way in which BIOSes describe hyperthreaded CPUs (in ACPI tables).
  • Now we know why Intel has such dominance in the processor market, it has a secret alliance with spammers. (Buy AMD!)
  • I know of a cluster system that sends 400 million messages a day. If they have to use this BlueArc one, then they would still need a cluster of BlueArc mail senders.
  • "Rather than clustering a lot of smaller servers together, large ISPs can now use fewer systems to handle massive traffic load."

    From the testing reports linked in the article we have:

    1. CommuniGate Pro Dynamic Cluster - Backend Servers (4 systems)
    2. High Speed Network File Server for Main Mail Storage (1 system)
    3. CommuniGate Pro Dynamic Cluster - Frontend Servers (5 systems)

    For a total of 10 systems separated into 3 levels (frontend, backend, data storage).

    High quality editing for Slashdot as usual.
