Building a Scalable Mail System?

clusteredMail asks: "I work for a small ISP that up until now has survived with single servers for most critical roles, including the mail server. We are planning to introduce multiple mail servers (primarily for email collection via POP3 and IMAP) and want to put in place the most scalable, failure-resistant system we can manage. Everything is currently running on one or another flavour of Linux. In my mind, the ultimate scenario would be to have some sort of distributed/clustered file system between the multiple machines, so that any user could log onto any server, and the loss of a single server would not cause downtime for any group of users. Has anyone in the Slashdot community had to put together a system like this using Linux and Open Source Software? If so, how did you fare and what were the major stumbling blocks?"
"So far, the plan is to split up the mail accounts between multiple servers and use some sort of connection proxy to sort out which account logs into which server but this seems like a rough approach. The disadvantage to this setup: if one server fails all the users who have accounts on that machine will be in the dark, email-wise."
  • Check out Perdition (Score:5, Informative)

    by Matts ( 1628 ) on Thursday April 20, 2006 @06:21PM (#15169014) Homepage
    If it's IMAP scalability you want then you should look into Perdition, particularly their article on clustered mail server farms [vergenet.net]. This is in use in a lot of high performance, high scaling environments.
    • by Bronster ( 13157 ) <slashdot@brong.net> on Thursday April 20, 2006 @09:18PM (#15169924) Homepage
      Alternatively, check out nginx [sysoev.ru]. Sure you have to wade through the Russian, but the configuration syntax is pretty simple and it's easy enough to build.

      It uses epoll. We replaced a perdition proxy that was seriously loading two servers with a single 8-process nginx instance that's not even breaking a sweat. It's amazing what the change from 32,000 processes down to 8 can do on a busy site! The two frontend machines are now configured with heartbeat to get full failover of IP addresses. Downtime appears to be on the order of 1-2 seconds with an orderly cutover and probably about 10 seconds for a total host failure.
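
      For the curious: nginx's mail proxy decides which backend to hand each login to by calling a small HTTP service you provide (the auth_http handler). Below is a minimal Python sketch of such a handler, assuming the documented Auth-User / Auth-Status / Auth-Server / Auth-Port header convention; the user-to-backend table and addresses are made up, and a real handler would also verify Auth-Pass against your user database.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Hypothetical mapping of mailbox -> backend server; a real setup would
        # consult LDAP, MySQL, or a replicated map instead of a dict.
        USER_BACKENDS = {"alice": ("10.0.0.11", 143), "bob": ("10.0.0.12", 143)}

        class AuthHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                user = self.headers.get("Auth-User", "")
                backend = USER_BACKENDS.get(user)   # password check omitted here
                self.send_response(200)
                if backend:
                    self.send_header("Auth-Status", "OK")
                    self.send_header("Auth-Server", backend[0])
                    self.send_header("Auth-Port", str(backend[1]))
                else:
                    self.send_header("Auth-Status", "Invalid login or password")
                    self.send_header("Auth-Wait", "3")  # ask nginx to delay the retry
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 9000), AuthHandler).serve_forever()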

      Cyrus supports replication now, which is a good way to handle the backends. I'd say more about it, but I haven't actually finished configuring the full failover system yet for this - lots of gating logic required to make sure two machines don't both believe they're master for a bit!

      Er, but why would I help you anyway, you're the competition ;)

      (I work for FastMail.FM [fastmail.fm] btw)
      • The site says that nginx is a "high performance http and reverse proxy server". How is this in any way related to SMTP/POP3/IMAP servers?
        • Ahh, documentation. We sponsored Igor to add POP and IMAP a while ago. We use Postfix for SMTP - that's a separate function entirely.

          I might write up something about nginx as a pop/imap proxy and ask Igor to link to it.

          It really is very nice to use, if hard to read the docs!
    • Damn, I really should have actually looked at who I was responding to as well! Hi Matt. I've sort of migrated out of doing disgusting things with Perl, XML and databases to doing disgusting things with Perl, RFC822 messages and ... erm, databases. Not to mention sha1hex storage pools and virtual filesystems.

      I still read the Axkit mailing list for its spam and other exciting goodness - but don't have much need to touch XML, bargepole or not, these days.

      Bron.
  • MIT (Score:2, Interesting)

    by Anonymous Coward
    MIT, an organization which you'd think would have a handle on this sort of thing, simply has a bunch of independent servers and assigns accounts to a specific one. One user might use po10.mit.edu for POP/IMAP, another might use po2.

    Of course, that might just be because the IT department at MIT does not take advice from the faculty and students, and just generally sucks.
    • Of course, that might just be because the IT department at MIT does not take advice from the faculty and students, and just generally sucks.

      It's probably a lot more reliable since the IT department doesn't have to deal with the hassles of faculty and students mucking up the email servers. Besides, they probably don't want to share their Diablo 2 game server (which happened at one company I worked for).
  • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday April 20, 2006 @06:26PM (#15169053) Homepage Journal
    Cyrus [cmu.edu] is the most amazingly flexible POP/IMAP server I've run, and it supports clustering (a "Murder [cmu.edu]") out of the box.

    It's not for the faint of heart, but only takes a couple of "a-ha!" moments to go from lost to competent. Good luck!

    • by vitroth ( 554381 ) <vitroth@cmu.edu> on Thursday April 20, 2006 @06:43PM (#15169161)
      A Cyrus IMAP Murder isn't a clustered system; it's exactly the multiple-servers-with-a-proxy setup that the original post was describing. (Note: I work for the CMU IT department and am familiar with the way this works, but I don't work for the email group.)

      However, if you used the Murder as your frontend for clients and applied fairly standard high-availability tactics to the individual backends, you could achieve clustering. Make each backend server a redundant, load-balanced virtual server, then make the Murder aware of the mailbox locations on the virtual systems.

      I'm sure it could be done, but it's definitely not something that Cyrus does out of the box.

      In practice, the multiple-servers-with-a-proxy approach has been good enough for CMU. With good hardware for the backend servers and good RAID arrays, hardware failures are rare.
    • The latest 2.3.x releases of Cyrus also support replication. You can have a Cyrus Murder plus replication on the backends for redundancy. This stuff works very well with very large installations. Fastmail.fm runs on Cyrus, for example, as well as a ton of university email systems.
  • by Quaoar ( 614366 ) on Thursday April 20, 2006 @06:27PM (#15169061)
    ...to make the mail modular so that people of different sizes can wear the armor without the need for re-forging.
    • Perhaps you could use a spiral inside the ring. It'd be more weight, but you could adjust it by twisting the rings to push the connection to the outer rings.
      • Chainmail gains tremendous strength by flattening the two ends of a ring together with a hammer, then drilling and riveting the joint, as good smiths have done for years. An adjustable spiral would not only be difficult and time-consuming to adjust, but much weaker, both through the lack of riveted links and the weakness added by adjusting the spirals (like bending a paperclip back and forth).

        You'd be better off with separate overlapping sheets of mail held to the body with leather straps. Maybe not as good as a
  • by jellomizer ( 103300 ) * on Thursday April 20, 2006 @06:30PM (#15169078)
    You can point your MX record to Google's Gmail service and modify the POP addresses to point to Google mail too.
    • O RLY? Google will accept messages addressed to user@x.random.domain? All you have to do is point your MX record their way, and they'll have a mailbox all set up and waiting for the message? Wow, those guys really are geniuses.
      • If only it worked that way...

        You'll have to apply a prefix to all of your accounts [eg: -mybusiness-_-username-@gmail.com] and then map users to gmail MX & POP/IMAP accounts... good luck with that. :-)
      • > O RLY?

        YES RLY. It's called GMail for Domains.

        https://www.google.com/hosted/ [google.com]
        • O RLY?

          Gmail for your domain is currently available as a limited beta. If your organization is interested in helping Google test this service, we'll consider your domain for this beta. You'll need to sign in with a Google Account (or get a new one), and answer a few quick questions about your organization and your email needs.

          Sounds like there's a little more you need to do, besides just pointing your MX record at their servers. Like for instance, getting them to accept you as a beta user, answering a f

  • Foundation. (Score:3, Interesting)

    by deep44 ( 891922 ) on Thursday April 20, 2006 @06:30PM (#15169083)
    I would recommend deploying an LDAP-enabled directory server as the foundation for everything else. Almost every other service can leverage the directory for pulling various information about each user. You can really help yourself down the road by making your directory server _the_ single authoritative source of email-related customer information.
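
    As a rough sketch of what that lookup looks like in practice (using the python ldap3 library; the server, bind DN, and attribute names such as mailHost and mailMessageStore are placeholders, since your schema will differ):

      from ldap3 import ALL, Connection, Server

      # Placeholder directory server and credentials; adapt to your environment.
      server = Server("ldap.example.net", get_info=ALL)
      conn = Connection(server, "cn=mailreader,dc=example,dc=net", "secret", auto_bind=True)

      conn.search(
          "ou=people,dc=example,dc=net",
          "(uid=alice)",
          attributes=["mailHost", "mailMessageStore", "mailQuota"],
      )
      if conn.entries:
          entry = conn.entries[0]
          # The POP3/IMAP proxy, the MTA, and the billing system can all route
          # from this one authoritative record.
          print(entry.mailHost.value, entry.mailMessageStore.value)
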
    • I agree with this. It is how every large-scale (and even smaller-scale) environment works. Make sure you set up a multi-master LDAP server with replication. This can be a very intense process that requires a lot of work. It is not for the faint of heart.
  • Distributed authentication system (nis, kerberos, ldap, mysql, whatever), and storage (netapp -- don't even think about using anything else, just get a damn netapp). Have multiple everything, and redundancy, redundancy, redundancy.

    don't forget testing it. get a system up and running, then see what happens if you power off the master. Hint: it shouldn't change anything.
    • Gee, at work we're using one of them IBM "Sharks" with 2 tb of storage... should we really replace it with a netapp?
    • I looked into buying a netapp a few years ago, but the NZ sales manager at the time was the rudest wanker I've ever had the misfortune of dealing with. He didn't stop short of trying to force me to buy, and even called me an idiot for considering competitor's products. So I politely told him to go fuck himself and this company will never touch a NetApp.
      • that's a bad salesman. so what? that doesn't change the quality of engineering put into the product. don't buy from him, buy from someone else.
        • not an option in a country as small as this, I'm afraid. NZ is a country of 4 million people. I'm sure the States have cities bigger than that... Most vendors only have one local distributor. Some have to be bought from overseas like Australia.

          A pity, cos NetApp gear looks pretty swish.
  • Use NFS (Score:3, Informative)

    by georgewilliamherbert ( 211790 ) on Thursday April 20, 2006 @06:39PM (#15169136)
    From commercial experience: Multiple MTA machines, fine. Multiple MUA (IMAP, POP, webmail) machines, fine. Don't use a clustered filesystem; use NFS. All the software of note uses locking that plays well with NFS.

    Scaling can be done easily by adding more NFS boxes and managing the directory structure with links or whatnot.
    • All the software of note uses locking that plays well with NFS.

      Just to clarify (I initially parsed this incorrectly), you mean, "all the interesting mail processing software is written to use [perhaps optionally] forms of locking that NFS can tolerate", right?

      Usually locking is the problem people hit with NFS.
  • by charlesnw ( 843045 ) <charles@knownelement.com> on Thursday April 20, 2006 @06:39PM (#15169140) Homepage Journal
    I run an open source project that is building an Exchange replacement. http://www.thewybles.com/~charles/oser [thewybles.com] is the project homepage. It will be highly available, supporting both hardware (Cisco/WebMux load balancers) and software-based load balancing, along with a whole host of other groupware functionality. I have done high-availability e-mail solution deployments. I am in the SoCal area but am willing to travel if necessary. There are others who can help you as well. Your choice. My blog [livejournal.com] covers a lot of the progress of the project and details. I would be happy to work with you to complete this task. Just e-mail me and we can work out an arrangement.
  • See e.g. article about a university system [linuxjournal.com]. Also Cyrus supports load balancing and failover via murder [cmu.edu], although backends (the actual file stores) still remain single points of failure.
    • I knew somebody had to be applying high-availability tactics to a Cyrus system. Combine those techniques with the multiple-server proxy capabilities of the Murder and you've got a system which should scale to unbelievable proportions.
    • ..or you could just get a few decent Alpha DS10's for around a grand, install VMS and use its mail system on the cluster, then measure your uptime in four-digit days.
      • ..or you could just get a few decent Alpha DS10's for around a grand, install VMS and use its mail system on the cluster, then measure your uptime in four-digit days.

        Amen, brother, amen. But you'd have to pay yearly for the OpenVMS licenses. And that adds up.
        • Aye.. you can negotiate it though with your IT dept. Usually HP's gentle with the rubber gloves for someone testing the water/starting out with it. :)

          It's a bit like a friend that works for $.EDU.AU .. they negotiated with MS and have the entire product suite. Cost per additional license? $7.50au. Exchange with 3-billion L^Husers... $7.60 .. XP home.. 7.50.. Server 2003.. 7.50.. etc. I know of another business (small) that's negotiated with HP for tru64 el-cheapo so I "imagine" VMS (hopefully) wouldn't be t
  • For starters... (Score:5, Informative)

    by metamatic ( 202216 ) on Thursday April 20, 2006 @06:46PM (#15169178) Homepage Journal
    ...don't touch mbox format. Whatever software you choose, make sure it uses Maildir.
    • by p2sam ( 139950 )
      Come on now. If I had mod points, I'd mod you flamebait. :-)
      • Well, it's surprising how many IMAP systems are still based on mbox. It's a complete disaster as soon as you hit them from Apple Mail, which opens multiple threads at once; or try to do filtering of incoming mail into folders while the user might be updating the same folders.

        And then there's the breakage over NFS, the locking problems, the fact that it makes it harder to cluster, and so on.
      • And I would mod you as never having built a real mail system. mbox doesn't scale.

        kashani
    • Re:For starters... (Score:2, Insightful)

      by T-Ranger ( 10520 )
      Or the DB-ish format of Cyrus. Or Netmail. Or a 100-line Perl, one-table MySQL script. Or, fuck, even Exchange or Groupwise. But for the love of all that is holy, not mbox.
    • Re:For starters... (Score:4, Informative)

      by subreality ( 157447 ) on Thursday April 20, 2006 @11:10PM (#15170422)
      ...don't touch mbox format. Whatever software you choose, make sure it uses Maildir.

      You shouldn't state this as an absolute, because it's not. You also need to give reasons WHY to use maildir.

      An example exception case: We had an application where thousands of very small emails needed to be delivered to a single mailbox every minute. They all get picked up every minute by POP, and all messages are deleted every cycle. mbox is *vastly* better in this scenario, because you don't have to create all the files, move them around a few times, stat large dirs every time POP runs, etc. With mbox, all the delivery threads become sequential, so you cut down seek overhead, and the POP read becomes a single large file read, which is far faster. You also cut way down on metadata updates, and caching works better.

      mbox shines in this scenario, and it's not that uncommon. Many customer service apps work like this.

      In the situation of handling many users' email in a scalable system, Maildir is usually better (NFS-safe, concurrent delivery, efficient individual message deletion, etc.), but you've not even considered the other range of things available. MH and database backends come to mind. Each has its good and bad points.
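
      (To make the bulk-drop scenario concrete, here is a sketch using Python's standard mailbox module; the path and message content are invented. The writers serialize on one lock and cheaply append, and the reader drains the whole spool in a single sequential pass:)

        import mailbox
        from email.message import EmailMessage

        BOX = "/var/spool/bulk/events.mbox"   # hypothetical drop box

        def deliver(subject, body):
            mb = mailbox.mbox(BOX, create=True)
            mb.lock()                  # mbox needs the lock, but delivery is a cheap append
            try:
                msg = EmailMessage()
                msg["Subject"] = subject
                msg.set_content(body)
                mb.add(msg)
                mb.flush()
            finally:
                mb.unlock()
                mb.close()

        def drain():
            mb = mailbox.mbox(BOX)
            mb.lock()
            try:
                messages = [str(m) for m in mb]   # one sequential read of one file
                mb.clear()                        # the "fetch all, delete all" POP cycle
                mb.flush()
                return messages
            finally:
                mb.unlock()
                mb.close()
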
      • The original question asked about IMAP. Therefore POP-only scenarios in which mbox makes sense aren't really relevant.
      • Out of sheer curiosity what kind of app needs to dump thousands of tiny emails then fetch them by POP ? Why not just a good old fashioned SQL database ?
        • It's a lot easier to write a good POP client than a good SMTP server, so many customer service apps are written this way. It ends up in the same DB either way, so they write it with the protocol least likely to break things.
      • You shouldn't state this as an absolute, because it's not. You also need to give reasons WHY to use maildir. An example exception case: We had an application where thousands of very small emails needed to be delivered to a single mailbox every minute.

        If you are using mail servers for something other than email, then advice about how to handle email would indeed not apply.

        Your comment is sort of like saying the advice for safely handling uncooked poultry [k12.ga.us] doesn't really apply to the test engineers using the c [straightdope.com]
      • While your scenario probably does need mbox, it's not really typical. The biggest factor is that it deletes all mail in every pickup cycle. Of course that does mean there is mail arriving between the time all mail was fetched and the delete all is requested, which means the IMAP process has to rewrite the mbox file, which means locking out further deliveries while that is taking place. But regardless, it's not the kind of scenario most mail server users are doing. Your setup is probably better off using

    • We use mboxes for our 10^7 users. Indexed mboxes are OK.

      Maildir sucks badly at such scales. Why? Try accessing 100000 files at once or just having 10000000000 files lying around.
  • by ArkiMage ( 578981 ) on Thursday April 20, 2006 @06:47PM (#15169181)
    Multiple servers all looking at the same files? That's what shared storage is all about.. Fibre Channel, iSCSI, etc... Look at GFS from RedHat or any of the others like IBM's recently announced GPFS. Fedora or CentOS for RedHat's GFS on a more reasonable budget.
    • Multiple servers all looking at the same files? That's what shared storage is all about.. Fibre Channel, iSCSI, etc... Look at GFS from RedHat or any of the others like IBM's recently announced GPFS. Fedora or CentOS for RedHat's GFS on a more reasonable budget.

      OpenVMS. Clustering is deeply built into the OS and all relevant libraries.
  • by JeanBaptiste ( 537955 ) on Thursday April 20, 2006 @06:48PM (#15169198)
    cheap, reliable, scalable, and there doesn't seem to be any shortage of them.

    now if I could only figure out how to receive...
  • Scales beyond anything you can be doing, and has the features you're going to want. Check it out: http://stalker.com/ [stalker.com]

    Run it on Linux, it just works...
  • there are many elements to this problem

    o when a user points their mail client at the server, do you want one address?

    if yes
    then you want to invest in equipment/software to load balance the IMAP/POP3/HTTP sessions across the mail servers, and require them all to serve out the same client data
    and
    if you want several/all mail servers to be able to serve out all the accounts, then you want a shared storage backend
    (either file system or database wise)
    else
    simple option: many mail servers all acting exactly the same with exactly t
  • I have been running CommuniGatePro mail servers for years. They have all the features you're looking for and more. The main thing I love about CommuniGate is the fact that I have one application that is the MTA, IMAP server, POP Server, and WebMail all in one. No dependencies. No futzing around with PHP or config files for half-a-dozen different applications. And it also supports clustering. A low-end cluster would consist of 2 machines with a NFS or CIFS/SMB backend for the storage. They also suppor
  • This page might have some ideas. [shupp.org]. Also see this page [shupp.org] for a howto on setting up the mail server.
    • Qmail is bad news because you need 50 different patches from 30 different people. Not to mention the oh-so-famous security guarantee (a) isn't honored by DJB and (b) doesn't apply after you start using Joe's qmail-ldap-mysql-pop3-super-vacation-script patch.

      Don't depend on finding anything in the code either. Everything, I mean _everything_ is hard coded. If you can navigate around regular open source code, it's not going to help for qmail. Everything is a nightmare in there. I encourage you to compar

  • It costs, but since I've been looking into mail servers lately I can let you know Scalix has an enterprise edition [mailto] that runs on multiple servers.
  • by Midnight Warrior ( 32619 ) on Thursday April 20, 2006 @07:30PM (#15169404) Homepage

    If you have a lot of data, then you can choose a scalable system like IBRIX [ibrix.com] and then use stateful load balancers between each of the POP3/IMAP servers. When you get to multiple nodes on the same filesystem, you have two problems: synchronization between nodes and locking.

    Note that the Oracle Cluster File System v2 (OCFS2) has now been merged into the mainline kernel.

  • I would recommend RadWare http://radware.com/ [radware.com] or f5 http://f5.com/ [f5.com] to load balance the traffic to multiple IMAP and/or POP back end servers.

    You can even cluster the load balancers...
  • Layer 4/7 switching (Score:2, Informative)

    by thegrort ( 860868 )
    The best solution I have found is to put generic smtp/imap/pop servers behind layer 4 switching.

    Using this the only thing your servers need in common is backend storage that you can easily mount off NFS etc.
    • This is the only inexpensive, noncommercial-software, non-kludge, non-proprietary way to do it.
      I've built several systems like this, one is close to 100,000 accounts with no problems. This system scales out (i.e. adding another cheap server for more power) as opposed to scaling up with huge servers (price, power demands, price, and price).

      It's also very easy to troubleshoot.

      Do not split your email users between servers and proxy them. Big problems.
  • by Etcetera ( 14711 ) on Thursday April 20, 2006 @08:17PM (#15169657) Homepage
    A properly configured, customized qmail/vpopmail cluster is a beauty to behold. Unfortunately, it takes the better part of a month to get up to speed on how the system works, and it will be many months overall before you really feel "comfortable" with how it works (longer if you're coming solely from a sendmail background).

    That being said, it's also rock-solid, extremely fast when properly configured, and more flexible than you can imagine.

    We currently use a single RAID-10 NFS and MySQL DB system handling the backend, with 5 cluster servers in front of it, each of them able to perform any number of roles. (We had a load balancer in front of them at one point, but it got in the way more than anything else.) A sixth box handles all DNS requests for the servers, and we'll be bringing a 7th up soon to offload some of the spam processing from the three that currently run our asynchronous processing code. The cluster boxes are cheap MicroATX Athlon XP 3000+ machines with 2 GB of RAM. I've seen each box take well over 100 simultaneous SMTP connections without CPU being noticeably affected. Currently, 1 does webmail, 1 does incoming MX, 1 does POP3/IMAP, 1 is for development and serves IMAP to the webmail box, and 1 is running SMTP, 587, and SMTP-SSL.

    When properly administered, I think it beats anything out there. However, if you can't afford the time and 3am-bang-your-head-against-your-monitor agony, I'd suggest one of the other solutions people have mentioned here.

    My $.02
  • My solution (Score:5, Informative)

    by subreality ( 157447 ) on Thursday April 20, 2006 @08:30PM (#15169713)
    It's been a while since I built a mid-size email system, but the last time I did it I used:

    Data stores were maildirs on NetApps
    SMTP servers running Postfix
    IMAP servers running Courier IMAP
    Logins via NIS
    IMAP and SMTP failover by means of load balancers

    The SMTP and IMAP servers get NIS-distributed automounter tables, so everyone's homedir is available everywhere. The load balancers distribute the load out to the SMTP and IMAP servers, and work around any that fail. Mail comes into the SMTP servers, and Postfix delivers to maildirs in the users' homedirs. Any SMTP server can deliver to any user. Users log in with IMAP on the Courier IMAP servers. Again, all homedirs are everywhere, so it doesn't matter which server they hit.

    Adding capacity at any point is easy - you just add more servers of the appropriate type when you need more. IMAP and SMTP are fully redundant. Load balancers usually only operate in failover pairs, but you can add more A records in DNS for more LB pairs if you need it.

    The one sticky point is the data stores on the NFS servers. Adding capacity is easy (just add more servers), but there's no easy way to make this fully redundant. See notes for more.

    So there you have it. That'll scale to a pretty large system, and it's simple to implement. It's not THE MOST scalable system, but if you have to ask, this is probably sufficient for your needs.

    Notes:

    You must use maildirs, not mbox. Maildirs perform very well even on NFS, because there can be multiple simultaneous readers and writers. mbox requires locking.
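
    (A quick illustration with Python's standard mailbox module of why maildir delivery stays safe without locking; the path is made up. Each message becomes its own file, written under tmp/ and then moved into new/, so several SMTP servers can deliver to the same mailbox at once and never rewrite a shared file:)

      import mailbox
      from email.message import EmailMessage

      def deliver_maildir(maildir_path, subject, body):
          # mailbox.Maildir writes the message under tmp/ with a unique name
          # and then moves it into new/, so concurrent writers never touch
          # the same file and no mbox-style lock is needed.
          md = mailbox.Maildir(maildir_path, create=True)
          msg = EmailMessage()
          msg["Subject"] = subject
          msg.set_content(body)
          return md.add(msg)   # returns the file name (key) of the new message

      deliver_maildir("/home/alice/Maildir", "hello", "delivered without an mbox lock")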

    With NetApp, or Red Hat Cluster Server, or any other cluster NFS server, you can make the head end redundant, so your disk shelf becomes the last single point of failure. If you run RAID 1+0, you can have all the disks mirrored across two shelves, so at least the hardware is completely redundant. However, there are still rare, but possible failure modes. STONITH is, ultimately, a problem that has no perfect solution. (Look it up if you're not familiar with STONITH.)

    NetApp makes very reliable NFS servers. Even in single head configurations my uptime experience has been incredibly good. Dual head is even better. But they're god awful expensive. There are other ones you can buy at all different price points. Clustered file systems like Coda sound really sexy, but they're still half baked. Lustre http://www.lustre.org/ [lustre.org] might work well, but it wasn't available when I last did this, so I can't say. Choose what's appropriate to your needs and budget.

    I used NIS. These days LDAP is more fashionable. Make your LDAP server redundant of course.

    You need redundant networks. In the simplest case, put half of each type of servers (IMAP, SMTP, LB, NFS) on two different switches.

    I never bothered with POP, but you can get POP servers for maildirs, too.

    Configure your load balancers to balance per session - IE, if a user creates multiple IMAP connections, they all go to the same server. This helps keep down the number of NFS mounts, LDAP requests, etc.

    Software opinions: I like Postfix and Courier. They're simple, robust, flexible enough for most situations, and perform very well. Cyrus also has a good following in the large-scale arena, but does things differently. Qmail's non-OSS license prevents people from releasing versions that strip out djb's quirky way of doing things, which is why I left it for Postfix (and never looked back). Sendmail doesn't suck as much as it used to, but I haven't really seen why I SHOULD use it these days either. Any of these can be made to work, though, so use whatever you're comfortable with.

    Tip for any email system: outright reject (IE, don't accept at all, don't send to someone's spam folder) as much spam as you can. If 90% of your mail is spam, and you reject the 90% most-likely-spam (delivering the other 10% more questionable stuff to a spam folder), you've just increased your mail performance and disk space by > 5x.

    Good luck!
    • > You must use maildirs, not mbox. Maildirs perform very well even on NFS, because there can be multiple simultaneous readers and writers. mbox requires locking.

      Locking is obviously not an issue with mail. A mailbox is accessed by one user -- its owner -- and the local delivery agent. Locks are fine here. There is indeed some concurrent access, but as long as mail mostly sits idle in the file waiting, it's not a problem.
      • This changes when you have multiple LDAs and MUAs sharing an NFS store. Locking matters a lot. On mbox, if you have a user that has a 500MB mailbox, and they delete the first message, the whole file has to be rewritten. While that's happening, all your SMTP servers have to queue mail for that user, because the mbox is locked. When you have a user sitting there incrementally deleting messages one by one off the top of their mail, at the same time as spammers are delivering mail to the end of the file, th
  • by jafo ( 11982 ) * on Thursday April 20, 2006 @09:00PM (#15169849) Homepage
    The typical way to set this up is:

    High availability redundant NFS servers for storing the mailbox data and user information.

    One or more machines mounting this file-system for handling POP, IMAP, and SMTP from accounts and mailfolders off the NFS server.

    Webmail can be tricky because you need to make sure that either users always hit the same machine for webmail during a session, or session information is shared among the cluster. LVS systems can handle either of these scenarios, so it's not a problem, just something you have to be aware of.

    LVS systems up front, again running High Availability, which do load balancing and automatic removal of failed servers. These are the machines that have the IPs which your customers contact; requests then get spread across the real machines in the middle layer above.

    This sort of solution works really well, and we have deployed it for customers of ours with good results. You can get started for only $5k to $10k worth of hardware and if you're building this from scratch it will probably only take you around 100 hours. If you have experience with this sort of setup it can take as little as 10 to 20.

    If $5k to $10k for hardware is out of your budget, you probably shouldn't be looking at this sort of solution. Individual stand-alone servers or even a single pointy box, possibly with high availability, is probably where you want to be in that case.

    linux-ha.org is the place to go for High Availability software on Linux.

    Sean
  • Scalemail looks like it's a good start. http://www.inoi.fi/open/trac/scalemail/ [www.inoi.fi] "A scalable (but not fully highly available, atleast not yet) virtual domain system for handling mail for many users, based on Postfix, LDAP and Courier-IMAP."
  • by jjgm ( 663044 ) on Thursday April 20, 2006 @09:51PM (#15170057)
    I recently designed and built a mail system for a six-digit ISP userbase.

    Before I feed you the design, let me tell you a *crucial* concept that you must carry with you at all times.

    EMAIL SYSTEMS ARE PROTOCOL SPEAKERS BETWEEN USER DIRECTORIES AND STORAGE.

    Read that and inwardly digest it before you even start to design your system.

    For the design, first, I'm going to proselytize a particular piece of software.

    DOVECOT IS THE FREE POP/IMAP SERVER OF THE FUTURE. It leaves the Cyrus codebase rotting in the slime. It already kicks Courier's butt in performance and ease of deployment. It's beautifully coded; it has the most elegant authentication architecture; it's exceptionally fast. It isn't complete yet but it's featureful and stable enough that I have successfully deployed 1.0-betas into production. http://www.dovecot.org/ [dovecot.org] for the last IMAP server you'll ever need.

    Here is the design:

    1 x OpenLDAP 2.3 master server
    2 x OpenLDAP 2.3 read-only replicas
    2 x world-facing mail servers running Postfix 2.3
    4 x mail scanning servers running amavisd-new 2.3.3, ClamAV, SpamAssassin, Sophos SAVI and Sophos PMX-ENGINE. LMTP in from the mail front-ends; ESMTP out to the mail storage.
    2 x mail storage front-ends running Postfix 2.3 and Dovecot IMAP/POP3 1.0-beta. These servers also run mysql for amavisd-new quarantine and squirrelmail user options. Actual storage is over NFS to the NetApps. Using Dovecot's Sieve-based delivery agent for server-side filtering.
    2 x Squirrelmail webmail servers. We have our own skin, and our own sqm plugins as the user interface to our various system options - which are all in LDAP. We have integrated MailZu into sqm as a quarantine view/release interface.
    2 x NetApp FAS3020c heads w/4TB NFS storage allocated to mail.

    Everything is load-balanced using foundry hardware LBs. It's very high-throughput and very reliable. It's also easy to monitor (we're using Nagios).

    Base OS is Debian Sarge with applicable backports. I'd prefer FreeBSD but this happens to be a Debian shop, and I wasn't out to change their world, just their mail system.

    Probably the most borderline item is mysql's performance as a quarantine DB; however much RAM and index/query tuning we throw at it, I'm yet to be satisfied with InnoDB's performance on this 100GB+ INSERT-heavy database.

    If I could change one thing about it, it'd be to use the extremely pretty and surprisingly good value @mail (a commercial choice) rather than SquirrelMail. I'd also consider Fedora Directory Server over OpenLDAP, but it wasn't looking ready for this design at the time.

    I have to say there is some bad advice in this thread; now for the hatchet:

    Cyrus: difficult to configure, doesn't support shared storage, horribly ugly codebase, and has some nasty-ass failure modes.
    Qmail: stale, poorly integrated MTA software from the bitchiest developer in town.
    Sendmail: doesn't scale. Even the developers think so, which is why Sendmail X is a rip-off of postfix.
    Communigate Pro: if I don't get to futz with the source for integration and value-add, I'm not interested.
    GFS/GPFS: you don't need the complexity or interesting failure modes of shared-block-storage filesystems. Stay away.
    Linux NFS: isn't reliable enough. We've had problems with data corruption to Linux NFS, both kernel and userland. Right now the only NFS server implementations I trust are NetApp's and Solaris's. No doubt the Linux one can/will improve, or already has, but trust is a hard thing to build :). Plus NetApps are shiny, marvelously reliable, and I love their support.
    • GFS/GPFS: you don't need the complexity or interesting failure modes of shared-block-storage filesystems. Stay away.

      I'd be curious about the "interesting failure modes" you mention. Do you have (bad) experience with this? I've deployed this technology multiple times with great success so far. It appears from your design you also try to avoid the "single point of failure". That's why with FC shared storage and GFS I generally design in more than one SAN/RAID unit and keep them sync'ed. GFS in my experience t
    • -> Communigate Pro: if I don't get to futz with the source for integration and value-add, I'm not interested.

      CGP has a well-documented API for all kinds of third-party integration. I have successfully integrated ClamAV and SpamAssassin with CommuniGate.....so I don't really understand the need for source here. In fact, CommuniGate's flexibility in this area (the ability to interface with other applications) makes it easier to work with than any other mailer I've worked on before. What kind of value-add/
  • I started a free email service to compete with Gmail about 2 months after Gmail launched. For those interested, the name is Nerdshack.com.

    At first I used Postfix and Cyrus, but I found it to be a nightmare when you're talking about more than 50k accounts.

    What I wanted was an email platform that integrated with ClamAV, DSPAM, supported SPF, Greylisting/Blacklisting/Whitelisting, and was all controlled from a MySQL database. I also wanted it to support SSL, and clustering.

    Frankly I didn't find anything. So I wr
    • Wow, that's really interesting. I work at FastMail.FM and we've built our system on Cyrus as the backend, which has lots of advantages but also some big disadvantages.

      We use open source software throughout our system and contribute back most of our changes (where they actually have some utility outside our little world; 50-line perl programs that just query our database for status information need not apply - and we wouldn't want to inflict our web framework on the world. It certainly doesn't need another
  • Think of redundancy. What is it supposed to achieve? One goes down the other keeps going...

    Now 3 servers would be a waste. Think about it. What are the chances a casing would FAIL?

    So let's put 3 servers in one box. Data has to go onto each of the 3 disks. Instantly. There's so much IO involved. Should each email coming in have to go through the tcpip stack, through the kernel API levels, through the HAL out the driver, out the network card, through the switch and all the way back down to the disk? Using too m
  • http://www.hserus.net/mailboxes-srs-inboxevent2004.ppt [hserus.net]

    http://72.14.203.104/search?q=cache:v5XWBwgqXQcJ:www.hserus.net/mailboxes-srs-inboxevent2004.ppt+inboxevent2004.ppt+site:hserus.net&hl=en&ct=clnk&cd=1 [72.14.203.104]

    That system currently handles over 41 million users, serves up POP3, IMAP, Webmail, spam and virus filtering for paying customers, and deals with over half a billion messages per day.

    Every service is on physically separate hardware: MX, outbound MTAs, content filters, frontends....
  • Just use gmail for domain.
    Definitely more scalable than anything that you can come up with.
  • Mail:Toaster (Score:3, Informative)

    by bemis ( 29806 ) on Friday April 21, 2006 @06:47AM (#15171719) Homepage
    you should check out mail toaster from tnpi - it's open source - built on/against FreeBSD - but a creative soul can pretty easily get it going on Linux (if Linux is that important to you).
    http://www.tnpi.biz/internet/mail/toaster/ [tnpi.biz]

    it's qmail/imap based and scales quite well in my experience.
  • http://www-uxsup.csx.cam.ac.uk/~fanf2/hermes/doc/talks/2004-02-ukuug/ [cam.ac.uk]

    is how the University of Cambridge do it....

    lots of nice details in there
  • I am currently researching DBMAIL [dbmail.org]. This is a GPLed email system that permits one to store mail in an SQL database. The advantage of this is that all writes are transactional, permitting
    overview [dbmail.org]
    • Scalability

      Dbmail is as scalable as the database system that is used for the mail storage. In theory millions of accounts can be managed using dbmail. One could, for example, run 4 different servers with the pop3 daemon each connecting to the same database (cluster) server.

    • Manageability

      Dbmail is based u
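
      A toy sketch of the scalability point quoted above, using Python's built-in sqlite3 as a stand-in for the shared database cluster (this is not DBMAIL's actual schema, just an illustration of mail-in-SQL where every delivery is one transaction and any number of POP3/IMAP frontends can query the same store):

        import sqlite3

        conn = sqlite3.connect("mailstore.db")   # stand-in for the shared SQL cluster
        conn.execute("""CREATE TABLE IF NOT EXISTS messages (
            id      INTEGER PRIMARY KEY,
            mailbox TEXT NOT NULL,
            headers TEXT NOT NULL,
            body    TEXT NOT NULL,
            seen    INTEGER DEFAULT 0)""")

        def deliver(mailbox, headers, body):
            with conn:   # one transaction: commits on success, rolls back on error
                conn.execute("INSERT INTO messages (mailbox, headers, body) VALUES (?, ?, ?)",
                             (mailbox, headers, body))

        def pop3_list(mailbox):
            # What a POP3 LIST would need: message ids and sizes for one mailbox.
            return conn.execute("SELECT id, length(body) FROM messages WHERE mailbox = ?",
                                (mailbox,)).fetchall()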

  • We've been using Sun's Java Enterprise System (formerly SunONE formerly iPlanet formerly Netscape blah blah) Messaging Server for the last few years for 25,000+ (and growing) users. It's proven to be very reliable and is extremely flexible. For scalability and added reliability look into their Messaging Multiplexor and High Availability options. Other significant advantages are the availability of well-written and thorough documentation and Sun's support team.
