
Should Servers be Mono-Process or Multithreaded? 96

An anonymous reader wonders: "How would you design the fastest possible Linux-based server application today? A few years ago, the thinking was that multi-threading was not the way to go; instead, high-performance servers used an event-driven, mono-process model (consider lighttpd and haproxy). However, things have changed. Today's CPUs have dual cores, and over the next few years the core count is only likely to increase. Also, the 2.6 Linux kernel has made multi-threading much more efficient. So I'm wondering: does Slashdot think that modern high-performance server software should be designed to be multi-threaded, or does it still make more sense to use an event-driven, mono-process architecture, despite the advances in Linux 2.6 threading and the arrival of multi-core CPUs?"
  • info (Score:5, Informative)

    by illuminatedwax ( 537131 ) <stdrange@alUUUum ... inus threevowels> on Wednesday July 12, 2006 @07:54PM (#15709648) Journal
    Check out the C10K page [kegel.com] for a very detailed discussion about this.
    • outdated info (Score:2, Informative)

      by Anonymous Coward
      The last time this page was updated was November 2003; since then there have been two major revisions to Java and at least one major revision to the Linux kernel (as well as changes to FreeBSD, OpenBSD and Windows)... and in all cases these revisions introduced changes to address scalability/concurrency. This page is incredibly out of date.

      Also... the opening statement and its bias toward Unix "for obvious reasons" doesn't lend it much credibility. I've tuned high-volume systems on both platforms...
      • Yet the methods contained therein have yet to be put into widespread use. At least use this as a starting point, which most people may not have known about.

        Does Windows have the kind of kernel control necessary to implement things quickly like on this page? Serious question; I remember discovering that Windows' support for asynchronous process communication left me severely wanting.
        • Windows has had correct async support for sockets, files, and pipes ever since WinNT. Pipes are usually the method of choice for IPC.
        • Re:outdated info (Score:2, Informative)

          I'm the original anonymous coward... wish it hadn't posted as such.

          Windows support for IPC is actually very robust; shared memory and semaphores are about as fast as you can get, and both exist on Windows and *nix, just under different names.

          A lot of people either don't know or forget that the NT->XP->2k3 kernel owes a lot to the VAX/VMS group from Digital, who were snatched away to build the NT kernel. There seems to be this illusion that XP is the bastard stepchild of Windows 1.x, and that is not true...
          • I want my asynchronous signals though!!
            • If you're asking for a queue arrangement, how about shared memory for data transfer and an IO completion port [microsoft.com] for synchronization? IOCPs are great for async IO as well, and can schedule a set number of active threads, even taking into account the number currently sleeping.
          • IIRC the graphics subsystem *did* run in user space in NT up to and including v3.51. It was NT4 that put it in the kernel, allegedly on performance grounds, with the inevitable loss of stability. Not that I ever noticed any performance gain when I "up"graded from 3.51 to 4.0.
          • Yes, OpenVMS is very robust (if a bit arcane), but its multithreading support is abysmal. We had a project here to remove the multithreading from OpenVMS processes, and it provided unbelievable performance improvements. QIO is the way to go in OpenVMS.

            To be fair, Windows handles threads much better.
      • Re:outdated info (Score:3, Interesting)

        by Phillup ( 317168 )
        Also... the opening statement and its bias toward Unix "for obvious reasons" doesn't lend it much credibility.

        Eh?

        Here is what I see:
        The discussion centers around Unix-like operating systems, as that's my personal area of interest, but Windows is also covered a bit.


        Perhaps the bias, and lack of credibility, is elsewhere...
  • combination (Score:3, Informative)

    by TheSHAD0W ( 258774 ) on Wednesday July 12, 2006 @08:02PM (#15709673) Homepage
    If you're going to be serving more than a few connections at a time, it's easy for threads to eat monstrous amounts of resources. It's better if you can handle network connections via a single thread. On the other hand, moving other tasks to separate threads can help. For instance, you will probably want to run the UI with a separate thread (yes, even if it's text-only), and it's useful to be able to split file operations among several threads and let the OS optimize disk access.
    • Re:combination (Score:4, Informative)

      by agent dero ( 680753 ) on Wednesday July 12, 2006 @08:17PM (#15709731) Homepage
      On what operating system? With FreeBSD's SMPng [freebsd.org] project, they've made most of the network stack (from my understanding) SMP-safe, and the kernel now supports pushing multiple threads across multiple CPUs (like Solaris, XNU, and Linux).

      it's easy for threads to eat monstrous amounts of resources

      Don't you mean forked processes? The advantage of threads is that they're lightweight and use shared memory (deadlocks, hooray!); forked processes are heavy because they need their own memory, etc.
      • AIUI forked processes on Linux have a very lightweight memory model using copy-on-write: the memory for each process is mapped to a single block of memory until one of them tries to write to it, at which point that page is duplicated for the other process(es).
        • That's good - Windows does copy-on-write too. But in the real world, how much of the memory used by a process will be written to? I'd say quite a lot of it. Executable code and static data (strings etc.) will be effectively read-only, as will startup initialisation, but past that, and especially in a dynamic process that is a small engine working with a lot of dynamic data (i.e. a web server), you'll see a lot of private memory used for each process.

          On the other hand, with threads you'll still use a...
          • But in the real world, how much of the memory used by a process will be written to? I'd say quite a lot of it. Executable code and static data (strings etc.) will be effectively read-only, as will startup initialisation, but past that, and especially in a dynamic process that is a small engine working with a lot of dynamic data (i.e. a web server), you'll see a lot of private memory used for each process.

            In my real world very little of the process memory is written to.

            Specifically, I'm talking about a w...

      • The rule of thumb for the number of network threads is (# of CPUs) + 1. Much more than that and you lose performance to context switches.
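        As a concrete illustration (a hedged sketch; worker_threads is a hypothetical helper, but sysconf(_SC_NPROCESSORS_ONLN) is the standard way to count online CPUs on Linux):

            /* Sketch: size a network-thread pool from the CPU count. */
            #include <unistd.h>

            long worker_threads(void)
            {
                long ncpus = sysconf(_SC_NPROCESSORS_ONLN); /* online CPUs */
                if (ncpus < 1)
                    ncpus = 1;      /* fall back if the count is unavailable */
                return ncpus + 1;   /* rule of thumb: (# of CPUs) + 1 */
            }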
    • Re:combination (Score:3, Insightful)

      If you're going to be serving more than a few connections at a time, it's easy for threads to eat monstrous amounts of resources. It's better if you can handle network connections via a single thread.

      I disagree. While using a single thread per connection is definitely a braindead way to go, making I/O operations asynchronous and handling callbacks via a thread pool is definitely the fastest and most efficient way to build a scalable daemon today. And once you get used to the idea, it is even much easier...

      • Re:combination (Score:3, Informative)

        by ILikeRed ( 141848 )

        disclaimer: I have no experience doing this in Linux

        That's because it's only needed (as a performance hack) on Windows. Forking is cheap on *nix.

        "Those who don't understand UNIX are doomed to reinvent it, poorly."
        --Henry Spencer

        Get ESR's "The Art of UNIX Programming" - or, if you're cheap, read it online. Chapter 7 [catb.org] might be a good place for you to start.

        • This is not about how cheap threads or forking are. It's about how many context switches you're willing to waste when you don't have to.
        • Forking is cheaper than on Windows, not cheap (define cheap, lol).

          If you have to fork to serve a new request, regardless of how efficient Unix makes it, it will still require a fair amount of resources and kick off some context switching. In addition you then have to set up the network connection and so on. It's a lot more expensive than a single process running and sharing work within itself. (You can see this in action by comparing CGI-based webservers with modular ones; the CGI ones are a tenth of the performance...)
    • How about this?

      Either use asynchronous and non-blocking calls throughout, OR use multiple threads.
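      For the first option, the key step is putting every descriptor into non-blocking mode; a minimal sketch using the standard fcntl(2) calls:

          /* Sketch: switch a socket to non-blocking mode. */
          #include <fcntl.h>

          int set_nonblocking(int fd)
          {
              int flags = fcntl(fd, F_GETFL, 0);  /* current status flags */
              if (flags < 0)
                  return -1;
              return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
          }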
  • Of course, it would help if you asked a meaningful question:

    Absent context, servers should be both, and neither, and something we haven't invented yet.

    What is the server for? Is it serving a database to 27 hojillion simultaneous users? Serving a static intranet to 5? Filtering mail for AOL.com?

    One suspects that each problem will have an optimum solution... so, as the first post noted, nothing to see here.

    err!
    jak.
    • is it serving a database to 27 hojillion simultaneous users?

      I'd say any time there's over 27 brazilian simultaneous users.

    • by Anonymous Coward
      What is the server for? Is it serving a database to 27 hojillion simultaneous users? Serving a static intranet to 5? Filtering mail for AOL.com?

      Bingo.

      I recently replaced a server for a client. His previous one was just too slow, and his response was to keep replacing it with a new one with a faster CPU and more RAM. His result? Only marginal increases in speed.

      I profiled the machine as he was using it, noticed that the bottleneck was disk access, and replaced the IDE drives he was using with a caching...
      • To quote Michael Abrash: profile before you optimize.

        I had a very similar experience with a client in 1997 or so. They had an old file server: dual Pentium 133's running Windows NT 3.51 with 6 hot-swap SCSI drives in RAID-5. It would take approximately 45 minutes for a designer to open a single file (about 1GB in size). Their previous consultants told them that their server was hopelessly out-of-date (which it was) and that they needed a new file server: $15,000, please.

        I came and did some profiling...

  • fastest way (Score:5, Informative)

    by r00t ( 33219 ) on Wednesday July 12, 2006 @08:25PM (#15709762) Journal
    Well first, you probably should keep things simple and just buy nice hardware. Most servers sit idle most of the time anyway. If you truly do need the performance though...

    Have 1 process per node. I mean "node" in a NUMA sense. 64-bit AMD systems have one node per chip package. Other PCs (except exotic stuff) have one node for the whole system. Lock your processes to separate nodes, so that they all get local memory. If you don't do this, at least remember to use the new system call for moving pages from one node to the other. (eh, "move_pages" if I remember right -- see unistd.h in the kernel source)

    You'll need extra threads to do disk IO. Not counting those: on each node, have at least 1 thread per bottom-level (usually L2 or L3) cache, but not more than 1 thread per virtual core (hyperthreading thing). If you go with 1 thread per physical core but have virtual cores (hyperthreading) enabled, lock your threads to virtual cores that don't share physical cores.

    A lot of this should be configurable. Hopefully you'll make an easy way to automatically determine the best configuration, writing out the appropriate config file so that manual config hacking is not required to get the best performance.
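    A minimal sketch of the pinning step, assuming the Linux-specific sched_setaffinity(2) call (the move_pages NUMA migration is omitted):

        /* Sketch: pin the calling thread to one CPU (Linux-specific). */
        #define _GNU_SOURCE
        #include <sched.h>

        int pin_to_cpu(int cpu)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            /* pid 0 means "the calling thread" */
            return sched_setaffinity(0, sizeof(set), &set);
        }

    A config file would then map each worker to a CPU number, so the auto-tuning step just writes out those assignments.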
    • Disable the IRQ balancing. Direct IRQs to the correct CPUs so that you don't send/receive packets on a different CPU from where the app will be running. This has been proven to help with network cards. The same might apply to disk.
  • Debugging (Score:3, Interesting)

    by Spazmania ( 174582 ) on Wednesday July 12, 2006 @08:28PM (#15709772) Homepage
    Debugging a preforked C program like Apache 1.3 is still much easier than debugging a monolithic process with non-blocking I/O like Squid.
    Debugging a monolithic C process like Squid is still much easier than debugging multithreaded software like Microsoft Windows.

    This isn't likely to change regardless of how many processors you throw in the machine.

    The rules change when you move to something like Java where cross-thread contamination of the data structures is relatively difficult. But then if you're talking about Java then why are you seriously considering hardware performance issues?
    • Re:Debugging (Score:3, Insightful)

      by samkass ( 174571 )
      > But then if you're talking about Java then why are you seriously considering hardware performance issues?

      Is this intended as flamebait? Performance in Java is as important as performance anywhere else. Most of the performance in any complex system depends more on the algorithms, and on how easy it is to debug a complex, fast algorithm, than on any inherent speed advantage of a particular language. I'd say the question of which network I/O and threading design maximizes performance is at least as important in Java as in any other language.
      • The original poster distinctly said hardware performance issues. Algorithm efficiency is something that I'd probably characterize as software performance. I think the OP was trying to point out that, if your hardware is fast enough to run a Java VM comfortably, it'll be fast enough for you not to worry about the performance impact of "low level" design decisions.

        • This topic is about whether you should write single-threaded or multi-threaded programs for the highest performance. If for your application there is a cost of one over the other that grows faster than linearly as the number of users scales upwards, that cost would soon eclipse the difference in performance due to implementation language, which is probably a percentage speedup that holds pretty constant as the number of users grows.
        • Yeah, I worded it badly. The point I was trying to make is that if you aren't talking about something written in C/C++, then the performance issues associated with threaded/not-threaded are the least of your concerns. It's only when you get to a fairly low-level language like C that they start to make a noticeable difference.
          • I'd be curious to see actual data that backs this up. Link?
            • Actually, I'd be curious to see data on this too. I haven't really heard anything of note about preforking or single-threaded non-blocking IO outside of the C/Unix universe. It strikes me that you only use those methods when you're trying to squeeze the last ounce of performance from the hardware. Other languages (like Java) generally have different priorities, and the programmers using those languages generally have priorities in line with their language's strengths.

    • Re:Debugging (Score:1, Interesting)

      by Anonymous Coward
      You are right... for a C/C++-based program. For Erlang (or even Oz) I would disagree. Ejabberd and Tsung are good examples of what Erlang is capable of in terms of simultaneous users and performance. Check it out.
    • Debugging a monolithic C process like Squid is still much easier than debugging multithreaded software like Microsoft Windows.


      Why would you want to debug Microsoft Windows?
  • Multiple processes (Score:4, Informative)

    by Alex Belits ( 437 ) * on Wednesday July 12, 2006 @08:38PM (#15709815) Homepage
    You almost never have fewer performance-critical processes (say, web server + database server processes) than CPUs, so for most applications, in situations where you don't have a shared context that you need read-write access to from all requests, use multiple processes. If you do have such a shared context, weigh the overhead of synchronization between threads against having no simultaneous access. Also take into account that socket I/O is not synchronous -- a process may have handed a buffer to the kernel and switched to processing another request while the kernel is still sending the data from that buffer. The same happens with receiving data -- it may have arrived already, and the kernel fills the input buffer while the server process is not looking at it. If the data never arrives faster than it can be processed, you gain no benefit from trying to process it in parallel -- your threads will just sleep more, and performance will remain the same.

    On systems with a crippled scheduler and VM, threads are very efficient compared to everything else because your application becomes its own scheduler, which reduces the total physical footprint and the amount of cache invalidation. With a better scheduler and VM, it makes more sense to rely on the OS insulating processes and scheduling their I/O, so the multiple-process solution becomes more efficient in the absence of an application-level need to share data between requests.
  • by KidSock ( 150684 ) on Wednesday July 12, 2006 @09:03PM (#15709944)
    You forgot multiprocess. Like anything in software, the answer is "it depends on the application". But one of the most overlooked and frequently very important factors that affects performance is cache locality. If the CPU has to fetch something from main memory (or, heaven forbid, drudge it up from disk), the program has to wait. That wait time is often much, much greater than the execution time of the target code. Aside from simply writing small code (that only gets you so far), one way to get better cache locality is to break up your processing into a pipeline. Mail servers frequently do this. One process will accept connections, do some sanity checking, and write the message to another process. The next process juggles addresses for routing and writes the message to another process. That process might then work on delivery, either locally or remotely. What happens (or what is supposed to happen under high load) is that one process becomes hot and processes as many messages as it can until the buffer to the next process is full. Then the next process runs, processing all of those messages until it either runs out of stuff to process or can't write anything more to the next process in the pipeline. If you have multiple cores / CPUs this scales pretty well too.

    But again, "it depends on the application". The above pipelining method only performs well if you're processing items in an assembly line fashion. If you're an HTTP proxy server you wouldn't want that model. You would probably want a single process libevent type of thing. I have some code that doesn't use either of those models. It's a multiprocess model but event driven with *everything* in shared memory. It's very close to a multithreaded model but I needed security context switching. Also, contrary to popular belief threaded servers are slower than equivalent multiprocess servers. So in-general, the benifit of a multithreaded server is pretty much just about convenience for the programmer. Since you can acheive the same effect by just creating an allocator from a big chunk of shared memory mapped before anything is forked, there's very little reason to use threads at all.
  • by thebiss ( 164488 ) on Wednesday July 12, 2006 @09:11PM (#15709991)
    Since you didn't say what kind of server you're building, I'm going to assume:
    - that you're building a custom-purpose, client-server or message processing application,
    - it needs to be highly parallel to be efficient
    - the language is C, C#, or C++, and not Java (process-based servers in Java?)

    I have done this before, using both processes and threads, for the same application. Consider the impact of application faults on your design, and then consider how hard it will be to create thread-safe code.

    o A highly multithreaded server, where threads are performing complex and/or memory-demanding tasks, will be susceptible to complete failure of all running jobs on all threads if just a single thread SEGFAULTs. And despite your best testing efforts, complex code (1M+ lines) will at some point, somewhere, fail.

    o Threaded code must be thread safe. Static variables, shared data structures, and factories all previously accessed through a singleton must now be protected with guard functions and semaphores. Race conditions need to be considered. Design for this up front; it will be much harder to add later.
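    For instance, a singleton accessor that was safe in a process-per-job design needs a guard once threads share it (a generic pthreads sketch, not this project's actual code; struct config and config_create are hypothetical):

        /* Sketch: guarding a formerly unguarded singleton with a mutex. */
        #include <pthread.h>

        struct config;
        struct config *config_create(void);   /* hypothetical constructor */

        static struct config *instance;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        struct config *config_instance(void)
        {
            pthread_mutex_lock(&lock);        /* serialize first-time init */
            if (instance == NULL)
                instance = config_create();
            pthread_mutex_unlock(&lock);
            return instance;
        }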

    The Project

    I worked on a team which added an asynchronous processing engine to a web application. The engine was responsible for performing memory- and time-intensive financial analysis and reporting for 16,000 accountants, so that they could close a large company's financial books. Unlike a webserver triggered by on-line end users, this engine is triggered by events in the company's financial database: once the database raises the "ready" flag, the engine begins running as many reports as it can, as fast as it can, on behalf of the 16K users. The analysis and report code was 2 million lines of C++, running on AIX.

    Process implementation

    The initial implementation used processes. A dispatcher job monitored the database for the ready flag, and then forked children of itself to analyze slices of the data, and generate the reports. One child job was used for each analysis and report pair, and the manager controlled how many jobs ran in parallel, maintaining a scoreboard of which jobs succeeded, and which failed.

    Due to the complexity of the system, failures (core) occasionally occurred. The monitor would record this, retry the failed analysis up to 3 times, and keep a uniquely named core file of the event. Other analysis reports would continue to be generated, otherwise unharmed by the crashed child. Approximately once every 90 days, the development team would collect the few cores generated, use the gdb/xldb debugger to determine the cause of failure, and correct the fault.

    The downsides of this? The solution was slowed because it couldn't re-use resources like database connections (they were destroyed with each process), and more memory was used than needed. DB2 caching helped somewhat, but potential performance improvements remained.

    Threaded implementation

    In a large company, there are IT standards, and one of the standards at my company is that applications shall never, ever, ever fork(), even if running on a large dedicated machine. After losing the fight against this, my team re-architected the report engine. Largely
    the same as the previous, the new engine waits for the "ready" signal, and then spawns pthreads (POSIX threads) as workers to analyze the data and generate the report. In theory, it was robust.

    The alpha version of this solution immediately failed (cored) during testing. We had neglected to identify the less obvious non-thread-safe code in the application, and had failed to identify several race conditions. Unlike previous failures, these faults were total: a SEGFAULT in code on one of 20 threads would halt the entire application. And the corefile generated was now huge - it contained a snapshot of memory for all 20 running jobs, instead of just the one of interest.

    Extensive root-cause analysis, redesign, and restart management solved this, and the current version is as robust as, and a good bit faster than, the previous one. At a significant price.

    • Threaded code must be thread safe. Static variables, shared data structures, and factories all previously accessed through a singleton must now be protected with guard functions and semaphores.

      Not necessarily. It is entirely possible to design code that is threadsafe without using locks for data access. Instead, in many cases you can ensure that, due to program logic, variables you are accessing can only be used by a single thread (e.g. by only allowing a single thread to work on a connection's state at once...
    • >> In a large company, there are IT standards, and one of the standards at my company is that applications shall never, ever, ever fork(), even if running on a large dedicated machine.

      May I ask - Gods, man ! Why oh why ?
    • by Anonymous Coward
      A highly multithreaded server, where threads are performing complex and/or memory-demanding tasks, will be susceptible to complete failure of all running jobs on all threads, if just a single thread SEGFAULTs.

      In Windoze and OS/2 (am I showing my age here? :) it is possible to trap these types of exceptions on a per-thread basis. You can then create a "manager" that does effectively what you had in the multi-process scenario. The exception handling code does whatever cleanup it can, and then triggers some action that will cause a thread to be spun up if an existing one bites the dust.
      • In Windoze and OS/2 (am I showing my age here :) it is possible to trap these type of exceptions on a per thread basis. You can then create a "manager" that does effectively what you had in the multi-process scenario. The exception handling code does whatever cleanup it can, and then triggers some action that will cause a thread to be spun up if an existing one bites the dust.

        However, if the dying thread has already run amok over the heap, that's not going to help much. Instead, you are likely to get other...

    • Since you didn't say what kind of server you're building, I'm going to assume:
      - ...
      - the language is C, C#, or C++, and not Java (process-based servers in Java?)

      Standard API package java.nio, serving you since 2002.

      Introduced in J2SE 1.4 (Merlin) after an almost unanimous vote in the Java democracy: http://jcp.org/en/jsr/detail?id=51 [jcp.org]
  • Mono-process is synchronous IO (the select system call), which blocks, whereas multithreading is asynchronous and non-blocking. You can get far more throughput with the same amount of memory with threads, and be able to leverage multi-core, multi-processor and distributed processing architectures.

    mb
  • Real life examples (Score:5, Informative)

    by sdfad1 ( 880883 ) on Wednesday July 12, 2006 @10:38PM (#15710365) Homepage Journal

    I cannot speculate, but I can look at what people are doing today. One thing that I have noticed is the widespread research into, and the compelling arguments for, massively multithreaded programming techniques. See Erlang, for example: it was designed right from the beginning for this sort of problem - high-throughput, high-reliability, high-uptime telephony networks.

    As a rough benchmark, someone's got this [www.sics.se].

    That's an order of magnitude increase in "performance" (depending on what you mean by "performance"). I thought I'd do a casual informal test of my own, with a decent static file size (instead of the 1 byte used in that benchmark):

    Server Software: Yaws/1.56
    Document Length: 402 bytes

    Concurrency Level: 500
    Time taken for tests: 8.480740 seconds
    Complete requests: 5000
    Requests per second: 589.57 [#/sec] (mean)
    Time per request: 848.074 [ms] (mean)

    Server Software: Apache/2.0.54
    Document Length: 402 bytes

    Concurrency Level: 500
    Time taken for tests: 29.787216 seconds
    Complete requests: 5000
    Requests per second: 167.86 [#/sec] (mean)
    Time per request: 2978.722 [ms] (mean)

    Output edited to get past lameness filter.

    Err crap, I could have sworn the first time I tried this, when Yaws was first installed, its performance was worse! Oh well, perhaps it's something I've inadvertently done since then. Could have been due to my computer reboot (this is a desktop PC). It seems I've proven my point, although I was trying to disprove it. Standard caveats regarding benchmarks apply. Both servers are default Ubuntu installs with no configuration changes - I didn't compile anything manually.

    Additionally it has also been noted that:

    > Linus Torvalds: 100k threads at once is crazy

    Using Posix-style threads, I'd have to agree. Posix threads were just not designed with this level of usage in mind. Which is why concurrent languages like Erlang and Mozart/Oz don't use Posix threads.

    Well, that's where it could be headed anyway - a multiprocessor system with green threads (i.e. simulated threads, like Java ones) implementing massive concurrency and redundancy. Some prototypes for systems like this are already available, and being used. Cheers.

    • The important thing about Erlang is it doesn't show threads to the user. Everything you do in Erlang is asynchronous and based on message passing. Erlang processes do not share any state with each other, and often do not have much state of their own (state lives in messages in a well-designed Erlang program). This means it's much easier to reason about your code, which means it's easier to debug, and easier to write working code.
    • Concurrency Level: 500
      Time taken for tests: 8.480740 seconds
      Complete requests: 5000
      Requests per second: 589.57 [#/sec] (mean)
      Time per request: 848.074 [ms] (mean)

      I don't understand this. These results are HORRIBLE. 5000 requests in 8 seconds? 1 request in .8 seconds? I think your decimal point is off.

      • This is not a beefed-up server - it's a very old computer, used only for development. Remember that there are 500 simultaneous connections, with the test itself also running on the same machine, and I'm also running a shitload of other stuff (X, mozilla-firefox with heaps of tabs open, Lisp (this one shows hundreds of megs of memory usage, but I've read that for this kind of application top is a deceptive measure), emacs, etc.).

        The benchmarks I've shown are from running "ab". I was going to show all p...

  • by inflex ( 123318 ) on Wednesday July 12, 2006 @11:06PM (#15710481) Homepage Journal
    start-rant:

    Threads are useful, that's granted - but it would seem a lot of people are trying to convert wholesale to the threading model just for the hell of it, running along with the apparent reasoning that threading is "lighter" than processes. Maybe threads are lighter/cheaper on Windows - but a Unix system, with copy-on-write forking and demand paging, is _DESIGNED_ to handle processes. Right now, a lot of the time, threads are a hack. Unix and processes work nicely together.

    As for "maximising" available resources, well don't forget there's typically another couple of dozen processes running on any give Unix setup, more so on a multi-user multi-purpose machine (let's say WWW, email and DNS setup - throw in SpamAssassin for lots of fun) there's no shortage of available processes to use up a CPU. On a monolithic system where it's running only one process, sure, threads become useful there to spread the load.

    My gripe basically boils down to a lot of people going along and choosing to use threads rather than forking because they think that it's "cool" or (supposedly) "lighter" - not because they've done any real world testing/checking. Remember, Unix was built around the idea of many small processes/programs working together, so that'd tend to naturally allow usage of multiple CPUs without any exotic hacks. /rant.
    • There is a genuine difference between multithreading and forking. The kernel does take longer to switch between processes than between threads, since there's an address-space change between processes. 10,000 threads in one process will use fewer per-process resources than 10,000 processes of one thread each. I want to say that process accounting (on creation/destruction) takes more time than thread accounting, but I'm not intimately familiar with their implementation on Linux. For some applications...
    • I've run into that too. A lot of people seem to think that threads are free when they aren't. They use memory and add context switches.

      Which reminds me. Why the hell does the new Yahoo! Messenger use 25 threads? There's no good reason for that.

  • non-blocking (Score:5, Informative)

    by pizza_milkshake ( 580452 ) on Wednesday July 12, 2006 @11:24PM (#15710550)
    higher throughput can be achieved with one process or thread (whichever floats your boat) per CPU, using epoll() (linux 2.6 only, use poll() for more portability) with non-blocking I/O.

    however, it's easier conceptually to write a threaded server, it's more natural to write, and you just launch a single thread per connection. unfortunately, currently, this doesn't scale (see Why Events Are A Bad Idea (for High-concurrency Servers) http://www.usenix.org/events/hotos03/tech/vonbehren.html [usenix.org] for an argument that thread implementations, and not their design, are the issue).

    the former method can handle thousands of simultaneous connections with high throughput, even on a decent workstation; the latter cannot. threads simply have an inherent overhead that cannot be eliminated.

    i've actually been working on writing a non-portable insanely fast httpd in my spare time (svn co svn://parseerror.dyndns.org/web/) over the past few weeks as a way to explore non-blocking I/O + epoll() and it performs very well (~600% faster conns/sec than a traditional fork()ing server (which i wrote first)).

    for further discussion see The C10K Problem http://www.kegel.com/c10k.html [kegel.com] which goes in-depth on these very subjects
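    the heart of such a server is a tiny loop; a minimal sketch, assuming linux 2.6 epoll and non-blocking sockets (the two handler functions are hypothetical, error handling trimmed):

        /* sketch: single-process event loop over epoll(7). */
        #include <sys/epoll.h>

        void accept_new_connections(int epfd, int listen_fd); /* hypothetical */
        void service_connection(int epfd, int fd);            /* hypothetical */

        void event_loop(int epfd, int listen_fd)
        {
            struct epoll_event ev[64];
            for (;;) {
                int i, n = epoll_wait(epfd, ev, 64, -1); /* block for activity */
                for (i = 0; i < n; i++) {
                    if (ev[i].data.fd == listen_fd)
                        accept_new_connections(epfd, listen_fd);
                    else
                        service_connection(epfd, ev[i].data.fd);
                }
            }
        }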

    • I think you misunderstood the original submitter's question: as I read it, he's not considering thread-per-connection as discussed in the paper you link, but rather a small number of threads (e.g. 4 on a dual-processor machine with hyperthreading), each running an event-based server, the idea being that putting everything in a single thread wastes CPU time, as only 1 of the 4 CPUs presented by such a system can be used.

      You'd have to write such a system carefully, but I think if it was done right you'd get b...
  • For an interesting hybrid approach between threads and events, check out SEDA - Architecture for Highly-Concurrent Server Applications [harvard.edu]. Basically, you write a server as a collection of stages connected by event queues. A stage receives an event on an incoming queue, does some processing on it, and then places it on a queue to some other stage. This mirrors the way an event-driven system is designed. Each stage has its own thread pool to handle events. All IO is asynchronous and is treated like any other...

  • Threads are used because they are easy for development. They can also keep the multiple units of a parallel processor busy. However, using more threads than there are processing units introduces overhead into the system. There are better ways to do task scheduling... like event-driven models.

    So, why not combine the multi-threaded and event-driven models? Some very interesting research has been done in this area. Check out Staged Event-Driven Architecture, or SEDA [harvard.edu]. Like threads, it has high fairness in scheduling...
  • by kingradar ( 643534 ) on Thursday July 13, 2006 @03:09AM (#15711157) Homepage
    What should I do?

    This debate hits home with me. I wrote a server daemon to handle the SMTP and POP protocols, and when I first started out I had to make a choice. The choice I made back then was to use a threaded model. The way it works is that I spawn X threads, which collectively use blocking calls to the accept() function; each thread only returns from accept() once it has been assigned a new connection by the kernel. For performance I spawn the threads ahead of time. This architecture was a mistake. The issue is that I have to spawn a separate pool of threads to listen on port 25 (SMTP), port 110 (POP), port 465 (SMTP over SSL) and port 995 (POP over SSL). With this model I can end up with extra threads listening on port 25 when I need more threads listening and processing connections on port 465. This problem leads me to overcompensate by spawning _extra_ threads just in case. Of course this strategy wastes resources, as the extra threads eat memory without benefit.

    To address the SEGFAULT issue, i.e. one rogue thread taking the whole system down, I also fork multiple processes. In my case I fork 12 processes with 128 threads each; if one process gets killed by a SEGFAULT, the remaining processes continue to work. When I first launched the system and it faced a torrent of email... 100K+ messages a day... I would have about one process die every 24 hours. With careful debugging work, I've gotten the code stable enough that I haven't lost a process in about 9 months.

    My theory when I first wrote this code was to leave scheduling to the kernel. I figured that if a thread was blocked waiting for IO data, the kernel wouldn't schedule time slices for it; the extra threads would sit in the background waiting, but not using CPU time. I am starting to wonder whether this is a good theory. I am considering switching to a different model (more on that in a second), but am not sure which one is best. By the way, the reason each process has so many threads is DB connection pooling: each process gets 8 DB connections, which are shared between the 128 threads. Each process also has its own copy of the antivirus database. I know it's possible, but trying to share DB connections and data between processes is much more difficult.

    I plan to refactor this code soon, and have been struggling with what to do. I am curious to hear the thoughts of others?

    The current plan is to move to a model where I spawn a single listening thread for each port. When these listening threads get a new connection, they dump the socket handle and the protocol into a buffer. I would then also spawn a pool of worker threads which read the incoming connections out of the buffer. Using semaphores and reflection, these worker threads would pick up incoming connections and feed them to the right function depending on the protocol. I think this model would work much better than what I have now, but is it the best option?

    The other option is to create a system where I spawn only 8 worker threads (or some similar number). This pool of 8 threads then uses epoll() to find out which sockets need attention and addresses them accordingly. The problem with this model is that if an incomplete message is received, the thread can't process it all the way to the output stage; instead, the data needs to be stored until the message is complete. Let me give an example: the thread might get "RCPT TO: " the first time it checks a socket. The thread stores this incomplete message. Then the second time around, another thread picks up "example@example.com". That thread assembles the message into "RCPT TO: example@example.com" and then processes the entire command accordingly.

    Does this model work better? Keep in mind that when DB calls need to be made, the MySQL library won't work the same way: a slow database server could hang all 8 worker threads, effectively killing the model. There are also the SPF, SPAM and virus libraries; any one of them could tie up a thread for an extended period, likewise killing this model. What does everyone else think? Am I not thinking about an event-processing model correctly? Or is this type of daemon better served by the one-thread-per-connection model?
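    The partial-command case above would look roughly like this in an event model (an illustrative sketch, not the poster's code; handle_command is hypothetical):

        /* Sketch: per-connection buffer; dispatch only on a complete line. */
        #include <string.h>
        #include <unistd.h>

        struct conn {
            char   buf[4096];
            size_t len;
        };

        void handle_command(struct conn *c, int fd, const char *line); /* hypothetical */

        void on_readable(struct conn *c, int fd)
        {
            ssize_t n = read(fd, c->buf + c->len, sizeof(c->buf) - c->len);
            char *eol;

            if (n <= 0)
                return;                 /* closed, or would block */
            c->len += (size_t)n;
            /* "RCPT TO: " alone stays buffered; dispatch once the newline arrives */
            while ((eol = memchr(c->buf, '\n', c->len)) != NULL) {
                *eol = '\0';
                handle_command(c, fd, c->buf);
                c->len -= (size_t)(eol - c->buf) + 1;
                memmove(c->buf, eol + 1, c->len);
            }
        }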
    • I should probably mention that I've chosen to use CentOS 4.3, running on dual processor Dell 1650 servers. All of my code is written in C, and compiled with GCC.

      This is important because it means I use the 2.6 version of the Linux kernel, and the CentOS/RHEL version of the kernel uses the Native Posix Thread Library (NPTL) by default.

    • by TheRaven64 ( 641858 ) on Thursday July 13, 2006 @07:05AM (#15711652) Journal
      First, you do not want to be using semaphores for inter-thread synchronisation. Semaphores are IPC primitives, not ITC. The POSIX threading API provides mutexes and condition variables for this kind of thing. If you are doing message passing, then you have a classic producer-consumer situation which is exactly what condition variables were created for.

      Each condition variable has a mutex associated with it. The first thing you do is lock the mutex (if someone else holds it, this blocks). You then wait on the condition variable, which atomically releases the mutex and sleeps. The other thread acquires the mutex, signals the condition variable, and releases the mutex. The first thread then wakes up holding the mutex.
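      In code, the canonical producer-consumer handoff looks like this (a generic pthreads sketch, not the poster's daemon; the queue is reduced to a counter):

          /* Sketch: condition-variable handoff between threads. */
          #include <pthread.h>

          static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
          static int queued;               /* stand-in for a work queue */

          void producer_post(void)
          {
              pthread_mutex_lock(&mtx);
              queued++;
              pthread_cond_signal(&cond);  /* wake one waiting consumer */
              pthread_mutex_unlock(&mtx);
          }

          void consumer_take(void)
          {
              pthread_mutex_lock(&mtx);
              while (queued == 0)          /* guards against spurious wakeups */
                  pthread_cond_wait(&cond, &mtx); /* atomically unlock + sleep */
              queued--;
              pthread_mutex_unlock(&mtx);
          }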

      Really though, you should be using an asynchronous model if you want to be able to reason about your code. Difficulty in debugging scales linearly with the number of asynchronous threads/processes and exponentially with a synchronous approach.

      To be honest, if you need concurrency, you would be better off using a language like Erlang which is designed for concurrency, rather than trying to shoehorn it into a hacked version of PDP-11 assembler.

    • One thing I have found useful is moving from a thread per request to an object per request, using asynchronous network and database calls. With this model, when you see anything interesting happen on the network or in the database, you dig out the right object and call the appropriate function. The advantage is that you keep all your request-handling code in one place, along with all the data required for that request.

      It is also good to avoid having a thread per request (practically, if y...
  • Event devices (Score:2, Insightful)

    by bytesex ( 112972 )
    We need proper event devices in systems programming, both between threads and processes. I want to be able to select() (or poll() or epoll(), dammit) and wait not only on file descriptors, but on semaphores, system signals, threads waking up out of a sleep() call, and even a crucial variable changing its value. And I want it API-proof (no signals) and statically initializable. And tomorrow. And a pony.

    Seriously though, I'm not sure what methods the WIN32 API has, under the hood, for its event-waiting calls...
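    On the Unix side, the classic stopgap for one of these wishes is the self-pipe trick, which turns a signal into something select()/poll()/epoll() can wait on. A minimal sketch:

        /* Sketch: the self-pipe trick makes a signal selectable. */
        #include <fcntl.h>
        #include <signal.h>
        #include <unistd.h>

        static int sigpipe_fds[2];          /* [0] goes into your fd set */

        static void on_signal(int signo)
        {
            char b = (char)signo;
            (void)write(sigpipe_fds[1], &b, 1);  /* async-signal-safe */
        }

        int install_selectable_signal(int signo)
        {
            if (pipe(sigpipe_fds) < 0)
                return -1;
            /* never let the handler block on a full pipe */
            fcntl(sigpipe_fds[1], F_SETFL, O_NONBLOCK);
            signal(signo, on_signal);
            /* now watch sigpipe_fds[0] with select()/poll()/epoll() */
            return 0;
        }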
  • by master_p ( 608214 ) on Thursday July 13, 2006 @03:46AM (#15711229)
    If your software is destined to serve few clients and your hardware has a single-core CPU, then mono-process is better: easier to debug, easier to change, etc. You may use threads for long computations.

    If your software is destined to serve few clients, your hardware has more than one CPU core, and your services are not I/O bound, then performance will increase if your O/S can dispatch threads to different cores. If your services are I/O bound, then any increased performance from threading depends on the O/S and hardware architecture (in other words, how fast I/O can be multiplexed).

    If your software is destined to serve thousands of clients, you need clustering: a few thousand machines that can process requests plus a few others to dispatch requests to the cluster. I actually have no idea how this is done though, so take this advice lightly. In this case, your software is going to be multiprocess/multithreaded anyway.
  • by ziggyboy ( 232080 ) on Thursday July 13, 2006 @04:27AM (#15711309)
    I just happen to be in the middle of the design process for a server for a large telco in Australia. We have decided to use both select() and threads in handling client connections: clients of the same class/type will each be handled by one thread. Each approach has its uses, pros and cons, but if you intend to spawn a thread per client, that's not a very good idea. Pre-created threads would be OK, though - better than pre-forked processes.

    My only complaint with POSIX threads is that they do not have a "generic" join function that reaps *any* thread that has exited.
  • by pieterh ( 196118 ) on Thursday July 13, 2006 @07:09AM (#15711664) Homepage
    My company designed high-performance mono-process servers (portable ones too) starting in 1995, using event-driven virtual threads and state-machine frameworks. Very elegant, very fast, and really easy programming. The Xitami web server was one example - I remember seeing a Win95 system with Xitami survive a slashdotting (it was serving static pages but that was still impressive.)

    We worked in C, because we needed guaranteed low latencies.

    In 2004 we decided to rebuild these frameworks to handle OS multithreading. The reason was that on a single CPU we could not get the performance we needed, and the choice was either to use clusters, or multithreading.

    We continued to work in C. C and C++ are really nasty for multithreading because the languages have zero support for concurrency: you have to handle everything yourself, and most threading errors are extremely hard to detect.

    It cost us about 10 times more to write our software as multithreaded code than it had cost using virtualised threads, and we had to build whole reference-management frameworks to ensure that threads could share data safely.

    We did keep virtual threading, in fact, but virtual threads get handled by a pool of OS threads. Using 1 OS thread per connection is not scalable beyond a few hundred threads. Modern Linux kernels handle lots of threads but we also target Solaris, and Windows with the same code. So we use two virtual threads per connection, for full-duplex traffic, and we design most of the major server components as threaded objects, which are asynchronous event-driven objects.

    Doing multithreading in C is a *huge* amount of work. C++ has frameworks like ACE that help a lot.

    But there is a performance gain. Our software is a messaging server (implementing the AMQP draft standard). We maxed out at around 55,000 messages per second using a pure virtual-threaded model. Very efficient code. On a single CPU the multithreaded code hits 35,000 messages per second. With two CPUs we're back at 55k, and with 4 dual-core Opterons we're at 120k-150k and higher. (Our software runs a massive trading application that processes 1.5bn messages per day). We still need to improve some of the low-level locking functions to use lock-free mechanisms, and we max out a gigabit network. It is difficult to find machines powerful enough to really stress test the software.

    Without very robust frameworks, I'd never attempt such a project. As it was, we paid a lot for the extra performance. Our frameworks will eventually be released as free software, along with the middleware server.

    Interestingly, a very similar application written in Java 1.5 and using the BEA runtime gets similar performance to ours. Java's threading is so good that I'd be hesitant to choose C on the basis of performance again. I'm not sure whether ACE can reach the levels of performance we need; 100k messages per second is extreme.

    Other questions that are very important to ask:

      - The number of clients you expect to connect at once. If it's less than 500 you can probably use one or two OS threads per connection. If it's more you need to virtualise connections or share your OS threads.
      - The footprint. If you don't care, then I'd advise using Java. If you want a native Linux service, consider C++ and ACE. If you really want to write multithreaded C code, and don't have a full toolkit, consider seeing a doctor.

    When it comes to the future, clearly multiple cores are the way we're heading. This was clear two years ago, and was the main reason we bit the bullet and chose to write our software multithreaded rather than using a clustering model. It seemed clear to me that within a decade, systems would have 32, 64, 128 cores, and software that could take advantage of this would survive for longer. Clustering is not as powerful an abstraction as multithreading.

    • In general, every thread will need a stack. The CPU does not have an infinite cache, nor does it have infinitely many TLB slots. You probably get something like 128 TLB slots, to be used for more than just stacks. Right there you have a thread limit; exceed it and your CPU gets really slow.
  • Programmers (Score:3, Insightful)

    by SpeedBump0619 ( 324581 ) on Thursday July 13, 2006 @09:18AM (#15712363)
    While most of the discussion here covers the technical aspects of the question, I think the biggest factor is being overlooked. I've been employed in 5 different environments, 4 of which used threading for parallelism. For some reason many programmers just have a problem wrapping their minds around shared-memory problems. The number of times I have reviewed code changes and seen someone fail to lock a mutex, or forget to release a semaphore, is mind-boggling.

    As far as I'm concerned threading should only be considered when you have either:
            1) a high tolerance for extremely difficult debugging
            2) a customer who likes it (pays you more) when your program crashes
            3) a team of programmers experienced with threading (at least 75%...if not 100% you *must* review code changes)

    In modern Unix systems most people won't be able to really tell the speed difference between create_thread and fork. If you screw up pushing data through a pipe it just doesn't come out the other end, but if you screw up using shared memory you won't know until something unexpected (and seemingly unrelated) happens.
  • ... the fastest implementation is always the lean, event-loop (e.g. while/select loop) version that never blocks. This is because when a threaded implementation works properly on a single CPU, that's what it ends up doing anyway -- code in each thread runs, until it does something that blocks that thread, and another thread wakes up... If you split out those same code segments into a common event loop, you get rid of the thread context-switch overhead. You can even break up long event-handlers with "Idle events" -- you do part of the work, and send yourself an "idle event" to remind yourself to do the next chunk when you don't have new work coming in. This is generally less overhead than a thread time-slicing context switch.

    If you then want to take advantage of multiple CPUs efficiently, you need to look at the event-handler code in the above implementation and see whether you have race conditions if two branches run in parallel. If not, you can just fork a couple of processes and/or threads to tag-team the same event stream, and the code just cruises and uses more CPUs.

    If there are race conditions between branches, you take those branches that need to share a resource, make a single separate thread/process for those handlers, and forward the required events to that thread/process so they get handled sequentially.

    This style of code is often harder to design than a more obvious multiple-threads type implementation, but it is faster (and often easier to maintain) when properly done. In either case, the source of obscure bugs is race conditions that are overlooked.
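    The "idle event" continuation trick from the first paragraph looks roughly like this (a sketch with a hypothetical event-queue API; process_item stands in for one bounded slice of work):

        /* Sketch: chunk a long job by re-posting an idle event to self. */
        struct job { long next; long total; };

        /* hypothetical primitives supplied by the surrounding event loop */
        void post_idle_event(void (*fn)(struct job *), struct job *j);
        void process_item(long i);

        void handle_big_job(struct job *j)
        {
            long stop = j->next + 1000;             /* one bounded slice */
            while (j->next < j->total && j->next < stop)
                process_item(j->next++);
            if (j->next < j->total)
                post_idle_event(handle_big_job, j); /* resume when idle */
        }

    Because the continuation runs through the same event loop, new work is interleaved without any thread time-slicing overhead.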

  • We recently developed a programming language system called Flux [umass.edu], which we presented at USENIX this June, that addresses exactly this problem. Flux is a domain-specific language for building server applications from serial C and C++ components. The compiler then generates event-driven or multithreaded code -- it's just a compiler flag and a different runtime system. You as a programmer do not deal with the nitty-gritty details of managing concurrency. Moreover, Flux can optionally generate a simulator that le...
