java · multithreading · pthreads · nio

Can the thread per request model be faster than non-blocking I/O?


I remember 2 or 3 years ago reading a couple of articles where people claimed that modern threading libraries were getting so good that thread-per-request servers would not only be easier to write than non-blocking servers but that they'd be faster, too. I believe this was even demonstrated in Java with a JVM that mapped Java threads to pthreads (i.e. the Java NIO overhead was more than the context-switching overhead).

But now I see all the "cutting edge" servers use asynchronous libraries (Java NIO, epoll, even Node.js). Does this mean that async won?


Solution

  • Not in my opinion. If both models are well implemented (this is a BIG requirement), I think the concept of NIO should prevail.

    At the heart of a computer are its cores. No matter what you do, you cannot parallelize your application beyond the number of cores you have, i.e. if you have a 4-core machine, you can ONLY do 4 things at a time (I'm glossing over some details here, but that suffices for this argument).

    Expanding on that thought, if you ever have more threads than cores, you have waste. That waste takes two forms. First is the overhead of the extra threads themselves (each one carries its own stack and bookkeeping). Second is the time spent context-switching between threads. Both are probably minor, but they are there.

    Ideally, you have a single thread per core, and each of those threads is running at 100% processing speed on its core. In the ideal case, task switching wouldn't occur at all. Of course there is the OS, but if you take a 16-core machine and leave 2-3 threads for the OS, then the remaining 13-14 go towards your app. Those threads can switch what they are doing within your app, such as when they are blocked waiting on I/O, but they don't have to pay that cost at the OS level; the switching is written right into your app (the two sketches at the end of this answer contrast the two models).

    An excellent example of this scaling is SEDA (http://www.eecs.harvard.edu/~mdw/proj/seda/), which showed much better scaling under load than a standard thread-per-request model.

    My personal experience is with Netty. I had a simple app, which I implemented well in both Tomcat and Netty. Then I load tested it with hundreds of concurrent requests (upwards of 800, I believe). Eventually Tomcat slowed to a crawl and exhibited extremely bursty/laggy behavior, whereas the Netty implementation simply saw response times increase while continuing with incredible overall throughput.

    Please note, this hinges on solid implementation. NIO is still getting better with time. We are learning how to tune our server OSes to work better with it, as well as how to implement JVMs to better leverage the OS functionality. I don't think a winner can be declared yet, but I believe NIO will be the eventual winner, and it's doing quite well already.
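
    To make the contrast concrete, here is a minimal, hypothetical sketch of the thread-per-request model the question describes: a blocking accept loop that dedicates one OS thread to each connection. The class name and port are illustrative only, not taken from Tomcat or any other framework.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Thread-per-request sketch: every connection gets its own OS thread.
        // Each blocking read parks a whole thread, so thousands of concurrent
        // connections mean thousands of threads for the OS to schedule.
        public class ThreadPerRequestServer {
            public static void main(String[] args) throws IOException {
                ServerSocket server = new ServerSocket(8080);
                while (true) {
                    Socket socket = server.accept();          // blocks until a client connects
                    new Thread(() -> handle(socket)).start(); // dedicate a thread to this client
                }
            }

            private static void handle(Socket socket) {
                try (Socket s = socket;
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) { // blocking read
                        out.println(line);                   // echo the line back
                    }
                } catch (IOException ignored) {
                    // connection dropped; try-with-resources closes everything
                }
            }
        }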
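
    And here is a minimal, hypothetical sketch of the selector-based event loop that NIO makes possible: a single thread multiplexes all connections and does the switching inside the application instead of at the OS level. In practice you would run roughly one such loop per core (e.g. sized from Runtime.getRuntime().availableProcessors()); this single-threaded echo version just shows the mechanism, and a production server would also handle partial writes via OP_WRITE.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        // NIO event-loop sketch: one thread services every connection.
        // The selector reports which channels are ready, and the application
        // switches between them itself rather than paying for OS thread switches.
        public class SelectorEchoServer {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();                            // wait until some channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            int read = client.read(buffer);       // non-blocking read
                            if (read == -1) {
                                client.close();                   // peer closed the connection
                            } else if (read > 0) {
                                buffer.flip();
                                client.write(buffer);             // echo back (may be a partial write)
                            }
                        }
                    }
                }
            }
        }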