Tags: java, multithreading, performance, servlets, cgi

Is servlet architecture faster than CGI because it uses threads instead of processes?


I am new to Java and to web development in general, so I am reading this tutorial, which says that one advantage of the servlet pattern over CGI is:

better performance: because it creates a thread for each request, not a process.

However, I really do not get why it should be so. Based on this answer, at least for Linux, the general consensus seems to be that threads are not necessarily faster than processes, and it may actually be advantageous to use processes instead of threads.

In the tutorial it is written that CGI works as follows:

For each request, it starts a process, and the web server is limited in how many processes it can start.

Given the startup cost of a process, this could make sense. However, I am not sure why it would be necessary to start a new process for each request instead of keeping a pool of running CGI shells to serve queued requests.


Solution

  • The main performance difference is that CGI forks/execs a new process for each request, whereas a well-designed servlet container creates a (bounded) pool of threads at startup, assigns them to requests, and recycles them when each request completes (a minimal sketch follows after this list).

    The cost of creating the threads (which is significant) is amortized over the lifetime of the servlet container.

    If you could maintain a pool of "CGI shells", I suppose that would be more efficient. However, the normal assumption of a CGI app is that it starts with a clean sheet each time.

    There are a couple of other issues:

    • In a servlet container you can also maintain shared session and request caches, shared pools of DB connections, and so on (a small pooling sketch also follows below).
    • The performance of CGI implemented with a new JVM per request would be awful ... because of the overhead of JVM startup / warmup. A typical request probably wouldn't run long enough for the bytecode to be JIT-compiled.
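
Here is a minimal sketch of the thread-pool model, written with plain java.util.concurrent rather than a real servlet API (the class and task names are made up for illustration). The point is that the worker threads are created once, and each "request" is just a task handed to a thread that already exists:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Pay the thread-creation cost once, at startup,
        // like a servlet container's bounded worker pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Each "request" is a task handed to an already-running thread;
        // there is no per-request thread (let alone process) creation.
        for (int i = 0; i < 10; i++) {
            final int requestId = i;
            pool.submit(() -> System.out.println("request " + requestId
                    + " handled by " + Thread.currentThread().getName()));
        }

        pool.shutdown();                             // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS); // wait for in-flight tasks
    }
}
```

Running it shows the ten requests being served by the same four thread names, which is exactly the recycling described above. CGI, by contrast, would fork/exec a fresh process for each of the ten requests.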
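
The shared-resource point can be sketched the same way. This is only an illustration (a StringBuilder standing in for something expensive like a java.sql.Connection), but it shows why holding resources open across requests is something a process-per-request model cannot do:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ResourcePoolSketch {
    // The pool outlives any single request, so the cost of creating
    // the resources is paid once, not per request.
    private final BlockingQueue<StringBuilder> pool;

    public ResourcePoolSketch(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // Stand-in for an expensive resource, e.g. a DB connection.
            pool.add(new StringBuilder("resource-" + i));
        }
    }

    public StringBuilder acquire() throws InterruptedException {
        return pool.take();   // blocks until a resource is free
    }

    public void release(StringBuilder resource) {
        pool.offer(resource); // hand it back for the next request to reuse
    }

    public static void main(String[] args) throws InterruptedException {
        ResourcePoolSketch resources = new ResourcePoolSketch(2);
        StringBuilder r = resources.acquire();
        System.out.println("request served using " + r);
        resources.release(r); // reused by later requests, not recreated
    }
}
```

A fresh CGI process would have to open its own connection on every request and throw it away afterwards.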