
Multi-threaded FastCGI app


I want to write a FastCGI app which should handle multiple simultaneous requests using threads. I had a look at the threaded.c sample which comes with the SDK:

#include <sys/types.h>   /* pid_t */
#include <unistd.h>      /* getpid */
#include <pthread.h>
#include "fcgiapp.h"

#define THREAD_COUNT 20
static int counts[THREAD_COUNT];

static void *doit(void *a)
{
    int rc, i, thread_id = (int)a;
    pid_t pid = getpid();
    FCGX_Request request;
    char *server_name;

    FCGX_InitRequest(&request, 0, 0);   /* 0 = default listen socket, 0 = no flags */

    for (;;)
    {
        static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t counts_mutex = PTHREAD_MUTEX_INITIALIZER;

        /* Some platforms require accept() serialization, some don't.. */
        pthread_mutex_lock(&accept_mutex);
        rc = FCGX_Accept_r(&request);
        pthread_mutex_unlock(&accept_mutex);

        if (rc < 0)
            break;

        server_name = FCGX_GetParam("SERVER_NAME", request.envp);

        FCGX_FPrintF(request.out, …);   /* response headers and body elided */

        FCGX_Finish_r(&request);
    }

    return NULL;
}

int main(void)
{
    int i;
    pthread_t id[THREAD_COUNT];

    FCGX_Init();

    for (i = 1; i < THREAD_COUNT; i++)
        pthread_create(&id[i], NULL, doit, (void*)i);

    doit(0);   /* the main thread serves requests as thread 0 */

    return 0;
}
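
The FCGX_FPrintF call is elided above; in the SDK's threaded.c it prints an HTML page with per-thread request counts. For experimenting, a minimal stand-in could look like this (my own sketch, not the original output; the sleep(2) only simulates slow work so overlapping requests are easy to observe):

/* Minimal stand-in for the elided response code (illustrative sketch,
   not the SDK's original output); goes in place of the FCGX_FPrintF call. */
FCGX_FPrintF(request.out,
             "Content-type: text/plain\r\n"
             "\r\n"
             "Hello from thread %d, pid %d, server %s\n",
             thread_id, (int)pid,
             server_name ? server_name : "(unknown)");

sleep(2);   /* simulate slow work so concurrent handling is visible */

The sample builds with something like cc threaded.c -o threaded -lfcgi -pthread and is usually started with spawn-fcgi (or lighttpd's built-in FastCGI spawning) so the web server can connect to it on a socket.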

In the FastCGI specification there is an explanation of how the web server determines how many connections the FastCGI app supports:

The Web server can query specific variables within the application. The server will typically perform a query on application startup in order to automate certain aspects of system configuration.

• FCGI_MAX_CONNS: The maximum number of concurrent transport connections this application will accept, e.g. "1" or "10".

• FCGI_MAX_REQS: The maximum number of concurrent requests this application will accept, e.g. "1" or "50".

• FCGI_MPXS_CONNS: "0" if this application does not multiplex connections (i.e. handle concurrent requests over each connection), "1" otherwise.

But the return values for this query are hard-coded in the FastCGI SDK: it returns 1 for FCGI_MAX_CONNS and FCGI_MAX_REQS and 0 for FCGI_MPXS_CONNS. So the threaded.c sample will never receive multiple connections.
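
For reference, the query and its reply travel as FastCGI management records (FCGI_GET_VALUES / FCGI_GET_VALUES_RESULT) whose bodies are name-value pairs. The sketch below is my own illustration of how the SDK's hard-coded answers would be encoded in such a body, using the spec's single-byte length form for lengths under 128; it is not code from the SDK:

#include <stdio.h>
#include <string.h>

/* Encode one FastCGI name-value pair (lengths assumed < 128, so each
   length fits in a single byte, per the FastCGI specification). */
static size_t encode_pair(unsigned char *buf, const char *name, const char *value)
{
    size_t nlen = strlen(name), vlen = strlen(value), off = 0;

    buf[off++] = (unsigned char)nlen;
    buf[off++] = (unsigned char)vlen;
    memcpy(buf + off, name,  nlen);  off += nlen;
    memcpy(buf + off, value, vlen);  off += vlen;
    return off;
}

int main(void)
{
    unsigned char body[256];
    size_t len = 0;

    /* The SDK answers with these values regardless of the thread count: */
    len += encode_pair(body + len, "FCGI_MAX_CONNS",  "1");
    len += encode_pair(body + len, "FCGI_MAX_REQS",   "1");
    len += encode_pair(body + len, "FCGI_MPXS_CONNS", "0");

    printf("FCGI_GET_VALUES_RESULT body: %zu bytes\n", len);
    return 0;
}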

I tested the sample with lighttpd and nginx, and the app handled only one request at a time. How can I get my application to handle multiple requests? Or is this the wrong approach?


Solution

  • I tested the threaded.c program with http_load. The program runs behind nginx, and only one instance of it is running. Each request takes about 2 seconds to serve (see the ~2000 ms first-response times below), so if the requests were served sequentially I would expect 20 requests to take 40 seconds even when sent in parallel. Here are the results (I used the same numbers as Andrew Bradford: 20, 21, and 40):

    20 Requests, 20 in parallel, took 2 seconds -

    $ http_load -parallel 20 -fetches 20 request.txt
    20 fetches, 20 max parallel, 6830 bytes, in 2.0026 seconds
    341.5 mean bytes/connection
    9.98701 fetches/sec, 3410.56 bytes/sec
    msecs/connect: 0.158 mean, 0.256 max, 0.093 min
    msecs/first-response: 2001.5 mean, 2002.12 max, 2000.98 min
    HTTP response codes:
      code 200 -- 20
    

    21 Requests, 20 in parallel, took 4 seconds -

    $ http_load -parallel 20 -fetches 21 request.txt
    21 fetches, 20 max parallel, 7171 bytes, in 4.00267 seconds
    341.476 mean bytes/connection
    5.2465 fetches/sec, 1791.55 bytes/sec
    msecs/connect: 0.253714 mean, 0.366 max, 0.145 min
    msecs/first-response: 2001.51 mean, 2002.26 max, 2000.86 min
    HTTP response codes:
      code 200 -- 21
    

    40 Requests, 20 in parallel, took 4 seconds -

    $ http_load -parallel 20 -fetches 40 request.txt
    40 fetches, 20 max parallel, 13660 bytes, in 4.00508 seconds
    341.5 mean bytes/connection
    9.98732 fetches/sec, 3410.67 bytes/sec
    msecs/connect: 0.159975 mean, 0.28 max, 0.079 min
    msecs/first-response: 2001.86 mean, 2002.62 max, 2000.95 min
    HTTP response codes:
      code 200 -- 40
    

    So, this shows that even though the FCGI_MAX_CONNS, FCGI_MAX_REQS, and FCGI_MPXS_CONNS values are hard-coded, the requests are served in parallel.

    When nginx receives multiple requests, it puts them all in the FCGI application's queue back to back; it does not wait for a response to the first request before sending the second. In the FCGI application, while one thread is serving the first request, another thread does not wait for it to finish: it picks up the second request and starts working on it, and so on.

    So, the only time you will lose is the time it takes to read a request from the queue. This time is usually negligible compared to the time it takes to process the request.
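
    To see the overlap for yourself, you can log which thread accepts each request. The helper below is my own sketch (it is not part of threaded.c); call it in doit() right after FCGX_Accept_r() succeeds, and the timestamps of concurrent requests should (nearly) coincide:

    /* Sketch (not part of threaded.c): log which worker thread accepted a
       request and when.  Call right after FCGX_Accept_r() returns 0. */
    #include <stdio.h>
    #include <time.h>

    static void log_accept(int thread_id)
    {
        char stamp[32];
        time_t now = time(NULL);
        struct tm tm;

        localtime_r(&now, &tm);                  /* thread-safe localtime */
        strftime(stamp, sizeof stamp, "%H:%M:%S", &tm);
        fprintf(stderr, "[%s] thread %d accepted a request\n", stamp, thread_id);
    }

    Where stderr ends up depends on how the application is spawned (a terminal, the spawn-fcgi log, or the web server's error log). With 20 parallel fetches you should see 20 accept lines with almost identical timestamps, followed by the responses roughly two seconds later, matching the msecs/first-response figures above.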