Tags: java, stateless

Stateless Blocking Server Design


A little help please.

I am designing a stateless server that will have the following functionality:

  1. Client submits a job to the server.
  2. Client is blocked while the server tries to perform the job.
  3. The server will spawn one or multiple threads to perform the job.
  4. The job either finishes, times out or fails.
  5. The appropriate response (based on the outcome) is created, the client is unblocked, and the response is handed back to it.

Here is what I have thought of so far.

  1. Client submits a job to the server.
  2. The server assigns an ID to the job, places the job on a queue, and then places the client on another queue (where it will be blocked).
  3. Have a thread pool that will execute the job, fetch the result and appropriately create the response.
  4. Based on ID, pick the client out of the queue (thereby unblocking it), give it the response and send it off.

Steps 1, 3, and 4 seem quite straightforward; however, I'd welcome any ideas about how to put the client in a queue and then block it. Also, any pointers that would help me design this puppy would be appreciated.

Cheers


Solution

  • Why do you need to block the client? It seems like it would be easier to return (almost) immediately (after performing initial validation, if any) and give the client a unique ID for the job. The client would then be able to either poll using said ID or, perhaps, provide a callback.
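
    A minimal sketch of that poll-by-ID variant, assuming a plain ExecutorService behind a hypothetical JobServer class; the class name, the String result type and the in-memory map are all illustrative, not part of the question:

    ```java
    import java.util.Map;
    import java.util.Optional;
    import java.util.UUID;
    import java.util.concurrent.*;

    // Hypothetical sketch: the server hands out a job ID immediately and the client polls with it.
    public class JobServer {
        private final ExecutorService pool = Executors.newFixedThreadPool(8);
        private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

        // Submit: validate (omitted), hand the work to the pool, return an ID right away.
        public String submit(Callable<String> job) {
            String id = UUID.randomUUID().toString();
            jobs.put(id, pool.submit(job));
            return id;
        }

        // Poll: empty while the job is still running (or unknown), the result once it is done.
        public Optional<String> poll(String id) throws InterruptedException, ExecutionException {
            Future<String> f = jobs.get(id);
            if (f == null || !f.isDone()) {
                return Optional.empty();
            }
            jobs.remove(id);
            return Optional.of(f.get()); // does not block: the future is already done
        }
    }
    ```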

    Blocking means you're holding on to a socket, which obviously limits the number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over the client code and can't make them poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by the common thread pool. The workflow would basically be:

    1. Create a thread pool (such as ThreadPoolExecutor)
    2. For each client request:
      1. If you have any parts of the job that you can execute in parallel, delegate them to the pool.
      2. And / or do them in the current thread.
      3. Wait until pooled job parts complete (if applicable).
      4. Return results to client.
    3. Shut down the thread pool.

    No IDs are needed per se, though you may need to use some sort of latch for steps 2.1 / 2.3 above.
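
    A rough sketch of that per-request workflow under the same assumptions; invokeAll blocks until every submitted part completes, so it doubles as the latch for 2.1 / 2.3 (the split into parallel parts and the String results are hypothetical):

    ```java
    import java.util.List;
    import java.util.concurrent.*;

    public class BlockingHandler {
        // Step 1: one shared pool for all client requests.
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        // Step 2: runs on the thread that is holding the client's socket.
        public String handle(List<Callable<String>> parallelParts)
                throws InterruptedException, ExecutionException {
            // 2.1 + 2.3: delegate the parts to the pool and wait until they all complete.
            List<Future<String>> parts = pool.invokeAll(parallelParts);

            // 2.2 / 2.4: combine the results in the current thread and return them to the client.
            StringBuilder response = new StringBuilder();
            for (Future<String> part : parts) {
                response.append(part.get()); // does not block further; invokeAll already waited
            }
            return response.toString();
        }

        // Step 3: shut the pool down when the server stops.
        public void shutdown() {
            pool.shutdown();
        }
    }
    ```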

    Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal the submitted job parts (by flipping a flag) when the timeout is reached, and return to the client immediately. The job parts will have to check said flag periodically and terminate their execution once it's flipped; the pool will then reclaim their threads.
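
    To make that concrete, here is one possible sketch: the main thread only waits (with a timeout) while the pooled part checks an AtomicBoolean between chunks of work; doChunkOfWork and the loop bound are hypothetical placeholders for the real job:

    ```java
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class TimedJob {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        public String run(long timeoutMillis) throws InterruptedException, ExecutionException {
            AtomicBoolean cancelled = new AtomicBoolean(false);

            // The pooled job part checks the flag between chunks of work.
            Future<String> part = pool.submit(() -> {
                StringBuilder out = new StringBuilder();
                while (!cancelled.get() && out.length() < 1_000) { // stand-in for "job not finished yet"
                    out.append(doChunkOfWork());
                }
                return out.toString();
            });

            try {
                // The main thread stays free of work and only waits, so the timeout is respected.
                return part.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                cancelled.set(true); // signal the job part to stop; the pool reclaims its thread
                return "timed out";  // unblock the client immediately with a timeout response
            }
        }

        private String doChunkOfWork() {
            return "x"; // placeholder for a real unit of work
        }
    }
    ```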