I'm currently writing a simple webserver in C for a course I'm doing. One requirement is for us to implement a thread pool to handle connections using pthreads.
I know roughly how I would go about doing this (calling accept() in a main thread and passing the file descriptor to a free thread). However, my friend suggested an alternative to the method I had in mind: create all the threads up front and have every one of them loop forever on a call to accept(). The idea is that accept() will block all the idle threads, and when a connection comes in, only one of them gets the file descriptor. When a given thread is done with a connection, it loops back around and blocks on accept() again, essentially using the call to accept() as a semaphore. He figures this would simplify the implementation quite a bit, since you wouldn't need to keep track of which threads are busy and which are ready for a connection. In theory it would also have lower latency, since a thread can start executing immediately without needing to be created first. Here is roughly what he's describing, as I understand it.
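A minimal sketch of that idea, assuming a standard socket()/bind()/listen() setup; the thread count, port, and handle_connection() helper are placeholders of mine, and most error handling is omitted:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTHREADS 8      /* placeholder pool size */
#define PORT     8080   /* placeholder port */

static void handle_connection(int connfd)
{
    /* ... read the request, write the response ... */
    close(connfd);
}

static void *worker(void *arg)
{
    int listenfd = *(int *)arg;

    for (;;) {
        /* Every idle worker blocks here; each incoming connection
         * is handed to exactly one of them. */
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0) {
            perror("accept");
            continue;
        }
        handle_connection(connfd);
        /* Done with this connection: loop back and block on accept() again. */
    }
    return NULL;
}

int main(void)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(PORT);

    bind(listenfd, (struct sockaddr *)&addr, sizeof addr);
    listen(listenfd, SOMAXCONN);

    /* Create the whole pool up front; every thread shares listenfd. */
    pthread_t tids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, &listenfd);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```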
My question is: is this safe? I'm planning to implement it and try it out, but I'm not ready yet and I'm quite curious to know the answer. I've searched on Google and here on Stack Overflow, but couldn't find anyone doing it this way. Is accept() thread-safe? I assume there will be more overhead with this approach, since all your threads are running all the time; are the two approaches simply a memory/latency tradeoff?
Edit: I'm unsure if this should be community wiki, apologies if it should be, I can't find the button :P
Yes, this is safe. It's a common way to design multithreaded servers and an accepted design practice.
You can also fork() several times and have the child processes call accept(); this lets you handle connections concurrently without needing a threads library. Older servers do this (see the pre-forking sketch below).
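A minimal sketch of that pre-forking variant, again with a placeholder child count and port of mine and with error handling omitted:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN 4     /* placeholder number of worker processes */
#define PORT      8080  /* placeholder port */

static void child_loop(int listenfd)
{
    for (;;) {
        /* Each child blocks here; each new connection goes to one child. */
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;
        /* ... handle the request ... */
        close(connfd);
    }
}

int main(void)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(PORT);

    bind(listenfd, (struct sockaddr *)&addr, sizeof addr);
    listen(listenfd, SOMAXCONN);

    /* The listening descriptor is inherited across fork(), so every
     * child can call accept() on it. */
    for (int i = 0; i < NCHILDREN; i++) {
        if (fork() == 0) {
            child_loop(listenfd);
            _exit(0);
        }
    }

    /* Parent just waits for the children. */
    for (int i = 0; i < NCHILDREN; i++)
        wait(NULL);
    return 0;
}
```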