I am creating a server application and I have a question I cannot seem to find an answer to online.
I want my server to be able to serve multiple clients at once. In my code I am creating a child process for each connection that serves the client, so the parent server is only responsible for accepting connections and creating children.
In listen() we pass the socket's file descriptor and the queue size as arguments. Given that the queue holds connections that are waiting, when is a "free" spot in the queue opened up again? When the child process starts serving the client, or when the serving has completed and the client disconnects from the server?
The backlog parameter to listen() sets the maximum number of incoming connections the operating system will queue for the application. Queued incoming connections are taken off this backlog queue at the moment the application successfully accept()s a connection.
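
To make the timing concrete, here is a minimal sketch (not from the original answer, port 12345 and the overall setup are just placeholders): the second argument to listen() is the backlog, and each successful accept() removes one pending connection from that queue, regardless of how long the client is served afterwards.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);              /* example port, adjust as needed */

    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); exit(1); }

    /* backlog = 10: the kernel queues at most about 10 connections
     * that this process has not yet accept()ed */
    if (listen(srv, 10) < 0) { perror("listen"); exit(1); }

    for (;;) {
        /* accept() takes one pending connection off the backlog queue,
         * freeing that slot immediately -- serving the client comes later */
        int cli = accept(srv, NULL, NULL);
        if (cli < 0) { perror("accept"); continue; }
        /* ... serve or hand off the client ... */
        close(cli);
    }
}
```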
Note:
Backlogged connections are the incoming connections that arrive while you are busy duplicating the socket, forking a new process, and accept(2)ing them. Normally the system uses a default value of 5 for this queue, which is almost always enough for normal purposes. Your process normally accept(2)s on the bound, listening server socket, then forks and passes the child the newly accepted connection socket before going back to accept(2). In the meanwhile, a new connection can come in and will be queued by the system until the process gets back to the accept(2) call. With a queue of, let's say, 10, the system will queue that many incoming connections before it starts dropping them (the client gets a "connection refused" error in that case).