Tags: sockets, network-programming, udp, ports

Pros and cons of one port per client?


This is about UDP. Some suggest that having a single port (and thus a bound socket) per client, as seen in e.g. Quake III, is better for buffering incoming streams. I'm not entirely sure I buy this.

Isn't it, after all, down to one's own code to make sure the contents of those buffers are constantly consumed? On my server I plan to do this about 20-30 times a second, and if my clients are pushing out packets at that same rate, I can't see how buffering would be a problem. FWIW, my packets will be up to 1024 bytes in length, and I'd have 4 or, at most, 8 clients. I understand from a number of sources (e.g. this answer) that the default receive buffer size on Windows is 8 KB. So with 4 clients this should typically be fine, to my mind... though I guess I might need to increase the buffer size somewhat, and I'm not sure whether there are any pitfalls to that, though I'm aware it's done via setsockopt().
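For reference, enlarging the receive buffer with setsockopt() looks something like this; a minimal Python sketch, where the port number is just an example:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 27960))  # example port only

# Ask the OS for a larger receive buffer; the kernel may round or cap the value.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)

# Check what was actually granted. Note that some platforms (Linux, for
# instance) report back double the requested value for bookkeeping overhead.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer: {actual} bytes")
```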


Solution

The code doing the buffering, both in the OS and in the language runtime, is the same regardless of the port, so whether data is coming into multiple buffers on multiple sockets or one buffer on one socket makes no difference. Setting a larger (N times) buffer on one socket on one port is equivalent to N buffers on N ports.
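To make that concrete, here is a minimal sketch of the two equivalent setups, one socket with an N-times-larger buffer versus one default-sized socket per client; the port numbers are hypothetical:

```python
import socket

NUM_CLIENTS = 4
PER_CLIENT_BUF = 8 * 1024

# Option A: one socket on one port, with an N-times-larger buffer.
single = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
single.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, NUM_CLIENTS * PER_CLIENT_BUF)
single.bind(("0.0.0.0", 27960))

# Option B: one socket (and port) per client, each with a per-client buffer.
per_client = []
for i in range(NUM_CLIENTS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, PER_CLIENT_BUF)
    s.bind(("0.0.0.0", 27961 + i))
    per_client.append(s)
```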

I would also say that if you are talking about 8 × 30 packets per second (240 packets/sec), then unless you are running this on a calculator from the 1980s, you don't need to worry about buffering performance.

If the send rate is higher than the read rate, the buffer will fill up and packets will be dropped regardless of how large your buffer is. The size of the buffer doesn't prevent drops; it just determines how stale the queued data can get, i.e. the worst-case latency, as the arithmetic below illustrates.
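As a rough worked example, using the packet size and upper client count from the question:

```python
BUFFER_BYTES = 8 * 1024
PACKET_BYTES = 1024
CLIENTS = 8
RATE_HZ = 20  # packets per client per second

incoming_bytes_per_sec = CLIENTS * RATE_HZ * PACKET_BYTES
# If the reader stalls, a full buffer represents this much queued (stale) time:
worst_case_latency = BUFFER_BYTES / incoming_bytes_per_sec
print(f"{worst_case_latency * 1000:.0f} ms of data fits in the buffer")  # ~50 ms
```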

If you have N clients, each sending packets at a rate of 20/sec, then your server needs to read at a rate of at least N × 20 packets/sec. Realistically it should read faster than that: there will be variance in the timing (clocks) of the machines, particularly under load, so the server should poll more often than the calculated minimum to compensate. Alternatively, it can drain the buffer completely N times per second (however frequently you like, as long as the buffer is sized to cope), as in the sketch below.
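Here is what that per-tick drain might look like; a minimal non-blocking sketch in Python, with a hypothetical port:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 27960))  # hypothetical port
sock.setblocking(False)        # recvfrom raises BlockingIOError when empty

def drain(sock: socket.socket, max_packet: int = 1024):
    """Read every datagram currently queued on a non-blocking UDP socket."""
    packets = []
    while True:
        try:
            packets.append(sock.recvfrom(max_packet))
        except BlockingIOError:
            break  # OS receive buffer is empty
    return packets

# Called once per server tick (20-30 times a second in the question's setup):
# for data, addr in drain(sock):
#     handle(data, addr)
```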

Additionally, because you might occasionally get a delayed packet that arrives batched up with one or two others from some router along the path, I would set the buffer size somewhat larger than 8 KB (2x or 3x). That way, three batched-up packets from one client (two of which are old, and which you will discard or overwrite on reading) won't overwrite fresh packets from other clients.
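One way to do that discarding on read is sketched below. It assumes, hypothetically, that each payload begins with a 4-byte sequence number, which is not something the question specifies; without some such marker you cannot reliably tell a stale batched packet from a fresh one:

```python
import struct

def latest_per_client(packets):
    """Collapse a drained batch to the freshest packet per client address.

    Assumes (hypothetically) each payload starts with a big-endian uint32
    sequence number, so stale, batched-up packets can be detected and dropped.
    """
    freshest = {}
    for data, addr in packets:
        (seq,) = struct.unpack_from("!I", data)
        if addr not in freshest or seq > freshest[addr][0]:
            freshest[addr] = (seq, data)
    return {addr: data for addr, (seq, data) in freshest.items()}
```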