linux · network-programming · tcp

Why UDP socket's buffer size doesn't support autotuning?


Linux supports autotuning a TCP socket's buffer size:

There is logic in the Linux kernel that adjusts the buffer size limits and the receive window based on actual packet processing. It takes a number of things into consideration, including TCP session RTT, L7 read rates, and the amount of available host memory.

But why doesn't UDP support this feature?

Thanks.


Solution

  • The goal of TCP is reliable data delivery, so anything that helps achieve this goal is useful. Automatically tuning buffer sizes based on the usage pattern clearly helps here.

    But reliability isn't a goal for UDP. Instead, the protocol is often used for things like real-time audio and video, where it is important to maintain low latency even if packets are lost. It is also used for VPNs, where favoring reliability over latency in the underlying transport layer leads to unwanted effects like TCP meltdown.

    Having the system automatically tune for reliability by growing the buffer when the application reads slowly would reduce packet loss but increase maximum latency - the opposite of what such applications require. It is therefore best to leave the application in control of its buffers, because it knows best what it needs.

    For another case where a well-intended preference for reliability over latency led to unintended problems in the network, see bufferbloat.