java, sockets, udp, datagram

RTT decreases when the packet size is increased


I am getting a rather unintuitive result when trying to measure the RTT (round-trip time) between a UDP client and a server. When I use a packet size of 20 bytes the RTT is 4.0 ms, but when I increase the packet size to 15000 bytes the RTT drops to 2.8 ms. Why is this happening? Shouldn't the RTT increase as the packet size increases?

Here's the code for the UDP server. I run this as java RTTServer 8080.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class RTTServer {
    final static int BUFSIZE = 1024;

    public static void main(String[] args) {
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket receivePacket = new DatagramPacket(bufferReceive, BUFSIZE);

        // Echo each datagram back to its sender; a fresh socket is opened
        // (and closed) on the listening port for every request.
        for (;;)
        try (DatagramSocket aSocket = new DatagramSocket(Integer.parseInt(args[0]))) {
            aSocket.receive(receivePacket);
            DatagramPacket sendPacket = new DatagramPacket(receivePacket.getData(),
                    receivePacket.getLength(), receivePacket.getAddress(), receivePacket.getPort());
            aSocket.send(sendPacket);
        } catch (Exception e) {
            System.out.println("Socket: " + e.getMessage());
        }
    }
}

Here's the code for the UDP client. I run this as java RTTClient 192.168.1.20 8080 15000.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RTTClient {
    final static int BUFSIZE = 1024;
    final static int COUNT = 1000;

    public static void main(String[] args) throws UnknownHostException {
        InetAddress aHost = InetAddress.getByName(args[0]);
        byte[] dataArray = args[2].getBytes();   // payload built from the third command-line argument
        byte[] bufferReceive = new byte[BUFSIZE];
        DatagramPacket requestPacket = new DatagramPacket(
                dataArray, dataArray.length, aHost, Integer.parseInt(args[1]));

        DatagramPacket responsePacket = new DatagramPacket(bufferReceive, BUFSIZE);

        long rtts = 0;

        // Send COUNT requests, timing each send/receive pair; a fresh socket
        // is opened (and closed) for every request.
        for (int i = 0; i < COUNT; i++) {
            try (DatagramSocket aSocket = new DatagramSocket()) {
                long start = System.currentTimeMillis();
                aSocket.send(requestPacket);
                aSocket.receive(responsePacket);
                System.out.println(i);
                rtts += System.currentTimeMillis() - start;
            } catch (Exception e) {
                System.out.println("Socket: " + e.getMessage());
            }
        }
        System.out.println("RTT = " + (double) rtts / (double) COUNT);
    }
}

Solution

  • What you are measuring is how fast you can ask the client-side OS to send UDP packets. You are not measuring how fast the server receives them ... or indeed, whether it is receiving them at all (see the client-side sketch at the end of this answer).

    What I suspect is happening is that when you crank up the packet size on the client side, you are overwhelming the client-side UDP stack (in the kernel). The OS silently drops a large percentage of the packets, which it can do faster than actually accepting them for transmission.

    You can get some evidence to support (or not) this theory by measuring how many packets are being received.
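
    For example (a minimal sketch of my own, not code from the question or the answer), the server can keep a running count of the datagrams it actually receives and echoes. The class name CountingRTTServer, the 65535-byte buffer, and the reporting interval are arbitrary choices:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    // Echo server that counts every datagram it receives, so the count can be
    // compared with the number of requests the client believes it sent.
    public class CountingRTTServer {
        final static int BUFSIZE = 65535;   // big enough for any UDP payload

        public static void main(String[] args) throws Exception {
            byte[] buffer = new byte[BUFSIZE];
            DatagramPacket packet = new DatagramPacket(buffer, BUFSIZE);
            long received = 0;

            try (DatagramSocket socket = new DatagramSocket(Integer.parseInt(args[0]))) {
                for (;;) {
                    packet.setLength(BUFSIZE);   // reset the length before each receive
                    socket.receive(packet);
                    received++;
                    if (received % 100 == 0) {
                        System.out.println("datagrams received so far: " + received);
                    }
                    socket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                            packet.getAddress(), packet.getPort()));
                }
            }
        }
    }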

    The other issue that may be affecting this is that UDP messages too large for a single IP packet are fragmented into multiple IP packets and reassembled at the receiver. If packet loss is causing reassembly to fail, ICMP "Time Exceeded" messages are sent back to the sender. This might cause it to do something unexpected ...
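
    On the first point, one way to make the client-side numbers honest about drops (again only a sketch, not code from the answer; the class name TimedRTTClient and the 1-second timeout are assumptions) is to set a receive timeout on the socket and average only the requests that actually got a reply, so silently lost packets show up as timeouts instead of being ignored:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    // Client that averages RTT only over answered requests and counts timeouts,
    // so packet loss is visible instead of silently skewing the average.
    public class TimedRTTClient {
        final static int COUNT = 1000;
        final static int TIMEOUT_MS = 1000;   // arbitrary; tune to your network

        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName(args[0]);
            int port = Integer.parseInt(args[1]);
            byte[] payload = new byte[Integer.parseInt(args[2])];   // payload of the requested size
            byte[] receiveBuffer = new byte[65535];

            long totalRttNanos = 0;
            int replies = 0, timeouts = 0;

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(TIMEOUT_MS);
                for (int i = 0; i < COUNT; i++) {
                    DatagramPacket request = new DatagramPacket(payload, payload.length, host, port);
                    DatagramPacket response = new DatagramPacket(receiveBuffer, receiveBuffer.length);
                    long start = System.nanoTime();
                    socket.send(request);
                    try {
                        socket.receive(response);
                        totalRttNanos += System.nanoTime() - start;
                        replies++;
                    } catch (SocketTimeoutException e) {
                        timeouts++;   // the request or the reply was lost
                    }
                }
            }
            System.out.println("replies = " + replies + ", timeouts = " + timeouts);
            if (replies > 0) {
                System.out.println("mean RTT over answered requests = "
                        + (totalRttNanos / 1_000_000.0) / replies + " ms");
            }
        }
    }

    Comparing the reply/timeout counts for a 20-byte run and a 15000-byte run should show whether the larger datagrams are actually being lost.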