Tags: java, c, network-programming, udp, datagram

Java dropping half of UDP packets


I have a simple client/server setup. The server is written in C, and the client that queries the server is written in Java.

My problem is that, when I send bandwidth-intensive data such as video frames over the connection, up to half the packets are dropped. I make sure to fragment the data properly into UDP packets on the server side (the UDP length field is 16 bits, so a single datagram can carry at most 65,507 bytes of payload). I verified that the server is sending the packets by printing the return value of sendto(). But Java doesn't seem to be receiving half of the data.
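
To make the receive side concrete, here is a minimal sketch of the kind of Java receive loop I mean; the port number and buffer sizes are placeholders, not my actual values:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class UdpFrameReceiver {
        public static void main(String[] args) throws Exception {
            // Port and buffer sizes below are placeholders.
            try (DatagramSocket socket = new DatagramSocket(5000)) {
                // Ask the OS for a larger socket receive buffer; bursts of large
                // datagrams overflow the default buffer and are dropped silently.
                socket.setReceiveBufferSize(4 * 1024 * 1024);
                System.out.println("SO_RCVBUF granted: " + socket.getReceiveBufferSize());

                // 65,507 bytes is the largest possible UDP payload, so this
                // buffer can always hold a whole datagram.
                byte[] buf = new byte[65507];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                while (true) {
                    socket.receive(packet);  // blocks until one datagram arrives
                    handleFragment(packet.getData(), packet.getLength());
                }
            }
        }

        private static void handleFragment(byte[] data, int length) {
            // Frame reassembly happens here in the real client.
        }
    }
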

Furthermore, when I switch to TCP, all the video frames get through, but latency builds up, adding several seconds of delay after only a few seconds of runtime.

Is there anything obvious that I'm missing? I just can't seem to figure this out.


Solution

  • Get a network tool like Wireshark so you can see what is happening on the wire.

    UDP makes no retransmission attempts, so if a packet is dropped anywhere along the path, it is up to the application to deal with the loss. TCP, on the other hand, works hard to deliver every byte to the application in order, discarding duplicates and requesting retransmission of lost segments on its own. If you are seeing high latency with TCP, I'd bet you'll also see a lot of packet loss there, which will show up as retransmissions from the server. If you don't see TCP retransmissions, perhaps the client isn't handling the data fast enough to keep up (see the sketch below).
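
    If the client turns out to be the bottleneck, one common fix is to dedicate a thread to draining the socket and hand the datagrams to a bounded queue, so slow processing shows up as explicit drops in your own code rather than silent kernel-buffer overflows. A minimal sketch (the port, buffer sizes, and queue capacity are assumptions, not values from the question):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.util.Arrays;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class FastDrainReceiver {
            public static void main(String[] args) throws Exception {
                // Placeholder port, buffer size, and queue capacity.
                DatagramSocket socket = new DatagramSocket(5000);
                socket.setReceiveBufferSize(4 * 1024 * 1024);
                BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(256);

                // The receive thread does nothing but pull datagrams off the
                // socket, so the kernel buffer is emptied as fast as possible.
                Thread receiver = new Thread(() -> {
                    byte[] buf = new byte[65507];
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    try {
                        while (true) {
                            socket.receive(packet);
                            byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
                            // offer() rather than put(): if the consumer is too slow,
                            // drop here visibly instead of stalling the receive loop.
                            if (!frames.offer(copy)) {
                                System.err.println("Consumer too slow, dropping datagram");
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                receiver.setDaemon(true);
                receiver.start();

                // The consumer decodes/displays frames at its own pace.
                while (true) {
                    process(frames.take());
                }
            }

            private static void process(byte[] datagram) {
                // Frame reassembly and decoding would go here.
            }
        }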