I'm working on implementing the sliding window protocols in C++ for an assignment, using UDP (SOCK_DGRAM) sockets. Occasionally the program must send a large number of packets, up to the window size, back to back. So far I have not increased the window size past 30, but it should eventually be able to reach 256. The packet size is taken from user input, so it could be anything reasonable. When the packet size is small, like 512 bytes, there are no problems. When it is larger, say 40KB, the first few packets are read correctly, and then my readNBytes() function suddenly hangs on one of them after reading only part of it. My assumption is that the operating system's receive buffer fills up and part of one packet gets thrown away: the part that made it into the buffer is read, and readNBytes() then waits forever for the rest, which the OS discarded.
When this happens, are there any flags set by the OS for me to read? Ideally, I would like to force the OS to throw away the entire packet if it does not fit into the receive buffer, instead of just taking part of it. IP_DONTFRAG is not defined on my system, so I don't know how to do this. I would also settle for a way to make the receive buffer size a multiple of my packet size, so that a packet couldn't partially fit into the buffer. What is the best way to overcome this issue?
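In case it helps, this is roughly what I had in mind for the buffer-sizing idea. It is only a sketch, not my actual code: sizeReceiveBuffer, packetSize, and windowSize are placeholder names, and I've read that on Linux the kernel doubles the requested SO_RCVBUF value and caps it at net.core.rmem_max, so I read the value back to check what I actually got:

    #include <sys/socket.h>
    #include <cstdio>

    // Ask the OS for a receive buffer large enough to hold a full window
    // of packets. The kernel may round or cap the value, so verify it.
    bool sizeReceiveBuffer(int sock, int packetSize, int windowSize)
    {
        int want = packetSize * windowSize;
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) < 0) {
            perror("setsockopt(SO_RCVBUF)");
            return false;
        }
        int got = 0;
        socklen_t len = sizeof(got);
        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
            std::printf("asked for %d bytes, kernel gave %d\n", want, got);
        return true;
    }

Even with this, I don't see how to guarantee the buffer size is an exact multiple of the packet size, which is why I'm asking.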
The OS is not going to pass half a packet to the application.
On the send side, fragmentation is IP's responsibility: an IP packet can be up to 64K, and IP will fragment it to fit the underlying layer's MTU.
On the receive side the opposite happens: reassembly. With UDP you either receive the whole datagram or nothing. The only way to receive just part of one is if your application's receive buffer is too small, in which case some socket implementations truncate the datagram to fit, even though the whole thing arrived.
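On Linux you can at least detect that truncation: passing MSG_TRUNC as a flag to recv() makes it return the real length of the datagram even when the buffer was too small. A minimal sketch, assuming a bound SOCK_DGRAM socket named sock and maxPacket as the largest packet size your sender will ever use (both are placeholders for your own variables):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cstdio>
    #include <vector>

    // Receive exactly one whole datagram, or report that it was truncated.
    // With MSG_TRUNC (Linux), recv() returns the datagram's real length
    // even if it exceeded the buffer, so a short read is detectable.
    ssize_t recvWholePacket(int sock, std::vector<char>& buf, size_t maxPacket)
    {
        buf.resize(maxPacket);
        ssize_t n = recv(sock, buf.data(), buf.size(), MSG_TRUNC);
        if (n < 0) {
            perror("recv");
            return -1;
        }
        if (static_cast<size_t>(n) > buf.size()) {
            // Datagram was bigger than maxPacket; the excess was discarded.
            std::fprintf(stderr, "truncated: datagram was %zd bytes\n", n);
            return -1;
        }
        buf.resize(static_cast<size_t>(n));  // exactly one complete datagram
        return n;
    }

Each recv() on a datagram socket consumes exactly one datagram, so sizing the application buffer to the largest expected packet and checking the returned length avoids waiting for bytes the kernel has already thrown away, which may be what your readNBytes() loop is doing.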