Let's say I have a sender application and a receiver application that communicate via UDP.
In the sender application, I send some data in two separate calls. First, I send these 15 bytes:
[MYHEADER]hello
...then, immediately thereafter, I send another 15 bytes:
[MYHEADER]world
Now, in the receiver application, I'm listening on the same port with a UDP socket that's bound to the same address. Let's say that both messages have arrived (and in the same order) since the last time I checked the receiver socket.
Here's some pseudo-code that shows how I'm polling the socket for incoming data each frame:
uint32 PendingSize;
while (Socket->HasPendingData(PendingSize))
{
    uint32 BytesRead;
    uint8 MessageData[kMaxMessageSize];
    if (Socket->Recv(MessageData, kMaxMessageSize, BytesRead))
    {
        // Do stuff here
        // Will BytesRead be equal to PendingSize?
    }
}
HasPendingData wraps a call to ioctlsocket with FIONREAD, returning whether data is waiting in the receive buffer and populating PendingSize with the number of bytes waiting. Recv calls recv to read that data into a local buffer. If it returns true, then I respond to the data I've received.
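For reference, the wrappers boil down to something like this (a simplified sketch of the underlying Winsock calls, shown as free functions for brevity; the real code has more error handling):

#include <winsock2.h>
#include <cstdint>

bool HasPendingData(SOCKET Socket, uint32_t& PendingSize)
{
    // FIONREAD reports how many bytes are waiting in the socket's receive buffer.
    u_long Pending = 0;
    if (ioctlsocket(Socket, FIONREAD, &Pending) != 0)
        return false;

    PendingSize = static_cast<uint32_t>(Pending);
    return PendingSize > 0;
}

bool Recv(SOCKET Socket, uint8_t* Buffer, uint32_t BufferSize, uint32_t& BytesRead)
{
    // recv() pulls data out of the receive buffer into Buffer.
    int Result = recv(Socket, reinterpret_cast<char*>(Buffer),
                      static_cast<int>(BufferSize), 0);
    if (Result == SOCKET_ERROR)
        return false;

    BytesRead = static_cast<uint32_t>(Result);
    return true;
}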
Here's my question. Which of these scenarios accurately reflects what would happen in this situation?
Option A.
1. HasPendingData returns true and shows a pending size of 15 bytes. Recv gives me the message [MYHEADER]hello.
2. HasPendingData returns true and shows a pending size of 15 bytes. Recv gives me the message [MYHEADER]world.
3. HasPendingData returns false.

Option B.
1. HasPendingData returns true and shows a pending size of 30 bytes. Recv gives me the message [MYHEADER]hello[MYHEADER]world.
2. HasPendingData returns false.

Any insight is appreciated. Thanks!
UDP datagrams are individual and self-contained. send() and sendto() send a new datagram each time. recv() and recvfrom() read a single whole datagram. If your buffer is too small to receive a given datagram, you will get a WSAEMSGSIZE error, and that datagram will be lost unless you specify the MSG_PEEK flag.
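For example, here is a rough sketch of that behavior for your two messages (sockets are assumed to be created, bound, and connected/addressed elsewhere; the function names are illustrative only):

#include <winsock2.h>
#include <cstdio>

// Two sendto() calls produce two separate datagrams on the wire.
void SendBoth(SOCKET Sender, const sockaddr_in& ReceiverAddr)
{
    sendto(Sender, "[MYHEADER]hello", 15, 0,
           reinterpret_cast<const sockaddr*>(&ReceiverAddr),
           static_cast<int>(sizeof(ReceiverAddr)));
    sendto(Sender, "[MYHEADER]world", 15, 0,
           reinterpret_cast<const sockaddr*>(&ReceiverAddr),
           static_cast<int>(sizeof(ReceiverAddr)));
}

// Each recv() returns at most one whole datagram, never a concatenation of two.
void ReadBoth(SOCKET Receiver)
{
    char Buffer[1024];

    int First = recv(Receiver, Buffer, sizeof(Buffer), 0);
    printf("first recv: %d bytes\n", First);   // 15 -> "[MYHEADER]hello"

    int Second = recv(Receiver, Buffer, sizeof(Buffer), 0);
    printf("second recv: %d bytes\n", Second); // 15 -> "[MYHEADER]world"

    // If Buffer had been smaller than 15 bytes, each recv() would instead have
    // failed with WSAEMSGSIZE, and the unread remainder of that datagram would
    // have been discarded (unless MSG_PEEK was used).
}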
FIONREAD tells you the total number of raw bytes in the socket's receive buffer, not the number of datagrams or the size of those datagrams. This is clearly stated in the documentation:
FIONREAD
Use to determine the amount of data pending in the network's input buffer that can be read from socket s. The argp parameter points to an unsigned long value in which ioctlsocket stores the result. FIONREAD returns the amount of data that can be read in a single call to the recv function, which may not be the same as the total amount of data queued on the socket. If s is message oriented (for example, type SOCK_DGRAM), FIONREAD still returns the amount of pending data in the network buffer, however, the amount that can actually be read in a single call to the recv function is limited to the data size written in the send or sendto function call.
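Applied to your two 15-byte datagrams, a check along these lines (a hypothetical helper, with return values ignored for brevity) shows the difference between what FIONREAD reports and what a single recv() returns:

#include <winsock2.h>
#include <cstdio>

void ShowFionreadBehavior(SOCKET Receiver)
{
    // Assume both 15-byte datagrams have already arrived and are queued.
    u_long Pending = 0;

    ioctlsocket(Receiver, FIONREAD, &Pending);
    printf("pending before any recv: %lu\n", Pending); // 30: total bytes queued

    char Buffer[1024];
    int BytesRead = recv(Receiver, Buffer, sizeof(Buffer), 0);
    printf("bytes read: %d\n", BytesRead);              // 15: one whole datagram only

    ioctlsocket(Receiver, FIONREAD, &Pending);
    printf("pending after one recv: %lu\n", Pending);   // 15: the second datagram remains
}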
If you need to check the size of the next datagram, call recv() or recvfrom() with the MSG_PEEK flag. Once you have determined the actual size of the datagram, you can read it without the flag so it is removed from the socket buffer. Otherwise, just allocate a buffer that is large enough to accommodate the largest datagram you will ever receive, or even just 65535 bytes, which is the largest size that UDP supports.
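A sketch of the MSG_PEEK approach might look like this (illustrative only; a maximum-size peek buffer is used so the peek itself never fails with WSAEMSGSIZE):

#include <winsock2.h>
#include <vector>

bool ReadNextDatagram(SOCKET Receiver, std::vector<char>& OutMessage)
{
    // Peek with a maximum-size buffer: the return value is the size of the next
    // datagram, and MSG_PEEK leaves that datagram in the receive buffer.
    char PeekBuffer[65535];
    int DatagramSize = recv(Receiver, PeekBuffer, sizeof(PeekBuffer), MSG_PEEK);
    if (DatagramSize == SOCKET_ERROR)
        return false; // e.g. WSAEWOULDBLOCK on a non-blocking socket with nothing queued

    // Read it again without the flag so it is removed from the socket buffer.
    OutMessage.resize(static_cast<size_t>(DatagramSize));
    int BytesRead = recv(Receiver, OutMessage.data(), DatagramSize, 0);
    return BytesRead != SOCKET_ERROR;
}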
So, to answer your question, what will really happen in your example is Option A, except that the first HasPendingData will report 30 pending bytes instead of 15.