I am working with code that uses XML over TCP communication. The implementation sets a 10 second timeout on each send() and recv() with setsockopt(), so the calls wait for the whole data. After it had been running for a while I found that recv() sometimes does not wait for the timeout and returns -1. While trying to track the problem down I added a sleep(2) to the code and noticed that the sleep() was interrupted every time I hit the recv() error. Based on this I think the root of the problem is a signal, but I have not been able to find out which signal it is.

My question is the following: can recv()'s wait be interrupted by a signal?

Note: recv()'s timeout is set with setsockopt().
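For reference, a minimal sketch of the timeout setup described above, assuming a connected TCP socket sock; the helper name set_io_timeout and the variable names are only illustrative:

    #include <sys/socket.h>
    #include <sys/time.h>

    /* Set a 10 second timeout on both recv() and send() for `sock`.
     * When the timeout expires, recv() returns -1 with errno set to
     * EAGAIN/EWOULDBLOCK (not EINTR). */
    static int set_io_timeout(int sock)
    {
        struct timeval tv = { .tv_sec = 10, .tv_usec = 0 };

        if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1)
            return -1;
        if (setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) == -1)
            return -1;
        return 0;
    }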
EDIT: Here is the solution (Thanks for the help):
for (;;)
{
    /* recv() arguments: socket, buffer, buffer size, flags */
    rsize = recv(socket, buf, bufsize, 0);
    if (rsize == -1)
    {
        if (errno == EINTR)
            continue;   /* interrupted by a signal: retry the recv() */
        break;          /* real error: give up */
    }
    break;              /* got data (rsize == 0 means the peer closed the connection) */
}
On Linux (and UNIX in general), a call to recv() can be interrupted by delivery of a signal:

    [EINTR] The recv() function was interrupted by a signal that was caught, before any data was available. (POSIX)
If you encounter EINTR, or detect a shorter than expected message size, simply restart your recv() (adjusted for however many bytes have been read so far).
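A minimal sketch of that restart pattern, assuming you know how many bytes to expect; the helper name recv_all and its parameters are illustrative, not from the original code:

    #include <errno.h>
    #include <sys/socket.h>

    /* Read exactly `len` bytes into `buf`, restarting recv() after EINTR
     * and after partial reads. Returns 0 on success, -1 on error or if
     * the peer closes the connection early. */
    static int recv_all(int sock, void *buf, size_t len)
    {
        char *p = buf;
        size_t left = len;

        while (left > 0) {
            ssize_t n = recv(sock, p, left, 0);
            if (n == -1) {
                if (errno == EINTR)
                    continue;    /* interrupted by a signal: just retry */
                return -1;       /* real error (including a timeout) */
            }
            if (n == 0)
                return -1;       /* connection closed before all data arrived */
            p += n;              /* advance past the bytes already read */
            left -= n;
        }
        return 0;
    }

The caller passes the expected message length, so after a partial read the loop simply continues from where the previous recv() left off.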
If you use sigaction() to establish your signal handler, you can set the SA_RESTART flag so that system calls are automatically restarted after the signal handler is called. recv() is one of the calls that will be restarted under Linux (details found with man 7 signal).
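For illustration, a sketch of installing a handler with SA_RESTART via sigaction(); the signal (SIGUSR1) and the do-nothing handler are placeholders, not part of the original code:

    #include <signal.h>
    #include <string.h>

    static void on_sigusr1(int signo)
    {
        (void)signo;    /* placeholder handler: does nothing */
    }

    /* Install a handler for SIGUSR1 with SA_RESTART so that interrupted
     * system calls such as recv() are restarted automatically. */
    static int install_handler(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigusr1;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;

        return sigaction(SIGUSR1, &sa, NULL);
    }

As an aside, man 7 signal also notes that receive calls on a socket with an SO_RCVTIMEO timeout (as in the question) fail with EINTR regardless of SA_RESTART, so the retry loop above remains the safer fix in this case.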