I have written a very small test program to check the value of errno when read() is interrupted by a handled signal.
    #include <stdio.h>
    #include <unistd.h>
    #include <signal.h>
    #include <errno.h>

    void handler(int sig) {
        printf("signal: %d\n", sig);
    }

    int main() {
        signal(SIGINT, handler);
        char arr[10];
        read(0, arr, 10);
        perror("emsg");
        return 0;
    }
According to everything I know and the man page on read(2):

    EINTR  The call was interrupted by a signal before any data was
           read; see signal(7).

read should return -1 and set errno to EINTR. However, the output of the program suggests that it blocks again on read after returning from the signal handler. This makes absolutely no sense to me and I cannot figure out what is going wrong.
Here is the output I got:
    $ ./a.out
    ^Csignal: 2
    ^Csignal: 2
    ^Csignal: 2
    ^Csignal: 2
    hello
    emsg: Success
My question is different from this one. The latter does not talk anywhere about what happens when syscalls are interrupted; the crux of that discussion is which of the two should be used. Additionally, this answer on the same thread says that signal() calls sigaction() underneath, so why is the behaviour in the case of system calls different for the two?
    According to everything I know and the man page on read(2), "EINTR: The call was interrupted by a signal before any data was read; see signal(7)." read should return -1 and set errno to EINTR.
You are reading too much into that. read() can return -1 and set errno to EINTR, which combination should be interpreted to mean that it was interrupted by a signal (before any data were read). It's even safe to say that if read fails on account of being interrupted by a signal, then that's the way to expect it to manifest. But that does not mean that read is guaranteed to fail in the event that a signal is received while it is blocking.
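To make that interpretation concrete, here is a minimal sketch of the usual EINTR-handling pattern (the helper name read_retry is mine, not from the question): test for the -1/EINTR combination and decide what to do about it, typically by retrying.

    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical helper: retry read() whenever it is interrupted by a
       signal before transferring any data. */
    ssize_t read_retry(int fd, void *buf, size_t count) {
        ssize_t n;
        do {
            n = read(fd, buf, count);
        } while (n == -1 && errno == EINTR);  /* interrupted: try again */
        return n;  /* >= 0 on success; -1 with errno set on a real failure */
    }

Whether retrying is appropriate depends on the application; the point is that -1/EINTR is a combination to test for, not an outcome to rely on.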
    However, the output of the program suggests that it blocks again on read after returning from the signal handler.
That is indeed one possible behavior. In particular, it is part of the BSD semantics for signal() and signal handling, and that is the default for Glibc (subject to the _BSD_SOURCE feature-test macro), so it is what you would expect by default both on macOS (because BSD) and on most Linux systems (because Glibc). The "Portability" section of the manual page for Glibc's implementation of signal() goes into the matter in some detail.
    Additionally this answer on the same thread says that signal() calls sigaction() underneath, so why is the behaviour in case of system calls different for the two?
The key thing about sigaction() is that it provides mechanisms for specifying all the details of how the signal whose disposition is set gets handled, including, notably, whether system calls interrupted by that signal are automatically restarted (the SA_RESTART flag) and which signals are blocked while the handler runs (the sa_mask member).
Thus, if some implementation of signal() operates by calling sigaction(), that has no inherent implication for these or other ancillary behaviors upon subsequent receipt of the signal. Nor does it mean that registering a handler for that signal directly via sigaction() must produce the same effect as doing so indirectly via signal(); it all depends on the arguments to sigaction(). The caller gets to choose.
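As an illustration of that point, here is a rough sketch of how a BSD-style signal() might be layered on top of sigaction(). This is an assumption about the general shape, not Glibc's actual source; the essential detail is that the wrapper chooses SA_RESTART, which is precisely what makes interrupted system calls resume:

    #include <signal.h>

    typedef void (*handler_fn)(int);  /* local typedef; POSIX defines no sighandler_t */

    /* Hypothetical BSD-style signal(), expressed in terms of sigaction(). */
    handler_fn bsd_style_signal(int signum, handler_fn handler) {
        struct sigaction act = { .sa_handler = handler, .sa_flags = SA_RESTART };
        struct sigaction old;
        sigemptyset(&act.sa_mask);  /* block nothing extra while the handler runs */
        if (sigaction(signum, &act, &old) == -1)
            return SIG_ERR;
        return old.sa_handler;      /* signal() returns the previous handler */
    }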
Note in particular the advice from the manpage linked above:

    The only portable use of signal() is to set a signal's disposition to SIG_DFL or SIG_IGN. The semantics when using signal() to establish a signal handler vary across systems (and POSIX.1 explicitly permits this variation); do not use it for this purpose.

(Emphasis in the original.)
Supposing that you want system calls not to be resumed after being interrupted by SIGINT and handled by your handler, you get that from sigaction by avoiding specifying SA_RESTART among the flags. Most likely, you don't want any of the other flags either, so that would mean something (for example) like this:

    // with this form, the flags and signal mask are initialized by default to all-bits-zero
    sigaction(SIGINT, &(struct sigaction){ .sa_handler = handler }, NULL);
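Putting that together, a minimal sketch of the original test program, modified to install the handler via sigaction() without SA_RESTART (an illustration, not the question's original code):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    void handler(int sig) {
        /* printf() is not async-signal-safe; it is kept here only to
           mirror the original test program. */
        printf("signal: %d\n", sig);
    }

    int main(void) {
        /* sa_flags is zero-initialized, so SA_RESTART is NOT set. */
        struct sigaction sa = { .sa_handler = handler };
        sigaction(SIGINT, &sa, NULL);

        char arr[10];
        if (read(0, arr, sizeof arr) == -1 && errno == EINTR)
            fprintf(stderr, "read: interrupted by a signal\n");
        else
            perror("emsg");
        return 0;
    }

With this version, a single Ctrl-C should cause read() to fail with EINTR instead of blocking again.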