It's hard to explain this behavior, so here's a reproducible example (tested on macOS).
First, I have the following C file. The details aren't important, but I essentially use the read
system call to read 16 bytes from standard input, one byte at a time, stopping early if an EOF
is encountered. Note that read returns 0 on EOF.
// test.c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[1];
    for (int i = 0; i < 16; i++) {
        // read() returns 0 once EOF is reached
        if (read(STDIN_FILENO, buf, 1) == 0) {
            printf("EOF encountered! Number of bytes read: %d\n", i);
            return 0;
        }
    }
    printf("Read all 16 bytes\n");
    return 0;
}
Suppose I compile this file into a binary called test. Here's the program in action:
$ echo 'sixteen bytes!!' | ./test
Read all 16 bytes
$ echo '' | ./test
EOF encountered! Number of bytes read: 1
$ ./test # Waits for user input
Makes sense, right? I can even take the last command and run it as a background process (although this is quite useless):
$ ./test &
[1] 19204
[1] + suspended (tty input) ./test
Suppose I take this command and put it in the following shell script (called huh.sh):
#!/bin/sh
./test &
When I run this shell script, here's the output:
$ ./huh.sh
EOF encountered! Number of bytes read: 0
This means read immediately encountered an EOF, and this only happens in the context of a shell script.
I see similar behavior if I replace test with another program that is sensitive to EOF. For instance, if I run node & directly in a terminal, I can see the node process in the output of ps. However, if I run node & in a shell script, it immediately exits.
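To make this more concrete, here's a sketch of how the background job's standard input can be inspected with lsof (which ships with macOS); sleep 30 is just an arbitrary stand-in for a long-running command, and the output differs depending on whether you run this interactively or from a script:

#!/bin/sh
# inspect-stdin.sh (hypothetical name)
sleep 30 &
# -p selects the background job's PID, -d 0 selects its fd 0, -a ANDs the two.
lsof -a -p $! -d 0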
Can someone explain this?
Job control is enabled by default in interactive shells, but disabled by default in shell scripts. Here's the relevant quote from the POSIX Shell Command Language specification (2.9.3 Lists, under "Asynchronous Lists"):
If job control is disabled (see set, -m), the standard input for an asynchronous list, before any explicit redirections are performed, shall be considered to be assigned to a file that has the same properties as /dev/null. This shall not happen if job control is enabled.
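This also points at two fixes for huh.sh, sketched below. The first follows directly from the quote: re-enable job control with set -m (whether a non-interactive /bin/sh fully honors this can vary between implementations, so treat it as an assumption to verify). The second relies on the clause "before any explicit redirections are performed": an explicit redirection overrides the /dev/null-like assignment.

#!/bin/sh
# Fix 1: turn job control back on, so the asynchronous command
# keeps its original standard input (assumption: your sh honors
# set -m in a non-interactive shell).
set -m
./test &

# Fix 2 (alternative): redirect stdin explicitly; the substitution
# only happens before explicit redirections.
# ./test < /dev/tty &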