I've been looking for a way to use Bash redirection to re-route all output streams (1 (STDOUT), 2 (STDERR), 3, etc.) to named pipes. Here is a script that I wrote to test this:
#!/bin/bash
pipe1="/tmp/pipe1"
pipe2="/tmp/pipe2"
pipe3="/tmp/pipe3"
mkfifo "${pipe1}"
mkfifo "${pipe2}"
mkfifo "${pipe3}"
trap "rm -rf ${pipe1} ${pipe2} ${pipe3}" EXIT
printer() {
    echo "OUT" >&1
    echo "ERR" >&2
    echo "WRN" >&3
}
# Usage: mux
mux() {
    cat "${pipe1}"
    cat "${pipe2}"
    cat "${pipe3}"
}
printer 1>"${pipe1}" 2>"${pipe2}" 3>"${pipe3}"
mux
This code seems as if it should work, but the terminal hangs indefinitely until the script is killed. As I understand it, pipes are like files in that they have an inode, but rather than writing to disk, they pass their data through memory.
That being said, a pipe should be accessible like any other file. I know the script hangs on the line calling the printer function. I have also tested several combinations of subshells and more advanced redirections (namely, redirecting to STDOUT to handle each of the other pipes). Perhaps I am missing a terminator in the named pipe (whereby it is locked and cannot be accessed by the mux function). If that is the case, how is this achieved?
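For reference, the blocking behaviour can be reproduced with a single FIFO and no functions at all. A minimal sketch (hypothetical /tmp/demo path): opening a FIFO for writing blocks until some process opens it for reading, so a lone writer hangs before any data is written.
mkfifo /tmp/demo
# A writer by itself hangs: the open() blocks until a reader opens the FIFO.
#   echo "hello" > /tmp/demo
# With a concurrent reader, both opens complete and the data flows:
cat /tmp/demo &
echo "hello" > /tmp/demo
wait
rm -f /tmp/demo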
EDIT: After more testing, it appears that the issue only happens when redirecting to multiple pipes. For example:
#!/bin/bash
pipe1="/tmp/pipe1"
mkfifo "${pipe1}"
trap "rm -rf ${pipe1}" EXIT
(exec >"${pipe1}"; echo "Test") &
cat < "${pipe1}"
will work as expected. However, adding a second redirection (STDERR in this case) will break this, forcing it to hang:
#!/bin/bash
pipe1="/tmp/pipe1"
pipe2="/tmp/pipe2"
mkfifo "${pipe1}"
mkfifo "${pipe2}"
trap "rm -rf ${pipe1} ${pipe2}" EXIT
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}"
cat < "${pipe2}"
More specifically, the code hangs once the exec >"${pipe1}" 2>"${pipe2}" statement executes. I imagine that adding more subshells in certain places would help, but this could become messy/unwieldy. I did learn, however, that named pipes are meant to bridge data between shells (hence the added subshell and the background operator &).
If you want to be able to read the content after the file descriptor has been closed, you need to just use files. The thing with pipes is that the reading command needs to be running before the command that writes.
In a setup like:
cmd1 | cmd2 | cmd3
all three commands are started together and run concurrently, so each reader is already running when its writer starts producing output. So if you want to set this up with FIFOs, you would need to open each FIFO for reading in parallel and then call printer:
printer() {
    echo "OUT" >&1
    echo "ERR" >&2
    echo "WRN" >&3
}

# Usage: mux
mux() {
    cat "${pipe1}" &
    cat "${pipe2}" &
    cat "${pipe3}"
}
mux &
printer 1>"${pipe1}" 2>"${pipe2}" 3>"${pipe3}"
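Putting the pieces together with the question's original setup, a complete version might look like this (a sketch, reusing the /tmp paths and the trap from the question):
#!/bin/bash
pipe1="/tmp/pipe1"
pipe2="/tmp/pipe2"
pipe3="/tmp/pipe3"
mkfifo "${pipe1}" "${pipe2}" "${pipe3}"
trap "rm -rf ${pipe1} ${pipe2} ${pipe3}" EXIT

printer() {
    echo "OUT" >&1
    echo "ERR" >&2
    echo "WRN" >&3
}

mux() {
    cat "${pipe1}" &
    cat "${pipe2}" &
    cat "${pipe3}"
    wait    # reap the two background cats before returning
}

mux &
printer 1>"${pipe1}" 2>"${pipe2}" 3>"${pipe3}"
wait    # let mux finish draining before the trap removes the pipes
Because the three cat processes open their read ends concurrently, all three opens performed by the printer redirections can complete, and each cat exits once printer returns and the write ends are closed.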
The shell will block on this snippet:
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}"
cat < "${pipe2}"
It blocks at cat < "${pipe1}" because you need to read from both pipes for the exec to proceed: the subshell blocks while opening "${pipe2}" for writing, so it never writes to or closes "${pipe1}", and the parent's first cat never sees data or EOF. Reading both pipes in parallel breaks the deadlock:
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}" &
cat < "${pipe2}"
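An alternative sketch (same pipe paths, assuming pipe2 has also been created with mkfifo): pre-open both read ends with exec, so that each open in the subshell has a matching reader, and then drain the pipes one at a time:
(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
exec 3<"${pipe1}" 4<"${pipe2}"   # each open here unblocks the matching
                                 # open for writing in the subshell
cat <&3
cat <&4
exec 3<&- 4<&-
This works because small writes fit in the kernel pipe buffer; once the subshell exits, its write ends are closed and each cat sees EOF.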
If you want buffered output from a command, i.e. to read the output of the command after it has written something or exited, just use files for that; they are actually called logs.
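For contrast, a minimal sketch of the file-based approach (hypothetical /tmp/out.log path): a regular file keeps its contents after the writer exits, so the reader and the writer do not have to overlap in time.
echo "OUT" > /tmp/out.log    # the writer runs and exits
cat /tmp/out.log             # the reader can run at any later time
rm -f /tmp/out.log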
As a workaround, you can use the internal buffering of pipes, via process substitution, to buffer your messages:
printer() {
    echo "OUT" >&3
    echo "ERR" >&4
    echo "WRN" >&5
}

# Usage: mux
mux() {
    timeout 1 cat "${pipe1}"
    timeout 1 cat "${pipe2}"
    timeout 1 cat "${pipe3}"
}

printer 3> >(cat >"${pipe1}") 4> >(cat >"${pipe2}") 5> >(cat >"${pipe3}")
mux
What happens here is that the pipes stay open for writing even after the printer function exits, and they remain open for as long as the process substitutions are running. You can close a descriptor manually with exec 5>&-; closing the last write end makes the reader see EOF, letting cat "${pipe3}" return normally. cat "${pipe1}" would never exit if the write ends were not closed, which is why the timeout commands are used: they let us drain the pipes without blocking on them forever.
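To see the close-to-EOF behaviour in isolation, a minimal sketch (hypothetical /tmp/demo path):
mkfifo /tmp/demo
cat /tmp/demo &        # the reader opens the FIFO and waits for data
exec 5>/tmp/demo       # open fd 5 for writing; both opens now complete
echo "buffered" >&5    # cat prints this immediately
exec 5>&-              # close the last write end; cat sees EOF and exits
wait
rm -f /tmp/demo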