Tags: docker, video, memory, buffer, shared

What is the most efficient way to stream data between Docker containers?


I have a large number of bytes per second coming from a sensor device (e.g., video) that are being read and processed by a process in a Docker container.

I have a second Docker container that would like to read the processed byte stream (still a large number of bytes per second).

What is an efficient way to read this stream? Ideally I'd like to have the first container write to some sort of shared memory buffer that the second container can read from, but I don't think separate Docker containers can share memory. Perhaps there is some solution with a shared file pointer, with the file saved to an in-memory file system?

My goal is to maximize performance and minimize useless copies of data from one buffer to another as much as possible.

Edit: I'd love to have solutions for both Linux and Windows. Similarly, I'm interested in solutions for doing this in C++ as well as Python.


Solution

  • Create a FIFO (named pipe) on the host with mkfifo /tmp/myfifo. Share it with both containers by bind-mounting it: --volume /tmp/myfifo:/tmp/myfifo:rw

    You can directly use it:

    • From container 1: echo foo >>/tmp/myfifo

    • In container 2: read var </tmp/myfifo

    Drawback: container 1 blocks until container 2 opens the FIFO and reads the data; writes also block whenever the pipe buffer is full.

    Avoid the blocking: in both containers, run exec 3<>/tmp/myfifo in bash. This opens file descriptor 3 on the FIFO for both reading and writing, so the open does not wait for the other end.

    • From container 1: echo foo >&3

    • In container 2: read var <&3 (or e.g. cat <&3)

    This solution uses bash's exec file-descriptor handling. I don't know the exact details for every language, but the same approach is certainly possible elsewhere too; see the Python sketch below.
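Since the question also asks about Python, here is a minimal sketch of the same idea, assuming Linux FIFO semantics; the path /tmp/myfifo matches the example above, and the function names, chunk size, and overall structure are just illustrative, not part of the original answer. The writer opens the FIFO with O_RDWR to mirror bash's exec 3<>/tmp/myfifo (so open() returns even before a reader attaches), while the reader opens read-only and simply blocks until data arrives.

```python
import errno
import os

FIFO_PATH = "/tmp/myfifo"   # placeholder path; bind-mount it into both containers


def ensure_fifo(path=FIFO_PATH):
    """Create the FIFO if it does not exist yet (same as `mkfifo /tmp/myfifo`)."""
    try:
        os.mkfifo(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise


# --- container 1: writer ---
def write_stream(chunks, path=FIFO_PATH):
    """Write an iterable of bytes objects into the FIFO.

    O_RDWR mirrors bash's `exec 3<>/tmp/myfifo`: open() returns immediately
    even if no reader has attached yet. Writes still block once the kernel
    pipe buffer fills, which gives natural back-pressure.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        for chunk in chunks:
            os.write(fd, chunk)
    finally:
        os.close(fd)


# --- container 2: reader ---
def read_stream(path=FIFO_PATH, chunk_size=1 << 16):
    """Yield bytes from the FIFO until every writer has closed its end."""
    fd = os.open(path, os.O_RDONLY)   # blocks until a writer opens the FIFO
    try:
        while True:
            data = os.read(fd, chunk_size)
            if not data:              # EOF: all writers have closed
                break
            yield data
    finally:
        os.close(fd)
```

Note that data still passes through the kernel pipe buffer, so this avoids files on disk but is not true shared memory, and writes larger than PIPE_BUF are not atomic, so with multiple concurrent writers you would need your own framing.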