This is my specific use case to better explain the problem, but I'm looking for a generalized solution.
I am using a Raspberry Pi with an Adafruit RGB LED matrix display (this code is in Python 2). I have a Python script that generates RGB data every 1/n seconds.
Is there a package or library that allows me to continuously generate RGB data in the form of an np.ndarray in one process and have it polled by the matrix script, so that numpy arrays are shared between processes in near real time?
An easy and fast solution is to use numpy.memmap.
It creates a memory-map to an array stored in a binary file on disk, which lets you share numpy arrays between processes very easily and very fast.
For example, the main process that generates the RGB data can be:
import numpy as np

myshape = (100, 100, 3)  # shape of the RGB frame

# mode='w+' creates (or overwrites) the backing file on disk
shared_array = np.memmap("/tmp/testarray", dtype='uint8', mode='w+', shape=myshape)

while True:
    data = some_function_to_get_rgb_data()  # returns a numpy array of shape myshape
    shared_array[:] = data[:]
And the other process that reads that array can be:
import numpy as np

read_shape = (100, 100, 3)  # must match the shape used by the writer

# mode='r' maps the existing file read-only
shared_array = np.memmap("/tmp/testarray", dtype='uint8', mode='r', shape=read_shape)

while True:
    # shared_array behaves like a regular numpy ndarray
    do_something_with_array(shared_array)
Just take into account that the shape (and the dtype, which can also be passed to the memmap function) must match in both processes, and that the process that creates the file must run first. In practice, any change made to shared_array in the main process is reflected in the other process almost immediately.
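If you can't guarantee that the writer starts first, the reader can simply wait for the backing file to appear before mapping it. A minimal sketch, assuming the same hypothetical path and shape as above:

import os
import time
import numpy as np

path = "/tmp/testarray"
read_shape = (100, 100, 3)

# wait until the writer has created the backing file
while not os.path.exists(path):
    time.sleep(0.01)

shared_array = np.memmap(path, dtype='uint8', mode='r', shape=read_shape)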
In tests I have done sharing a (480,360,21) array with 4 processes simultaneously, it takes less than 1 millisecond to read and write that array.
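If you want to check that number on your own hardware, here is a rough sketch for timing a single write (the file name is hypothetical, and this is not the exact benchmark used above):

import time
import numpy as np

shape = (480, 360, 21)
shared_array = np.memmap("/tmp/benchmark_array", dtype='uint8', mode='w+', shape=shape)
data = np.random.randint(0, 256, size=shape).astype('uint8')

start = time.time()
shared_array[:] = data[:]  # copy one full frame into the shared memory map
print("write took %.3f ms" % ((time.time() - start) * 1000.0))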