python, parallel-processing, multiprocessing, shared-memory, pool

Multiprocessing shared memory to pass large arrays between processes


Context:
I need to analyse some weather data for every hour of the year. For each hour, I need to read in some inputs for each hour before performing some calculation. One of these inputs is a very large numpy array x , which does not change and is the same for every hour of the year. The output is then a vector (1D numpy array) y, which contains the calculation result for every hour of the year.

Objective:
Speed up the calculation using the multiprocessing module. In particular, I am trying to pass x to each worker process using the shared_memory submodule of multiprocessing.

I'm running CPython 3.10.8 on Windows 10, with Spyder 5.3.3 as the IDE.

Code (simplified for testing purposes):

import multiprocessing
import numpy as np
from multiprocessing import shared_memory

def multiprocessing_function(args):
    nn, shm_name, y_shm_name, shape, dtype, N = args
    # Attach to the existing shared memory blocks by name and wrap them
    # in numpy arrays; no data is copied.
    existing_shm = shared_memory.SharedMemory(name=shm_name)
    x = np.ndarray(shape, dtype=dtype, buffer=existing_shm.buf)
    existing_y_shm = shared_memory.SharedMemory(name=y_shm_name)
    y = np.ndarray((N,), dtype=dtype, buffer=existing_y_shm.buf)
    y[nn] = 1  # stand-in for the real per-hour calculation
    existing_shm.close()
    existing_y_shm.close()

if __name__ == '__main__':
    x = np.random.rand(int(1e7), 16)
    N = 8760  # Number of hours in a year

    dtype = x.dtype
    # Copy x into a shared memory block so each worker can read it
    # without pickling the full array.
    shm = shared_memory.SharedMemory(create=True, size=x.nbytes)
    shm_array = np.ndarray(x.shape, dtype=x.dtype, buffer=shm.buf)
    np.copyto(shm_array, x)
    # Shared output vector: one result slot per hour.
    y_shm = shared_memory.SharedMemory(create=True, size=N * x.itemsize)
    y_array = np.ndarray((N,), dtype=x.dtype, buffer=y_shm.buf)
    
    args_case = [(nn, shm.name, y_shm.name, x.shape, dtype, N) for nn in range(N)]
    with multiprocessing.Pool() as pool:
        pool.map(multiprocessing_function, args_case)
    
    y = np.array(y_array)  # copy results out of shared memory before cleanup

    shm.close()
    y_shm.close()
    shm.unlink()
    y_shm.unlink()

Issue:
When I run the code, it returns the correct vector, but about 50% of the time I get a "Windows fatal exception: access violation" and the kernel crashes. If I then change the size of the array, it might run with no issues, but if I restart Spyder and rerun the same code with the new array size, the same error comes up and the kernel crashes again. This inconsistent behavior is incredibly strange. I have a feeling this is a memory leak, but I don't know how to fix it.


Solution

  • Either Spyder itself or the IPython shell is trying to access one of your shared numpy arrays after the shared memory backing it has been closed. My first guess is that Spyder is trying to populate its "Variable Explorer" pane by enumerating local variables. That enumeration touches the numpy array, but the memory it points to is no longer valid because the SharedMemory has been closed.

    SharedMemory creates files on the filesystem (so they're sharable between processes) that reside only in memory (so they're fast), and you are then given memory-mapped access to that file as a buffer. There are some differences depending on the OS, but in general this holds true. Like any other file, you have a bit more responsibility to clean up after yourself: close() and unlink().
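
    In case the API is unfamiliar, here is a minimal sketch of that lifecycle (the names are illustrative only, not from the code above):

        from multiprocessing import shared_memory

        # Create a named block; any other process could attach to it by name.
        shm = shared_memory.SharedMemory(create=True, size=1024)
        other = shared_memory.SharedMemory(name=shm.name)  # attach, no copy

        other.close()  # every handle closes its own view of the block...
        shm.close()
        shm.unlink()   # ...and exactly one owner destroys the block itself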

    Unfortunately, Numpy has no way of knowing that the buffer it points to has been closed, so it will go ahead and try to access the same memory it was previously pointing to. Windows calls that an "Access Violation"; everyone else calls it a "Segmentation Fault".
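
    A miniature of the hazard (the final read is left commented out because it can crash the interpreter, which is exactly the symptom above):

        import numpy as np
        from multiprocessing import shared_memory

        shm = shared_memory.SharedMemory(create=True, size=8)
        a = np.ndarray((1,), dtype=np.float64, buffer=shm.buf)
        a[0] = 42.0

        shm.close()    # mapping released; a still holds the raw pointer
        # print(a[0])  # undefined: garbage, or an access violation
        shm.unlink()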

    To solve this problem:

    • You can simply change the settings in Spyder to run scripts in an external terminal and not access the arrays after the shm's are closed.
    • You can call del shm_array and del y_array after you're done with them (right before closing the shm's). This will remove them from the module scope so the IPython kernel doesn't try to access them.
    • You can put the work in a function so your numpy arrays go out of scope and are garbage-collected automatically when the function returns (this and the previous fix are combined in the sketch after this list).
    • You can just not close the shm files: like other file handles, they are closed as part of the Python process exiting. Closing your current shell (and starting a new one) should do the trick. On Windows, a given SHM is automatically deleted once no process holds an open handle to it. On Linux (and maybe macOS?), the file is only deleted when unlink is called or when the computer is rebooted.
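
    As a rough sketch (one way among several), the second and third fixes above might look like the following; the Pool section from the question is elided for brevity:

        import numpy as np
        from multiprocessing import shared_memory

        def main():
            # Stand-in for the large shared x from the question.
            shm = shared_memory.SharedMemory(create=True, size=16 * 8)
            shm_array = np.ndarray((16,), dtype=np.float64, buffer=shm.buf)
            shm_array[:] = 1.0

            # ... run the multiprocessing.Pool exactly as in the question ...

            y = np.array(shm_array)  # copy the result out of shared memory

            # Drop every ndarray pointing into the shared buffer *before*
            # closing it, so nothing can dereference a stale pointer later.
            del shm_array
            shm.close()
            shm.unlink()
            return y

        if __name__ == '__main__':
            y = main()  # arrays local to main() are gone once it returns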