c++ · shared-memory · mpi · boost-interprocess · pbs

shared memory, MPI and queuing systems


My unix/windows C++ app is already parallelized using MPI: the job is split across N CPUs, each chunk is executed in parallel, and it is quite efficient, with very good speed scaling; the job is done right.

But some of the data is repeated in each process, and for technical reasons this data cannot easily be split over MPI (...). For example:

  • 5 GB of static data, the exact same thing loaded for each process
  • 4 GB of data that can be distributed over MPI; the more CPUs are used, the smaller this per-CPU RAM is.

On a 4-CPU job, this would mean at least a 20 GB RAM load, most of the memory 'wasted'; this is awful.

I'm thinking of using shared memory to reduce the overall load: the "static" chunk would be loaded only once per computer.

So, main question is:

  • Is there any standard MPI way to share memory on a node? Some kind of readily available + free library ?

    • If not, I would use boost.interprocess and use MPI calls to distribute local shared memory identifiers.
    • The shared memory would be read by a "local master" on each node, and shared read-only. There is no need for any kind of semaphore/synchronization, because it won't change.
  • Any performance hit or particular issues to be wary of?

    • (There won't be any "strings" or overly weird data structures; everything can be brought down to arrays and structure pointers)
  • The job will be executed under a PBS (or SGE) queuing system; in the case of an unclean process exit, I wonder whether those will clean up the node-specific shared memory.


Solution

  • One increasingly common approach in High Performance Computing (HPC) is hybrid MPI/OpenMP programs. I.e. you have N MPI processes, and each MPI process has M threads. This approach maps well to clusters consisting of shared memory multiprocessor nodes.

    Changing to such a hierarchical parallelization scheme obviously requires some more or less invasive changes, OTOH if done properly it can increase the performance and scalability of the code in addition to reducing memory consumption for replicated data.

    Depending on the MPI implementation, you may or may not be able to make MPI calls from all threads. This is specified by the required and provided arguments to the MPI_Init_thread() function, which you must call instead of MPI_Init(). The possible values are:

    MPI_THREAD_SINGLE
        Only one thread will execute.
    MPI_THREAD_FUNNELED
        The process may be multi-threaded, but only the main thread will make MPI calls (all MPI calls are "funneled" to the main thread).
    MPI_THREAD_SERIALIZED
        The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time: MPI calls are not made concurrently from two distinct threads (all MPI calls are "serialized").
    MPI_THREAD_MULTIPLE
        Multiple threads may call MPI, with no restrictions.
    

    In my experience, modern MPI implementations like Open MPI support the most flexible MPI_THREAD_MULTIPLE. If you use older MPI libraries, or some specialized architecture, you might be worse off.

    Of course, you don't need to do your threading with OpenMP; that's just the most popular option in HPC. You could use e.g. the Boost threads library, the Intel TBB library, or straight pthreads or Windows threads, for that matter.