Tags: python, c, openmp, python-multiprocessing, swig

Why does my function in a shared library that uses OpenMP hang when called from a subprocess via SWIG?


I'm trying to wrap a minimal C library, consisting of a file "locks.h" containing

#ifndef LOCKS_H
#define LOCKS_H
void f(void);
#endif

and "locks.c" containing

#include <stdio.h>

void f(void) {
#pragma omp parallel
  {
    fprintf(stderr, "Hello World!\n");
  }
  return;
}

with SWIG, using the SWIG interface file "locks.i" containing

%module locks

%{
#define SWIG_FILE_WITH_INIT
#include "locks.h"
%}

void f(void);

I then create and build the wrapper using

swig -python locks.i
gcc -fPIC -shared -I/usr/include/python3.6/ -fopenmp locks.c locks_wrap.c -g -o _locks.so

and a quick test like

python3 -c "import locks; locks.f()"

seems to work as expected.

However, when I call the function f twice, once from the main Python process and once from a subprocess, like this:

from multiprocessing import Process

import locks

locks.f()

print('Launching Process')
p = Process(target=locks.f)
p.start()
p.join()
print(p.exitcode)

the call in the subprocess hangs, and the program prints only

Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Launching Process
Hello World!

in Python 3.6 and

Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Hello World!
Launching Process

in Python 3.8 on an Intel CPU with 4 cores and 8 hyperthreads.

If I call the function only from the subprocess, rather than in both processes, the call in the subprocess succeeds as expected.

The target system is 64 bit Linux (Ubuntu 18.04 in this case).

How can I fix this?


Solution

  • As pointed out by Zulan in a comment, the core issue appears to be that one cannot safely use OpenMP in a process that was forked from a parent which had already initialized the OpenMP runtime.

    Fortunately, Python's multiprocessing module provides the set_start_method() function, which lets you request that worker processes be spawned as completely new interpreter processes instead of being forked.

    So by adjusting the Python script to

    import multiprocessing as mp
    
    import locks
    
    if __name__ == '__main__':
        mp.set_start_method('spawn')  # start children as fresh interpreters instead of forking
    
        locks.f()
    
        print('Launching Process')
        p = mp.Process(target=locks.f)
        p.start()
        p.join()
        print(p.exitcode)
    

    the issue is resolved.
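
    If changing the global start method is undesirable (set_start_method() should not be called more than once in a program), a variant of the same fix, sketched here along the same lines, is to use multiprocessing.get_context('spawn'), which applies the 'spawn' start method only to processes created through that context:

    import multiprocessing as mp
    
    import locks
    
    if __name__ == '__main__':
        # Children created through this context start as fresh interpreters,
        # so they do not inherit the parent's initialized OpenMP runtime.
        ctx = mp.get_context('spawn')
    
        locks.f()
    
        print('Launching Process')
        p = ctx.Process(target=locks.f)
        p.start()
        p.join()
        print(p.exitcode)
    

    Either way, the key point is that the child process must not be created by a plain fork after the parent has already entered an OpenMP parallel region.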