Tags: c++, osx-mountain-lion, openmpi

Avoid "Accept Incoming Network Connections" dialog in mpirun on Mac OS X


I am an MPI beginner. I am trying to run the simplest MPI "hello world" code on my MacBook running Mac OS X Mountain Lion. It has only one processor, but that processor has 4 cores. The C++ code goes like this:

#include <iostream>
#include "mpi.h"
using namespace std;

int main(int argc, char* argv[])
{
    int rank, size;
    MPI::Init();
    rank = MPI::COMM_WORLD.Get_rank();
    size = MPI::COMM_WORLD.Get_size();
    std::cout << "Hello, world!  I am " << rank << " of " << size << std::endl;
    cout << "size is " << size << endl;
    cout << "rank is " << rank << endl;
    MPI::Finalize();
    return 0;
}

Then I compile and run the code:

$ mpic++ -o bb code2.cpp
$ mpirun -np 2 bb

I instantly get 2 dialog boxes showing the warning "Do you want the application "bb" to accept incoming network connections?". The dialog boxes appear and disappear, and the code runs fine:

Hello, world!  I am 0 of 2
size is 2
rank is 0
Hello, world!  I am 1 of 2
size is 2
rank is 1

I think MPI uses network connections when running on clusters or groups of CPUs. But seeing the firewall dialog boxes appear and disappear again and again is annoying. I could disable the firewall, or I could allow incoming connections for specific executables, but I don't want to do that. Is there a way to tell MPI not to use network connections, since I am running it on a single computer? Thanks.


Solution

  • No, there is no way to tell Open MPI not to use network connections at all. Even when not explicitly instructed by the programmer to communicate, Open MPI processes talk to each other and to the MPI launcher in order to exchange control data - the so-called out-of-band messaging. The oob framework takes care of exchanging out-of-band information, and its single implementation uses TCP/IP.

    There are a lot of hidden communication channels in Open MPI. For example, when all processes run on the same node, they use shared memory segments to transport the data, but they also use FIFOs to pass control information and TCP/IP to connect to the MPI launcher orterun (usually invoked as mpiexec or mpirun).
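
    If the goal is only to keep that traffic off the external interfaces, one workaround that is sometimes suggested (a sketch, assuming a stock Open MPI installation; MCA parameter names can differ between versions) is to restrict the TCP components to the loopback interface and force the point-to-point traffic onto the shared-memory BTL:

    $ mpirun --mca btl self,sm --mca oob_tcp_if_include lo0 -np 2 bb

    Note that the out-of-band channel still opens TCP sockets - they are merely bound to lo0 - so whether this actually silences the OS X application firewall dialog depends on how the firewall treats loopback-only listeners.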