Tags: c++, ubuntu, runtime-error, mpi, barrier

Why am I receiving a fatal error using MPI barriers [c++]


I'm new to MPI and have been getting a fatal error when trying to use barriers. I have a simple for loop that distributes indices to each process in round-robin fashion, immediately followed by an MPI barrier:

mpi.cc

#include <iostream>
#include <mpi.h>
#include <sstream>

int main() {
    int name_len, rank, comm_size;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(NULL, NULL);
    MPI_Get_processor_name(processor_name, &name_len);
    MPI_Comm comm = MPI_COMM_WORLD;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &comm_size);

    // Buffer all output so each rank prints in a single shot.
    std::stringstream ss;
    ss << "hello from: " << processor_name << " Rank: " << rank
       << " Comm size: " << comm_size << "\n";

    // Round-robin distribution: rank r handles indices r, r + comm_size, ...
    for (int i = 0; i < 20; i++) {
        if (i % comm_size != rank) continue;
        ss << "   " << i << "\n";
    }

    MPI_Barrier(comm);                          // Fails here
    std::cout << ss.str();
    MPI_Finalize();
    return 0;
}

I compile with:

mpicxx mpi.cc -o mpi

And then run on my 2-node cluster using:

mpirun -ppn 1 --hosts node1,node2 ./mpi

And I receive the following error:

Fatal error in PMPI_Barrier: Unknown error class, error stack:
PMPI_Barrier(414).....................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier_impl(321)................: Failure during collective
MPIR_Barrier_impl(316)................: 
MPIR_Barrier(281).....................: 
MPIR_Barrier_intra(162)...............: 
MPIDU_Complete_posted_with_error(1137): Process failed
Fatal error in PMPI_Barrier: Unknown error class, error stack:
PMPI_Barrier(414).....................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier_impl(321)................: Failure during collective
MPIR_Barrier_impl(316)................: 
MPIR_Barrier(281).....................: 
MPIR_Barrier_intra(162)...............: 
MPIDU_Complete_posted_with_error(1137): Process failed

Running on one node works, but it fails when running on two. Any ideas where I might be going wrong?


Solution

  • I managed to resolve my problem. Instead of

    mpirun -ppn 1 --hosts node1,node2 ./mpi
    

    I explicitly used the IP addresses of node1 and node2 respectively, and I no longer have the issue. The problem appears to have been my /etc/hosts file:

    127.0.0.1   localhost
    127.0.0.1   node1
    

    Because node1 was mapped to the loopback address, the processes were attempting to reach localhost instead of the real node1.
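
    For illustration, the explicit-IP launch looks like the following, where the 192.168.0.x addresses are placeholders for the nodes' real LAN addresses:

    mpirun -ppn 1 --hosts 192.168.0.1,192.168.0.2 ./mpi

    Alternatively, a sketch of a corrected /etc/hosts (same placeholder addresses) that maps each hostname to its real interface rather than the loopback, which should let the original hostname-based command resolve correctly:

    127.0.0.1     localhost
    192.168.0.1   node1
    192.168.0.2   node2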