
Creating MPI communicators for reading files


I need some help with MPI communicators, a subject to which I am relatively new.

I have an MPI code that will be reading input from several input files. Every process will read from at least one file, most will read from more than one. Every file will be read.

I need to create a communicator for each file. Let's say for example that processes 0, 1, and 2 read from file "A.dat", processes 2, 3, and 4 read from file "B.dat", and processes 4, 5, and 6 read from "C.dat". (In practice there will be many more processes and files.) So I need three communicators. The first should contain procs 0, 1, and 2; the second 2, 3, and 4; the third 4, 5, and 6. I'm rather at a loss as to how to do this. Anyone know how?


Solution

  • It's possible to split a larger communicator into smaller communicators of a chosen size:

    #include <mpi.h>
    #include <stdio.h>
    
    int main(int argc, char** argv) {
        // Initialize the MPI environment
        MPI_Init(NULL, NULL);
    
        // Get the rank and size in the MPI_COMM_WORLD communicator
        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    
        int color = world_rank / 4; // 4 consecutive ranks share a color, hence a communicator
    
        MPI_Comm row_comm;
    
        /*
         The first argument is the communicator used as the basis for the new communicators.
         The second argument (the color) determines which new communicator each process will join.
         The third argument (the key) determines the ordering (rank) within each new communicator:
         the process that passes the smallest value becomes rank 0, the next smallest rank 1, and so on.
         The final argument returns the new communicator to the caller.
         */
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &row_comm);
    
        int row_rank, row_size;
        MPI_Comm_rank(row_comm, &row_rank);
        MPI_Comm_size(row_comm, &row_size);
    
        printf("WORLD RANK/SIZE: %d/%d \t ROW RANK/SIZE: %d/%d\n",
               world_rank, world_size, row_rank, row_size);
    
        MPI_Comm_free(&row_comm);
    
        // Finalize the MPI environment.
        MPI_Finalize();
    }
    

    Alternatively, you can create groups explicitly, which is more flexible: a group can contain any subset of ranks, so overlapping communicators like those in the question are straightforward.

    #include <mpi.h>
    #include <stdio.h>
    
    int main(int argc, char** argv) {
        // Initialize the MPI environment
        MPI_Init(NULL, NULL);
    
        // Get the rank and size in the original communicator
        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    
        // Get the group of processes in MPI_COMM_WORLD
        MPI_Group world_group;
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    
        int prime_group_size = 6;
        const int ranks[6] = {2, 3, 5, 7, 11, 13};
    
        // Construct a group containing all of the prime ranks in world_group
        MPI_Group prime_group;
        MPI_Group_incl(world_group, prime_group_size, ranks, &prime_group);
    
        // Create a new communicator based on the group
        MPI_Comm prime_comm;
        MPI_Comm_create_group(MPI_COMM_WORLD, prime_group, 0, &prime_comm);
    
        int prime_rank = -1, prime_size = -1;
        // If this rank isn't in the new communicator, it will be
        // MPI_COMM_NULL. Using MPI_COMM_NULL for MPI_Comm_rank or
        // MPI_Comm_size is erroneous
        if (MPI_COMM_NULL != prime_comm) {
            MPI_Comm_rank(prime_comm, &prime_rank);
            MPI_Comm_size(prime_comm, &prime_size);
    
            printf("WORLD RANK/SIZE: %d/%d \t PRIME RANK/SIZE: %d/%d\n",
                   world_rank, world_size, prime_rank, prime_size);
    
            MPI_Comm_free(&prime_comm);
        }

        // Every rank created prime_group via MPI_Group_incl, so every rank
        // must free it, even those for which prime_comm is MPI_COMM_NULL
        MPI_Group_free(&prime_group);
        MPI_Group_free(&world_group);
    
        // Finalize the MPI environment.
        MPI_Finalize();
    }
    
