Tags: c, mpi, intel-mpi

MPI Broadcast 2D array


I have a 2D double precision array that is being manipulated in parallel by several processes. Each process manipulates a part of the array, and at the end of every iteration, I need to ensure that all the processes have the SAME copy of the 2D array.

Assume an array of size 10*10 and 2 processes (or processors). Process 1 (P1) manipulates the first 5 rows of the 2D array (5*10 = 50 elements in total) and P2 manipulates the last 5 rows (50 elements total). At the end of each iteration, I need P1 to have (its OWN first 5 rows + P2's last 5 rows), and P2 to have (P1's first 5 rows + its OWN last 5 rows). I hope the scenario is clear.

I am trying to broadcast using the code given below. But my program keeps exiting with this error: "APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)".

I am already using a contiguous 2D memory allocator as pointed out here: MPI_Bcast a dynamic 2d array by Jonathan. But I am still getting the same error.

Can someone help me out?

My code:

double **grid, **oldgrid;
int gridsize; // size of grid
int rank, size; // rank of current process and no. of processes
int rowsforeachprocess, offset; // to keep track of rows that need to be handled by each process

 /* allocation, MPI_Init, and lots of other stuff */

 rowsforeachprocess = ceil((float)gridsize/size);
 offset = rank*rowsforeachprocess;

 /* Each process is handling "rowsforeachprocess" #rows.
 * Lots of work done here
 * Now I need to broadcast these rows to all other processes.
 */

 for(i=0; i<gridsize; i++){
     MPI_Bcast(&(oldgrid[i]), gridsize-2, MPI_DOUBLE, (i/rowsforeachprocess), MPI_COMM_WORLD);
 }

Part 2: The code above is part of a parallel solver for the Laplace equation using 1D decomposition, and I did not want to use a master-worker model. Would my code be easier if I used a master-worker model?


Solution

  • The crash-causing problem here is a 2d-array pointer issue -- &(oldgrid[i]) is a pointer-to-a-pointer to doubles, not a pointer to doubles; it points to the pointer to row i of your array, not to row i of your array. You want MPI_Bcast(&(oldgrid[i][0]), ...) or MPI_Bcast(oldgrid[i], ...).
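    As a minimal sketch (keeping your original count and root expressions, and assuming the contiguous allocation mentioned in the question), the corrected loop would look something like:

    for(i=0; i<gridsize; i++){
        /* oldgrid[i] is already a pointer to the doubles in row i, which is what MPI_Bcast expects */
        MPI_Bcast(oldgrid[i], gridsize-2, MPI_DOUBLE, (i/rowsforeachprocess), MPI_COMM_WORLD);
    }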

    There's another way to do this, too, which uses only one expensive collective communication instead of one per row. If you need everyone to have a copy of the whole array, you can use MPI_Allgather to gather the data together and distribute it to everyone; or, in the general case where the processes don't all have the same number of rows, MPI_Allgatherv. Instead of the loop over broadcasts, this would look a little like:

    {
        int *counts = malloc(size*sizeof(int));
        int *displs = malloc(size*sizeof(int));
        for (int i=0; i<size; i++) {
            counts[i] = rowsforeachprocess*gridsize;
            displs[i] = i*rowsforeachprocess*gridsize;
        }
        counts[size-1] = (gridsize-(size-1)*rowsforeachprocess)*gridsize;
    
        MPI_Allgatherv(oldgrid[offset], mynumrows*gridsize, MPI_DOUBLE,
                       oldgrid[0],      counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);
        free(counts);
        free(displs);
    }
    

    where counts are the number of items sent by each task, and displs are the displacements.
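    For instance, with gridsize=10 and size=3 (so the rows don't divide evenly), rowsforeachprocess is ceil(10/3) = 4, giving counts = {40, 40, 20} and displs = {0, 40, 80}.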

    But finally, are you sure that every process has to have a copy of the entire array? If you're just computing a Laplacian, you probably only need the neighboring rows, not the whole array.

    This would look like:

    #include <stdio.h>
    #include <math.h>
    #include <mpi.h>

    /* contiguous 2d allocator; the prototype here is assumed to match the
     * allocator from the answer linked in the question */
    int malloc2ddouble(double ***array, int n, int m);

    int main(int argc, char **argv) {
    double **oldgrid;
    const int gridsize=10;  // size of grid
    int rank, size;         // rank of current process and no. of processes
    int rowsforeachprocess; // to keep track of rows that need to be handled by each process
    int offset, mynumrows;
    MPI_Status status;
    
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    
    rowsforeachprocess = (int)ceil((float)gridsize/size);
    offset = rank*rowsforeachprocess;
    mynumrows = rowsforeachprocess;
    if (rank == size-1)
        mynumrows = gridsize-offset;
    
    /* allocate mynumrows rows plus two guard-cell rows for the neighbours' data */
    malloc2ddouble(&oldgrid, mynumrows+2, gridsize);
    
    for (int i=0; i<mynumrows+2; i++)
        for (int j=0; j<gridsize; j++)
            oldgrid[i][j] = rank;
    
    /* exchange row data with neighbours */
    int highneigh = rank+1;
    if (rank == size-1) highneigh = 0;
    
    int lowneigh  = rank-1;
    if (rank == 0)  lowneigh = size-1;
    
    /* send data to high neighbour and receive from low */
    
    MPI_Sendrecv(oldgrid[mynumrows], gridsize, MPI_DOUBLE, highneigh, 1,
                 oldgrid[0],         gridsize, MPI_DOUBLE, lowneigh,  1,
                 MPI_COMM_WORLD, &status);
    
    /* send data to low neighbour and receive from high */
    
    MPI_Sendrecv(oldgrid[1],           gridsize, MPI_DOUBLE, lowneigh,  1,
                 oldgrid[mynumrows+1], gridsize, MPI_DOUBLE, highneigh, 1,
                 MPI_COMM_WORLD, &status);
    
    
    for (int proc=0; proc<size; proc++) {
        if (rank == proc) {
            printf("Rank %d:\n", proc);
            for (int i=0; i<mynumrows+2; i++) {
                for (int j=0; j<gridsize; j++) {
                    printf("%f ", oldgrid[i][j]);
                }
                printf("\n");
            }
            printf("\n");
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
    }
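    The nice property of this guard-cell exchange is that no process ever needs to hold the whole grid: each iteration only exchanges two boundary rows per process rather than re-sending the entire array.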