Tags: c, performance, parallel-processing, mpi, hpc

Storing values starting from a particular location in MPI_Recv


I am testing an example in which I try to send an array of 4 elements from process 0 to process 1, using MPI_Type_contiguous.

Here is the code:

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    MPI_Init(&argc, &argv);
    
    int myrank, size; //size will take care of number of processes 
         
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank) ;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    //declaring the matrix
    double mat[4]={1,   2,  3,  4};
    int r=4;

    double snd_buf[r];
    double recv_buf[r];
    double buf[r];
    
    int position=0;
    
    MPI_Status status[r];
    
    MPI_Datatype type;
    MPI_Type_contiguous( r, MPI_DOUBLE, &type );
    MPI_Type_commit(&type);
             
    
    //sending the data
    if(myrank==0)
    {
       MPI_Send (&mat[0], r , type, 1 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);
    }   
    //receiving the data
    
    if(myrank==1)
    {
       MPI_Recv(recv_buf, r, type, 0 /*src*/ , 100 /*tag*/, MPI_COMM_WORLD,&status[0]);
    }
    //printing
    if(myrank==1)
    {
       for(int i=0;i<r;i++)
       {
           printf("%lf ",recv_buf[i]);
       }
       printf("\n");
    }
    MPI_Finalize();
    return 0;

}

As one can see, recv_buf has the same size as the array, and the output printed is 1 2 3 4.

Now suppose recv_buf has size 10 and I want to store the incoming elements in locations 6 to 9. I have written the following code for that, but to my surprise it gives no output:

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    MPI_Init(&argc, &argv);
    
    int myrank, size; //size will take care of number of processes 
         
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank) ;
    MPI_Comm_size(MPI_COMM_WORLD, &size);


    //declaring the matrix
    double mat[4]={1,   2,  3,  4};
    int r=4;
    double snd_buf[r];
    double recv_buf[10];   //declared it of size 10
    double buf[r];
    int position=0;
    MPI_Status status[r];
    
    MPI_Datatype type;
    MPI_Type_contiguous( r, MPI_DOUBLE, &type );
    MPI_Type_commit(&type);
    //packing and sending the data
    if(myrank==0)
    {
        MPI_Send (&mat[0], r , type, 1 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);
    }   
    //receiving the data
    if(myrank==1)
    {
        MPI_Recv(&recv_buf[6], r, type, 0 /*src*/ , 100 /*tag*/, MPI_COMM_WORLD,&status[0]);
    }
    //printing
    if(myrank==1)
    {
       for(int i=6;i<10;i++)
       {
           printf("%lf ",recv_buf[i]);
       }
       printf("\n");
    }
    MPI_Finalize();
    return 0;

}

Where am I going wrong?


Solution

  • From this SO thread one can read:

    MPI_Type_contiguous is for making a new datatype which is count copies of the existing one. This is useful to simplify the process of sending a number of datatypes together, as you don't need to keep track of their combined size (count in MPI_Send can be replaced by 1).

    That being said, in your MPI_Send call:

    MPI_Send (&mat[0], r , type, 1 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);
    

    you should not send 'r' elements of type 'type', but rather 1 element of type 'type' (which already spans 4 doubles). One of the goals of MPI_Type_contiguous is to abstract the count away to 1 instead of making you keep track of the number of elements. As written, a count of r with type 'type' describes 4 × 4 = 16 doubles, so the receive in your second program writes 16 doubles starting at &recv_buf[6], overrunning the 10-element buffer; that most likely crashes rank 1 before anything is printed, which is why you see no output. (Your first program overruns its buffers in the same way and only happens to appear correct.)
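
    That is, the send becomes:

    MPI_Send (&mat[0], 1 , type, 1 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);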

    The same applies to your recv call:

    MPI_Recv(&recv_buf[6], 1, type, 0 /*src*/ , 100 /*tag*/, MPI_COMM_WORLD,&status[0]);
    

    Finally, you should also free the custom type once you are done with it:

    MPI_Type_free(&type);
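
    If in doubt about how much data a derived type describes, MPI_Type_size can report it. A small sketch (the value 32 assumes 8-byte doubles):

    int type_size;
    MPI_Type_size(type, &type_size);              /* bytes covered by one 'type' */
    printf("one 'type' = %d bytes\n", type_size); /* prints 32 for 4 doubles    */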
    

    The entire code:

    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "mpi.h"

    int main( int argc, char *argv[] )
    {
        MPI_Init(&argc, &argv);

        int myrank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double mat[4]={1, 2, 3, 4};
        int r=4;
        double recv_buf[10];

        MPI_Status status;
        MPI_Datatype type;
        MPI_Type_contiguous( r, MPI_DOUBLE, &type );
        MPI_Type_commit(&type);

        if(myrank==0)
            MPI_Send (&mat[0], 1 , type, 1, 100, MPI_COMM_WORLD);
        else if(myrank==1)
        {
            /* the 4 incoming doubles land in positions 6..9 */
            MPI_Recv(&recv_buf[6], 1, type, 0, 100, MPI_COMM_WORLD, &status);
            for(int i=6;i<10;i++)
                printf("%lf ",recv_buf[i]);
            printf("\n");
        }

        MPI_Type_free(&type);
        MPI_Finalize();
        return 0;
    }
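
    To try it, compile with an MPI compiler wrapper and launch two processes. Assuming the file is saved as main.c (the name is just illustrative) and an MPI implementation such as Open MPI or MPICH is installed:

    mpicc main.c -o main
    mpirun -np 2 ./main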
    

    The Output:

    1.000000 2.000000 3.000000 4.000000
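
    As a side note, MPI matches messages by their basic-type signature, so a receive of 4 plain MPI_DOUBLEs at the desired offset would also match the send of 1 'type' above. A minimal alternative sketch of the receive side:

    /* equivalent receive without using the derived type on the receiving end */
    MPI_Recv(&recv_buf[6], 4, MPI_DOUBLE, 0, 100, MPI_COMM_WORLD, &status);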