
MPI send/recv of a vector


Hello everyone, I'm working on a project for university that involves using the MPI library. Unfortunately I cannot share the entire code, but I hope somebody will be able to give me some pointers regardless. I need to send an array from proc X to proc 0. As I've read on Google, the only way to send a dynamically created array (I'm not able to predetermine its size) is to find out the size, send the size to proc 0, and only then send the array itself. So this is what I did on proc X:

vector<int> tosend = aglib.ExtractEvaluationFunctions();
int tosend_size = tosend.size();
MPI_Send(&tosend_size, 1, MPI_INT, 0, 666, MPI_COMM_WORLD);
MPI_Send(&tosend, tosend_size, MPI_INT, 0, 777, MPI_COMM_WORLD);

This is what happens on proc 0 (I cannot reuse the same buffer, as the vector tosend is created locally on proc X each time):

 vector<int> results_and_rank;
 int results_rank_size;
 MPI_Recv(&results_rank_size, 1, MPI_INT, MPI_ANY_SOURCE, 666, MPI_COMM_WORLD, &status);
 MPI_Recv(&results_and_rank, results_rank_size, MPI_INT, MPI_ANY_SOURCE, 777, MPI_COMM_WORLD, &status); 
 cout << "size of results_and_rank is :"<< results_and_rank.size()endl;
 cout<< "last element of array is :"<< results_and_rank.back()<<endl;

I think the communication works fine, as I'm able to read the size of the received vector and it is identical to the one I sent. However, my code crashes whenever I try to access an element of results_and_rank, so it dies on the last print.

By the way I need to use blocking communication for the purpose of my project.

Am I missing something? Thank you for your time and help.


Solution

  • You want to send/receive the data held by the vector, so you need to pass the address of the data inside the vector, not the address of the vector object itself.

    Here you pass the address of the std::vector object itself to MPI_Recv(), so the incoming bytes overwrite the vector's internal bookkeeping and corrupt it.

    MPI_Recv(&results_and_rank, results_rank_size, MPI_INT, MPI_ANY_SOURCE, 777, MPI_COMM_WORLD, &status);
    

    Correct way:

    Sender:

    MPI_Send(&tosend[0], tosend_size, MPI_INT, 0, 777, MPI_COMM_WORLD);
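    // &tosend[0] (or tosend.data() since C++11) is the address of the first
    // element, i.e. the buffer MPI should read from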
    

    Receiver:

    results_and_rank.resize(results_rank_size);
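    // the vector now owns enough storage, so MPI can receive directly into it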
    MPI_Recv(&results_and_rank[0], results_rank_size, MPI_INT, MPI_ANY_SOURCE, 777, MPI_COMM_WORLD, &status);
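
    For completeness, here is a minimal self-contained sketch of the whole exchange. The vector contents are dummy data standing in for aglib.ExtractEvaluationFunctions() from the question, and every rank other than 0 acts as a sender. One extra detail worth copying: after matching the size message with MPI_ANY_SOURCE, receive the data message from status.MPI_SOURCE, so both messages are guaranteed to come from the same process.

    #include <mpi.h>
    #include <iostream>
    #include <vector>
    using namespace std;

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank != 0) {
            // Dummy payload; stands in for aglib.ExtractEvaluationFunctions().
            vector<int> tosend = {10, 20, 30, rank};

            // First tell proc 0 how many elements to expect...
            int tosend_size = tosend.size();
            MPI_Send(&tosend_size, 1, MPI_INT, 0, 666, MPI_COMM_WORLD);

            // ...then send the elements themselves: a pointer to the
            // vector's data, not to the vector object.
            MPI_Send(tosend.data(), tosend_size, MPI_INT, 0, 777, MPI_COMM_WORLD);
        } else {
            int nprocs;
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
            MPI_Status status;

            // Expect one size/data pair from every other rank.
            for (int i = 1; i < nprocs; ++i) {
                int results_rank_size;
                MPI_Recv(&results_rank_size, 1, MPI_INT, MPI_ANY_SOURCE, 666,
                         MPI_COMM_WORLD, &status);

                // Allocate storage, then receive into it, pinning the source
                // to whoever sent the size message.
                vector<int> results_and_rank(results_rank_size);
                MPI_Recv(results_and_rank.data(), results_rank_size, MPI_INT,
                         status.MPI_SOURCE, 777, MPI_COMM_WORLD, &status);

                cout << "size of results_and_rank is: " << results_and_rank.size() << endl;
                cout << "last element of array is: " << results_and_rank.back() << endl;
            }
        }

        MPI_Finalize();
        return 0;
    }

    Both MPI_Send and MPI_Recv are blocking calls, so this stays within the blocking-communication requirement from the question.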