Tags: c++, mpi, ms-mpi

Passing and pushing into a vector in MPI_Reduce


I need the reducing node to get a copy of a list of elements (stored in a vector) from the other nodes. I defined my own reducing function, but it is not working: the program crashes.

This is the code:

#include <iostream>
#include "mpi.h"
#include <vector>

using namespace std;

void pushTheElem(vector<int>* in, vector<int>* inout, int *len, MPI_Datatype *datatype)
{
    vector<int>::iterator it;
    for (it = in->begin(); it < in->end(); it++)
    {
        inout->push_back(*it);
    }
}

int main(int argc, char **argv)
{
    int numOfProc, procID;
    vector<int> vect, finalVect;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numOfProc);
    MPI_Comm_rank(MPI_COMM_WORLD, &procID);

    MPI_Op myOp;
    MPI_Op_create((MPI_User_function*)pushTheElem, true, &myOp);

    for (int i = 0; i < 5; i++)
    {
        vect.push_back(procID);
    }

    MPI_Reduce(&vect, &finalVect, 5, MPI_INT, myOp, 0, MPI_COMM_WORLD);

    if (procID == 0)
    {
        vector<int>::iterator it;
        cout << "Final vector elements: " << endl;

        for (it = finalVect.begin(); it < finalVect.end(); it++)
            cout << *it << endl;
    }

    MPI_Finalize();
    return 0;
}

Solution

  • It seems you want to collect all elements from all processes. That is not a reduction, it is a gather operation. A reduction (MPI_Reduce) combines multiple arrays of the same length into a single array of that same length. That does not match your case: combining two arrays here yields an array whose length is the sum of the input lengths.

    Furthermore, you cannot operate on pointers the way your reduction function does. MPI processes have separate address spaces, so pointers cannot be sent between them; passing &vect hands MPI the vector object's internal bookkeeping (pointers into heap memory), not its elements. The MPI interface does take pointers, but only to contiguous regions of data with a known type and a known size.
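    To see concretely why passing &vect goes wrong, here is a small non-MPI sketch (purely illustrative) showing that a std::vector object holds only bookkeeping, while the element buffer it owns lives elsewhere on the heap:

    ```cpp
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v(5, 42);

        // The vector object itself stores only bookkeeping (typically a few
        // pointers), regardless of how many elements it owns:
        std::cout << "sizeof(v) = " << sizeof(v) << " bytes, element data = "
                  << v.size() * sizeof(int) << " bytes\n";

        // The elements live in a separate heap buffer; that buffer, reached
        // via v.data(), is what MPI must be given:
        std::cout << "object at   " << static_cast<const void*>(&v) << '\n'
                  << "elements at " << static_cast<const void*>(v.data()) << '\n';
        return 0;
    }
    ```

    Sending sizeof(v) bytes starting at &vect would transmit those internal pointers, which are meaningless in another process's address space.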

    You can do your task easily with MPI_Gather:

    // vect.size() must be the same on every process; otherwise use MPI_Gatherv.
    // finalVect is only needed on the root.
    if (procID == 0) finalVect.resize(numOfProc * vect.size());
    MPI_Gather(vect.data(), 5, MPI_INT, finalVect.data(), 5, MPI_INT, 0, MPI_COMM_WORLD);
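    If the ranks contribute different numbers of elements, the same pattern works with MPI_Gatherv: the root first gathers each rank's count, then computes displacements as the exclusive prefix sum of the counts. A minimal sketch, assuming (purely for illustration) that each rank sends procID + 1 elements:

    ```cpp
    #include <mpi.h>
    #include <iostream>
    #include <vector>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int numOfProc, procID;
        MPI_Comm_size(MPI_COMM_WORLD, &numOfProc);
        MPI_Comm_rank(MPI_COMM_WORLD, &procID);

        // Unequal per-rank contributions (made-up distribution for the example).
        std::vector<int> vect(procID + 1, procID);
        int myCount = static_cast<int>(vect.size());

        // Root learns how much each rank will send.
        std::vector<int> counts(numOfProc), displs(numOfProc);
        MPI_Gather(&myCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

        std::vector<int> finalVect;
        if (procID == 0)
        {
            // Displacements are the exclusive prefix sum of the counts.
            int total = 0;
            for (int i = 0; i < numOfProc; ++i) { displs[i] = total; total += counts[i]; }
            finalVect.resize(total);
        }

        MPI_Gatherv(vect.data(), myCount, MPI_INT,
                    finalVect.data(), counts.data(), displs.data(), MPI_INT,
                    0, MPI_COMM_WORLD);

        if (procID == 0)
            for (int x : finalVect) std::cout << x << '\n';

        MPI_Finalize();
        return 0;
    }
    ```

    The counts/displs arrays are only read on the root; the other ranks may leave them untouched.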