c++, parallel-processing, mpi, distributed, distributed-computing

What is equivalent of socket programming's select() in MPI?


In socket programming, the select() function lets us monitor multiple sockets simultaneously. Is there any comparable feature in the MPI library?

In the first for loop of the following code, I post nonblocking send and receive requests from this node to every other node. In the second for loop, instead of waiting on each node in sequential order, I want to start processing the data of whichever node sends its data first. Is there any way to do that?

for(id=0; id<numtasks; id++){
        if(id == taskid) continue;
        if(sendCount[id] != 0) MPI_Isend(sendBuffer[id], N*sendCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
        if(recvCount[id] != 0) MPI_Irecv(recvBuffer[id], N*recvCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
}

for(id=0; id<numtasks; id++){
        if(id == taskid) continue;
        if(recvCount[id] != 0){
                MPI_Wait(&reqs[id], &status);
                for(i=0; i<recvCount[id]; i++)
                        splitData(N, recvBuffer[id] + N*i, U[toRecv[id][i]]);
        }
}       

Based on the answer below, I tried to modify my code, but I am still getting a segmentation fault at run time. Please help me figure out the error.

for(id=0; id<numtasks; id++){
        if(id == taskid) continue;
        if(sendCount[id] != 0) MPI_Isend(sendBuffer[id], N*sendCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
        if(recvCount[id] != 0) MPI_Irecv(recvBuffer[id], N*recvCount[id], MPI_DOUBLE, id, tag, MPI_COMM_WORLD, &reqs[id]);
}

reqs[taskid] = reqs[numtasks-1];
for(i=0; i<numtasks-1; i++){
        MPI_Waitany(numtasks-1, reqs, &id, &status); 
        if(id == taskid) id = numtasks-1;
        for(i=0; i<recvCount[id]; i++)
                splitData(N, recvBuffer[id] + N*i, U[toRecv[id][i]]);
}

Solution

  • The closest equivalent is MPI_Waitsome: you provide a list of requests, and it returns as soon as at least one of them has completed. Unlike select, however, there is no timeout. There are also MPI_Waitany and MPI_Waitall, as well as the nonblocking test variants MPI_Testany, MPI_Testall, and MPI_Testsome.

    The any and some variants differ mainly in how the interface reports one or multiple completed requests.

    Edit: You need to use a separate request for each operation; in particular, the send and the receive for the same rank must not share a request object. In your loops, the MPI_Irecv overwrites the request that MPI_Isend just stored in reqs[id], so the send request is lost.