Tags: fortran, mpi, broadcast, nonblocking

Is MPI_Ibcast guaranteed to complete even if some ranks don't participate?


I am writing an MPI program in which I want to send the same data to all processes as soon as they finish their calculation. The processes can have large differences in their computation time, so I don't want one process to have to wait for another.

The root process is guaranteed to always send first.

I know that MPI_Bcast acts as a barrier, so I experimented with MPI_Ibcast:

program main
   use mpi
   implicit none

   integer :: rank, nprocs, ierror, a(10), req

   call MPI_INIT(ierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

   a = rank

   ! Rank 2 deliberately skips the collective.
   if (rank /= 2) then
      call MPI_IBCAST(a, size(a), MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierror)
      call MPI_WAIT(req, MPI_STATUS_IGNORE, ierror)
   endif

   write (*,*) 'Hello World from process: ', rank, 'of ', nprocs, "a = ", a(1)

   call MPI_FINALIZE(ierror)

end program main

From my experiments it seems that, regardless of which rank "boycotts" the MPI_IBCAST, the broadcast always completes on all the other ranks:

> $ mpifort test.f90 && mpiexec --mca btl tcp,self -np 4 ./a.out
 Hello World from process:            2 of            4 a =            2
 Hello World from process:            1 of            4 a =            0
 Hello World from process:            0 of            4 a =            0
 Hello World from process:            3 of            4 a =            0

Is this guaranteed behavior, or is it just specific to my Open MPI implementation? How else could I implement this? I can only think of a loop over MPI_Isend calls.


Solution

  • No, this is not guaranteed: every rank in the communicator must call the collective. Within MPI, that is the definition of a collective communication, and skipping it on any rank is erroneous even if a particular implementation happens to tolerate it.
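A sketch of one way to get the intended behavior without excluding any rank (this pattern is an assumption on my part, not taken from the original answer): every rank posts the MPI_IBCAST *before* starting its computation, and calls MPI_WAIT only when it actually needs the broadcast data. Since the collective is matched on all ranks up front, slow ranks never block fast ones, and the root's data can be delivered while computation is still in progress.

```fortran
program early_post
   use mpi
   implicit none

   integer :: rank, nprocs, ierror, a(10), req

   call MPI_INIT(ierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

   a = rank

   ! Every rank posts the nonblocking broadcast immediately, so the
   ! collective is matched on all ranks before any long computation.
   call MPI_IBCAST(a, size(a), MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierror)

   ! ... each rank does its (possibly long) computation here;
   ! the broadcast can progress in the background ...

   ! Block only when the broadcast data is actually needed.
   call MPI_WAIT(req, MPI_STATUS_IGNORE, ierror)

   write (*,*) 'rank ', rank, ' of ', nprocs, ' has a(1) = ', a(1)

   call MPI_FINALIZE(ierror)
end program early_post
```

Note that MPI only guarantees the broadcast is complete after MPI_WAIT returns; how much overlap with computation you actually get depends on the implementation's asynchronous-progress support.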