I have the following code:
real :: s_s, d_s, s_r(size), d_r(size)
integer :: k, k_r(size)
! - size = number of processors
! - Do something to initialise s_s, d_s, k
write(*,*) "SENDING >>>>"
write(*,*) s_s, d_s
! Each rank contributes one element; the receive buffers hold one per rank
call MPI_Allgather(s_s, 1, MPI_REAL,    &
                   s_r, 1, MPI_REAL,    MPI_COMM_PGM, mpi_err)
call MPI_Allgather(d_s, 1, MPI_REAL,    &
                   d_r, 1, MPI_REAL,    MPI_COMM_PGM, mpi_err)
call MPI_Allgather(k,   1, MPI_INTEGER, &
                   k_r, 1, MPI_INTEGER, MPI_COMM_PGM, mpi_err)
write(*,*) "RECEIVED <<<<"
write(*,*) s_r, d_r, k_r
This generates the following output:
SENDING >>>>
-1803.80339864908 0.616157856320407
RECEIVED <<<<
6.953077622513053E-310 3.565412685916647E-314 1.221334434576037E-314
1.498827614035474E-314 6.952991536467244E-310 6.953288052096687E-310
6.953108563966064E-310 2.350861403096908E-314 4 1
2 3
k_r is being gathered correctly; s_r and d_r, however, seem to be receiving junk. Could this be because of the MPI datatypes? I tried MPI_REAL, MPI_REAL8, and MPI_DOUBLE, but that didn't work. Furthermore, mpi_err = MPI_SUCCESS after each call.
What could I do to resolve this?
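One diagnostic worth running (this is my sketch, not code from the project: check_type_size, sample, and tsize are names I made up) is to compare the number of bytes MPI moves per MPI_REAL element against the storage size of the variable actually being sent. If the project promotes default reals, for example via ifort's -r8 flag, the two numbers will disagree and every real gather will be truncated:

program check_type_size
  implicit none
  include "mpif.h"
  integer :: ierr, tsize
  real    :: sample           ! declared exactly like s_s in the project

  call MPI_INIT(ierr)
  ! Bytes MPI transfers per element described as MPI_REAL
  call MPI_TYPE_SIZE(MPI_REAL, tsize, ierr)
  ! storage_size() reports bits, hence the division by 8
  write(*,*) 'MPI_REAL bytes:', tsize
  write(*,*) 'real bytes    :', storage_size(sample) / 8
  call MPI_FINALIZE(ierr)
end program check_type_size

If the two sizes match, the datatype is not the culprit and the buffers themselves deserve a closer look.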
EDIT 1: I worked on the following prototype program:
program allgather
  implicit none
  include "mpif.h"

  real    :: a(4), ar(16)
  integer :: rank, size, ierr
  integer :: i, k, z

  a = 1

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)

  if (size /= 4) then
    write(*,*) 'Error!: # of processors must be equal to 4'
    write(*,*) 'Program aborting....'
    ! MPI_ABORT also needs the communicator and an error code
    call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
  endif

  ! Each rank fills in only "its" element; the rest stay zero
  do k = 1, 4
    if (rank == mod(k, size)) then
      a(k) = k
    else
      a(k) = 0.0
    endif
  enddo

  write(*,*) "Rank :", rank
  write(*,*) a

  call MPI_Allgather(a, 4, MPI_REAL, ar, 4, MPI_REAL, &
                     MPI_COMM_WORLD, ierr)

  write(*,*) "Received array"
  write(*,*) ar

  ! Fold the gathered non-zero entries back into a(1:4)
  do i = 1, 16
    if (ar(i) /= 0.0) then
      z = mod(i, size)
      if (z == 0) then
        a(size) = ar(i)
      else
        a(z) = ar(i)
      endif
    endif
  enddo

  write(*,*) "---------"
  write(*,*) a
  write(*,*) "---------"

  call MPI_FINALIZE(ierr)
end program allgather
And this generates the expected results, i.e. ar doesn't gather junk. However, I'm unable to tell what differs between the prototype and the project's implementation.
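One difference I can suggest checking (speculation on my part, not something established above: the kind parameter dp and all variable names below are mine) is compiler-driven real promotion. The prototype is presumably built without promotion flags, so real and MPI_REAL agree; if the project's build adds something like -r8, they no longer do. Declaring the kind explicitly and pairing it with the matching MPI datatype removes that ambiguity:

program allgather_kinds
  implicit none
  include "mpif.h"
  integer, parameter :: dp = selected_real_kind(15)  ! 8-byte real kind
  real(dp) :: s_s
  real(dp), allocatable :: s_r(:)
  integer :: rank, nprocs, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  allocate(s_r(nprocs))

  s_s = real(rank, dp) + 0.5_dp
  ! Buffer kind and MPI datatype are matched explicitly, so promotion
  ! flags such as -r8 cannot introduce a size mismatch here
  call MPI_ALLGATHER(s_s, 1, MPI_REAL8, s_r, 1, MPI_REAL8, &
                     MPI_COMM_WORLD, ierr)

  if (rank == 0) write(*,*) s_r
  call MPI_FINALIZE(ierr)
end program allgather_kinds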
It turns out that, for the project, the datatype to be used was MPI_FLT. It is strange that MPI_FLT works while MPI_REALx (x = 4, 8) and MPI_FLOAT do not. I grep-ed for MPI_FLT in the project to see what it is defined as, but it didn't turn up anywhere.
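Since grep finds no definition, MPI_FLT presumably comes in through some included header. In Open MPI's Fortran bindings the predefined datatypes are plain integer handles, so printing them side by side would show which standard type MPI_FLT aliases. The program below is only a sketch of that idea (which_type is my name, and project_defs.h is a hypothetical placeholder; the MPI_FLT lines stay commented out until its defining header is actually located):

program which_type
  implicit none
  include "mpif.h"
  ! include "project_defs.h"   ! hypothetical: whatever header defines MPI_FLT
  integer :: ierr

  call MPI_INIT(ierr)
  write(*,*) 'MPI_REAL             =', MPI_REAL
  write(*,*) 'MPI_REAL8            =', MPI_REAL8
  write(*,*) 'MPI_DOUBLE_PRECISION =', MPI_DOUBLE_PRECISION
  ! write(*,*) 'MPI_FLT            =', MPI_FLT
  call MPI_FINALIZE(ierr)
end program which_type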
The Open MPI version I'm using is:
$ mpirun --version
mpirun (Open MPI) 3.0.0
The compiler I use is:
$ mpifort --version
ifort (IFORT) 19.0.1.144 20181018
In a future edit I will elaborate on the cause.