I have a program that is compiled either on Linux with gfortran-9 or on Windows with ifort. The Windows compilation is something of a black box to which I don't have much access.
At some point both codes have to perform the same multiplication, but the results differ at the 13th decimal place.
Here is the test code I wrote to check this multiplication on my Linux machine:
implicit none
double precision:: a,b,c,d
200 format(F35.20)
b=20.17865682672815452747d0
c=3.75000000000000000000d0
d=32.17399999999999948841d0
a=b*c*d
write(*,200)a
end program
On Linux with gfortran the result is 2434.60539278681835639873; on Windows with ifort the same multiplication gives 2434.60539278681881114608. Both are compiled with the -O2 option.
I can't think of a reason why they differ. Is it because double precision can't be any more precise, and should I move to real(16) instead?
Thanks for your insights.
There are three different ways the multiplication can be grouped, and gfortran and ifort happen to choose different ones. Adding brackets shows what is going on:
ian@eris:~/work/stack$ cat mult.f90
implicit none
double precision:: a,b,c,d
200 format(F35.20)
b=20.17865682672815452747d0
c=3.75000000000000000000d0
d=32.17399999999999948841d0
a=(b*c)*d
write(*,200)a
b=20.17865682672815452747d0
c=3.75000000000000000000d0
d=32.17399999999999948841d0
a=b*(c*d)
write(*,200)a
b=20.17865682672815452747d0
c=3.75000000000000000000d0
d=32.17399999999999948841d0
a=c*(b*d)
write(*,200)a
end program
ian@eris:~/work/stack$ gfortran --version
GNU Fortran (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
ian@eris:~/work/stack$ gfortran -O2 mult.f90
ian@eris:~/work/stack$ ./a.out
2434.60539278681835639873
2434.60539278681881114608
2434.60539278681881114608
Both answers are perfectly correct: you are just seeing one of the effects of floating-point maths, where multiplication is not associative. The two results differ by a single unit in the last place (ulp) of the double-precision value.
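The same effect can be reproduced outside Fortran, since Python floats are also IEEE 754 doubles. Here is a small sketch (variable names mirror the Fortran above) showing that the two groupings round differently, and that the discrepancy is at most a couple of ulps:

```python
import math

# Same constants as in the Fortran program; Python floats are
# IEEE 754 double precision, just like Fortran's `double precision`.
b = 20.17865682672815452747
c = 3.75                      # exactly representable in binary
d = 32.17399999999999948841

left = (b * c) * d   # the grouping gfortran happened to pick
right = b * (c * d)  # the grouping ifort happened to pick

print(f"{left:.20f}")
print(f"{right:.20f}")

# The two results are both correctly rounded products of correctly
# rounded intermediates, yet they are not bit-identical.
print(left == right)                            # False
print(abs(left - right) <= 2 * math.ulp(left))  # True
```

Because each individual multiplication rounds to the nearest double, the rounding error of the first product feeds into the second, so the final bit depends on which pair is multiplied first.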