I am not very familiar with Fortran code and want to compare it to some C code. I have the following definition for an array for which I want to have double precision (as I want to compare numbers between the Fortran and C code with double precision).
INTEGER :: n = 3
DOUBLE PRECISION, DIMENSION(n, 1) :: grid
Later I have
grid(1,1) = -2.85697001387280
grid(2,1) = -1.35562617997427
grid(3,1) = 0.00000000000000
Running the code through the debugger, I was surprised to see that it gives
grid = [-2.8569700717926025, -1.3556262254714966, 0]
which is only correct to single precision. I have also tried REAL*8, REAL(8) instead of DOUBLE PRECISION. I found it all the more surprising as the following code
INTEGER :: n = 3
REAL(8) :: mina = 0.5, maxa = 5.0
REAL(8) :: lmina, lmaxa, step
REAL(8), DIMENSION(n,1) :: a
lmina = log(mina)
lmaxa = log(maxa)
step = (lmaxa - lmina) / (n - 1)
DO i=1,n
a(i,1)=lmina+(i-1.0)*step
END DO
does create scalars and an array a that are accurate to double precision and exactly equal to the numbers from the C code. Why is this different for the hard-coded elements, and how can I fix it?
FYI, I compile the debug version as follows:
gfortran -g -o myfile myfile.f90
Fortran is like C in that numeric literals have a default data type, and no attempt is made to infer the type from the left-hand side of an assignment. Fortran is not like C in that the default for floating point literals is single precision (REAL in F77 and earlier, REAL with default KIND in F90 and later). This must be overridden to get a double precision constant (DOUBLE PRECISION, or in F90 and later REAL with a specific non-default KIND). Compare with C, where you must override the default to get a single precision (float) constant.
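As a minimal sketch of this default-literal behavior (assuming gfortran; the literal is the one from the question):

```fortran
! Sketch: the same literal assigned as a default REAL vs. a double precision constant.
program literal_demo
  implicit none
  double precision :: x, y
  x = -2.85697001387280     ! default REAL literal: rounded to single precision first
  y = -2.85697001387280d0   ! D exponent makes it a DOUBLE PRECISION literal
  print *, x                ! approximately -2.8569700717926025, as in the question
  print *, y                ! approximately -2.8569700138728000
end program literal_demo
```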
In the code that you subsequently cited:

- mina and maxa are double precision (using the implementation-specific KIND value of 8; see the other answer for the risks of working this way). They are initialized from single precision literals, but both are exactly representable in binary floating point, so there is no rounding.
- lmina and lmaxa are double precision and calculated using a double precision LOG function.
- step is double precision and calculated from double precision and integer values.
- a is double precision and calculated from double precision values (lmina and step) and a single precision literal 1.0, which is exactly representable and thus not rounded (and gets promoted to double precision in the computation).

Here it is unsurprising that the values end up being double precision without rounding.

You can get a double precision constant by using D instead of E in scientific notation, or by appending an underscore followed by the appropriate KIND parameter (which may be either a numeric literal or an integer PARAMETER declared elsewhere). A common idiom is to have a MODULE which is USEd throughout to define a common numeric KIND type; an example of this style is given in the other answer.
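Both spellings, together with the MODULE idiom, can be sketched as follows (the module name kinds and the parameter name dp are illustrative, not from the question):

```fortran
! Sketch of double precision constants; the names kinds and dp are illustrative.
module kinds
  implicit none
  integer, parameter :: dp = kind(1.0d0)   ! KIND of DOUBLE PRECISION
end module kinds

program fixed_grid
  use kinds
  implicit none
  integer, parameter :: n = 3              ! n must be a constant to dimension grid
  real(dp), dimension(n, 1) :: grid
  grid(1,1) = -2.85697001387280d0          ! D exponent instead of E
  grid(2,1) = -1.35562617997427_dp         ! underscore + KIND parameter
  grid(3,1) = 0.0_dp
  print *, grid
end program fixed_grid
```

Compiled as in the question (gfortran -g -o myfile myfile.f90), the elements should now hold the full double precision values in the debugger.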