How to set double precision for elements of an array in Fortran 90


I am not very familiar with Fortran code and want to compare it to some C code. I have the following definition for an array for which I want to have double precision (as I want to compare numbers between the Fortran and C code with double precision).

INTEGER, PARAMETER :: n = 3
DOUBLE PRECISION, DIMENSION(n, 1) :: grid

Later I have

grid(1,1) = -2.85697001387280    
grid(2,1) = -1.35562617997427   
grid(3,1) =  0.00000000000000       

Running the code through the debugger, I was surprised to find that it gives

grid = [-2.8569700717926025, -1.3556262254714966, 0]

which is only correct to single precision. I have also tried REAL*8 and REAL(8) instead of DOUBLE PRECISION. I found it all the more surprising since the following code

INTEGER, PARAMETER :: n = 3
INTEGER :: i
REAL(8) :: mina = 0.5, maxa = 5.0
REAL(8) :: lmina, lmaxa, step
REAL(8), DIMENSION(n,1) :: a

lmina = log(mina)
lmaxa = log(maxa)
step = (lmaxa - lmina) / (n - 1)
DO i=1,n
   a(i,1)=lmina+(i-1.0)*step
END DO

does create scalars and the array a that are accurate to double precision and exactly equal to the numbers from the C code. Why is this different in the case of the hard-coded elements, and how can I fix it?

FYI, I compile the debug version as follows:

gfortran -g -o myfile myfile.f90

There are 2 answers below.

Accepted answer, by Craig:

Fortran is like C in that numeric literals have a default data type, and no attempt is made to infer the type from the left-hand side of an assignment. Fortran is not like C in that the default for floating point literals is single precision (REAL in F77 and earlier, REAL with default KIND in F90 and later). This must be overridden to get a double precision constant (DOUBLE PRECISION or in F90 and later REAL with specific non-default KIND). Compare with C where you must override the default to get a single precision (float) constant.
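The difference is easy to demonstrate. The following minimal program (a sketch, not from the original question) assigns the same literal with and without a double precision designator:

```fortran
program literal_default_kind
  implicit none
  double precision :: x, y
  x = -2.85697001387280     ! default-kind (single precision) literal, rounded first
  y = -2.85697001387280d0   ! D exponent: a genuine double precision literal
  print *, x                ! roughly -2.8569700717926025 (single precision accuracy)
  print *, y                ! roughly -2.8569700138728000 (full double precision)
end program literal_default_kind
```

In the first assignment the literal is rounded to single precision before the assignment ever happens, so widening it to double precision afterwards cannot recover the lost digits.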

In the code that you subsequently cited:

REAL(8) :: mina = 0.5, maxa = 5.0
REAL(8) :: lmina, lmaxa, step
REAL(8), DIMENSION(n,1) :: a
INTEGER :: i

lmina = log(mina)
lmaxa = log(maxa)
step = (lmaxa - lmina) / (n - 1)
DO i=1,n
   a(i,1)=lmina+(i-1.0)*step
END DO

Here it is unsurprising that the values end up being double precision without rounding.

  1. mina and maxa are double precision (using the implementation-specific KIND value of 8, also see the other answer for the risks of working this way). They are initialized from single precision literals, but both are exactly representable in binary floating point and thus there is no rounding.
  2. lmina and lmaxa are double precision and calculated using a double precision LOG function.
  3. step is double precision and calculated from double precision and integer values.
  4. The value computed in the loop is from double precision variables (lmina and step) and a single precision literal 1.0 which is exactly representable and thus not rounded (and gets promoted to double precision in the computation).
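The point about exact representability matters: 0.5 and 5.0 happen to be exact in binary floating point, but a literal like 0.1 is not, so the same initialization style would carry single precision rounding into a double precision variable. An illustrative sketch (not from the original code):

```fortran
program exact_vs_rounded
  implicit none
  double precision :: half = 0.5, tenth = 0.1
  print *, half    ! 0.5 is exact in binary, so no rounding occurs
  print *, tenth   ! roughly 0.10000000149011612: 0.1 was rounded to single first
end program exact_vs_rounded
```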

You can get a double precision constant by using D instead of E in scientific notation or by appending an underscore followed by the appropriate KIND parameter (which may be either a numeric literal or an integer PARAMETER declared elsewhere). A common idiom for doing this is to have a MODULE which is USEd to define a common numeric KIND type to use throughout; an example of this style is given in the other answer.
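Putting that together, here is a sketch of the module idiom described above, applied to the grid from the question (the module and kind names are illustrative, not from the original code):

```fortran
module working_precision
  implicit none
  integer, parameter :: dp = kind(1.0d0)   ! kind of a double precision literal
end module working_precision

program hardcoded_grid
  use working_precision, only: dp
  implicit none
  integer, parameter :: n = 3
  real(dp), dimension(n, 1) :: grid
  grid(1,1) = -2.85697001387280d0    ! D-exponent form
  grid(2,1) = -1.35562617997427_dp   ! underscore-KIND suffix form
  grid(3,1) =  0.00000000000000_dp
  print *, grid(:, 1)
end program hardcoded_grid
```

Both literal forms shown in the assignments produce double precision constants, so no digits are lost before the assignment.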

Answer, by Scientist:

@Mike The right way (or a right way) of specifying the kind type parameter is something like the following:

use iso_fortran_env, only: IK => int32, RK => real64
integer(IK), parameter :: n = 3_IK
real(RK) :: grid(n, 1)
grid(:, 1) = [-2.8569700717926025_RK, -1.3556262254714966_RK, 0._RK]

Don't ever use REAL*8, REAL(8), or 0._8. These are all non-portable (and the first is not even part of the standard) and can crash your code or yield extremely hard-to-debug segmentation faults when linking with external libraries compiled with different settings. _8 is also hard to change across a codebase if used everywhere. Instead, define a kind-agnostic label for the value, like IK and RK in the above, and use it everywhere. Without kind designators, the values are converted to the default kind on the right-hand side and only then assigned to the left-hand side.

Note that real64 does not guarantee double precision; it only guarantees a 64-bit container for the real values. With this generic labeling, you can change the precision (or container) of all real values in your code by changing real64 to other possibilities like real32 or real128, or any of the values in the vector real_kinds from the iso_fortran_env intrinsic module. If you want a guarantee of double precision (minimum 15 digits) with any compiler, use the following lines instead of real64 from the iso_fortran_env module:

use iso_fortran_env, only: IK => int32
integer, parameter :: RK = selected_real_kind(p = 15, r = 308)
integer(IK), parameter :: n = 3_IK
real(RK) :: grid(n, 1)
grid(:, 1) = [-2.8569700717926025_RK, -1.3556262254714966_RK, 0._RK]

Consult a Fortran reference for the details of the selected_real_kind() intrinsic. In most cases, certainly with the GNU and Intel compilers, selected_real_kind is overkill, and iso_fortran_env offers a nice, readable, satisfactory result.

If you want precisely the same precision as double in your C companion processor, then use c_double from the iso_c_binding intrinsic module:

use iso_c_binding, only: RK => c_double ! or c_float, or c_long_double
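A minimal sketch of how this fits with the rest of the pattern (the values are taken from the question; whether they match the C side bit for bit depends on the companion processor):

```fortran
program c_compatible_grid
  use iso_c_binding, only: IK => c_int, RK => c_double
  implicit none
  integer(IK), parameter :: n = 3_IK
  real(RK) :: grid(n, 1)
  grid(:, 1) = [-2.85697001387280_RK, -1.35562617997427_RK, 0._RK]
  print *, grid(:, 1)
end program c_compatible_grid
```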