Python:
import numpy as np

machine_epsilon = np.finfo(float).eps
first_variant = 1 + machine_epsilon + machine_epsilon/2
second_variant = 1 + machine_epsilon/2 + machine_epsilon
print('%.20f' % first_variant)
print('%.20f' % second_variant)
C:
#include <float.h>   /* DBL_EPSILON */
#include <stdio.h>

double eps2 = DBL_EPSILON;
printf("1 + eps/2 + eps %.20f\n", 1. + eps2 / 2. + eps2);
printf("1 + eps + eps/2 %.20f\n", 1. + eps2 + eps2 / 2.);
It printed 1.00000000000000044409 for first_variant and 1.00000000000000022204 for second_variant, i.e. the fractional part of the first result is twice as large as that of the second.
Can anyone explain this?
Floating-point types, like double or float in C, have limited precision (e.g. 53 significand bits for an IEEE 754 binary64 double precision value), so they cannot represent all the real numbers, and the mathematical operations in which they are involved don't have all the properties expected in exact computing. In particular, floating-point addition is commutative, but it is not associative: the order in which the additions are performed changes the result.
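The non-associativity can be seen directly. A minimal sketch, assuming CPython floats are IEEE 754 binary64 doubles (true on all common platforms):

```python
eps = 2.0 ** -52  # machine epsilon of a binary64 double

# Same three operands, different grouping:
a = (1.0 + eps/2) + eps/2  # each 1 + eps/2 rounds back down to 1.0
b = 1.0 + (eps/2 + eps/2)  # eps/2 + eps/2 is exact, then 1 + eps is exact

print(a == b)  # False: a is 1.0, b is 1 + eps
```

Regrouping the very same additions loses the two half-epsilon contributions entirely in the first case, while the second case keeps them.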
The OP is trying to evaluate 1 + machine_epsilon + machine_epsilon/2, which is challenging, because that number is one of those that cannot be exactly represented by a double (or a float), while both 1 + machine_epsilon and 1 + 2 * machine_epsilon can. When an expression is evaluated, even if the hardware can perform the calculation and hold the intermediate values in higher precision, the stored result has to be rounded to one of the representable values. There are several types of rounding to nearest, like ties away from zero or ties to even.
In the posted examples, 1 + machine_epsilon/2 falls exactly halfway between 1 and 1 + machine_epsilon; with ties to even (the default rounding mode), it is rounded to 1, whose significand ends in the even bit, and then, adding machine_epsilon, the result becomes 1 + machine_epsilon. On the other hand, 1 + machine_epsilon is exact, and adding machine_epsilon/2 produces a halfway value whose even neighbor is above, so 1 + machine_epsilon + machine_epsilon/2 is rounded to 1 + 2 * machine_epsilon.
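Both outcomes can be verified bit-exactly with float.hex(), which prints the significand without any decimal rounding:

```python
eps = 2.0 ** -52  # machine epsilon of a binary64 double

first = 1 + eps + eps/2   # 1 + eps is exact; the final halfway add rounds up
second = 1 + eps/2 + eps  # 1 + eps/2 rounds back to 1.0 before eps is added

print(first.hex())   # 0x1.0000000000002p+0, i.e. 1 + 2*eps
print(second.hex())  # 0x1.0000000000001p+0, i.e. 1 + eps
```

The hex forms show the two results differ in the last bit of the significand by exactly one ulp, matching the printed decimal values in the question.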