Is the following statement correct?
Computations with large numbers are less accurate due to the logarithmic distribution of floating point numbers on the computer.
Does that mean computing with values around 1 is more accurate (because rounding errors are smaller) than the same computations where every number has been scaled by 1e20, for example?
Short answer:
Yes, the statement is correct: larger floating point numbers are less precise than smaller ones in absolute terms, because the gap between adjacent representable values grows with the magnitude of the number.
Details:
Floating point numbers have a fixed number of bits assigned to the mantissa. If the number being represented requires more bits than the mantissa provides, it is rounded to the nearest representable value. Because the exponent scales the mantissa, the spacing between adjacent representable values grows with the magnitude of the number, so a smaller number can be represented with finer absolute precision.
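A quick way to see this spacing directly is `nextafterf` from `<math.h>`, which returns the representable float adjacent to its first argument, so the difference between a value and its neighbour is the spacing at that magnitude. A small sketch (the probe values 1.0f and 1e20f are chosen here to mirror the comparison in the question):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* nextafterf(x, y) returns the representable float adjacent to x
       in the direction of y, so the difference is the spacing at x. */
    printf("spacing near 1.0f:  %g\n", nextafterf(1.0f, 2.0f) - 1.0f);
    printf("spacing near 1e20f: %g\n", nextafterf(1e20f, 2e20f) - 1e20f);
    return 0;
}
```

On an IEEE 754 machine this prints about 1.19209e-07 and 8.79609e+12: near 1e20 adjacent floats are almost ten trillion apart.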
To make this more concrete, consider a program that adds progressively smaller values to a large floating point number and to a small one. To show the difference it also tracks the sum in a double precision value, whose larger mantissa absorbs these additions without rounding; the double would run into the same problem with numbers large enough relative to its own mantissa.
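A minimal sketch of such a program (the starting values 1000000.0f and 1.0f and the halving step sizes are choices made here so that every addend is itself exactly representable and the rounding is easy to follow):

```c
#include <stdio.h>

int main(void)
{
    float  large_float  = 1000000.0f; /* spacing here is 0.0625       */
    float  small_float  = 1.0f;       /* spacing here is about 1.2e-7 */
    double large_double = 1000000.0;  /* double spacing here: ~1.2e-10 */

    /* Add progressively smaller powers of two; any deviation between
       the three running sums is due to rounding in the addition. */
    for (float add = 1.0f; add >= 0.015625f; add /= 2.0f) {
        large_float  += add;
        small_float  += add;
        large_double += add;
        printf("add %8.6f: large_float=%.6f small_float=%.6f large_double=%.6f\n",
               add, large_float, small_float, large_double);
    }
    return 0;
}
```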
Assuming IEEE 754 arithmetic with the default round-to-nearest-even mode, running this sketch produces the following output:
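```
add 1.000000: large_float=1000001.000000 small_float=2.000000 large_double=1000001.000000
add 0.500000: large_float=1000001.500000 small_float=2.500000 large_double=1000001.500000
add 0.250000: large_float=1000001.750000 small_float=2.750000 large_double=1000001.750000
add 0.125000: large_float=1000001.875000 small_float=2.875000 large_double=1000001.875000
add 0.062500: large_float=1000001.937500 small_float=2.937500 large_double=1000001.937500
add 0.031250: large_float=1000002.000000 small_float=2.968750 large_double=1000001.968750
add 0.015625: large_float=1000002.000000 small_float=2.984375 large_double=1000001.984375
```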
As you can see, the `large_float` value is less precise than the `small_float`, which can make the final result less accurate as well. Once the added value falls below half of `large_float`'s spacing of 0.0625, it is rounded away entirely, while `small_float` and `large_double` keep tracking the true sum.