Numerical Precision of doubles and decimals after division in C#


I am working with a lot of mathematical operations, mostly divisions, and it's really important that after all calculations have been done, all the numbers match the initial state. For example, I am testing the following code with doubles in C#:

        double total = 1000;
        double numberOfParts = 6;

        var sumOfResults = 0.0;
        for (var i = 0; i < numberOfParts; i++)
        {
            var result = total / numberOfParts;
            sumOfResults += result;
        }

This code gives sumOfResults = 999.9999999999999, but I expect 1000. A similar problem happens when using decimals; the result is just more precise:

        decimal total = 1000m;
        decimal numberOfParts = 6m;

        var sumOfResults = decimal.Zero;
        for (var i = 0; i < numberOfParts; i++)
        {
            var result = total / numberOfParts;
            sumOfResults += result;
        }

Expected sumOfResults to be 1000M, but found 1000.0000000000000000000000001M. So even here, when I need to compare with the initial state of 1000, I will never get back the same value I had before dividing the numbers.

I am aware of the field of numerical analysis, but is there some library that will help me get exactly 1000 back after summing all the division results?
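
To make the error visible, here is a small sketch that prints the stored values with the "G17" round-trip format:

        // Sketch: the quotient 1000/6 cannot be stored exactly as a double,
        // so each addition accumulates a tiny rounding error.
        double part = 1000.0 / 6;
        Console.WriteLine(part.ToString("G17"));   // full stored value of the quotient

        double sum = 0.0;
        for (var i = 0; i < 6; i++)
            sum += part;
        Console.WriteLine(sum.ToString("G17"));    // slightly below 1000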

There are 2 best solutions below

Dmitry Bychenko

If you are looking for an exact result, you have to work with rational numbers; there are plenty of assemblies that implement a BigRational type. For example, you can try my own HigherArithmetics (it targets .NET 5):

  using HigherArithmetics.Numerics;

  ... 

  BigRational total = 1000;
  BigRational numberOfParts = 6;

  BigRational sumOfResults = 0;
  
  for (var i = 0; i < numberOfParts; i++) {
    var result = total / numberOfParts;
    sumOfResults += result;
  }

  Console.Write(sumOfResults);

Outcome:

  1000
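
If adding a third-party package is not an option, the same idea can be sketched by hand on top of System.Numerics.BigInteger. The Fraction type below is only a minimal illustration (it is not the HigherArithmetics API): it keeps an exact numerator/denominator pair, so nothing is lost on division:

  using System;
  using System.Numerics;

  // Usage: six exact shares of 1000 sum back to exactly 1000.
  var total = new Fraction(1000, 1);
  var numberOfParts = new Fraction(6, 1);

  var sumOfResults = new Fraction(0, 1);

  for (var i = 0; i < 6; i++)
    sumOfResults += total / numberOfParts;

  Console.WriteLine(sumOfResults); // 1000

  // Minimal exact-fraction type: numerator and denominator are always reduced
  // to lowest terms, so division and addition never lose precision.
  readonly struct Fraction
  {
    public BigInteger Num { get; }
    public BigInteger Den { get; }

    public Fraction(BigInteger num, BigInteger den)
    {
      var g = BigInteger.GreatestCommonDivisor(num, den);

      Num = num / g;
      Den = den / g;
    }

    public static Fraction operator /(Fraction a, Fraction b) =>
      new Fraction(a.Num * b.Den, a.Den * b.Num);

    public static Fraction operator +(Fraction a, Fraction b) =>
      new Fraction(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

    public override string ToString() =>
      Den.IsOne ? Num.ToString() : $"{Num}/{Den}";
  }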

If, however, you want to use the standard double or decimal types, you have to compare with a tolerance:

  double tolerance = 1e-6; 

  double total = 1000;
  double numberOfParts = 6;

  double sumOfResults = 0;
  
  for (var i = 0; i < numberOfParts; i++) {
    var result = total / numberOfParts;
    sumOfResults += result;
  }

  sumOfResults = Math.Abs(total - sumOfResults) <= tolerance
    ? total
    : sumOfResults;

  Console.Write(sumOfResults);

Finally, yet another possibility is to round the answer:

  sumOfResults = Math.Round(sumOfResults, 6);
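
For instance (a small sketch; the literal below is the value produced by the double loop above):

  double sumOfResults = 999.9999999999999;  // sum accumulated by the loop
  double rounded = Math.Round(sumOfResults, 6);

  Console.WriteLine(rounded);         // 1000
  Console.WriteLine(rounded == 1000); // True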

JAlex

This is more a matter of human perception than an actual numeric problem. Almost every floating-point number is inexact due to machine precision. Mathematically, the difference between 1000.0 and 999.99999999999997 is insignificant for most operations.

The solution might seem odd to you, but it works to relieve the inaccuracy anxiety that comes from reading raw computation output.

    double total = 1000;
    double numberOfParts = 6;

    var sumOfResults = 0.0;
    for (var i = 0; i < numberOfParts; i++)
    {
        var result = total / numberOfParts;
        sumOfResults += result;
    }

    Console.WriteLine((float)sumOfResults);
    // 1000

Simply reduce the precision for human-readable output. You saw how increasing precision makes things worse, so go the other way around. The runtime already does this to an extent, since the default double.ToString() rounds the least significant bits off.
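
As an illustration (assuming the sumOfResults computed by the loop above), compare the round-trip format with a reduced-precision one:

    // "G17" shows enough digits to round-trip the stored double and exposes the
    // accumulated error; "G15" (the classic default precision) rounds it away.
    Console.WriteLine(sumOfResults.ToString("G17")); // e.g. 999.99999999999989
    Console.WriteLine(sumOfResults.ToString("G15")); // 1000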

Or you can control the number of significant digits to show with the 'G' format specifier followed by the desired precision (for example 'G5').

    Console.WriteLine($"{sumOfResults:G5}");
    // 1000

In summary, the "issue" you see is common to all computers that use IEEE 754 floating-point types: most numbers are not represented exactly.

For example, Math.PI below is shown as mathematically defined and as shown in C#:

    Environment       Value
    π                 3.141592653589793238462643383279502884...
    .NET 5            3.141592653589793
    Framework v4.8    3.141592653589791
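
One way to inspect the stored value on a given runtime (an illustrative sketch):

    // Default formatting vs. the "G17" round-trip format, which prints the
    // double the runtime actually stores for Math.PI.
    Console.WriteLine(Math.PI);                 // default, runtime-dependent formatting
    Console.WriteLine(Math.PI.ToString("G17")); // full stored value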