So, small question: I've been looking into moving part of my C# code to C++ for performance reasons. When I look at float.Epsilon in C#, its value is different from the equivalent value in C++.
In C#, the value, as documented by Microsoft, is 1.401298E-45.
In C++, the value, as documented on cppreference, is 1.19209e-07.
How can the smallest possible value for a float/single differ between these languages?
If I'm correct, the binary representations should be equal in size (number of bytes) and maybe even bit-for-bit. Or am I looking at this the wrong way?
Hope someone can help me, thanks!
The second value you quoted is the machine epsilon for IEEE binary32 values. The first value you quoted is NOT the machine epsilon: per the documentation you linked, it is the smallest positive Single value that is greater than zero.
From the wiki Variant Definitions section for machine epsilon:
...
The C# documentation is using that variant definition.
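You can see both values side by side from C++; here's a minimal sketch (std::numeric_limits<float>::denorm_min() gives the smallest positive subnormal float, which is what C#'s float.Epsilon corresponds to):

    #include <iostream>
    #include <limits>

    int main() {
        // Machine epsilon: the gap between 1.0f and the next representable
        // float. This is the 1.19209e-07 value cppreference documents.
        std::cout << std::numeric_limits<float>::epsilon() << '\n';

        // Smallest positive (subnormal) float. This matches the C# value
        // of float.Epsilon, 1.401298E-45.
        std::cout << std::numeric_limits<float>::denorm_min() << '\n';
    }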
So the answer is that you are comparing two different kinds of epsilon.
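As for your point about the binary representation: both languages use the same 4-byte IEEE-754 binary32 format; the two values are simply different floats. For instance, C#'s float.Epsilon is the float whose bit pattern is 0x00000001 (all exponent bits zero, lowest mantissa bit set), which you can check with a sketch like this:

    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <limits>

    int main() {
        float smallest = std::numeric_limits<float>::denorm_min();
        std::uint32_t bits;
        std::memcpy(&bits, &smallest, sizeof bits);  // reinterpret the bytes
        std::cout << std::hex << bits << '\n';       // prints 1 (0x00000001)
    }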