Today I found out that casting int to float has pretty interesting precision and rounding rules. I have a couple of numbers that give me unexpected results:
float(1641016854000000000) -> 1.641016854e+18 (as expected)
float(1641016854000000100) -> 1.641016854e+18 (expected 1.6410168540000001e+18)
float(1641016854000000200) -> 1.6410168540000003e+18 (expected 1.6410168540000002e+18)
float(1641016854000000300) -> 1.6410168540000003e+18 (as expected)
float(1641016854000000400) -> 1.6410168540000005e+18 (expected 1.6410168540000004e+18)
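For reference, here is a quick check of the gap between adjacent float values at this magnitude (a small sketch assuming CPython's IEEE 754 double float and Python 3.9+ for math.ulp):

import math

# Spacing between adjacent representable floats near 1.64e18;
# every integer inside a gap is rounded to one of the two neighbours.
print(math.ulp(float(1641016854000000000)))  # prints 256.0 on IEEE 754 doubles
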
So my questions are:
- How can I prevent the values from being rounded?
- How can I get the expected results instead of the ones I have now?
Python's default float type, as in most languages, is an IEEE 754 double with a 53-bit significand, which is only about 15-17 significant decimal digits. Integers around 1.6e18 need about 61 bits, so the nearest representable floats at that magnitude are 256 apart, and every integer in between gets rounded to one of them; that is the floating point error you are seeing. A good way to avoid this is to use Python's decimal module, which stores the digits exactly.
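For example, a minimal sketch using the decimal module with the numbers from the question (a Decimal built from an int is exact, so no digits are lost):

from decimal import Decimal

values = [
    1641016854000000000,
    1641016854000000100,
    1641016854000000200,
    1641016854000000300,
    1641016854000000400,
]

for n in values:
    # Decimal(int) keeps every digit; float(n) is the nearest 64-bit double.
    print(Decimal(n), float(n))

Note that arithmetic on Decimal values follows decimal.getcontext().prec, which defaults to 28 significant digits, so raise it if later calculations need more precision at this scale.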