Strange bitset output after cast from uint64_t to double and back to uint64_t

I have the following program:

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    uint64_t big = 0xFFFFFFFFFFFFFFFF;
    double d_big = static_cast<double>(big);
    uint64_t big_d_big = static_cast<uint64_t>(d_big);

    std::cout << big << std::endl;
    std::cout << std::bitset<sizeof(uint64_t) * 8>(big) << std::endl;

    std::cout << big_d_big << std::endl;
    std::cout << std::bitset<sizeof(uint64_t) * 8>(big_d_big) << std::endl;
}

which gives the following output:

18446744073709551615
1111111111111111111111111111111111111111111111111111111111111111
73896
1111111111111111111111111111111111111111111111111111111111111111

Although I was expecting some loss of information from converting to a double and back, how can 'big' and 'big_d_big', which are both uint64_t, print identical bitsets but different values? Shouldn't the bitsets also differ, given the loss of information? If it helps, I am using Clang 17.0.6 on x86_64. Also, out of curiosity, where exactly does the value '73896' come from in the conversion process?
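
For reference, here is a minimal sketch of how I would inspect the intermediate double value and its raw bit pattern (it assumes C++20 is available for std::bit_cast, which may not match the original build settings):

#include <bit>
#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    uint64_t big = 0xFFFFFFFFFFFFFFFF;
    // Under the default round-to-nearest mode, 2^64 - 1 is expected to round
    // up to 2^64 when converted to double, since 2^64 - 1 is not representable.
    double d_big = static_cast<double>(big);

    std::cout.precision(20);
    // Decimal value of the intermediate double.
    std::cout << d_big << std::endl;
    // Raw IEEE-754 bit pattern of the double, reinterpreted as a uint64_t.
    std::cout << std::bitset<64>(std::bit_cast<uint64_t>(d_big)) << std::endl;
}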
