Squaring 2**32 into a 128-bit type fails


I would expect the following C code to produce 8, 16 for the first printf (it does), and 1, 0 for the second (however, it produces 0, 0). Why is that? (This is on macOS on an Intel Core i7.)

typedef unsigned long long ll;
typedef unsigned __int128 bigint;

void main() {
  ll q;
  bigint s;
  q = 4294967296ULL; // 2**32
  s = q*q;
  printf("%d, %d\n", sizeof(ll), sizeof(unsigned __int128));
  printf("%lld, %lld\n", s / q, s % q);
}

Edit: After modifying the code to what is below, the results are the same (and note: in the original code, I was not trying to print any 128-bit values).

void main() {
  ll q;
  bigint s;
  q = 4294967296ULL; // 2**32
  s = q*q;
  printf("%zu, %zu\n", sizeof(ll), sizeof(unsigned __int128));
  printf("%llu, %llu\n", (ll)(s / q), (ll)(s % q)); 
}

4 Answers

Answer by Eric Postpischil (accepted):

In s = q*q;, q is a 64-bit unsigned long long, and the multiplication is performed with 64-bit arithmetic. Since q has the value 2^32, the mathematical product, 2^64, does not fit in a 64-bit unsigned long long. It wraps modulo 2^64, so the computed result is 0.

Thus s is assigned the value 0, and this is what is printed (aside from the problems with the printf conversion specifiers noted in the comments).

You can request 128-bit arithmetic by converting at least one of the operands to the desired type:

s = (bigint) q * q;
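
For example, here is a complete version of the original program with the cast and matching conversion specifiers (a minimal sketch; the casts back to unsigned long long assume the quotient and remainder fit in 64 bits, which they do here):

#include <stdio.h>

typedef unsigned long long ll;
typedef unsigned __int128 bigint;

int main(void) {
  ll q = 4294967296ULL;     // 2**32
  bigint s = (bigint)q * q; // one operand is converted, so the multiply is 128-bit
  // s is 2**64; s / q is 2**32 and s % q is 0, both of which fit in 64 bits
  printf("%zu, %zu\n", sizeof(ll), sizeof(bigint));
  printf("%llu, %llu\n", (ll)(s / q), (ll)(s % q));
}

This prints 8, 16 followed by 4294967296, 0.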
Answer by Ted Lyngmo:

In s = q*q; the multiplication is done using unsigned long long, not unsigned __int128, so the result overflows. Either cast one of the operands to unsigned __int128 or make q an unsigned __int128 from the start.

You are also using the wrong conversion specifiers for printf. Use %zu for the size_t results of sizeof. In addition to that, unless gcc has a printf extension for printing the 128-bit types, you'll need to find an alternative way of printing them.

Example:

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

typedef unsigned __int128 ubigint;

void print(ubigint x) {
    // print top 64 bits first and the lower 64 bits last:
    printf("%016" PRIx64 "%016" PRIx64, (uint64_t)(x >> 64), (uint64_t)x);
}

int main() {
    ubigint q;         // correct type
    ubigint s;

    q = 4294967296ULL;
    s = q * q;         // no overflow anymore

    printf("%zu, %zu\n", sizeof(long long), sizeof(ubigint));

    print(s);
    putchar('\n');
    print(s / q);
    putchar('\n');
    print(s % q);
    putchar('\n');
}

Output

8, 16
00000000000000010000000000000000
00000000000000000000000100000000
00000000000000000000000000000000
Answer by Bodo:

The calculation s = q*q; is done in the type of q, which results in an overflow: the product wraps within that 64-bit type. The truncated result is then assigned to s, which has a bigger type.

Cast one operand (q) to the bigger type to force the calculation to be done in the bigger type:

s = (bigint)q * q;

Or assign the value of q to a variable of the bigger type and use that variable for the calculation:

s = q;
s = s * s;
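
For completeness, here is the second variant as a full program (a minimal sketch, reusing the typedefs from the question):

#include <stdio.h>

typedef unsigned long long ll;
typedef unsigned __int128 bigint;

int main(void) {
  ll q = 4294967296ULL; // 2**32
  bigint s = q;         // widen first...
  s = s * s;            // ...so the multiplication is carried out in 128 bits
  printf("%llu, %llu\n", (ll)(s / q), (ll)(s % q)); // prints 4294967296, 0
}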
Answer by 0___________:

You need to cast:

#include <stdio.h>

typedef unsigned long long ll;
typedef unsigned __int128 bigint;

int main(void)
{
  ll q;
  bigint s, d;
  q = 4294967296ULL; // 2**32
  s = (bigint)q*q;   // 128-bit multiplication: one operand is converted first
  d = q*q;           // 64-bit multiplication: wraps to 0 before the assignment
  printf("%llu, %llu\n", (ll)(s / q), (ll)(s % q));
  printf("%llu, %llu\n", (ll)(d / q), (ll)(d % q));
}

https://godbolt.org/z/6EKc7WsxW
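
Here the first printf prints 4294967296, 0 (the 128-bit product divided back down), while the second prints 0, 0 because d was computed with 64-bit arithmetic and had already wrapped to 0.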