Why does an unsigned 3-bit bitfield wrap -7 around to 1?

#include <stdio.h>

struct test{
   unsigned int a:3;
};

int main (int argc, char *argv[])
{
  struct test b;
  b.a = -7; // implicit truncation to 1
  return 0;
}

-7 is a four-bit number represented as 1111. Now, my bitfield takes only 3 bits and has the unsigned qualifier. As far as I know, unsigned 3-bit integers range from 0 to 7. Following this assumption, I first tried -7 = 1111 = 15, which wraps around to +7. I even considered the possibility that the MSB was ignored for some reason, but that gave no answer either. The ONLY way I could get this result was by taking the 2's complement of 1111, which equals +1 (which is what the compiler also says). But I have no clue why this works.

2 Answers

Best answer

Here is what the C11 standard says in 6.3.1.3 (Signed and unsigned integers), paragraph 3:

Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

The maximum value of a is 7, so one more than that, 7+1=8, is added or subtracted until the value is in range: -7+8=1, so 1 is stored.
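
If it helps to see that rule as code, here is a minimal sketch that reduces a value exactly the way the quoted paragraph describes (the helper name wrap_unsigned is made up for illustration):

#include <stdio.h>

/* Bring v into the range of an unsigned type `bits` wide by repeatedly
 * adding/subtracting one more than its maximum value (C11 6.3.1.3p3). */
static long wrap_unsigned(long v, unsigned bits)
{
    long range = 1L << bits; /* one more than the maximum; 8 for 3 bits */
    while (v < 0)
        v += range;
    while (v >= range)
        v -= range;
    return v;
}

int main(void)
{
    printf("%ld\n", wrap_unsigned(-7, 3)); /* -7 + 8 = 1 */
    return 0;
}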

-7 in two's complement is (~7)+1, which in 8 bits is (~0b00000111)+0b1 = 0b11111000+0b1 = 0b11111001. Store that in 3 bits and you keep only the lowest 3 bits: 0b001 = 1.
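
You can check that bit-level view directly by masking off everything but the low 3 bits; a minimal sketch, relying on the conversion of -7 to unsigned:

#include <stdio.h>

int main(void)
{
    unsigned int u = (unsigned int)-7; /* bit pattern ends in ...11111001 */
    printf("%u\n", u & 0x7u);          /* lowest 3 bits: 0b001 = 1 */
    return 0;
}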

0b1111 as a 4-bit two's-complement number would be -1. Applying two's complement to 1: (~1)+1 = (~0b0001)+0b1 = 0b1110+0b1 = 0b1111 = -1.
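
If you want to read any raw 4-bit pattern as a two's-complement value, one common trick is to flip the sign bit and then subtract its weight; a minimal sketch (the helper from_4bit is hypothetical):

#include <stdio.h>

/* Interpret the low 4 bits of x as a two's-complement number:
 * XOR flips the sign bit, subtracting 8 restores its negative weight. */
static int from_4bit(unsigned x)
{
    return (int)((x & 0xFu) ^ 0x8u) - 8;
}

int main(void)
{
    printf("%d\n", from_4bit(0xFu)); /* 0b1111 -> -1 */
    printf("%d\n", from_4bit(0x7u)); /* 0b0111 -> +7 */
    return 0;
}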

You are correct that -7 as a 4-bit sign-magnitude number would be 0b1111 (sign bit plus magnitude 7), but nobody uses sign-magnitude anymore; AFAIK C23 removes it and only two's complement will be allowed. And even on a sign-magnitude system the result would still be 1, because the C standard requires it (per the paragraph quoted above).

Answer from the busybee

Disclaimer: this answer only explains why your system behaves as you observed; it does not claim to be correct for all systems. The standard mandates no specific implementation for bit fields, and on other systems the outcome can be completely different.

An unsigned integer of 3 bits width can take only these 8 different values:

decimal   binary
      0      000
      1      001
      2      010
      3      011
      4      100
      5      101
      6      110
      7      111
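
You can also have the program print this table itself by storing each value in the bit field and reading it back; a minimal sketch reusing the struct from the question:

#include <stdio.h>

struct test {
    unsigned int a:3;
};

int main(void)
{
    struct test t;
    printf("decimal binary\n");
    for (unsigned i = 0; i < 8; i++) {
        t.a = i; /* every value 0..7 fits, so nothing wraps here */
        printf("%7u %u%u%u\n", (unsigned)t.a,
               (t.a >> 2) & 1u, (t.a >> 1) & 1u, t.a & 1u);
    }
    return 0;
}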

Now you assign -7 to such an integer. Since all integer expressions (unless a wider type is involved) are implicitly computed as int, which on your machine is most probably 32 bits wide and two's complement, the binary value is 0b11111111111111111111111111111001.
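
One way to make that intermediate value visible is to convert it to unsigned int and print it in hexadecimal; a minimal sketch, assuming the same 32-bit two's-complement int:

#include <stdio.h>

int main(void)
{
    /* Converting -7 to unsigned int exposes the bit pattern
     * the machine actually stores. */
    printf("%#x\n", (unsigned int)-7); /* 0xfffffff9 with a 32-bit int */
    return 0;
}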

Since the compiler performs the assignment by truncating the value to the width of the target type, only the lowest 3 bits are stored: 0b001, which is 1 in decimal.
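
Putting it all together, the program from the question extended with a printf shows the stored value directly; a minimal sketch:

#include <stdio.h>

struct test {
    unsigned int a:3;
};

int main(void)
{
    struct test b;
    b.a = -7;                      /* only the lowest 3 bits are stored */
    printf("%u\n", (unsigned)b.a); /* prints 1 */
    return 0;
}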