I have been trying to understand integer overflow in C programming. I am confused about whether the final printed value depends on the data type given to the variable at declaration or on the format specifier used to print it.
```c
#include <stdio.h>

int main() {
    short int a = 32771;
    printf("%d\n", a);    // output: -32765
    printf("%u\n\n", a);  // output: 4294934531
    int b = 32771;
    printf("%hd\n", b);   // output: -32765
    printf("%hu", b);     // output: 32771
    return 0;
}
```
a is declared as a short int at the very start, but initialized with a value that overflows the short int range. The printf("%d\n", a) statement prints the value treating a as a signed short int (2 bytes, 16 bits), whereas the printf("%u\n\n", a) statement prints it as an unsigned int (4 bytes, 32 bits).
b is declared as an int (4 bytes, 32 bits) at the very start, and initialized with a value well within the int range. The printf("%hd\n", b) statement prints the value treating b as a signed short int (2 bytes, 16 bits), whereas the printf("%hu", b) statement prints it as an unsigned short int.
Please explain this discrepancy. What exactly determines the final output value?
The type of the variable is determined when you declare it.
What happens is that when printf() parses the const char * format and finds %u in the string, that tells it that it needs to print an unsigned int, so it reads the argument with a va_arg() call (see the man page) and then prints the unsigned int it obtained, ultimately with a write() call. Just for info:
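To make that concrete, here is a minimal sketch (not how any real printf is implemented) of a variadic function that, like %u, decides on its own to read its argument as an unsigned int. The names are made up for illustration; it assumes a 16-bit short, a 32-bit int and two's complement, and strictly speaking such a mismatched read is undefined behavior, which is exactly why mixing types and specifiers is risky:

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical mini "%u" handler: it has no idea what the caller really
   passed; it simply reads an unsigned int off the argument list, exactly
   as the format specifier would instruct printf() to do. */
static void print_one_unsigned(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    unsigned int u = va_arg(ap, unsigned int); /* the callee picks the type */
    va_end(ap);
    printf("%s%u\n", label, u);
}

int main() {
    short int a = 32771;              /* wraps to -32765 in a 16-bit short */
    /* a is promoted to int (-32765) when passed, then reinterpreted as
       unsigned int by the callee: 4294934531 with a 32-bit int.           */
    print_one_unsigned("a read as unsigned: ", a);
    return 0;
}
```

On a typical implementation this prints 4294934531, matching the %u line in the question: the callee never sees the caller's declared type, only the (promoted) value that was passed.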
Wikipedia
If you want to dig deeper into bits and overflow, you can print the bits:
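The snippet for that isn't shown here, but a minimal sketch could look like this (print_bits is a made-up helper; it assumes a 16-bit short and a 32-bit int, and the casts to the unsigned counterparts expose the stored two's-complement bit pattern):

```c
#include <stdio.h>

/* Hypothetical helper: prints the low `bits` bits of `value`,
   most significant bit first. */
static void print_bits(unsigned long long value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((value >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main() {
    short int a = 32771;   /* wraps to -32765 on a 16-bit short */
    int b = 32771;         /* fits comfortably in a 32-bit int  */

    /* Casting to the unsigned counterpart exposes the stored bit pattern. */
    print_bits((unsigned short)a, 16);
    print_bits((unsigned int)b, 32);
    return 0;
}
```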
That gives us something like this:
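On a machine matching those assumptions, the sketch above prints:

```
1000000000000011
00000000000000001000000000000011
```

Both objects hold the same low 16 bits; what changes is how wide the object is and whether the format specifier tells printf() to interpret those bits as signed or unsigned.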