I know that the most significant bit is the sign of a number in signed types. But I found one strange (for me) thing: if the number is negative and we use the short type, it looks like 0xffff####. How can this be? A short contains only 2 bytes, but in 0xffff#### we see 4 whole bytes. Why do 16 more bits become ones in the binary representation? Please explain how this works.
For example,
short s = 0x8008;
printf("%x", s);
Output:
>>> ffff8008
As @Pete Becker says, the problem is implicit conversion to int: a short passed to printf as a variadic argument is promoted to int, and because the value is negative the promotion sign-extends it, so %x prints the full 32-bit pattern ffff8008. If you try the same thing with C++ iostreams, though, you will get the output you expect.
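For instance, here is a minimal sketch showing both behaviors (assuming the usual 16-bit short and 32-bit int; the cast through unsigned short is just an illustrative workaround, not part of the original question):

#include <cstdio>
#include <iostream>

int main() {
    short s = 0x8008;                        // negative: the high bit of the 16-bit value is set

    // The variadic call promotes the short to int; the promotion sign-extends
    // the negative value, so %x shows the full 32-bit pattern.
    std::printf("%x\n", s);                  // ffff8008

    // Converting to unsigned short first keeps only the low 16 bits.
    std::printf("%x\n", (unsigned short)s);  // 8008

    // operator<< for short prints the value through unsigned short when the
    // base is hex, so no sign extension shows up here.
    std::cout << std::hex << s << '\n';      // 8008
    return 0;
}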