Here's my C code. I was toying with it to learn the format specifiers, especially %c:
#include <stdio.h>
int main(void)
{
double a = 0.21;
printf("%c", a);
}
I noticed that even if I pass a floating-point value as the argument for the %c format specifier in printf(), the program (compiled with gcc) somehow turns the value 0.21 into the decimal value 223, which corresponds to the character ß in the extended ASCII table.
For a = 0.22 the output is ), whose decimal value in the ASCII table is 29.
I ran the code in both the VS Code and CLion IDEs, but the results were the same. It has been making me scratch my head for days and I can't figure it out.
I want to know how the values 0.21 and 0.22 get converted into the decimals 223 and 29, i.e. how they correspond to the characters ß and ) respectively.
Since the values 0.21 and 0.22 do not correspond to any ASCII character, I was expecting the program to print nothing.
But based on the output I thought that this might have something to do with the binaries.
As 0.21 in binary is 0.00110101110000101000111101011100...
& 223 is 11011111
and 0.22 in binary is 0.00111000010100011110101110000101...
& 29 is 00011101
And I could not find any conversion pattern.
Passing a float (which is automatically converted to a double in this case) for a %c has undefined behavior. printf looks for the argument where your program would have passed it if its type were int (or a smaller type that promotes to int) and uses whatever value it finds there. In your case that value happens to be 223 or 29 depending on circumstances, and it could be something else on a different CPU, after any unrelated change to the program, or even just at a different time or place. Because the behavior is undefined, you could also get no output or a program crash (unlikely but not impossible).

Use compiler warnings to detect such problems (gcc -Wall -Wextra -Werror) and avoid scratching your head trying to make sense of undefined behavior.