#include <stdio.h>
#include <conio.h>

int main() {
    int a = 2;
    float b = 4;
    clrscr();
    printf("%d \n", a + b);
    printf("%f \n", a + b);
    printf("%d \n", a * b);
    printf("%f \n", a * b);
    printf("%d \n", b / a);
    printf("%f \n", b / a);
    getch();
    return 0;
}
Output:
-1293228905
6.000000
0
8.000000
0
2.000000
Seems like it's just taking the starting and ending range of the int datatype.
Your code has undefined behavior for all printf statements that use a %d conversion. The reason is that you pass a value of type float where printf expects a value of type int to have been passed.

To evaluate the expression a + b, the compiler converts the int operand to type float and evaluates the addition using floating point computation. The type of the result is float too. The same applies to the other expressions.

Types int and float have different representations and may be passed in different ways to vararg functions such as printf. Passing a float value where printf expects an int value results in undefined behavior: anything can happen, and it is pointless to try to make sense of the surprising output.

If you mean to use %d, you must convert the value to type int explicitly with a cast:
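A minimal sketch, reusing the question's a and b:

printf("%d \n", (int)(a + b));    /* the cast yields an int, matching %d */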
Note that you can also use %.0f to output the float argument with no decimals, but the output will differ: printf will round the float value to the nearest integer, whereas (int)(a + b) will truncate the float value toward 0 if it can be represented as an int.
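For example (a small sketch; the value 2.6f is an illustration, not from the question):

float x = 2.6f;
printf("%.0f\n", x);      /* prints 3: printf rounds to the nearest integer */
printf("%d\n", (int)x);   /* prints 2: the cast truncates toward 0 */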
Note finally that float values are implicitly promoted to double when passed to printf and other vararg functions. Using type double for all floating point types is recommended unless you target very specific applications such as computer graphics or embedded systems.

Here is a modified version:
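A sketch of what that modified version might look like (casts added where %d is used, and the non-standard conio.h calls dropped for portability); the output shown below is what this sketch prints:

#include <stdio.h>

int main() {
    int a = 2;
    float b = 4;                       /* the int 4 is converted to 4.0f */
    printf("%d \n", (int)(a + b));     /* cast matches the %d conversion */
    printf("%f \n", a + b);            /* float is promoted to double for %f */
    printf("%d \n", (int)(a * b));
    printf("%f \n", a * b);
    printf("%d \n", (int)(b / a));
    printf("%f \n", b / a);
    return 0;
}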
Output:
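6
6.000000
8
8.000000
2
2.000000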