I have a Java program to display the limits of the primitive integer data types, which is as follows:
import static java.lang.Math.*;
class HelloWorld {
    public static void main(String[] args) {
        // byte: 8-bit signed
        System.out.println((byte) (pow(2, 8 - 1)));
        System.out.println((byte) (pow(2, 8 - 1) - 1));
        // short: 16-bit signed
        System.out.println((short) (pow(2, 16 - 1)));
        System.out.println((short) (pow(2, 16 - 1) - 1));
        // int: 32-bit signed
        System.out.println((int) (pow(2, 32 - 1)));
        System.out.println((int) (pow(2, 32 - 1) - 1));
        // long: 64-bit signed
        System.out.println((long) (pow(2, 64 - 1)));
        System.out.println((long) (pow(2, 64 - 1) - 1));
    }
}
Its output is as follows:
-128
127
-32768
32767
2147483647
2147483647
9223372036854775807
9223372036854775807
Can you please explain the output for the int and long typecasts?
I was expecting something like
-128
127
-32768
32767
-2147483648
2147483647
-9223372036854775808
9223372036854775807
You are converting to the integral types after you subtract 1. For long, the subtraction of 1 happens beyond the precision that a double can hold: 2^63 - 1 has no exact double representation, so the result rounds back up to 2^63, and the conversion to long never sees the difference. Here is an example for the long conversion, printing the raw double values alongside the long casts (the class name is just for illustration).
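import static java.lang.Math.*;

class LongConversion {
    public static void main(String[] args) {
        // The double values: 2^63 - 1 is not representable as a double,
        // so the subtraction rounds away and both values equal 2^63
        System.out.println(pow(2, 63));
        System.out.println(pow(2, 63) - 1);
        // Both doubles exceed Long.MAX_VALUE, so both casts clamp to it
        System.out.println((long) pow(2, 63));
        System.out.println((long) (pow(2, 63) - 1));
    }
}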
prints
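9.223372036854776E18
9.223372036854776E18
9223372036854775807
9223372036854775807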
Notice the values are the same: the subtraction made no difference because of the loss of precision, and casting a double greater than Long.MAX_VALUE clamps to Long.MAX_VALUE rather than wrapping (JLS §5.1.3). The int case is slightly different: 2^31 and 2^31 - 1 are both exactly representable as doubles, so the two values really do differ before the cast, but a double greater than Integer.MAX_VALUE likewise clamps to Integer.MAX_VALUE, so both casts still print 2147483647. The shorter types came out the way you expected because a double is narrowed to byte or short in two steps, double to int and then int to byte/short; the second step keeps only the low-order bits, which wraps 128 to -128 and 32768 to -32768.
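Here is a minimal sketch of that contrast; the class name ClampVersusWrap is just for illustration:

class ClampVersusWrap {
    public static void main(String[] args) {
        // double -> int narrowing clamps out-of-range values (JLS 5.1.3)
        System.out.println((int) 2147483648.0);  // 2147483647, i.e. Integer.MAX_VALUE
        // double -> byte and double -> short go through int first; the
        // int -> byte and int -> short steps keep only the low-order bits
        System.out.println((byte) 128.0);        // -128
        System.out.println((short) 32768.0);     // -32768
    }
}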