In VS Code, I was using INT_MIN/INT_MAX, but today I got an error saying "Unidentified Classifier". Instead, it suggested that I use INT8_MIN.
After using this, it worked perfectly.
But what is the core difference between them?
Your code might compile now, but it will not work as you expect. The INT8_* macros have totally different values than the INT_* ones.

INT_MIN/INT_MAX are the min/max values for an int (and are defined in the <limits.h> header). An int is typically 32 bits, so the values you were using before were probably -2147483648 and 2147483647 respectively (they could be larger on platforms where int is 64 bits, or smaller where it is 16 bits).
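For reference, here is a minimal sketch (assuming a platform where int is 32 bits) that prints those limits:

    #include <climits>   // C++ wrapper for <limits.h>; defines INT_MIN and INT_MAX
    #include <iostream>

    int main() {
        // On a typical platform with a 32-bit int this prints
        // -2147483648 and 2147483647.
        std::cout << INT_MIN << '\n';
        std::cout << INT_MAX << '\n';
    }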
On the other hand, INT8_MIN/INT8_MAX are the min/max values for an 8-bit signed integer (a.k.a. int8_t), which are -128 and 127 respectively. By the way, they are also defined in a different header (<stdint.h>), which might explain why switching to them resolved your compilation error.
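To see why simply swapping the macros is dangerous, here is a small sketch (the variable names are made up for illustration) where clamping against INT8_MAX instead of INT_MAX silently changes the result:

    #include <climits>   // INT_MAX
    #include <cstdint>   // INT8_MAX
    #include <iostream>

    int main() {
        int sensorReading = 1000;   // fits comfortably in an int, but not in an int8_t

        // Clamping against INT_MAX leaves the value untouched...
        int clampedToInt  = (sensorReading > INT_MAX)  ? INT_MAX  : sensorReading;
        // ...while clamping against INT8_MAX silently caps it at 127.
        int clampedToInt8 = (sensorReading > INT8_MAX) ? INT8_MAX : sensorReading;

        std::cout << clampedToInt  << '\n';   // 1000
        std::cout << clampedToInt8 << '\n';   // 127
    }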
The bottom line: in order to get the behavior you had before, you should use std::numeric_limits<int>::min() and std::numeric_limits<int>::max(), from the <limits> header.
Note that INT_MIN and similar constants are actually macros "inherited" from C. In C++ we prefer the std::numeric_limits template (mentioned above), which accepts the type as a template argument. This makes it harder to make mistakes; you could even use decltype(variable) as the template argument.
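A short sketch of that recommended approach (the variable name counter is just an example):

    #include <limits>    // std::numeric_limits
    #include <iostream>

    int main() {
        // Same values as INT_MIN / INT_MAX, but obtained through the template.
        std::cout << std::numeric_limits<int>::min() << '\n';
        std::cout << std::numeric_limits<int>::max() << '\n';

        // With decltype the limits follow the variable's type automatically,
        // so changing counter to another integer type keeps the bounds correct.
        long long counter = 0;
        std::cout << std::numeric_limits<decltype(counter)>::max() << '\n';
    }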
Finally, you also mentioned INT16_MIN/INT16_MAX in your title: those are the corresponding min/max values for a 16-bit signed integer, i.e. -32768 and 32767 respectively. The same principle applies to the other fixed-width constants, and again they have equivalents in std::numeric_limits, which is the recommended approach in C++.
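If you want to verify that yourself, a quick sketch comparing the macro and the template for 16-bit values:

    #include <cstdint>   // std::int16_t, INT16_MIN, INT16_MAX
    #include <limits>    // std::numeric_limits
    #include <iostream>

    int main() {
        // Both lines print -32768 and 32767.
        std::cout << INT16_MIN << ' ' << INT16_MAX << '\n';
        std::cout << std::numeric_limits<std::int16_t>::min() << ' '
                  << std::numeric_limits<std::int16_t>::max() << '\n';
    }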