According to online documentation, there are differences between these fixed-width integer types. For `int*_t`, the width is fixed to whatever the value of `*` is. Yet for the other two families, the adjectives "fastest" and "smallest" are used in the description to request the fastest or the smallest instance provided by the underlying data model.
What are the objective meanings of "the fastest" or "the smallest"? What is an example where this would be advantageous or even necessary?
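For concreteness, a small program like the following (standard `<stdint.h>` names only) shows what the three families actually resolve to on a given implementation; the `fast` and `least` widths vary by platform:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact width: 16 bits, or the typedef doesn't exist at all. */
    printf("uint16_t:       %zu bits\n", sizeof(uint16_t) * CHAR_BIT);
    /* At least 16 bits, chosen for speed; may be wider than 16. */
    printf("uint_fast16_t:  %zu bits\n", sizeof(uint_fast16_t) * CHAR_BIT);
    /* At least 16 bits, chosen for size; usually exactly 16. */
    printf("uint_least16_t: %zu bits\n", sizeof(uint_least16_t) * CHAR_BIT);
    return 0;
}
```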
There is no objective meaning to "fastest"; it's basically a judgement call by the compiler writer. Typically it means expanding smaller values to the native register width of the architecture, but that's not always fastest (e.g. a one-billion-entry array would probably be processed quicker as 8-bit values, but `uint_fast8_t` might be a 32-bit value because CPU register manipulation goes faster at that size).

"Smallest" usually means "the same size as the number of bits requested", but on unusual architectures with a limited choice of sizes (e.g. old Crays, where everything was a 64-bit type), `int_least16_t` would work (and seamlessly become a 64-bit value), while the compiler would likely error out on `int16_t` (because it's impossible to provide a true 16-bit integer type there).

The point is: if you're relying on overflow behavior, you need to use an exact fixed-width type.
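To make the overflow point concrete, here's a small sketch (using only standard `<stdint.h>`/`<inttypes.h>` names): unsigned exact-width types are guaranteed to wrap modulo 2^N, while a `fast` variant may be wider and therefore not wrap at the same boundary.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* uint16_t is guaranteed to wrap modulo 2^16: 65535 + 1 == 0. */
    uint16_t exact = UINT16_MAX;
    exact++;
    printf("uint16_t:      %" PRIu16 "\n", exact);    /* always 0 */

    /* uint_fast16_t may be wider (e.g. 32 or 64 bits), so the same
       increment can yield 65536 instead of wrapping to 0. */
    uint_fast16_t fast = UINT16_MAX;
    fast++;
    printf("uint_fast16_t: %" PRIuFAST16 "\n", fast); /* often 65536 */
    return 0;
}
```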
Otherwise, you should probably default to `least` types for maximum portability, switching to `fast` types in hot code paths; profiling would be needed to determine whether it really makes any difference.
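As a minimal sketch of that convention (the `checksum` function and its data are made up for illustration): keep bulk data compact and portable in a `least` type, and let the hot loop's working variables be `fast` types so the compiler can pick register-friendly widths.

```c
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bulk storage stays compact as uint_least8_t; the accumulator and
   loop index use fast types the compiler can size for speed. */
static uint_fast32_t checksum(const uint_least8_t *samples, size_t n)
{
    uint_fast32_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += samples[i];
    return sum;
}

int main(void)
{
    uint_least8_t data[] = { 1, 2, 3, 250 };
    printf("%" PRIuFAST32 "\n", checksum(data, sizeof data / sizeof data[0]));
    return 0;
}
```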