In C, many operations rely on bit shifting, and the shifted value is often an integer literal. For example, consider the following code snippet:
#define test_bit(n, flag) (1UL << (n) & (flag))
As far as I know, the integer literal suffix UL is supposed to suppress unwanted behavior in a shift; for example, right-shifting a signed integer may sign-extend and set multiple high bits. However, if only a left shift is performed, as shown above, do we still need the integer literal suffix?
Since a left shift won't cause that behavior, I can't figure out what its purpose is. Code like the above appears frequently in projects such as the Linux kernel, which makes me think there must be a need for it. Does anyone know the purpose of the UL suffix in this case?
Sign extending only applies to right shifts, so that's not applicable.
`<<` can invoke undefined behaviour in two ways through its left operand (using the standard's terminology for `E1 << E2`):

- `E1` has a signed type and a negative value.
- `E1` has a signed type and a nonnegative value, and `E1 × 2^E2` is not representable in the result type.

In our case, `E1` is a positive value, so the former isn't applicable. However, the latter could apply depending on the type of `E1`. Let's look at what results we get for different types on two systems, both with a 32-bit `int`.

System with a 64-bit `long` (e.g. Linux on x86-64):

| Expression   | `test_bit(31, flag)` | `test_bit(63, flag)` |
|--------------|----------------------|----------------------|
| `1 << (n)`   | undefined            | undefined            |
| `1U << (n)`  | ok                   | undefined            |
| `1L << (n)`  | ok                   | undefined            |
| `1UL << (n)` | ok                   | ok                   |

System with a 32-bit `long` (e.g. Windows on x86-64):

| Expression   | `test_bit(31, flag)` | `test_bit(63, flag)` |
|--------------|----------------------|----------------------|
| `1 << (n)`   | undefined            | undefined            |
| `1U << (n)`  | ok                   | undefined            |
| `1L << (n)`  | undefined            | undefined            |
| `1UL << (n)` | ok                   | undefined            |

So, assuming you want to be able to test any of the bits of `flag`:

- `1U` is needed if `flag` can be a `signed int`, an `unsigned int`, or shorter.
- `1UL` is needed if `flag` can also be a `signed long` or an `unsigned long`.

Finally, undefined behaviour can also result from `E2`. This happens if `E2` is negative, equal to the width of `E1`, or greater than the width of `E1`. This puts a constraint on the valid values for `test_bit`'s first argument.