Interpreting the format specifier in printf("%# 01.1g",9.8)


Consider the following printf instruction:

printf("%# 01.1g", 9.8);

What should it print?
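For reference, a minimal complete program wrapping the call (nothing beyond the snippet above, just made self-contained so it can be compiled locally or on GodBolt):

#include <stdio.h>

int main(void)
{
    /* flags: '#' (alternative form), ' ' (space for non-negative values), '0' (zero padding);
       field width 1; precision 1; conversion g */
    printf("%# 01.1g\n", 9.8);
    return 0;
}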

I'm reading the description of the g specifier on cppreference.com, which says (text for G removed):

converts floating-point number to decimal or decimal exponent notation depending on the value and the precision.

For the g conversion style conversion with style e or f will be performed.
Let P equal the precision if nonzero, 6 if the precision is not specified, or 1 if the precision is 0. Then, if a conversion with style e would have an exponent of X:

  • if P > X ≥ −4, the conversion is with style f and precision P − 1 − X.
  • otherwise, the conversion is with style e and precision P − 1.

Unless alternative representation is requested the trailing zeros are removed, also the decimal point character is removed if no fractional part is left.

In our case,

  • P = 1 , specified explicitly.
  • X = 0, since a "conversion with style e", i.e. "%# 01.1e", yields 9.8e+00 (adjust the GodBolt program and you'll see).
  • 1 > 0 >= -4 holds.

Consequently, the conversion should be with style f and precision P - 1 - X = 1 - 1 - 0 = 0, i.e. "%# 01.0f", which yields "10.".

... but that is not what glibc produces: I get " 1.e+01", as can also be seen on GodBolt.
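A side-by-side sketch makes the discrepancy concrete; the "%# 01.0f" line is the conversion my reading predicts, and the brackets are only there to make the leading space produced by the ' ' flag visible:

#include <stdio.h>

int main(void)
{
    printf("[%# 01.0f]\n", 9.8); /* the f-style conversion my reading predicts: [ 10.]    */
    printf("[%# 01.1g]\n", 9.8); /* the actual g conversion; glibc prints:      [ 1.e+01] */
    return 0;
}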

So,

  • Am I mis-reading the quoted text?
  • Is cppreference.com wrong?
  • Is this - perish the thought - a glibc 2.36 bug?

There are 2 answers below.

Accepted answer by Eric Postpischil

This comes down to a lack of clarity about “a conversion with style E” and a discrepancy between what the precision means for E and what it means for g.

With E, the precision is the number of digits to appear after the decimal point, per C 2018 7.21.6.1 4. With g, the precision is the maximum number of significant digits. Those differ; E has an additional digit before the decimal point, giving a total of one more digit than its nominal “precision.”

Thus, in considering how to format 9.8 for %.1g, we first consider how it would be formatted for %.0E, not %.1E, since both %.1g and %.0E request one digit, whereas %.1E requests two digits. For %.0E, “1E+01” would be produced, so X, the exponent, is 1, not 0.
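A short sketch of the three calls in play; the outputs in the comments are what the standard's rules require (and what glibc prints):

#include <stdio.h>

int main(void)
{
    printf("%.0E\n", 9.8); /* one significant digit:  1E+01   -> exponent X = 1            */
    printf("%.1E\n", 9.8); /* two significant digits: 9.8E+00 -> not the comparison %g uses */
    printf("%.1g\n", 9.8); /* P = 1, X = 1, so style e with precision P - 1 = 0: 1e+01     */
    return 0;
}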

Answer by vitaut

According to the C standard (draft), precision means

the maximum number of significant digits for the g and G conversions

In "%# 01.1g" the precision is 1 which means that the value will be rounded to one significant digit (9.8 is rounded to 10).

Now let's look at the definition of g from the same standard draft:

A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:

  • if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).
  • otherwise, the conversion is with style e (or E) and precision P − 1.

In our case, P = 1 (the precision) and X = 1 (the exponent of 1e+01, which is 10 in exponential notation).

P > X is false, so the "otherwise" clause takes effect, meaning that the exponential format (style e) should be used.
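Building the result up one flag at a time makes this concrete; the '0' flag and the field width of 1 end up having no effect because the converted result is already wider than the field (brackets added only to show the leading space):

#include <stdio.h>

int main(void)
{
    printf("[%.1g]\n", 9.8);     /* style e, precision P - 1 = 0:          [1e+01]   */
    printf("[%#.1g]\n", 9.8);    /* '#' keeps the decimal point:           [1.e+01]  */
    printf("[%# .1g]\n", 9.8);   /* ' ' adds a space for positive values:  [ 1.e+01] */
    printf("[%# 01.1g]\n", 9.8); /* '0' and width 1 change nothing here:   [ 1.e+01] */
    return 0;
}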

So both glibc and cppreference are correct, and in fact the latter's wording seems to be based directly on the standard.