atof("0") returns 2 in float variable


I'm writing embedded C on an STM32F437VI. At some point I need to parse strings that contain numbers as float values. I use atof and it always produces the correct result, except in one weird case: if the string is "0" or "0.0", I get 2.

I have included stdlib.h and even tried casting the result with (float)atof(), but for some reason the float variable always ends up with the value 2 after atof("0"), while the double variable gets the correct 0. Why is this happening? Thank you all in advance.

#include "stdio.h"
#include "stdlib.h"

int main(void)
{
    char test[] = "0";
    float val1;
    double val2;

    val1 = atof(test);
    val2 = atof(test);

    return 0;
}

I expect the float variable to end up as 0 as well, but it keeps getting the fixed value 2.
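
For reference only (not what the project actually uses), strtof from stdlib.h parses directly into a float and reports how much of the string it consumed, so a failed conversion is detectable; a minimal sketch of that alternative:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *test = "0";
    char *end = NULL;

    /* strtof returns a float directly and sets 'end' to the first
       character it could not consume, so "no conversion" is detectable */
    float val = strtof(test, &end);

    if (end == test)
        puts("no conversion performed");
    else
        printf("parsed %f\n", val);

    return 0;
}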

UPDATE: This code section is part of a much bigger project, so there is no point in elaborating on the rest of it. I have a custom Makefile with the linker flag options

"-mfloat-abi=hard -mfpu=fpv4-sp-d16 -u _printf_float".

Could this be affecting the issue?

As far as the MCVE is concerned, main.c contains the code above and produces the results described. Can anybody think of a reason atof() behaves this way? I have of course tried an online C compiler with the exact same code, and the result there is 0. I assume that if something were seriously wrong with the standard library, atof() would fail in the other cases too, but it fails only for "0", and only with the value 2 assigned to the float variable.

I watch the variables in real time with the Ozone debugger. Could the reason be the floating-point implementation on the STM32F4 MCU, or a missing parameter in the custom Makefile, or something like that?


2 Answers

Answer by Swordfish

First, your question lacks a Minimal, Complete, and Verifiable example.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char test[] = "0";
    float val1;
    double val2;

    val1 = atof(test);
    printf("%f\n", val1);

    val2 = atof(test);
    printf("%f\n", val2);
}

Output:

0.000000
0.000000


So either your real code differs from what you have posted, or your standard library implementation is fubar.

Answer by R.. GitHub STOP HELPING ICE

It looks like the cause of your problem is in the updated question text:

UPDATE: This code section is part of a much bigger project, so there is no point in elaborating on the rest of it. I have a custom Makefile with the linker flag options

"-mfloat-abi=hard -mfpu=fpv4-sp-d16 -u _printf_float".

Assuming these options are changes vs the defaults, what you're doing is telling the compiler to generate code for a different ABI from your toolchain's library ecosystem. The code generated for main expects the result of atof in a floating point register, but atof is using the standard ABI, which passes floating point arguments and return values in general-purpose registers. Thus, main is just reading whatever junk happens to be left in the floating point registers used for return values.
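
As a quick sanity check, here is a minimal sketch (assuming a GCC- or Clang-based ARM toolchain, which defines the macro __ARM_PCS_VFP when generating code for the hard-float calling convention) that confirms which ABI the compiler itself targets. Note that this says nothing about how the prebuilt library containing atof was compiled, which is the other half of a potential mismatch:

#include <stdio.h>

int main(void)
{
#ifdef __ARM_PCS_VFP
    /* defined when floating-point arguments and return values
       are passed in VFP (floating-point) registers */
    puts("compiled for the hard-float (VFP register) calling convention");
#else
    puts("compiled for the soft-float calling convention");
#endif
    return 0;
}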

See if your problem goes away if you remove -mfloat-abi=hard. If so, you've probably found your problem. You need to either build a toolchain and libraries for the hardfloat ABI, or stick with the default non-hardfloat calling convention.