I wonder which of the following is better to compute in the C language:
if (x==0)
{
// Some code ...
}
or
if (0==x)
{
// Some code ...
}
I know the latter is safer in case the programmer forgets the second "=" and writes "0 = x" instead of "0 == x", because the compiler will throw an error, whereas "x = 0" would compile silently.
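For example (a minimal illustration, assuming x is a plain int):

    int main(void)
    {
        int x = 5;
        if (x = 0)      /* typo compiles: assigns 0 to x, condition is always false */
        {
            /* never reached */
        }
        /* if (0 = x)   -- the same typo here is a compile error: 0 is not an lvalue */
        return 0;
    }

(Modern compilers such as GCC and Clang do warn about the first case with -Wall, though.)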
But my question is:
- From the processor's perspective: is there a difference in compute time?
- In general (not just for AMD/Intel CPUs, but also for embedded components with different processors).
I think this is related to "lvalue" and "rvalue", and I tried to simulate it on my PC, but without gaining significant insights.
There is no general answer. The actual computing time depends on the target system, the compiler, and its optimization settings.
If you disable all optimization, you might observe different machine code generated, and that could possibly have different computing time.
However, modern compilers optimize your code in many ways. You can expect that both variants generate the same machine code.
To be sure about your system, compile both variants and look at the generated machine code. Serious compilers have options or tools to produce a listing.
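For example, with GCC you can produce an assembly listing with the -S option (a minimal sketch; the file and function names are just placeholders):

    /* eq.c -- compile with "gcc -O2 -S eq.c" and inspect eq.s */
    int test_a(int x)
    {
        return x == 0;   /* variable on the left */
    }

    int test_b(int x)
    {
        return 0 == x;   /* constant on the left (Yoda style) */
    }

On a typical x86-64 build, both functions compile to identical instructions; any difference would point to a compiler quirk, not to a cost inherent in the language.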
Conclusion: If you do not experience performance problems, do not micro-optimize. Write the source in the way that is safest against human errors and easiest to understand. (BTW, Yoda conditions tend to be harder to understand.)
The terms "rvalue" is only used twice in the standard, one time in a note:
And the second time in the index, pointing to this note. Apparently it is not used any more.
The equality operator does not differentiate between its operands: both are treated the same way, and neither needs to be an lvalue.
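A small example of this symmetry (assuming two plain int variables): neither operand of == needs to be an lvalue, unlike the left operand of an assignment:

    int x = 0, y = 1;
    int a = (x == 0);         /* 1 */
    int b = (0 == x);         /* 1, identical by definition */
    int c = ((x + 1) == y);   /* both operands may be non-lvalue expressions */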