Floating point literals are of type double by default

Care must be taken when initializing variables of type float to literal values or comparing them with literal values, because unsuffixed floating point literals such as 0.1 are of type double. This may lead to surprises:

#include <stdio.h>
int main() {
    float n;
    n = 0.1;                          /* the double literal 0.1 is rounded to float here */
    if (n > 0.1) printf("Weird\n");   /* n is converted back to double for the comparison */
    return 0;
}
// Prints "Weird" when n is float

Here, n is initialized from the double literal 0.1 and rounded to single precision, giving the value 0.10000000149011612. For the comparison, n is then converted back to double precision and compared with the 0.1 literal (which equals 0.10000000000000001), so the two values do not match and the branch is taken.
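
The mismatch becomes visible when both values are printed with enough digits. A minimal sketch, assuming an IEEE 754 platform (binary32 float, binary64 double):

#include <stdio.h>

int main(void) {
    float  n = 0.1;   /* double literal, rounded to single precision */
    double d = 0.1;   /* double literal, kept at double precision */

    /* On a typical IEEE 754 platform this prints:
       n = 0.10000000149011612
       d = 0.10000000000000001 */
    printf("n = %.17g\n", (double)n);
    printf("d = %.17g\n", d);
    return 0;
}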

Besides rounding errors, mixing float variables with double literals can also hurt performance on platforms that lack hardware support for double precision, because the usual arithmetic conversions promote the float operand to double before the operation.
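
One common fix for both problems is to use float literals (written with the f suffix), so that no double-precision value enters the expression. A short sketch of the corrected comparison:

#include <stdio.h>

int main(void) {
    float n = 0.1f;   /* float literal: the value is rounded to single precision once */

    /* Both operands are float, so the comparison is done in single
       precision and the surprising mismatch disappears. */
    if (n > 0.1f) printf("Weird\n");
    else          printf("OK\n");   /* this branch is taken */
    return 0;
}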
