Tags: floating-point, rust, precision, floating-accuracy, floating-point-conversion

Why does the compiler parse a floating point number in a source file differently than at runtime?


I've been working on some Rust projects lately to learn the language and have some fun. I am writing something similar to libconfig in Rust, using the peg crate to generate my parser.

For the last hour, I've been fighting with a weird bug where some values that were parsed from a sample config file didn't compare equal to the expected values.

Ultimately, I narrowed down the bug to this:

fn main() {
    let my_flt = "10.4e-5".parse::<f32>().unwrap();
    let other_flt = 10.4e-5;
    println!("{} == {} -> {}", my_flt, other_flt, my_flt == other_flt);
}

Surprisingly, this prints:

0.000104 == 0.000104 -> false

See it in the playground

Now, I know this has to be something related to the old infamous floating point precision issues. I know that even though two floats might look the same when printed, they can compare differently for various reasons, but I would have guessed that getting a float from "X".parse::<f32>() would be equivalent to explicitly declaring and initializing a float to the literal X. Clearly, I'm mistaken, but why?
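
To rule out a printing artifact, one way to check is to compare the raw bit patterns rather than the formatted output. The sketch below is only illustrative; whether the two bit patterns actually differ depends on the compiler version, as noted in the solution.

fn main() {
    let parsed: f32 = "10.4e-5".parse().unwrap();
    let literal: f32 = 10.4e-5;

    // Default `{}` formatting may round both values to the same short
    // decimal string, so the raw bit patterns are the reliable way to
    // tell whether they are the exact same f32 value.
    println!("parsed  bits: {:#010x}", parsed.to_bits());
    println!("literal bits: {:#010x}", literal.to_bits());
    println!("bitwise equal: {}", parsed.to_bits() == literal.to_bits());
}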

After all, if I declare and initialize a float to X, internally, the compiler will have to do the same job as parse() when generating the final executable.

Why does the compiler parse the float X in the source file differently than the runtime parse::<f32>() function does? Shouldn't this be consistent? I'm having a hard time wrapping my head around this!


Solution

  • Note that the OP's specific example no longer fails (tested in Rust 1.31.0)


    It's a known issue that the standard library and the rustc lexer parse floating point values differently.

    The standard library ultimately calls down to from_str_radix, and you can see the implementation there. I'm not sure exactly where the compiler's parsing of floating-point literals takes place, but this comment indicates that it leverages LLVM:

    The compiler uses LLVM to parse literals and we can’t quite depend on LLVM for our standard library.
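
    If you are on an older toolchain where the two code paths still disagree, a practical workaround is to avoid exact equality and compare within a tolerance. The sketch below is only illustrative, not something prescribed by the answer: the helper name approx_eq and the tolerance of a few f32::EPSILON are arbitrary choices you would tune to your data.

    fn approx_eq(a: f32, b: f32, rel_tol: f32) -> bool {
        // Treat the values as equal if they differ by at most `rel_tol`
        // times the larger magnitude; the right tolerance depends on the
        // data being compared.
        let diff = (a - b).abs();
        let scale = a.abs().max(b.abs());
        diff <= rel_tol * scale
    }

    fn main() {
        let parsed: f32 = "10.4e-5".parse().unwrap();
        let literal: f32 = 10.4e-5;

        // A few ULPs of slack is enough to absorb a one-bit difference
        // in the parsed value.
        println!("{}", approx_eq(parsed, literal, 4.0 * f32::EPSILON));
    }

    Alternatively, you can route both values through str::parse at runtime so they go through the same parsing code and compare exactly.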