Question:
Let's say we have some arbitrary decimal number (like 1.3456) that is exact in its (decimal) unit in the last place. How many decimal places do we need so that no two binary floating-point numbers fall into the imprecision range of the decimal number, for a given floating-point format?
Another way to ask might be (if my thinking is correct): how many places are needed so that rounding-to-nearest from the decimal constant to a floating-point value yields the same floating-point number for all decimal numbers within the imprecision range of the constant? But I am not sure if that is clearer.
Background: when I get (or give) range requirements in the form of decimal constants (say, for comparison against limits), there are always assumptions about how these constants are represented in the machine format. For floating point, I normally just write something like this in my specification: "The constant limits are assumed to be IEEE 754 single-precision constants." But this doesn't help if one wants to do exact testing on these limits.
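To make the issue concrete, here is a small Python sketch (my own illustration, not part of the original question) using the example value 1.3456: two adjacent single-precision numbers both print as `1.3456` with four decimal places, so a four-place decimal limit cannot distinguish them, while nine significant digits are enough to separate them and round-trip exactly under round-to-nearest:

```python
import struct

def to_f32_bits(x: float) -> int:
    """Round a Python float (binary64) to IEEE 754 single precision
    and return the resulting 32-bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def from_f32_bits(bits: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE 754 single-precision value."""
    return struct.unpack('<f', struct.pack('<I', bits))[0]

a_bits = to_f32_bits(1.3456)       # nearest float32 to 1.3456
b_bits = a_bits + 1                # the very next representable float32
a, b = from_f32_bits(a_bits), from_f32_bits(b_bits)

# With only four decimal places, both floats collapse to the same string:
assert f"{a:.4f}" == f"{b:.4f}" == "1.3456"

# With nine significant digits, each float32 gets a decimal string that
# rounds back (round-to-nearest) to exactly the same float32:
assert to_f32_bits(float(f"{a:.9g}")) == a_bits
assert to_f32_bits(float(f"{b:.9g}")) == b_bits
```

Nine significant digits is the well-known minimum that guarantees a binary32→decimal→binary32 round trip; the analogous figure for binary64 is 17.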
Thanks to user asa's comment, it became quite clear to me that the answer was right before my eyes in this picture in the relevant Wikipedia article on IEEE 754 floating-point numbers.
For my use case, I just go with 30 digits (because it is an easy rule to remember), or derive 30 digits from the mathematically exact number, and tell my testers that I am assuming round-to-nearest.
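As a quick sanity check of that rule of thumb (a sketch of my own, using Python's `float`, which is binary64): 17 significant digits already pin down a double uniquely, so a constant written out with 30 digits is comfortably unambiguous, and parsing it back under round-to-nearest recovers the identical value:

```python
import math

# Print pi (as stored in binary64) with 30 significant decimal digits.
# Python formats the exact binary value correctly rounded, so the extra
# digits beyond the 17 needed for a round trip are harmless.
s = f"{math.pi:.30g}"

# Parsing the 30-digit string back gives exactly the original double.
assert float(s) == math.pi
```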
For a more detailed treatment of this issue, see the blog post mentioned in njuffa's comment.