I am developing an application which provides an HMI that can show values read from remote sources. The application does not know what the values themselves mean -- this is configured by the end user.
One of the types supported is 32-bit floats (IEEE single precision). Since IEEE floats can only have limited precision, when representing very large or very small values (think of e.g. 12312984124000000000000000000000, .000000000000000000000032423894) there are many digits that don't convey any additional information. In those cases I would like to switch to scientific notation for clarity.
The question is: given the precision and characteristics of IEEE 32-bit floats, is there a well-defined algorithm to determine when to make this switch? Ideally it would happen when the decimal representation of the value contains non-significant digits.
Since float has about 7 significant decimal digits, you should switch to scientific notation when log10(abs(x)) > 7 or log10(abs(x)) < -7.
Update:
Since float is stored in binary, it's better to work with binary thresholds. The mantissa has 23 significant bits, so you can check

abs(x) > 2^23 or abs(x) < 2^-23.
In C you can use (1 << 23) to get the first threshold, and FLT_EPSILON from float.h to get the second (for single precision, FLT_EPSILON is exactly 2^-23).