I have a numpy.float32
object that I want to encode as JSON. The problem is that when I convert to a native python float
, I lose the precision of the value.
Example:
In [1]: import numpy as np
In [4]: np.float32(295.96).item()
Out[4]: 295.9599914550781
However, if I first convert to string, then to float, the precision is retained.
In [3]: float(str(np.float32(295.96)))
Out[3]: 295.96
Is there a way to retain my precision without having to go through a string first?
Why does str(np.float32(295.96))
seem to retain the precision but np.float32(295.96).item()
(or float(np.float32(295.96))
or np.asscalar(np.float32(295.96))
) does not?
Note: I cannot assume that the precision will always be .01
. I need to retain the native precision of the data.
It's not possible to store 64 bits of precision in a 32-bit value. In python, float
is 64 bit (what is referred to as a double
in C). As a demo, everything is OK with 64-bit floats:
>>> d = 295.6; dn = np.float64(d)
>>> (d, dn)
(295.6, 295.95999999999998) # numpy prints out more digits than python
>>> d == dn # but these are still the same
True
>>> d - dn
0.0
But if you try and use 32 bits, you drop precision
>>> d = 295.96; fn = np.float32(d)
>>> (d, fn)
(295.96, 295.95999)
>>> d == fn
False
>>> d - fn
8.5449218545363692e-06
Why does str(np.float32(295.96)) seem to retain the precision
str(np.float32(295.96))
looks like it retains precision because np.float32.__str__
rounds (in base 10) for convenience. It just so happens that when rounded, it exactly matches the text you typed in your code. As a result, it has exactly the same value.