I have the following array and function:
import numpy as np
a = np.array([24, 23, 4, 52, 34, 49, 59, 18, 19])
def normalize(a):
    amin, amax = min(a), max(a)
    for i, val in enumerate(a):
        a[i] = (val - amin) / (amax - amin)
    return a
I get the following result:
array([0, 0, 0, 0, 0, 0, 1, 0, 0])
How can I keep the decimal values instead of having everything truncated to 0 or 1?
What happens is that a.dtype is an integer dtype, so each result is truncated back to an integer when you assign it into a single position with a[i] = ....
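If you want to keep the in-place loop, a minimal fix (a sketch of one option, assuming making a copy is acceptable) is to work on a float copy of the array, for example by casting with astype:

import numpy as np

a = np.array([24, 23, 4, 52, 34, 49, 59, 18, 19])

def normalize(a):
    # cast to float so element-wise assignment keeps the decimals
    a = a.astype(float)
    amin, amax = min(a), max(a)
    for i, val in enumerate(a):
        a[i] = (val - amin) / (amax - amin)
    return a

normalize(a)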
In general, though, you should avoid looping over NumPy arrays and use vectorized operations instead:
a = np.array([24, 23, 4, 52, 34, 49, 59, 18, 19])

def normalize(a):
    # np.min and np.max are vectorized; Python's built-in min/max are not
    amin, amax = np.min(a), np.max(a)
    # true division (/) promotes the integer array to float, so the decimals are kept
    return (a - amin) / (amax - amin)

normalize(a)
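Called on the array above, this returns a new float64 array scaled to the [0, 1] range, along the lines of:

array([0.36363636, 0.34545455, 0.        , 0.87272727, 0.54545455,
       0.81818182, 1.        , 0.25454545, 0.27272727])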