I have a function that takes a numpy array. I know it will be either 1-D with C entries or 2-D of shape (R, C).
What I need is to divide every entry by the sum of its corresponding column. I read this question, and the accepted answer works if I get an array of shape (R, C). However, with a 1-D array, np.sum gives me a scalar, and indexing it along a dimension that does not exist throws an error. Is there a way to make this work independently of the dimension of the array? See my code below:
import numpy as np

def f(x):
    sums = np.sum(x, axis=0)    # one sum per column for a 2-D input
    return x / sums[None, :]    # fails when sums is a scalar

scores = np.array([[1.0, 2, 3, 6],
                   [2, 4, 5, 6],
                   [3, 8, 7, 6]])

print(f(scores))
print(f(np.array([1, 2, 3])))
I know why the error occurs (sums is just a scalar in the second function call), but how do I get this to work without a bunch of if-statements?
I'm pretty new to numpy, so forgive me, I don't really know what to google for.
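For reference, this is what np.sum(x, axis=0) gives me in each case, which is where the mismatch comes from:

import numpy as np

scores = np.array([[1.0, 2, 3, 6],
                   [2, 4, 5, 6],
                   [3, 8, 7, 6]])

print(np.sum(scores, axis=0))                # [ 6. 14. 15. 18.] -> an array, one entry per column
print(np.sum(np.array([1, 2, 3]), axis=0))   # 6 -> a plain scalar, so indexing it with [None, :] raises IndexError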
Are you looking for x / np.sum(x, axis=0)? The [None, :] has no useful effect here; it only serves to throw an error in the 1-D case.
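As a minimal sketch, here is the function from the question rewritten that way and run against both inputs:

import numpy as np

def f(x):
    return x / np.sum(x, axis=0)   # the column sums broadcast against x without any reshaping

scores = np.array([[1.0, 2, 3, 6],
                   [2, 4, 5, 6],
                   [3, 8, 7, 6]])

print(f(scores))               # every entry divided by its column sum
print(f(np.array([1, 2, 3])))  # 1-D input: axis 0 is the only axis, so this divides by the total sum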