
How to normalize an ndarray of matrices, such that each matrix equals itself divided by its l-2 norm?


So I have an ndarray of shape (60000, 1, 28, 28), which is essentially 60000 28x28 matrices. They represent pixel values, and I need to normalize each image by its l-2 norm. Essentially, I have to divide each 28x28 matrix by its own l-2 norm.

Any suggestions as to how I can do this efficiently without looping through the entire array?

I tried the following code but it didn't seem to work:

norms = np.linalg.norm(training_data, ord=2, axis=0)
normalized_training_data = training_data / norms

but I just got a bunch of 0s and NaNs.


Solution

  • You need to specify the right axes to compute the norm over. With axis=0, the norm is taken across all 60000 images at each pixel position, which is why you got 0s and NaNs (pixels that are 0 in every image produce a 0/0 division). Instead, compute the norm over each 28x28 matrix, i.e. over the last two axes, and pass keepdims=True so the result keeps shape (60000, 1, 1, 1) and broadcasts cleanly against the data:

    norms = np.linalg.norm(training_data, ord=2, axis=(2, 3), keepdims=True)
    normalized_training_data = training_data / norms
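As a quick sanity check, here is a minimal, self-contained sketch of the idea using a small random array in place of the actual training data (the shapes other than 28x28 are placeholders, not from the original dataset):

```python
import numpy as np

# Stand-in for the (60000, 1, 28, 28) array from the question:
# a small batch of 5 images, each a 1x28x28 matrix.
rng = np.random.default_rng(0)
training_data = rng.random((5, 1, 28, 28))

# Matrix 2-norm of each 28x28 matrix, computed over the last two axes.
# keepdims=True yields shape (5, 1, 1, 1), which broadcasts against the data.
norms = np.linalg.norm(training_data, ord=2, axis=(2, 3), keepdims=True)
normalized = training_data / norms

# Every normalized matrix should now have a 2-norm of 1.
check = np.linalg.norm(normalized, ord=2, axis=(2, 3))
print(np.allclose(check, 1.0))  # True
```

Note that ord=2 over a pair of axes gives the matrix 2-norm (largest singular value); if you instead want the Frobenius norm (square root of the sum of squared entries), use ord='fro' or simply omit ord.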