Here is an example of what I intend to do. I have input data, something like this:
data = array([0,1,2,3,4,5,6,7,8,9])
What I need to do is sum the first two values, then the next two values, and so on. The result should be as follows:
result = array([1,5,9,13,17])
This is what I am calling 'squeezing'.
I have a numpy array of size 4096. I need to squeeze it to a size of 2048, 1024, or 512. It is actually an energy spectrum, where the array index gives the ADC channel number and the value gives the photon count. Here is what I do:
import numpy as n

# tdspec is the input array of size 4096
spec_bin_size = 1024
spect = n.zeros(spec_bin_size)
i = 0
j = 0
interval = 4096 // spec_bin_size   # channels per output bin
while j < spec_bin_size:
    # sum each block of `interval` channels into one output bin
    spect[j] = n.sum(tdspec[i:i + interval])
    i = i + interval
    j = j + 1
It works well, but now I need to run this loop over a number of spectra and I fear it will be slow. Can anyone tell me if there is a numpy/scipy operation that does this?
Let's define your array:
>>> import numpy as np
>>> data = np.array([0,1,2,3,4,5,6,7,8,9])
To get the result you want:
>>> np.sum(data.reshape(5, 2), axis=1)
array([ 1, 5, 9, 13, 17])
Or, if we want reshape to calculate one of the dimensions for us, specify -1
for that dimension:
>>> np.sum(data.reshape(-1, 2), axis=1)
array([ 1, 5, 9, 13, 17])
>>> np.sum(data.reshape(5, -1), axis=1)
array([ 1, 5, 9, 13, 17])
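
The same trick handles your 4096-channel case: reshape to (spec_bin_size, -1) and sum along the last axis. It also extends to a whole batch of spectra with no Python loop, which addresses the speed concern. A minimal sketch, assuming the spectra are stacked as rows of a 2D array (the names tdspec and spectra are illustrative):

>>> spec_bin_size = 1024
>>> tdspec = np.arange(4096)                # one spectrum of 4096 channels
>>> spect = tdspec.reshape(spec_bin_size, -1).sum(axis=1)
>>> spect.shape
(1024,)
>>> spectra = np.arange(3 * 4096).reshape(3, 4096)   # three spectra, one per row
>>> rebinned = spectra.reshape(spectra.shape[0], spec_bin_size, -1).sum(axis=2)
>>> rebinned.shape
(3, 1024)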
SciPy also offers a variety of filters. For example, summing each adjacent pair is equivalent to convolving with a [1, 1] kernel:
>>> from scipy import ndimage
>>> ndimage.convolve1d(data, [1, 1])
array([ 1, 3, 5, 7, 9, 11, 13, 15, 17, 18])
Or, just selecting every other element to get the previous result:
>>> ndimage.convolve1d(data, [1, 1])[::2]
array([ 1, 5, 9, 13, 17])
For other filters, see the scipy.ndimage documentation.
The top-hat filter that you are using can produce spurious artifacts; a Gaussian filter is often a better choice.
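
For instance, a minimal sketch of smoothing and then decimating with scipy.ndimage.gaussian_filter1d (sigma=1.0 is an illustrative choice; note that, unlike the reshape approach, this smooths the counts rather than preserving their sums):

>>> from scipy import ndimage
>>> # smooth with a Gaussian kernel, then keep every other sample
>>> smoothed = ndimage.gaussian_filter1d(data.astype(float), sigma=1.0)
>>> decimated = smoothed[::2]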