I have a (big) 1D array of measurement data. I want to calculate the average of every possible slice defined by a start index and a stop index, like cutting my data at both ends and averaging every possible cut. The result should be stored in a square 2D array (actually a triangle, as the start index must not exceed the stop index).
Using loops works, but takes a long time.
Is there a way to speed this up?
I have this code:
import numpy as np

N = 5
data = np.arange(N)  # example
av = np.zeros((N, N))
for i in range(av.shape[0]):
    for j in range(av.shape[1]):
        av[j, i] = np.mean(data[i:j + 1])
This works, but takes a long time. For a similar calculation (differences of elements instead of averages of slices), I found this very fast solution:
dist = np.subtract.outer(data, data)
But I did not figure out how this could be done with averages of slices.
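For reference, `np.subtract.outer` applies the subtraction to every pair of elements in one call, which is why it is so fast. A minimal demonstration of what it produces:

```python
import numpy as np

data = np.arange(5)

# ufunc.outer applies the operation pairwise: dist[i, j] = data[i] - data[j]
dist = np.subtract.outer(data, data)
print(dist[3, 1])  # 3 - 1 = 2
```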
One option: a cumulative summation, then division by the number of items in each slice:
a = np.arange(len(data))
av = (np.tril(np.repeat(data[:, None], len(data), axis=1)).cumsum(axis=0)
      / np.tril(a[:, None] - a + 1))
Output:
array([[0. , nan, nan, nan, nan],
[0.5, 1. , nan, nan, nan],
[1. , 1.5, 2. , nan, nan],
[1.5, 2. , 2.5, 3. , nan],
[2. , 2.5, 3. , 3.5, 4. ]])
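As a sanity check (my own sketch, not part of the answer above), the vectorized result can be compared against the nested loop on the lower triangle; `np.errstate` suppresses the expected 0/0 warnings that produce the nan values above the diagonal:

```python
import numpy as np

N = 5
data = np.arange(N)

# reference: nested loops over the lower triangle only (no empty slices)
ref = np.full((N, N), np.nan)
for j in range(N):
    for i in range(j + 1):
        ref[j, i] = data[i:j + 1].mean()

# vectorized version; 0/0 in the upper triangle intentionally yields nan
a = np.arange(N)
with np.errstate(divide='ignore', invalid='ignore'):
    av = (np.tril(np.repeat(data[:, None], N, axis=1)).cumsum(axis=0)
          / np.tril(a[:, None] - a + 1))

assert np.allclose(ref, av, equal_nan=True)
```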
Intermediates:
# repeat data to square shape and keep lower triangle
np.tril(np.repeat(data[:,None], len(data), axis=1))
array([[0, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[2, 2, 2, 0, 0],
[3, 3, 3, 3, 0],
[4, 4, 4, 4, 4]])
# get the cumulated sum
[…].cumsum(axis=0)
array([[ 0, 0, 0, 0, 0],
[ 1, 1, 0, 0, 0],
[ 3, 3, 2, 0, 0],
[ 6, 6, 5, 3, 0],
[10, 10, 9, 7, 4]])
# compute the divisor: the number of elements in each slice
a = np.arange(len(data))
array([0, 1, 2, 3, 4])
np.tril((a[:,None]-a+1))
array([[1, 0, 0, 0, 0],
[2, 1, 0, 0, 0],
[3, 2, 1, 0, 0],
[4, 3, 2, 1, 0],
[5, 4, 3, 2, 1]])
On a 1000-element input (data = np.arange(1000)):
# nested for loop
6.28 s ± 142 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# vectorized numpy
12.3 ms ± 427 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
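If memory traffic matters, a variant of the same idea (my own sketch, not benchmarked above) avoids building the N×N repeated matrix: take prefix sums of the data once, then every slice sum is a difference of two prefix-sum entries:

```python
import numpy as np

N = 1000
data = np.arange(N, dtype=float)

# prefix sums: S[k] = sum of data[:k], so sum(data[i:j+1]) = S[j+1] - S[i]
S = np.concatenate(([0.0], np.cumsum(data)))
idx = np.arange(N)
counts = np.tril(idx[:, None] - idx + 1)        # slice lengths j - i + 1
with np.errstate(divide='ignore', invalid='ignore'):
    av = np.tril(S[1:, None] - S[idx]) / counts  # nan above the diagonal
```

Only the output-sized arrays are allocated, and the cumulative sum runs over the 1D data instead of an N×N matrix.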