I essentially have a confusion matrix of size n x n, with all my diagonal elements being 1.

For every row, I wish to calculate its mean, excluding the 1, i.e. excluding the diagonal value. Is there a simple way to do it in numpy?
This is my current solution:
import numpy as np

# Boolean mask that is True on the diagonal; averaging the masked
# array then skips the diagonal entry in each row
mask = np.zeros(cs.shape, dtype=bool)
np.fill_diagonal(mask, True)
print(np.ma.masked_array(cs, mask).mean(axis=1))
where cs is my n x n matrix.
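For concreteness, with a hypothetical 3 x 3 input (the off-diagonal values here are made up), the code above prints the row-wise means without the diagonal:

cs = np.array([[1.0, 0.2, 0.6],
               [0.2, 1.0, 0.4],
               [0.6, 0.4, 1.0]])
# output: [0.4 0.3 0.5]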
The code seems convoluted, and I certainly feel that there's a much more elegant solution.
A concise one using summation. Each diagonal element contributes exactly 1 to its row sum, so subtract 1 from the row sums and divide by n - 1 -
(cs.sum(1)-1)/(cs.shape[1]-1)
For the general case, where the diagonal elements are not necessarily 1, use np.diag in place of the 1 offset -
(cs.sum(1)-np.diag(cs))/(cs.shape[1]-1)
Another with mean. The full-row mean includes a 1/n contribution from the diagonal 1, so subtract that and rescale by n/(n-1) -
n = cs.shape[1]
(cs.mean(1)-1./n)*(n/(n-1))
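A quick sanity check on hypothetical data (a random symmetric matrix with a unit diagonal, just to confirm all three variants agree with the masked-array version from the question):

import numpy as np

rng = np.random.default_rng(0)
cs = rng.random((4, 4))
cs = (cs + cs.T) / 2       # make it symmetric
np.fill_diagonal(cs, 1)    # unit diagonal, as in the question

n = cs.shape[1]
masked = np.ma.masked_array(cs, np.eye(n, dtype=bool)).mean(axis=1).data
a = (cs.sum(1) - 1) / (n - 1)
b = (cs.sum(1) - np.diag(cs)) / (n - 1)
c = (cs.mean(1) - 1./n) * (n / (n - 1))

print(np.allclose(a, masked), np.allclose(b, masked), np.allclose(c, masked))
# True True True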