Tags: python, scikit-learn, heatmap, seaborn, confusion-matrix

reason for transposed confusion matrix in heatmap


I am plotting a heatmap that takes a confusion matrix as its input data. The confusion matrix looks like this:

 [[37  0  0  0  0  0  0  0  0  0]
 [ 0 42  0  0  0  1  0  0  0  0]
 [ 1  0 43  0  0  0  0  0  0  0]
 [ 0  0  0 44  0  0  0  0  1  0]
 [ 0  0  0  0 37  0  0  1  0  0]
 [ 0  0  0  0  0 47  0  0  0  1]
 [ 0  0  0  0  0  0 52  0  0  0]
 [ 0  0  0  0  1  0  0 47  0  0]
 [ 0  1  0  1  0  0  0  1 45  0]
 [ 0  0  0  0  0  2  0  0  0 45]]

The code to plot the heatmap is:

import matplotlib.pyplot as plt
import seaborn as sns

fig2 = plt.figure()
fig2.add_subplot(111)
# note the transpose: confm.T is plotted, not confm
sns.heatmap(confm.T, annot=True, square=True, cbar=False, fmt="d")
plt.xlabel("true label")
plt.ylabel("predicted label")

which yields:

[heatmap of confm.T, with "true label" on the x-axis and "predicted label" on the y-axis]

As you can see, the input matrix "confm" is transposed (confm.T). What is the reason for this? Do I necessarily have to do that?


Solution

  • When I plot your data with the code you provided, I get this: [heatmap of confm.T, matching the one in the question]

    Without the transpose, and with the x and y labels swapped, you get:

    fig2 = plt.figure()
    fig2.add_subplot(111)
    # no transpose: plot confm as-is and swap the axis labels instead
    sns.heatmap(confm, annot=True, square=True, cbar=False, fmt="d")
    plt.xlabel("predicted label")
    plt.ylabel("true label")
    

    [heatmap of confm, with "predicted label" on the x-axis and "true label" on the y-axis]

    This results in the same confusion matrix. All the transpose really does is swap which axis represents the prediction and which represents the ground truth (true label). Which form you need depends on how your data is laid out; scikit-learn's confusion_matrix, for instance, puts the true labels on the rows and the predictions on the columns, so without a transpose the columns are the predicted labels.
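
    Below is a minimal, self-contained sketch (using made-up labels for three hypothetical classes, not your data) that draws the same matrix both with and without the transpose, side by side, so you can see that only the roles of the axes change:

    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.metrics import confusion_matrix

    # Made-up labels for three classes (purely illustrative)
    y_true = [0, 0, 1, 1, 2, 2, 2]
    y_pred = [0, 1, 1, 1, 2, 2, 0]

    # scikit-learn convention: cm[i, j] counts samples of true class i
    # that were predicted as class j (rows = true, columns = predicted)
    cm = confusion_matrix(y_true, y_pred)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

    # As-is: predicted label on the x-axis, true label on the y-axis
    sns.heatmap(cm, annot=True, square=True, cbar=False, fmt="d", ax=ax1)
    ax1.set_xlabel("predicted label")
    ax1.set_ylabel("true label")

    # Transposed: the axes swap roles, so the labels must swap too
    sns.heatmap(cm.T, annot=True, square=True, cbar=False, fmt="d", ax=ax2)
    ax2.set_xlabel("true label")
    ax2.set_ylabel("predicted label")

    plt.tight_layout()
    plt.show()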