image-processing, image-segmentation, floating-accuracy, pytorch

How to calculate pixel-wise accuracy in PyTorch?


My code looks like the following and I get accuracy values ranging from 0 to 9000, so it's clearly not working.

optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()


predicted = outputs.data
predicted = predicted.to('cpu')
predicted_img = predicted.numpy()

labels_data = labels.data
labels_data = labels_data.to('cpu')
labels_data = labels_data.numpy()
labels = labels.to(device)

_, predicted = torch.max(outputs.data, 1)
total = labels.size(0) * labels.size(1) * labels.size(2)
correct = (predicted_img == labels_data).sum().item()
accuracy += ( correct / total)
avg_accuracy = accuracy/(batch)

What am I doing wrong?


Solution

  • I am assuming the following line accumulates accuracy over mini-batches.

    accuracy += (correct/total)
    

    And avg_accuracy = accuracy/batch gives the average accuracy over the entire dataset, where batch is the total number of mini-batches that make up the dataset (a sketch of this pattern is given at the end of this answer).

    If you are getting accuracy greater than 100, check whether correct > total in any mini-batch. Also check whether total = labels_data.size gives you the same value as the following line.

    total = labels.size(0) * labels.size(1) * labels.size(2)
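
    For reference, here is a minimal sketch of this pattern for segmentation, assuming outputs are logits of shape [N, C, H, W] and labels are class indices of shape [N, H, W]; loader is a hypothetical DataLoader, while net and device are taken from the question.

        correct = 0
        total = 0
        with torch.no_grad():
            for inputs, labels in loader:  # hypothetical DataLoader
                inputs, labels = inputs.to(device), labels.to(device)
                outputs = net(inputs)

                # argmax over the channel dimension gives one class index per pixel,
                # so predicted has the same [N, H, W] shape as labels
                predicted = torch.argmax(outputs, dim=1)

                correct += (predicted == labels).sum().item()
                total += labels.numel()  # N * H * W pixels in this mini-batch

        pixel_accuracy = correct / total  # always between 0 and 1

    Accumulating correct and total separately and dividing once at the end keeps the result between 0 and 1 regardless of batch size, and taking the argmax before comparing guarantees that predicted and labels have matching shapes, so correct can never exceed total.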