The predicted image and the ground truth image are both black-and-white images.
White dots in the ground truth image represent ground truth values; white dots in the predicted image represent predicted points.
Both images contain multiple lines (many lines are present, and they all belong to a single class).
I use the following method:
import cv2

n_gt_pixels = cv2.countNonZero(im_gt)  # number of white pixels in the ground truth image
n_predicted_pixels = 0
rows, cols = im_gt.shape
for i in range(rows):
    for j in range(cols):
        # count pixels that are white in both images (ignore black pixels)
        if im_predicted[i, j] == im_gt[i, j] and im_gt[i, j] != 0:
            n_predicted_pixels += 1
accuracy = n_predicted_pixels / n_gt_pixels
I then take the average over all images.
Can you please tell me if this is the correct way to evaluate the model? Are there any better ways to do this? (This approach takes a lot of time.)
Your task looks like a binary segmentation problem. You can use metrics such as accuracy (the percentage of pixels classified correctly), the misclassification rate (MCR), which is essentially 1 - accuracy, or mean IoU (intersection over union).
Other than that, if you want to calculate accuracy, I would suggest using cv2.bitwise_xor for this task, since XOR is non-zero exactly where the two masks disagree.
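A minimal sketch, assuming both masks are single-channel uint8 arrays of the same shape with values 0 (black) and 255 (white); the function name pixel_accuracy is illustrative:

import cv2

def pixel_accuracy(pred_mask, gt_mask):
    # XOR is 255 exactly where the two masks disagree, 0 where they agree
    mismatched = cv2.countNonZero(cv2.bitwise_xor(pred_mask, gt_mask))
    return 1.0 - mismatched / pred_mask.size  # MCR would be mismatched / pred_mask.size

This avoids the Python-level double loop entirely, so it should be much faster than iterating over pixels.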
EDIT
import cv2
import numpy as np

def cal_miou(pred_mask, sample_mask):
    # pred_mask and sample_mask are assumed to be uint8 masks with values 0/255
    tp = np.count_nonzero(cv2.bitwise_and(pred_mask, sample_mask))                   # white in both
    fp = np.count_nonzero(cv2.bitwise_and(pred_mask, cv2.bitwise_not(sample_mask)))  # white only in the prediction
    fn = np.count_nonzero(cv2.bitwise_and(cv2.bitwise_not(pred_mask), sample_mask))  # white only in the ground truth
    return tp / (tp + fp + fn)  # IoU; assumes at least one white pixel overall
This function computes the IoU for a single pair of masks, where pred_mask and sample_mask are 2-D binary arrays (0/255); averaging it over all image pairs gives the mean IoU (mIoU).
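For example, a toy sanity check (mask values 0/255, as assumed above):

import numpy as np

pred = np.zeros((4, 4), dtype=np.uint8)
gt = np.zeros((4, 4), dtype=np.uint8)
pred[0, :] = 255   # predicted line along the first row (4 pixels)
gt[0, :3] = 255    # ground truth covers 3 of those pixels

print(cal_miou(pred, gt))  # tp=3, fp=1, fn=0 -> 0.75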