While training PixelNet, I have to resize the annotated label image, whose pixels hold specific class values (the annotated objects). Before resizing, np.unique(image)
gives [ 0 7 15]
However, when I resize the image with OpenCV to fit my network definition, the range of pixel values changes:
image = cv2.resize(image,(cnn_input_size, cnn_input_size),cv2.INTER_NEAREST)
np.unique(image)
gives
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17]
This is a disaster for training with annotated labels, since the new values collide with the ids reserved for other classes. I am wondering whether this is the expected behavior of OpenCV when resizing.
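Here is a minimal sketch that reproduces this on a synthetic mask (the mask contents and sizes below are made up, but they mirror my label values):

import cv2
import numpy as np

# Synthetic label mask standing in for my annotation: only the class ids 0, 7 and 15 occur.
mask = np.zeros((300, 300), dtype=np.uint8)
mask[50:150, 50:150] = 7
mask[180:260, 180:260] = 15

cnn_input_size = 224

# Same call as above: cv2.INTER_NEAREST is passed positionally, not as the interpolation argument.
resized = cv2.resize(mask, (cnn_input_size, cnn_input_size), cv2.INTER_NEAREST)

print(np.unique(mask))     # [ 0  7 15]
print(np.unique(resized))  # extra intermediate values show up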
Use
image = cv2.resize(image, (cnn_input_size, cnn_input_size), interpolation = cv2.INTER_NEAREST)
or
image = cv2.resize(image, (cnn_input_size, cnn_input_size), None, 0, 0, cv2.INTER_NEAREST)
Right now you're passing cv2.INTER_NEAREST as the third positional argument, which cv2.resize takes as dst rather than interpolation, so the call actually uses the default interpolation method, INTER_LINEAR.
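As a quick sanity check (with a made-up label mask, just for illustration), passing the interpolation explicitly keeps the original label set, while the implicit INTER_LINEAR blends neighbouring class ids:

import cv2
import numpy as np

# Toy label mask with the class ids 0, 7 and 15 (placeholder values).
label = np.zeros((300, 300), dtype=np.uint8)
label[50:150, 50:150] = 7
label[180:260, 180:260] = 15

cnn_input_size = 224  # stand-in value

nearest = cv2.resize(label, (cnn_input_size, cnn_input_size), interpolation=cv2.INTER_NEAREST)
default = cv2.resize(label, (cnn_input_size, cnn_input_size))  # implicit INTER_LINEAR

print(np.unique(nearest))  # [ 0  7 15] -> class ids preserved
print(np.unique(default))  # blended values between the class ids appear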