I've got a color JPEG image of a lion. I've drawn a white circle on the image, converted it to grayscale, and defined a mask. In the end, I want an image that keeps only the original pixels inside the white circle. I think I'm almost there, but I can't figure out the last step: setting all values outside the mask/white circle to black. Here is my code:
import cv2
img = cv2.imread('lion_original.jpg')
center_coordinates = (120,50)
radius = 20
color = (255, 255 , 255)
thickness = -1
img = cv2.circle(img, center_coordinates, radius, color, thickness)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('try_mask', gray)
cv2.waitKey(0)
mask = gray > 254  # boolean mask: True where the (near-)white circle is
What you're doing (drawing the white circle on the original image, converting that to grayscale, and then thresholding) is a bad idea: there may be pixels outside the circle whose values also exceed your threshold, and those will end up in the mask as well. A quick fix is to draw the white circle on a black image instead. The following snippet gives results that I think correspond to what you need:
import cv2
import numpy as np

img = cv2.imread('A.jpg')
center = (120, 50)
radius = 20
color = (255, 255, 255)
thickness = -1  # negative thickness fills the circle

# Draw the filled white circle on a black image of the same shape and dtype,
# then copy the original pixels wherever the circle is white.
final_image = cv2.circle(np.zeros_like(img), center, radius, color, thickness)
final_image[final_image != 0] = img[final_image != 0]
Note: in case there are issues when you visualize final_image, try normalizing it with

final_image = cv2.normalize(src=final_image, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)

before calling cv2.imshow().