I'm trying to find the size of the important part of a medical image (the region that isn't black border), as seen below. I am using OpenCV and Python.
I have to crop the image first to get rid of the black space around it. Here is my code:
import cv2
import numpy as np

img = cv2.imread('image-000.png')
height, width, channels = img.shape  # shape of the original image

# Black out the watermark in the top-left corner (top 7% of the height,
# left 15% of the width); integer division keeps the slice indices ints
height_2 = height * 7 // 100
width_2 = width * 15 // 100
img[0:height_2, 0:width_2] = [0, 0, 0]

# Threshold the grayscale image so the non-black region becomes white
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)

# OpenCV 3 returns (image, contours, hierarchy)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Crop to the bounding box of the first contour found
cnt = contours[0]
x, y, w, h = cv2.boundingRect(cnt)
crop = img[y:y+h, x:x+w]

height_2, width_2, channels_2 = crop.shape
print("height: " + repr(height_2))
print("width: " + repr(width_2))

cv2.waitKey(0)
cv2.destroyAllWindows()
This code works fine for the image above:
However, when I use the same code on other images it doesn't work. For example, this one fails:
Do you know what I am doing wrong? Your help would be highly appreciated!
The problem is that findContours finds more than one contour in the second image, and contours[0] is not necessarily the largest one. Just replace the line cnt = contours[0]
with cnt = max(contours, key=cv2.contourArea)
and it will pick the biggest contour (by area) for you.
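For reference, here is a minimal sketch of the cropping step with that fix applied. It assumes the same filename and threshold as in the question, and it slices the result of findContours with [-2:] so the same line works with both OpenCV 3 (which returns three values) and OpenCV 4 (which returns two):

import cv2

img = cv2.imread('image-000.png')  # filename taken from the question
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)

# [-2:] keeps (contours, hierarchy) regardless of the OpenCV major version
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# Take the contour with the largest area instead of whichever happens to be first
cnt = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(cnt)
crop = img[y:y+h, x:x+w]

print("height:", crop.shape[0])
print("width:", crop.shape[1])

With this change the crop follows the outline of the scan itself even when small stray contours (noise, leftover text) are present in the thresholded image.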