I am trying to use the MSER algorithm for text detection. I use this code:
import cv2
import numpy as np
#Create MSER object
mser = cv2.MSER_create()
#Your image path i-e receipt path
img = cv2.imread('test.jpg')
#Convert to gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()
#detect regions in gray scale image
regions, _ = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(vis, hulls, 1, (0, 255, 0))
cv2.imshow('img', vis)
cv2.waitKey(0)
mask = np.zeros((img.shape[0], img.shape[1], 1), dtype=np.uint8)
for contour in hulls:
    cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
#this is used to find only text regions, remaining are ignored
text_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("text only", text_only)
cv2.waitKey(0)
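As an aside on the last step: `cv2.bitwise_and(img, img, mask=mask)` simply keeps the pixels where the mask is nonzero and zeroes everything else. A minimal NumPy-only sketch of that behaviour (toy array, no OpenCV required):

```python
import numpy as np

# Toy 2x2 "image" with 3 channels, and a mask selecting one pixel
img = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [11, 12, 13]]], dtype=np.uint8)
mask = np.array([[255, 0],
                 [0, 0]], dtype=np.uint8)

# Equivalent of cv2.bitwise_and(img, img, mask=mask):
# keep pixels where mask > 0, zero out everything else
text_only = np.where(mask[..., None] > 0, img, 0)

print(text_only[0, 0])  # [10 20 30] -> kept
print(text_only[0, 1])  # [0 0 0]    -> masked out
```

So if regions are missing from `text_only`, the problem is upstream in the MSER detection, not in the masking step.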
But I get very mixed results: MSER can't detect all of the text in the image.
What am I doing wrong?
The OpenCV text module (in opencv_contrib) contains a couple of methods for text detection. For your example the simplest one is ERFilterNM; see the Python textdetection sample for the detection result on a PNG screenshot. Parameters:
# erc1/erc2 are the stage-1/stage-2 classifiers loaded with
# cv.text.loadClassifierNM1 / cv.text.loadClassifierNM2
er1 = cv.text.createERFilterNM1(erc1, 6, 0.00005, 0.08, 0.2, True, 0.1)
er2 = cv.text.createERFilterNM2(erc2, 0.15)