Tags: opencv, image-processing, threshold

Fast image thresholding


What is a fast and reliable way to threshold images with possible blurring and non-uniform brightness?

Example (blurring but uniform brightness):

[image: blurred Sudoku image with uniform brightness]

Because the image is not guaranteed to have uniform brightness, it's not feasible to use a fixed threshold. An adaptive threshold works alright, but because of the blurriness it creates breaks and distortions in the features (here, the important features are the Sudoku digits):

[image: adaptive threshold result, showing breaks and distortions in the digits]
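That attempt was essentially a single adaptiveThreshold call; the block size and offset below are illustrative values, not the exact ones I used:

Imgproc.adaptiveThreshold(image, image, 255,
    Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV,
    15,  // size of the neighbourhood used to compute each local threshold
    4);  // constant subtracted from the local mean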

I've also tried Histogram Equalization (OpenCV's equalizeHist function). It increases contrast but does not reduce the differences in brightness.
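For reference, that attempt is a single call:

Imgproc.equalizeHist(image, image); // global equalization: stretches contrast but keeps the illumination gradient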

The best solution I've found is to divide the image by its morphological closing (credit to this post) to make the brightness uniform, then renormalize, then use a fixed threshold (using Otsu's algorithm to pick the optimal threshold level):

[image: result of dividing by the morphological closing and applying Otsu's threshold]

Here is code for this in OpenCV for Android:

Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(19,19));
Mat closed = new Mat();
Imgproc.morphologyEx(image, closed, Imgproc.MORPH_CLOSE, kernel); // estimate of the background
Core.divide(image, closed, closed, 1, CvType.CV_32F); // closed now has type CV_32F
Core.normalize(closed, image, 0, 255, Core.NORM_MINMAX, CvType.CV_8U); // back to 8-bit, full range
Imgproc.threshold(image, image, -1, 255,
    Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU); // threshold level chosen by Otsu

This works great but the closing operation is very slow. Reducing the size of the structuring element increases speed but reduces accuracy.

Edit: Based on DCS's suggestion, I tried using a high-pass filter. I chose the Laplacian filter, but I would expect similar results with the Sobel and Scharr filters. The filter picks up high-frequency noise in the areas that do not contain features, and because of the blurring it suffers from distortion similar to the adaptive threshold. It also takes about as long as the closing operation. Here is an example with a 15x15 filter:

[image: result of the 15x15 Laplacian filter]
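The high-pass attempt looked roughly like this (the conversion back to 8-bit and the Otsu threshold at the end are my own choices for a reasonable pipeline, not part of DCS's suggestion):

Mat highpass = new Mat();
Imgproc.Laplacian(image, highpass, CvType.CV_32F, 15, 1, 0); // 15x15 Laplacian kernel
Core.convertScaleAbs(highpass, highpass);                    // absolute value, back to CV_8U
Imgproc.threshold(highpass, highpass, -1, 255,
    Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);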

Edit 2: Based on AruniRC's answer, I used Canny edge detection on the image with the suggested parameters:

double mean = Core.mean(image).val[0];             // mean intensity of the image (first channel)
Imgproc.Canny(image, image, 0.66*mean, 1.33*mean); // lower/upper thresholds derived from the mean

I'm not sure how to reliably and automatically fine-tune the parameters so that the digits come out connected.

[image: Canny edge detection result]


Solution

  • Using Vaughn Cato's and Theraot's suggestions, I scaled the image down before closing it, then scaled the closed image back up to its original size. I also reduced the kernel size proportionately.

    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5,5));
    Mat temp = new Mat();

    // Close a quarter-size copy of the image, then scale the result back up
    Imgproc.resize(image, temp, new Size(image.cols()/4, image.rows()/4));
    Imgproc.morphologyEx(temp, temp, Imgproc.MORPH_CLOSE, kernel);
    Imgproc.resize(temp, temp, new Size(image.cols(), image.rows()));

    // Divide by the background estimate, renormalize, then apply Otsu's threshold
    Core.divide(image, temp, temp, 1, CvType.CV_32F); // temp will now have type CV_32F
    Core.normalize(temp, image, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);

    Imgproc.threshold(image, image, -1, 255,
        Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);


    The image below shows the results side by side for three different methods:

    Left - full-size closing (432 pixels), 19x19 kernel

    Middle - half-size closing (216 pixels), 9x9 kernel

    Right - quarter-size closing (108 pixels), 5x5 kernel

    [image: side-by-side comparison of the three closing sizes]

    The image quality deteriorates as the size of the image used for closing gets smaller, but the deterioration isn't significant enough to affect feature-recognition algorithms. Even with the extra resizing, the quarter-size closing is slightly more than 16 times faster; since a quarter-size image has 1/16 as many pixels, this suggests that the closing time is roughly proportional to the number of pixels in the image. (A minimal timing sketch is included at the end of this answer.)

    Any suggestions on how to further improve this idea (either by further increasing the speed or by reducing the deterioration in image quality) are very welcome.
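
    The timing comparison itself needs nothing more than wall-clock measurements around the closing step, roughly like this (a sketch, not the exact benchmarking code; absolute numbers will vary by device):

    long start = System.nanoTime();
    Imgproc.morphologyEx(temp, temp, Imgproc.MORPH_CLOSE, kernel);
    long elapsedMs = (System.nanoTime() - start) / 1000000; // nanoseconds to milliseconds
    // compare elapsedMs for the full-, half- and quarter-size closings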