The following code finds the best-focus image within a set most of the time, but for some sets it returns a higher score for an image that is clearly more blurry to my eye.
I am using OpenCV 3.4.2 on Linux and/or Mac.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import static org.opencv.core.Core.BORDER_DEFAULT;

public class LaplacianExample {
    public static Double calcSharpnessScore(Mat srcImage) {
        // Remove noise with a Gaussian filter
        Mat filteredImage = new Mat();
        Imgproc.GaussianBlur(srcImage, filteredImage, new Size(3, 3), 0, 0, BORDER_DEFAULT);

        int kernel_size = 3;
        int scale = 1;
        int delta = 0;
        Mat lplImage = new Mat();
        Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_64F, kernel_size, scale, delta, Core.BORDER_DEFAULT);

        // Convert to absolute values (back to CV_8U) before computing the standard deviation
        Mat absLplImage = new Mat();
        Core.convertScaleAbs(lplImage, absLplImage);

        // Use the standard deviation of the absolute image as input for the sharpness score
        MatOfDouble mean = new MatOfDouble();
        MatOfDouble std = new MatOfDouble();
        Core.meanStdDev(absLplImage, mean, std);
        return Math.pow(std.get(0, 0)[0], 2);
    }
}
Here are two images using the same illumination (fluorescence, DAPI), taken from below a microscope slide while attempting to auto-focus on the coating/mask on the top surface of the slide.
I'm hoping someone can explain to me why my algorithm fails to detect the image that is less blurry. Thanks!
The main issue is that the Laplacian kernel size is too small.

You are using kernel_size = 3, and it's too small for the above scene. In the above images, kernel_size = 3 is affected mostly by noise, because the edges (in the image that shows more detail) are much larger than 3x3 pixels. In other words, the "spatial frequency" of the details is low, and a 3x3 kernel emphasizes much higher spatial frequencies.
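To see the effect concretely, you can compare the variance-of-Laplacian score at both kernel sizes on the same image: with ksize = 3 the response is dominated by pixel-level noise, while ksize = 11 picks up the broader structures. A minimal Java sketch (the file name im1.jpg is a placeholder for one of your images):

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class KernelSizeDemo {
    // Variance of the (signed) Laplacian for a given kernel size
    static double laplacianVariance(Mat gray, int ksize) {
        Mat lap = new Mat();
        Imgproc.Laplacian(gray, lap, CvType.CV_64F, ksize, 1, 0, Core.BORDER_DEFAULT);
        MatOfDouble mean = new MatOfDouble();
        MatOfDouble std = new MatOfDouble();
        Core.meanStdDev(lap, mean, std);
        return Math.pow(std.get(0, 0)[0], 2);
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("im1.jpg", Imgcodecs.IMREAD_GRAYSCALE); // placeholder file name
        System.out.println("ksize=3:  " + laplacianVariance(img, 3));
        System.out.println("ksize=11: " + laplacianVariance(img, 11));
    }
}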
Possible solution: increase the kernel size, to kernel_size = 11 for example.

There is also a small issue in your code: Core.convertScaleAbs(lplImage, absLplImage) computes the absolute value of the Laplacian result, and as a result the computed STD is incorrect. The Laplacian response is signed and roughly zero-mean, so taking absolute values changes the statistics: a patch with values -5 and 5 has standard deviation 5, for example, but its absolute image has standard deviation 0.
I suggest the following fix:

Set the Laplacian depth to CvType.CV_16S (instead of CvType.CV_64F):
Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_16S, kernel_size, scale, delta, Core.BORDER_DEFAULT);
Don't execute Core.meanStdDev(absLplImage, mean, std); compute the STD on lplImage instead:

Core.meanStdDev(lplImage, mean, std);
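Putting the two fixes together with kernel_size = 11 gives the following Java version of your method. Note this is a sketch based on your original code; I only tested the Python equivalent below:

public static Double calcSharpnessScore(Mat srcImage) {
    // Remove noise with a Gaussian filter
    Mat filteredImage = new Mat();
    Imgproc.GaussianBlur(srcImage, filteredImage, new Size(3, 3), 0, 0, BORDER_DEFAULT);

    int kernel_size = 11; // large enough to match the scale of the edges in the scene
    int scale = 1;
    int delta = 0;
    Mat lplImage = new Mat();
    Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_16S, kernel_size, scale, delta, Core.BORDER_DEFAULT);

    // Compute the STD of the signed Laplacian (no convertScaleAbs)
    MatOfDouble mean = new MatOfDouble();
    MatOfDouble std = new MatOfDouble();
    Core.meanStdDev(lplImage, mean, std);
    return Math.pow(std.get(0, 0)[0], 2);
}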
I used the following Python code for testing:
import cv2

def calc_sharpness_score(srcImage):
    """ Compute sharpness score for automatic focus """
    filteredImage = cv2.GaussianBlur(srcImage, (3, 3), 0, 0)

    kernel_size = 11
    scale = 1
    delta = 0
    #lplImage = cv2.Laplacian(filteredImage, cv2.CV_64F, ksize=kernel_size, scale=scale, delta=delta)
    lplImage = cv2.Laplacian(filteredImage, cv2.CV_16S, ksize=kernel_size, scale=scale, delta=delta)

    # Original code converted back to CV_8U (absolute values) before computing the standard deviation:
    #absLplImage = cv2.convertScaleAbs(lplImage)
    #(mean, std) = cv2.meanStdDev(absLplImage)

    # Compute the standard deviation of the signed Laplacian as input for the sharpness score
    (mean, std) = cv2.meanStdDev(lplImage)
    return std[0][0]**2
im1 = cv2.imread('im1.jpg', cv2.IMREAD_GRAYSCALE)  # Read input image as grayscale
im2 = cv2.imread('im2.jpg', cv2.IMREAD_GRAYSCALE)  # Read input image as grayscale
var1 = calc_sharpness_score(im1)
var2 = calc_sharpness_score(im2)
print('var1 =', var1)
print('var2 =', var2)
Result:

var1 = 668464355
var2 = 704603944