I'm a student trying to analyze binary images where almost the whole picture is black, but with a few white pixels evenly distributed across the image. I want to detect whether the white pixels have an even density over the whole image. If any areas have a low density of white pixels, I want to detect them.
In the following image, I have marked an area without white pixels as an example of what I want to detect:
In my program, I get the coordinates of the white pixels before I create the picture. I then create a black BufferedImage and write a white pixel at each coordinate to produce the image I have attached. The most important thing for me is to detect whether the image contains completely black areas larger than an adjustable size (I will have to experiment to find the right setting).
If it is possible to detect this in a good way using only the coordinates of the white pixels (without creating a black image and then adding all the white pixels), that would also be of interest to me.
I use Java and OpenCV in my program. Does anyone have any suggestions on how to proceed? Are there any features in OpenCV that can help me?
I appreciate all answers.
Here's a crude way to solve this problem. I solved it using Python, but the same ideas apply to Java.
I start off by generating a test set of points that has a gap and some randomness to it.
import math
import random

import cv2
import numpy as np

w, h = 1000, 1000
spacing = 25
blast_size = 100

def distance(p1, p2):
    return math.sqrt((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2)

# Discard points outside the image or inside the circular gap
def keep_point(p):
    if p[0] < 0 or p[0] >= w or p[1] < 0 or p[1] >= h:
        return False
    d = distance(p, (w/2, h/2))
    return d > blast_size

grid = [
    (i + random.randint(-spacing, spacing), j + random.randint(-spacing, spacing))
    for i in range(spacing, w, spacing*2)
    for j in range(spacing, h, spacing*2)
]
grid = list(filter(keep_point, grid))

initial = np.zeros((h, w), np.uint8)
for i, j in grid:
    initial[i, j] = 255

cv2.imshow("Initial", initial)
cv2.waitKey()
Next I calculate the minimum distance each point has to a neighbor. The largest of these minimum distances is used as the radius of the convolution kernel. After the convolution is complete, the gap is very noticeable. To get the center of the gap after convolution, I take the average of the contour points. If you can have multiple gaps, you'll want to do blob detection at this point.
# Don't count a point as its own neighbor
def distance_non_equal(p1, p2):
    if p1 == p2:
        return float('inf')
    return distance(p1, p2)

min_distance = [
    min(map(lambda p2: distance_non_equal(p1, p2), grid))
    for p1 in grid
]
radius = int(max(min_distance))

# Circular kernel with the computed radius
kernel = np.zeros((2*radius+1, 2*radius+1), np.uint8)
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask = x**2 + y**2 <= radius**2
kernel[mask] = 255

convolution = cv2.filter2D(initial, cv2.CV_8U, kernel)

# [-2] picks the contour list in both OpenCV 3 (3-tuple) and OpenCV 4 (2-tuple)
contours = cv2.findContours(convolution, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Average the points of the first contour to get the gap's center
avg = np.mean(contours[0], axis=0)
x = int(round(avg[0, 0]))
y = int(round(avg[0, 1]))
convolution[y, x] = 255  # mark the center; NumPy indexing is (row, column)

cv2.imshow("Convolution", convolution)
cv2.waitKey()
Now that we have the center of the gap, we can approximate its border. This is a very crude border-detection algorithm: I divide the dots into zones based on their angle to the center dot, and within each zone I count the dot closest to the center as part of the border. At the end, I color the border dots differently.
# Angle from the gap center (x, y) to each point, normalized to [0, 360)
def get_angle(p):
    angle = math.degrees(math.atan2(y - p[1], x - p[0]))
    if angle < 0:
        angle += 360
    return angle

angles = list(map(get_angle, grid))

# Split the points into 12 angular zones of 30 degrees each
# (<= on the lower bound so points exactly on a boundary aren't dropped)
zones = [
    [
        p
        for angle, p in zip(angles, grid)
        if i <= angle < i + 360//12
    ]
    for i in range(0, 360, 360//12)
]

# In each non-empty zone, the point closest to the center approximates the border
closest = [
    min(zone, key=lambda p2: distance((x, y), p2))
    for zone in zones
    if zone
]

final = np.zeros((h, w, 3), np.uint8)
for i, j in grid:
    final[i, j] = [100, 100, 100]
for i, j in closest:
    final[i, j] = [255, 255, 255]

cv2.imshow("final", final)
cv2.waitKey()
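You also asked whether this can be done from the coordinates alone, without rendering an image. One simple option is to split the image into square cells of the adjustable gap size and flag the cells that contain no points. A minimal sketch (the `empty_cells` helper and the test data are my own, not part of the code above):

```python
# Coordinate-only gap detection: no BufferedImage or OpenCV needed.
# `cell` is the adjustable minimum gap size; each (x, y) point is
# hashed into the cell it falls in, then cells with no points are listed.
def empty_cells(points, w, h, cell):
    cols = (w + cell - 1) // cell
    rows = (h + cell - 1) // cell
    occupied = {(px // cell, py // cell) for px, py in points}
    return [
        (cx, cy)
        for cx in range(cols)
        for cy in range(rows)
        if (cx, cy) not in occupied
    ]

# Points everywhere except the top-left 100x100 corner
pts = [(x, y) for x in range(0, 300, 20) for y in range(0, 300, 20)
       if not (x < 100 and y < 100)]
print(empty_cells(pts, 300, 300, 100))  # the top-left cell is reported
```

Note the caveat: a gap is only caught when a whole cell fits inside it, so a gap straddling cell borders needs either a finer grid or a second pass with the grid offset by half a cell.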