I am trying to find the angles of a stockpile (on the left and right sides) by using Otsu thresholding to segment the image. The image I have is like this:
In the code, I segment the image and find the first black pixel in it.
The segmented photo doesn't seem to have any black pixels in the white background, yet some black pixels are still detected there, even though I have used morphology.opening.
With a different image, this problem doesn't occur.
How do I fix this? Any ideas? (The next step would be to find the angle on the left- and right-hand sides.)
The code is attached here:
from skimage import io, filters, morphology, measure
import numpy as np
import cv2 as cv
from scipy import ndimage
import math
# Load the image
image = io.imread('mountain3.jpg', as_gray=True)
# Apply Otsu's thresholding to segment the image
segmented_image = image > filters.threshold_otsu(image)
# Perform morphological closing to fill small gaps
structuring_element = morphology.square(1)
closed_image = morphology.closing(segmented_image, structuring_element)
# Apply morphological opening to remove small black regions in the white background
structuring_element = morphology.disk(10) # Adjust the disk size as needed
opened_image = morphology.opening(closed_image, structuring_element)
# Fill larger gaps using binary_fill_holes
#filled_image = measure.label(opened_image)
#filled_image = filled_image > 0
# Display the segmented image after filling the gaps
io.imshow(opened_image)
io.show()
# Find the first row containing black pixels
first_black_row = None
for row in range(opened_image.shape[0]):
    if np.any(opened_image[row, :] == False):
        first_black_row = row
        break

if first_black_row is not None:
    edge_points = []  # List to store the edge points
    # Iterate over the rows below the first black row
    for row in range(first_black_row, opened_image.shape[0]):
        black_pixel_indices = np.where(opened_image[row, :] == False)[0]
        if len(black_pixel_indices) > 0:
            # Store the first black pixel coordinates on the left and right sides
            left_x = black_pixel_indices[0]
            right_x = black_pixel_indices[-1]
            y = row
            # Append the edge point coordinates
            edge_points.append((left_x, y))
            edge_points.append((right_x, y))
    if len(edge_points) > 0:
        # Plotting the edge points
        import matplotlib.pyplot as plt
        edge_points = np.array(edge_points)
        plt.figure()
        plt.imshow(opened_image, cmap='gray')
        plt.scatter(edge_points[:, 0], edge_points[:, 1], color='red', s=1)
        plt.title('Edge Points')
        plt.show()
    else:
        print("No edge points found.")
else:
    print("No black pixels found in the image.")
Your issue is that the image has noise. You need to deal with the noise.
That is usually done with some kind of lowpassing, i.e. blurring. I'd recommend a median blur.
Here's the result of a median filter, kernel size 9:
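A minimal sketch of that step, assuming OpenCV and the file name from your code ('mountain3.jpg'); the kernel size of 9 is what produced the image above and may need tuning for other photos:

import cv2 as cv

# Read the image and apply a median blur with a 9x9 kernel (kernel size must be odd).
image = cv.imread('mountain3.jpg')   # BGR, uint8
blurred = cv.medianBlur(image, 9)
cv.imwrite('blurred.jpg', blurred)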
And the per-pixel absolute differences from the source, amplified 20x:
(This suggests you could use a bandpass filter to separate the "texture" of the pile from the flatness of the background.)
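The difference image can be reproduced roughly like this; the clipping to 0-255 before amplification is my assumption about how the visualization was made:

import cv2 as cv
import numpy as np

image = cv.imread('mountain3.jpg')
blurred = cv.medianBlur(image, 9)

# Per-pixel absolute difference between blurred and original, amplified 20x.
# Clipping to the 0-255 range before converting back to uint8 avoids wrap-around.
diff = cv.absdiff(image, blurred)
diff_x20 = np.clip(diff.astype(np.int32) * 20, 0, 255).astype(np.uint8)
cv.imwrite('difference_x20.jpg', diff_x20)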
And here's the picture after Otsu thresholding (and inversion):
And your foreground barely contrasts against the background. With a better-contrasting background, this wouldn't be nearly as much of an issue.
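That Otsu-plus-inversion step could be sketched like this, assuming the pile is darker than the background so THRESH_BINARY_INV leaves the pile white:

import cv2 as cv

gray = cv.cvtColor(cv.medianBlur(cv.imread('mountain3.jpg'), 9), cv.COLOR_BGR2GRAY)
# Otsu picks the threshold automatically; THRESH_BINARY_INV inverts the result
# so the (darker) pile becomes the white foreground.
_, otsu_mask = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)
cv.imwrite('otsu_mask.png', otsu_mask)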
Here's thresholding based on hue, because background and foreground slightly differ in hue:
With morphological closing:
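A sketch of those two steps together. The exact hue threshold isn't given here, so Otsu on the hue channel is used as a stand-in, and the 15x15 elliptical kernel is a guess you'd adjust to the image scale:

import cv2 as cv

blurred = cv.medianBlur(cv.imread('mountain3.jpg'), 9)
hsv = cv.cvtColor(blurred, cv.COLOR_BGR2HSV)
hue = hsv[:, :, 0]

# Threshold on hue; depending on which side of the threshold the pile falls,
# you may need THRESH_BINARY_INV instead of THRESH_BINARY.
_, hue_mask = cv.threshold(hue, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)

# Morphological closing fills small holes inside the foreground.
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (15, 15))
closed = cv.morphologyEx(hue_mask, cv.MORPH_CLOSE, kernel)
cv.imwrite('hue_mask_closed.png', closed)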
To get your lines for the left and right slope of the stockpile, you need something that deals with contours or edge pixels anyway.
Both contour finding and connected-components labeling will have trouble with this, which is why the answers recommending those approaches also have to recommend explicitly filtering the results to remove small debris (the noise).
Hence, those approaches (contours/CCs) don't solve the noise problem; they just transform it into a different problem in which you still have to deal with the noise (by filtering it), only after processing the image.
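For completeness, this is roughly what that extra filtering looks like with connected components: label the binary mask and keep only the largest blob. This is a sketch of the workaround being described, not a recommendation; it assumes the mask contains at least one foreground component:

import cv2 as cv
import numpy as np

# 'mask' is a binary uint8 image (0 or 255), e.g. from one of the thresholding steps above.
mask = cv.imread('hue_mask_closed.png', cv.IMREAD_GRAYSCALE)

num_labels, labels, stats, _ = cv.connectedComponentsWithStats(mask)
# stats[:, cv.CC_STAT_AREA] holds each component's pixel count; label 0 is the
# background, so keep the largest of the remaining components.
largest = 1 + np.argmax(stats[1:, cv.CC_STAT_AREA])
cleaned = np.where(labels == largest, 255, 0).astype(np.uint8)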
I'd recommend dealing with the noise early.
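Once the mask is clean, one hypothetical way to get the left and right angles is to collect the outermost pile pixel per row and fit a straight line to each flank. np.polyfit is my choice here, not something from the question; in practice you would restrict the fit to the rows that actually belong to the straight flanks rather than the peak or the base:

import numpy as np
import math

def flank_angles(mask):
    """Estimate the left/right slope angles (degrees from horizontal) of a
    binary mask in which the pile pixels are nonzero."""
    rows, left_xs, right_xs = [], [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y, :])
        if xs.size:
            rows.append(y)
            left_xs.append(xs[0])      # leftmost pile pixel in this row
            right_xs.append(xs[-1])    # rightmost pile pixel in this row
    # Fit x as a function of y on each side; the slope dx/dy is the tangent of
    # the angle measured from the vertical, so convert to an angle from horizontal.
    left_slope = np.polyfit(rows, left_xs, 1)[0]
    right_slope = np.polyfit(rows, right_xs, 1)[0]
    left_angle = 90.0 - abs(math.degrees(math.atan(left_slope)))
    right_angle = 90.0 - abs(math.degrees(math.atan(right_slope)))
    return left_angle, right_angle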