Tags: python-3.x, opencv, image-processing, image-segmentation

Image segmentation: how to represent the "dispersion" of detected areas?


I'm using Python 3.9 to perform image segmentation (image => grey image => manual threshold on the grey level => conversion from [0-255] to [0-1]). At the end of the segmentation I obtain a binary image with a white mask: the areas I want to highlight have the value "1" and the rest of the image is "0":

Image 1 - areas dispersed
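
For reference, a minimal sketch of that segmentation pipeline (the file name "input.png" and the threshold value 128 are placeholders, not my actual values):

    import cv2
    import numpy as np

    img = cv2.imread("input.png")
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # manual threshold on the grey level, then rescale [0-255] -> [0-1]
    _, thresh = cv2.threshold(grey, 128, 255, cv2.THRESH_BINARY)
    mask = (thresh // 255).astype(np.uint8)   # binary mask with values 0 and 1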

Now I was able to extract some properties from this mask using OpenCV (cv2), like the area of the detected zones:

import cv2
import imutils
import numpy as np

cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)

areas = []
aspectRatios = []
for c in cnts:
    # compute the area of the contour along with the bounding box
    # to compute the aspect ratio
    area = cv2.contourArea(c)
    (x, y, w, h) = cv2.boundingRect(c)

    # compute the aspect ratio of the contour, which is simply the width
    # of the bounding box divided by its height
    aspectRatio = w / float(h)

    areas.append(area)
    aspectRatios.append(aspectRatio)

areas = np.array(areas)
aspectRatios = np.array(aspectRatios)

I have different kinds of images, and as you can see the detected zones can sometimes be very "localized":

Image 2 - areas more localized

I would like to represent the "dispersion" (not sure of the term) of the detected areas in an image. I have multiple images and I want a single value per image that indicates whether the detected areas are concentrated in one place or spread out everywhere. I have not found any property that captures this, but I think it should be feasible?


Solution

  • You can compute, for each detected area, the average distance to all the other areas (for example between their centroids). If you then average those averages over all areas, the result should be smaller the more clumped the areas are and larger the more dispersed they are, as in the sketch below. But this is just a suggestion; I'm not sure it will work in every case.
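
A minimal sketch of that idea (the function name and the bounding-box fallback for zero-area contours are my own additions, not part of the original answer):

    import itertools
    import cv2
    import numpy as np

    def mean_pairwise_distance(cnts):
        # centroid of each contour via image moments; fall back to the
        # bounding-box centre if the contour area is zero
        centroids = []
        for c in cnts:
            M = cv2.moments(c)
            if M["m00"] != 0:
                centroids.append((M["m10"] / M["m00"], M["m01"] / M["m00"]))
            else:
                x, y, w, h = cv2.boundingRect(c)
                centroids.append((x + w / 2.0, y + h / 2.0))
        centroids = np.array(centroids)

        # average distance between every pair of centroids: small when the
        # detected areas are clumped together, large when they are spread out
        dists = [np.linalg.norm(a - b)
                 for a, b in itertools.combinations(centroids, 2)]
        return float(np.mean(dists)) if dists else 0.0

To compare images of different sizes, you could additionally divide the result by the image diagonal so the score stays in a comparable range across images.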