I am trying to replace OpenCV functions with a more pythonic approach by using similar functions in numpy and skimage. The raw moments are not the same, but the centroids are similar. I was wondering if there are any implementation differences regarding moment computation between skimage and cv2?
import cv2
import numpy as np
from skimage import measure

contours = measure.find_contours(img)
contour = contours[0]
m = cv2.moments(contour.astype(np.float32))  # cv2.moments only accepts CV_32F point arrays
print(m["m00"], m["m10"] / m["m00"], m["m01"] / m["m00"])
results from cv2:
9231.5 781.3878567946704 567.7414649118056
However, if I use skimage...
m = measure.moments_coords(contour)
print(m[0, 0], m[1, 0] / m[0, 0], m[0, 1] / m[0, 0])
results for skimage:
513.0 781.7534113060428 567.4697855750487
From the OpenCV documentation, cv2.contourArea should yield the same result as the raw moment m["m00"] (which I also verified to be true when the data type is np.float32). Since there is no similar function in skimage, I used m[0, 0] instead, but I am confused as to why it doesn't match the value from cv2.
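For reference, the contour-based m00 that cv2 reports can be reproduced with the shoelace formula (Green's theorem) in plain numpy. This is a minimal sketch with a hypothetical square contour, not OpenCV's actual implementation:

```python
import numpy as np

def shoelace_area(points):
    """Polygon area via the shoelace formula (Green's theorem); this is
    what a contour-based m00 integrates. abs() drops the orientation sign."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# hypothetical 10 x 10 square contour
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=np.float64)
print(shoelace_area(square))  # 100.0
```

Note that this integrates over the region enclosed by the contour, which is a very different quantity from a sum over the contour points themselves.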
Edit
Full code for preprocessing:
import cv2
import pathlib
import numpy as np
from skimage import measure
filepath = pathlib.Path("/path/to/test/image.png")
print(filepath)
img = cv2.imread(str(filepath))
# preprocessing steps
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inv = cv2.bitwise_not(gray)  # invert so the object of interest is white
_, bi = cv2.threshold(inv, 128, 255, cv2.THRESH_BINARY)
# cv_contours, _ = cv2.findContours(bi, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# print(cv_contours[0], np.squeeze(cv_contours[0], 1))
sk_contour = measure.find_contours(bi)[0]
cv_contour = np.expand_dims(sk_contour.astype(np.int32), 1)
cv_m = cv2.moments(cv_contour)
print(cv_m["m00"], cv_m["m10"] / cv_m["m00"], cv_m["m01"] / cv_m["m00"])
sk_m = measure.moments_coords(sk_contour.astype(np.int32))
print(sk_m[0, 0], sk_m[1, 0] / sk_m[0, 0], sk_m[0, 1] / sk_m[0, 0])
results:
14341364.5 3022.5001979646586 3022.5001979646586
15149.0 3022.8749752458907 3022.874909234933
The issue is with your call to moments_coords(). It does not take a contour: it takes the set of all points in the connected component. This is the example from the docs; they build up the list of all points in a rectangle, not just the perimeter.
>>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)], dtype=np.float64)
>>> M = moments_coords(coords)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 15.5)
https://scikit-image.org/docs/stable/api/skimage.measure.html#skimage.measure.moments_coords
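To see why the two m00 values diverge, here is a plain-numpy sketch of the point-set moments that moments_coords computes (raw_moment is a hypothetical helper, not skimage API). Feeding all pixels of the docs' rectangle versus only its boundary pixels gives the same centroid but very different m00:

```python
import numpy as np

def raw_moment(coords, p, q):
    # hypothetical helper mimicking moments_coords: each point has weight 1
    r, c = coords[:, 0], coords[:, 1]
    return np.sum(r**p * c**q)

# the rectangle from the skimage docs example
filled = np.array([[r, c] for r in range(13, 17) for c in range(14, 18)],
                  dtype=np.float64)
# only its boundary points (perimeter)
boundary = np.array([p for p in filled
                     if p[0] in (13, 16) or p[1] in (14, 17)])

for pts, label in ((filled, "filled"), (boundary, "boundary")):
    m00 = raw_moment(pts, 0, 0)
    centroid = (raw_moment(pts, 1, 0) / m00, raw_moment(pts, 0, 1) / m00)
    print(label, m00, centroid)
# filled   m00 = 16.0, centroid (14.5, 15.5)
# boundary m00 = 12.0, centroid (14.5, 15.5)
```

m00 is simply the number of points you pass in, so the centroid (a ratio) stays stable while m00 changes with whatever subset of points you supply. This matches the symptom in the question: similar centroids, wildly different raw moments.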
With a proper MRE you could make a lot more sense of the resulting values and come up with hypotheses.
bi = np.zeros((100, 100), dtype=np.uint8)
(x, y, w, h) = 20, 20, 50, 50
bi[y:y+h, x:x+w] = 255
That gives you a 50 by 50 box, area 2500, perimeter 200, sum 2500*255 = 637500.
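Those reference numbers can be checked directly on the mask with plain numpy (no cv2 or skimage involved); a minimal sketch:

```python
import numpy as np

# the same 50 x 50 box as the MRE above
bi = np.zeros((100, 100), dtype=np.uint8)
x, y, w, h = 20, 20, 50, 50
bi[y:y+h, x:x+w] = 255

area = np.count_nonzero(bi)  # number of foreground pixels: 2500
total = int(bi.sum())        # pixel sum: 2500 * 255 = 637500
print(area, total)
```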
From any further analysis, you'll get various results. measure.moments_coords(sk_contour) then returns 201 for m00. That is suspicious, until you notice that m00 from moments_coords is just the number of points you passed in: the closed contour of this box has 201 points (the perimeter samples plus the repeated closing point), not the 2500 pixels of the filled region.