
Create an adjacency matrix for image stitching


I have homographies built between pairs of images. How do I create an adjacency matrix to describe which images overlap with each other?

Here is my code. I call the function 'match' to get the homography between two images:

a = s.left_list[0]
N = len(s.left_list)
adjacency_matrix = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        for b in s.left_list[1:]:
            H = s.matcher_obj.match(a, b, 'left')
            print("Homography is : ", H)




def match(self, i1, i2, direction=None):
    imageSet1 = self.getSURFFeatures(i1)
    imageSet2 = self.getSURFFeatures(i2)
    print("Direction : ", direction)
    matches = self.flann.knnMatch(
        imageSet2['des'],
        imageSet1['des'],
        k=2
        )
    good = []
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.9*n.distance:
            good.append((m.trainIdx, m.queryIdx))

    if len(good) > 4:
        pointsCurrent = imageSet2['kp']
        pointsPrevious = imageSet1['kp']

        matchedPointsCurrent = np.float32(
            [pointsCurrent[i].pt for (__, i) in good]
        )
        matchedPointsPrev = np.float32(
            [pointsPrevious[i].pt for (i, __) in good]
            )

        H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
        return H

    return None

def getSURFFeatures(self, im):
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    kp, des = self.surf.detectAndCompute(gray, None)
    return {'kp':kp, 'des':des}

Solution

  • To rephrase: you have a number of images and you want to determine which pairs overlap so you can feed them into an image stitcher.

    One of the most common ways is to simply iterate through every unique pair of images, then calculate the local homography between these two images. Once you calculate the homography, you calculate the total fraction of keypoint pairs that were inliers for the homography. If it's above some threshold, say 50%, then you would consider that there is a good amount of overlap between these two images and you'd count these as a valid pair.

    The pseudocode for this is as follows, assuming your images are stored in some list lst:

    N = len(lst)
    adjacency_matrix = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            1. Calculate homography between lst[i] and lst[j]
            2. Compute the total number of inlier keypoint pairs from the homography
            3. Take (2) and divide by the total number of keypoint pair matches
            4. If (3) is above a threshold (50% or 0.5), then:
                adjacency_matrix[i, j] = 1
                adjacency_matrix[j, i] = 1
    

    Using the code you've just shown me, note that cv2.findHomography returns not only a matrix but a mask to tell you which pairs of points were used as inliers to build the matrix. You can simply sum over the mask and divide by the total number of elements in this mask to give you that proportion. This will only change the return statement of your code. You've specified the reprojection threshold to be 4 pixels, which is quite large so you may get poor stitching results. Make this smaller when you get this running.
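    For instance, the proportion can be computed directly from the mask with NumPy. A minimal sketch with a made-up mask (the real one comes from cv2.findHomography):

```python
import numpy as np

# cv2.findHomography returns an N x 1 inlier mask: 1 marks a match
# used as an inlier, 0 an outlier. The values below are made up.
mask = np.array([[1], [0], [1], [1]], dtype=np.uint8)

# Fraction of keypoint matches that were inliers for the homography
proportion = float(np.sum(mask)) / mask.size
print(proportion)  # 0.75
```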

    def match(self, i1, i2, direction=None):
        imageSet1 = self.getSURFFeatures(i1)
        imageSet2 = self.getSURFFeatures(i2)
        print("Direction : ", direction)
        matches = self.flann.knnMatch(
            imageSet2['des'],
            imageSet1['des'],
            k=2
            )
        good = []
        for i, (m, n) in enumerate(matches):
            if m.distance < 0.9*n.distance:
                good.append((m.trainIdx, m.queryIdx))
    
        if len(good) > 4:
            pointsCurrent = imageSet2['kp']
            pointsPrevious = imageSet1['kp']
    
            matchedPointsCurrent = np.float32(
                [pointsCurrent[i].pt for (__, i) in good]
            )
            matchedPointsPrev = np.float32(
                [pointsPrevious[i].pt for (i, __) in good]
                )
    
            H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
            return H, float(np.sum(s)) / len(s)  # Change here: scalar inlier proportion
    
        return None
    

    Finally, with the pseudocode here's what it is implemented with your particular setup:

    lst = s.left_list
    N = len(lst)
    adjacency_matrix = np.zeros((N, N)) 
    for i in range(N):
        for j in range(i + 1, N):
            # 1. Calculate homography between lst[i] and lst[j]
            out = s.matcher_obj.match(lst[i], lst[j], 'left')
    
            # 2. Compute the total number of inlier keypoint pairs from the homography - done in (1)
            # 3. Take (2) and divide by the total number of keypoint pair matches - done in (1)
    
            # 4. If (3) is above a threshold (50% or 0.5), then:
            #    adjacency_matrix[i, j] = 1
            #    adjacency_matrix[j, i] = 1
            if out is not None:
                H, prop = out  # don't reuse the name 's' here: it's your stitcher object
                if prop >= 0.5:
                    adjacency_matrix[i, j] = 1
                    adjacency_matrix[j, i] = 1
    

    Take note that your matching method returns None if there are too few good matches to estimate a homography with confidence, so we first check that the output is not None before checking the inlier proportion. Finally, I would strongly recommend tuning the reprojection threshold and the similarity threshold (0.5 in the code just above) until you get a good enough stitch for your purposes. Both are currently hard-coded, so consider making them tunable parameters.
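    Once the adjacency matrix is built, you can also group the images into clusters of mutually overlapping images before stitching. A minimal sketch using a breadth-first traversal over the matrix (the example matrix below is made up):

```python
import numpy as np

def overlap_groups(adjacency_matrix):
    """Group image indices into connected components of the
    overlap graph via a simple graph traversal."""
    N = adjacency_matrix.shape[0]
    seen = [False] * N
    groups = []
    for start in range(N):
        if seen[start]:
            continue
        stack, group = [start], []
        seen[start] = True
        while stack:
            i = stack.pop()
            group.append(i)
            for j in range(N):
                if adjacency_matrix[i, j] and not seen[j]:
                    seen[j] = True
                    stack.append(j)
        groups.append(sorted(group))
    return groups

# Example: images 0, 1, 2 overlap in a chain; image 3 overlaps nothing
adj = np.zeros((4, 4))
adj[0, 1] = adj[1, 0] = 1
adj[1, 2] = adj[2, 1] = 1
print(overlap_groups(adj))  # [[0, 1, 2], [3]]
```

    Each group can then be stitched independently; a singleton group means that image matched nothing.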