Tags: python-3.x, image-processing, scikit-image

Question about skimage.transform.PolynomialTransform


I have two images (channel 1 and channel 2) and I'm trying to compute the polynomial transform that warps one image onto the other. First, I created an ORB object and computed the affine transform between the two images (post-affine). Then I decided to try skimage.transform.PolynomialTransform. However, when I compute the transform and warp the image, the returned NumPy array contains mostly NaN or 0 values, even though the original image had non-zero float values at those locations (post-polynomial). What am I doing wrong? The code is included below; the images are at the following link: https://drive.google.com/drive/folders/1mWxUvLFLK5-rYCrxs3-uGKFxKq2wXDjS?usp=sharing Thanks in advance!

Note: I know that the question Image warping with scikit-image and transform.PolynomialTransform is similar, but in my opinion the two aren't duplicates. Although that user's problem is with the same function, the pixels in their transformed images have values, whereas by and large mine don't.

import cv2
import numpy as np
from ImageConversion import ImageConversion # self-written, irrelevant
import matplotlib
matplotlib.use('macosx')
import matplotlib.pyplot as plt
from scipy.ndimage import uniform_filter
from skimage.draw import circle_perimeter
from skimage.transform import PolynomialTransform, warp

def affine_transform(self):
    channel1_u8 = self.channel1.astype('uint8')  # detectAndCompute requires 8-bit images
    channel2_u8 = self.channel2.astype('uint8')
    orb = cv2.ORB_create(100)
    kp1, des1 = orb.detectAndCompute(channel1_u8, None)
    kp2, des2 = orb.detectAndCompute(channel2_u8, None)

    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(des1, des2, None)
    matches = sorted(matches, key=lambda x: x.distance)

    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        # queryIdx indexes the channel-1 descriptors, i.e. the image we wish to map to channel 2
        points1[i, :] = kp1[match.queryIdx].pt
        points2[i, :] = kp2[match.trainIdx].pt

    # estimateAffine2D returns both the 2x3 matrix coefficients and an inlier mask
    mat_coeff, inliers = cv2.estimateAffine2D(points1, points2)
    print(mat_coeff)
    rows, cols = channel1_u8.shape
    dst = cv2.warpAffine(self.channel1, mat_coeff, (cols, rows))
    return mat_coeff, dst

# later, elsewhere in the class:
tform = PolynomialTransform()
tform.estimate(self.channel2, dst, order = 3)
warped_1 = warp(dst, tform, mode = 'constant')
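
For context on the convention involved here: skimage.transform.warp treats the transform it is given as the mapping from coordinates in the output image back to coordinates in the input image. A minimal sketch with an AffineTransform (the image and translation below are made up purely for illustration):

    import numpy as np
    from skimage.transform import AffineTransform, warp

    image = np.zeros((100, 100))
    image[40:60, 40:60] = 1.0  # bright square in the middle

    # warp() evaluates the transform at each *output* coordinate to find the
    # corresponding *input* coordinate, so a +10 translation in x makes the
    # square appear shifted left by 10 pixels, not right.
    tform = AffineTransform(translation=(10, 0))
    shifted = warp(image, tform, mode='constant', cval=0)

This direction matters when estimating the polynomial transform, as the solution below shows.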

Solution

  • I found the error: I was feeding PolynomialTransform.estimate the entire images, rather than the coordinates of the matched key points identified in the images.
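
    A minimal sketch of the corrected call, assuming the matched key-point coordinates from affine_transform above are used; the random arrays below merely stand in for points1 and points2:

        import numpy as np
        from skimage.transform import PolynomialTransform, warp

        rng = np.random.default_rng(0)
        channel1 = rng.random((512, 512))    # stand-in for the real channel-1 image

        # Stand-ins for the matched (x, y) key points; in the question's code
        # these come from kp1[match.queryIdx].pt and kp2[match.trainIdx].pt.
        points1 = rng.random((50, 2)) * 512  # key points in channel 1
        points2 = points1 + 2.0              # corresponding key points in channel 2

        tform = PolynomialTransform()
        # estimate() expects two (N, 2) coordinate arrays, not whole images.
        # Because warp() applies the transform as the output -> input mapping,
        # estimate from channel-2 coordinates to channel-1 coordinates, then
        # warp channel 1 into channel 2's frame.
        tform.estimate(points2, points1, order=3)
        warped_1 = warp(channel1, tform, mode='constant', cval=0)

    Note that an order-3 polynomial has ten coefficients per axis, so estimate needs at least ten well-spread point pairs; a degenerate least-squares fit is one way to end up with NaN coefficients and hence NaN pixels in the warped output.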