Tags: python, scipy, convolution

Python image processing: How do you align images that have been rotated and shifted?


Here I have some code that can shift images vertically and horizontally so that a specific feature aligns (credit to https://stackoverflow.com/a/24769222/15016884):

import numpy as np
from scipy import signal

def cross_image(im1, im2):
    # Collapse to grayscale by summing the color channels.
    im1_gray = np.sum(im1.astype('float'), axis=2)
    im2_gray = np.sum(im2.astype('float'), axis=2)

    # Subtract the means so the background doesn't dominate the correlation.
    im1_gray -= np.mean(im1_gray)
    im2_gray -= np.mean(im2_gray)

    # Flipping the second image turns the convolution into a cross-correlation.
    return signal.fftconvolve(im1_gray, im2_gray[::-1, ::-1], mode='same')

corr_img_null = cross_image(cloud1,cloud1)
corr_img = cross_image(cloud1,cloud2)

y0, x0 = np.unravel_index(np.argmax(corr_img_null), corr_img_null.shape)
y, x = np.unravel_index(np.argmax(corr_img), corr_img.shape)
  
ver_shift = y0-y
hor_shift = x0-x

print('horizontally shifted', hor_shift)
print('vertically shifted', ver_shift)

#defining the bounds of the part of the images I'm actually analyzing
xstart = 100
xstop = 310
ystart = 50
ystop = 200

crop_cloud1 = cloud1[ystart:ystop, xstart:xstop]
crop_cloud2 = cloud2[ystart:ystop, xstart:xstop]
crop_cloud2_shift = cloud2[ystart+ver_shift:ystop+ver_shift, xstart+hor_shift:xstop+hor_shift]

plot_pos = plt.figure(5)
plt.title('image 1')
plt.imshow(crop_cloud1)

plot_pos = plt.figure(6)
plt.title('image 2')
plt.imshow(crop_cloud2)

plot_pos = plt.figure(7)
plt.title('Shifted image 2 to align with image 1')
plt.imshow(crop_cloud2_shift)

Here are the results:

[image 1]

[image 2]

[shifted image 2 to align with image 1]
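To illustrate why the offset between the two correlation peaks recovers the shift, here is a minimal, self-contained sketch of the same technique on synthetic single-channel data (the array size, blob position, and shift values are made up for illustration):

```python
import numpy as np
from scipy import signal

def cross_corr(im1, im2):
    # Zero-mean both images, then cross-correlate via FFT convolution
    # (flipping im2 turns the convolution into a correlation).
    im1 = im1 - im1.mean()
    im2 = im2 - im2.mean()
    return signal.fftconvolve(im1, im2[::-1, ::-1], mode='same')

# Synthetic image: a bright blob on a dim noisy background.
rng = np.random.default_rng(0)
img1 = rng.random((100, 100)) * 0.1
img1[40:50, 30:40] += 1.0

# img2 is img1 shifted down 7 rows and right 5 columns.
img2 = np.roll(np.roll(img1, 7, axis=0), 5, axis=1)

# The shift is the offset between the auto- and cross-correlation peaks.
y0, x0 = np.unravel_index(np.argmax(cross_corr(img1, img1)), (100, 100))
y, x = np.unravel_index(np.argmax(cross_corr(img1, img2)), (100, 100))
print('vertical shift:', y0 - y, 'horizontal shift:', x0 - x)  # 7 and 5
```

Because both correlations are cropped identically by mode='same', the difference of the peak coordinates equals the translation between the images.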

Now, I want to work with the example shown below, where rotations in addition to translations will be needed to align the features in my image.

[bolt1]

[bolt2]

Here is my code for that. The idea is to cross-correlate image 1 with image 2 rotated to every angle from -45 to 45 degrees (for my application, the rotation is unlikely to exceed that range) and find the coordinates and rotation angle at which the correlation is maximized.

import cv2

def rotate(img, theta):
    (rows, cols) = img.shape[:2]

    # Rotate by theta degrees about the image center, keeping the original size.
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), theta, 1)
    res = cv2.warpAffine(img, M, (cols, rows))
    return res

#testing all rotations of image 2
corr_bucket = []
for i in range(-45, 46):  # range's stop is exclusive, so 46 includes +45
    rot_img = rotate(bolt2,i)
    corr_img = cross_image(bolt1,rot_img)
    corr_bucket.append(corr_img)
corr_arr = np.asarray(corr_bucket)


corr_img_null = cross_image(bolt1,bolt1)


y0, x0 = np.unravel_index(np.argmax(corr_img_null), corr_img_null.shape)
r_index, y1, x1 = np.unravel_index(np.argmax(corr_arr), corr_arr.shape)

r = -45+r_index
ver_shift_r = y0-y1
hor_shift_r = x0-x1

#defining the bounds of the part of the images I'm actually analyzing
xstart = 200
xstop = 300
ystart = 100
ystop = 200

crop_bolt1 = bolt1[ystart:ystop, xstart:xstop]
crop_bolt2 = bolt2[ystart:ystop, xstart:xstop]
rot_bolt2 = rotate(bolt2,r)
shift_rot_bolt2 = rot_bolt2[ystart+ver_shift_r:ystop+ver_shift_r, xstart+hor_shift_r:xstop+hor_shift_r]

plot_1 = plt.figure(9)
plt.title('image 1')
plt.imshow(crop_bolt1)

plot_2 = plt.figure(10)
plt.title('image 2')
plt.imshow(crop_bolt2)

plot_3 = plt.figure(11)
plt.title('Shifted and rotated image 2 to align with image 1')
plt.imshow(shift_rot_bolt2)

Unfortunately, the very last line raises ValueError: zero-size array to reduction operation minimum which has no identity. I'm fairly new to Python, so I don't really know what this means or why my approach isn't working. I have a feeling the error is somewhere in unraveling corr_arr, because just by eyeballing the images I can tell that the x, y and r values it returns would not make the lightning bolts align. Any advice?


Solution

  • The issue came from feeding the entire rotated image into scipy.signal.fftconvolve. Crop a part of image 2 after rotating it to use as a "probe image" (and crop the unrotated image 1 in the same way); with that change, the code I wrote in my question works fine.
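A sketch of this fix on synthetic data, using scipy.ndimage.rotate in place of cv2 so the example stays self-contained; the "L"-shaped feature, the angle grid, and the crop bounds are all invented for illustration:

```python
import numpy as np
from scipy import signal, ndimage

def corr_peak(img, probe):
    # Zero-mean both, correlate via FFT, and return the peak value as a score.
    img = img - img.mean()
    probe = probe - probe.mean()
    return signal.fftconvolve(img, probe[::-1, ::-1], mode='same').max()

# Synthetic pair: img2 is img1 rotated 10 degrees then shifted.
img1 = np.zeros((120, 120))
img1[40:80, 55:65] = 1.0   # vertical bar
img1[70:80, 55:95] = 1.0   # horizontal bar -> an asymmetric "L"
img2 = ndimage.rotate(img1, 10, reshape=False, order=1)
img2 = np.roll(np.roll(img2, 6, axis=0), 4, axis=1)

best_angle, best_score = None, -np.inf
for angle in range(-15, 16, 5):
    unrot = ndimage.rotate(img2, angle, reshape=False, order=1)
    # The key fix: correlate a small *cropped* probe, not the full frame.
    probe = unrot[20:100, 20:100]
    score = corr_peak(img1, probe)
    if score > best_score:
        best_angle, best_score = angle, score

print('best angle:', best_angle)  # -10 undoes the +10 degree rotation
```

Once the best angle is found this way, the translation can be recovered from the correlation-peak offset exactly as in the translation-only code above.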