I have two pictures of the same nerve cut at slightly different depths, where a different dye was used for staining on each slice. I would like to overlay the two images, but they are not perfectly aligned on the slide/photo, so I can't do this directly. What I want to do is write code that detects similar shapes (i.e. the same cells) between the two slices and then overlays the pictures based on the positioning of those cells. Is there a way to do this?
The code I have so far is:
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from skimage import io
from PIL import Image
%matplotlib inline
picture1 = "Images/294_R_C3_5" # define your image pathway
i1 = Image.open(picture1 + ".jpg").convert('L') # open the first image and convert it to greyscale
i1 = i1.point(lambda p: p * 5) # brighten the image
region = i1.crop((600, 0, 4000, 4000)) # crop the image
region.save(picture1 + ".png", "PNG") # save the cropped image as a PNG
i1 = matplotlib.image.imread(picture1 + ".png", format=None) # reload the cropped PNG as an array
io.imshow(i1)
io.show()
I1 = Image.open(picture1 + ".png") # reopen your image using a different module
I1
picture2 = "Images/294_R_B3_6" # define your image pathway
i2 = Image.open(picture2 + ".jpg").convert('L') # open the second image and convert it to greyscale
i2 = i2.point(lambda p: p * 5) # brighten the image
region = i2.crop((600, 0, 4000, 4000)) # crop the image
region.save(picture2 + ".png", "PNG") # save the cropped image as a PNG
i2 = matplotlib.image.imread(picture2 + ".png", format=None) # reload the cropped PNG as an array
io.imshow(i2)
io.show()
I2 = Image.open(picture2 + ".png") # open your image using a different module
I2
I've tried using skimage but it seems like it is picking up too many points. Also, I do not know how to stack the images based on these points. Here is my code:
from skimage.feature import ORB
orb = ORB(n_keypoints=800, fast_threshold=0.05)
orb.detect_and_extract(i1)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors
orb.detect_and_extract(i2)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors
from skimage.feature import match_descriptors
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
from skimage.feature import plot_matches
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
plot_matches(ax, i1, i2, keypoints1, keypoints2, matches12)
ax.axis('off');
I then tried to clean up the matches with RANSAC, but this removed a lot more points than I would have liked:
from skimage.transform import ProjectiveTransform
from skimage.measure import ransac
src = keypoints1[matches12[:, 0]][:, ::-1]
dst = keypoints2[matches12[:, 1]][:, ::-1]
module_robust12, inliers12 = ransac((src, dst), ProjectiveTransform, min_samples=4, residual_threshold=1, max_trials=300)
fig, ax = plt.subplots(1, 1, figsize=(12, 12))
plot_matches(ax, i1, i2, keypoints1, keypoints2, matches12[inliers12])
ax.axis('off');
Any ideas? Thank you.
This kind of question comes up quite often in computer vision. Doing it automatically is exactly the panorama-stitching problem, and your pipeline is nearly complete: detect features, match them, robustly estimate a transform with RANSAC, then warp one image into the other's coordinate frame and overlay them.
I have never used skimage for feature extraction / processing, but your pipeline looks good. I also found this lovely (written-by-the-authors-of-skimage) guide for image stitching that you will find very useful! https://github.com/scikit-image/scikit-image-paper/blob/master/skimage/pano.txt
It covers essentially the steps you already have and walks through the remaining warping/blending steps. If RANSAC is discarding more matches than you'd like, try loosening residual_threshold (1 pixel is quite strict for slices cut at different depths, where cells won't line up exactly).
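To sketch the missing overlay step: once RANSAC gives you a transform, `skimage.transform.warp` can resample one image into the other's frame. Here is a minimal, self-contained example; since I don't have your slides, it uses skimage's built-in camera image warped by a known transform as a stand-in for your two slices, and the keypoint counts and thresholds are placeholders to adapt:

```python
import numpy as np
from skimage import data, transform
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac

# Stand-ins for your two slices: the same image, one slightly
# rotated and shifted by a known transform.
img1 = data.camera() / 255.0
t_true = transform.AffineTransform(rotation=0.05, translation=(15, -10))
img2 = transform.warp(img1, t_true.inverse)

# Detect and describe features in both images.
orb = ORB(n_keypoints=400, fast_threshold=0.05)
orb.detect_and_extract(img1)
kp1, d1 = orb.keypoints, orb.descriptors
orb.detect_and_extract(img2)
kp2, d2 = orb.keypoints, orb.descriptors
matches = match_descriptors(d1, d2, cross_check=True)

# Robustly estimate the transform mapping img2 coords -> img1 coords.
# ORB keypoints are (row, col); transforms expect (x, y), hence [:, ::-1].
src = kp2[matches[:, 1]][:, ::-1]
dst = kp1[matches[:, 0]][:, ::-1]
model, inliers = ransac((src, dst), transform.ProjectiveTransform,
                        min_samples=4, residual_threshold=2, max_trials=500)

# Warp img2 into img1's frame: warp() expects the inverse map
# (output coords -> input coords), i.e. img1 -> img2 here.
img2_aligned = transform.warp(img2, model.inverse)

# Overlay: one dye per colour channel (red = slice 1, green = slice 2).
overlay = np.dstack([img1, img2_aligned, np.zeros_like(img1)])
```

With your real data you would replace `img1`/`img2` with your two greyscale arrays (`i1` and `i2` above) and display `overlay` with `io.imshow`; structures present in both slices then show up yellow where they coincide.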