Tags: opencv, image-processing, similarity, scikit-image

Determining image similarity when images have varying factors (image analysis)


Greetings. For the past week (or more) I've been struggling with a problem.

Scenario:

I am developing an app which will allow an expert to create a recipe using a provided image of something to be used as a base. The recipe consists of areas of interest. The program's purpose is to let non-experts use it by providing images similar to the original, and the software cross-checks these areas of interest between the recipe image and the provided image.

One use-case scenario could be banknotes. The expert would select an area on a good picture of a genuine banknote, and then the user would provide the software with images of banknotes that need to be checked. So the illumination, as well as the capturing device, could be different.
I don't want you guys to delve into the nature of comparing banknotes; that's another monster to tackle and I've got it covered for the most part.

My Problem:

Initially I shrink one of the two pictures to the size of the smaller one, so we are dealing with pictures of the same size. (I actually perform the shrinking on the areas of interest, not the whole picture, but that shouldn't matter.)
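For reference, the resizing step looks roughly like this (a sketch with placeholder file names, assuming imgA is the larger of the two):

    import cv2

    imgA = cv2.imread('areaA.png')  # placeholder names
    imgB = cv2.imread('areaB.png')
    h, w = imgB.shape[:2]
    # cv2.resize takes (width, height); INTER_AREA is the usual choice for shrinking
    imgA = cv2.resize(imgA, (w, h), interpolation=cv2.INTER_AREA)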

I have tried different methodologies to compare these parts, but each one had its limitations due to the nature of the images: illumination might be different, the provided image might have some sort of contamination, etc.

What I have tried:

Simple image similarity comparison using RGB difference.

Problem is, the provided image could be totally different but the colours could be similar, so I would get high percentages on "totally" different banknotes.
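What I mean is roughly this kind of score (a sketch, not my actual code, with placeholder file names):

    import cv2
    import numpy as np

    imgA = cv2.imread('areaA.png')
    imgB = cv2.imread('areaB.png')  # assumed already resized to imgA's shape

    # Mean absolute difference over all channels, mapped to 0-100,
    # where 100 means the images are identical
    diff = cv2.absdiff(imgA, imgB).astype(np.float64)
    similarity = 100.0 * (1.0 - diff.mean() / 255.0)
    print(similarity)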

SSIM on RGB Images.

Would give a really low percentage of similarity on all channels.
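Roughly like this (a sketch using the older scikit-image API that I had; newer versions expose it as skimage.metrics.structural_similarity instead):

    import cv2
    from skimage import measure

    imgA = cv2.imread('areaA.png')
    imgB = cv2.imread('areaB.png')

    # multichannel=True computes SSIM per colour channel and averages the results
    score = measure.compare_ssim(imgA, imgB, multichannel=True)
    print(score * 100)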

SSIM after using a Sobel filter.

Again, a low percentage of similarity. I used both the SSIM from scikit-image in Python and the SSIM from OpenCV.
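The Sobel variant looked roughly like this (again a sketch with placeholder file names):

    import cv2
    import numpy as np
    from skimage import measure

    def sobel_magnitude(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        # Rescale the gradient magnitude back to an 8-bit image for SSIM
        return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    imgA = sobel_magnitude('areaA.png')
    imgB = sobel_magnitude('areaB.png')
    print(measure.compare_ssim(imgA, imgB) * 100)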

Feature matching with FLANN.

Couldn't find a good way to use the detected matches to extract a similarity.
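For completeness, this is the kind of thing I tried (a sketch: ORB descriptors with FLANN's LSH index and Lowe's ratio test; turning the surviving matches into a percentage is exactly the part that felt arbitrary):

    import cv2

    imgA = cv2.imread('areaA.png', cv2.IMREAD_GRAYSCALE)
    imgB = cv2.imread('areaB.png', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kpA, desA = orb.detectAndCompute(imgA, None)
    kpB, desB = orb.detectAndCompute(imgB, None)

    # FLANN with LSH index parameters, suitable for binary (ORB) descriptors
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = flann.knnMatch(desA, desB, k=2)

    # Lowe's ratio test; the fraction of keypoints that survive it is one
    # crude way to turn matches into a "similarity" percentage
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    print(100.0 * len(good) / max(len(kpA), 1))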

Basically I am guessing that I need to use various methods and algorithms together to achieve the best result. My gut tells me that I will need to combine the RGB comparison results with a methodology that will:

  • Perform some form of edge detection, like Sobel.
  • Compare the results based on shape matching or something similar (a rough sketch of what I have in mind follows this list).
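Something like this is what I have in mind (a sketch with placeholder file names, so take it with a grain of salt):

    import cv2

    def edge_image(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
        # Binarize so the shape matching gets a clean edge silhouette
        return cv2.threshold(mag, 50, 255, cv2.THRESH_BINARY)[1]

    edgesA = edge_image('areaA.png')
    edgesB = edge_image('areaB.png')

    # matchShapes accepts grayscale images as well as contours; it compares
    # Hu moments and returns 0 for identical shapes, so invert the distance
    # into a similarity-like score
    d = cv2.matchShapes(edgesA, edgesB, cv2.CONTOURS_MATCH_I2, 0.0)
    print(100.0 / (1.0 + d))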

I am an image-analysis newbie, and I also tried to find a way to compare the Sobel products of the provided images using mean and std calculations from OpenCV. However, I either did it wrong, or the results I got were useless anyway. I calculated the Euclidean distance between the vectors that resulted from the mean and std calculations, but I could not use the results, mainly because I couldn't see how they related between images.
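In code, that attempt looked roughly like this (a sketch; sobelA.png and sobelB.png stand in for the Sobel products):

    import cv2
    import numpy as np

    def mean_std_vector(path):
        img = cv2.imread(path)
        # cv2.meanStdDev returns the per-channel mean and standard deviation
        mean, std = cv2.meanStdDev(img)
        return np.concatenate([mean, std]).flatten()

    vecA = mean_std_vector('sobelA.png')
    vecB = mean_std_vector('sobelB.png')

    # Euclidean distance between the two descriptor vectors; small means
    # "similar", but it has no obvious upper bound, which was my problem
    print(np.linalg.norm(vecA - vecB))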

I am not providing the code I used, firstly because I scrapped some of it, and secondly because I am not looking for a code solution but a methodology or some direction to study material. (I've read a shitload of papers already.) Finally, I am not trying to detect similar images; given two images, I want to extract the similarity between them, trying to bypass small differences created by illumination, paper distortion, etc.

Finally, I would like to say that I tested all the methods by providing the same image twice, and I would get 100% similarity, so I didn't totally fuck it up.

Is what I am trying even possible without some sort of training set to teach the software what the acceptable variants of the image are? (Again, I have no idea if that even makes sense :D)


Solution

  • Ok, after some digging around, this is what I came up with:

    #!/usr/bin/env python
    import numpy as np
    import cv2
    import sys
    from skimage import measure
    
    imgA = cv2.imread(sys.argv[1])
    imgB = cv2.imread(sys.argv[2])
    #imgA = cv2.imread('imageA.bmp')
    #imgB = cv2.imread('imageB.bmp')
    
    imgA = cv2.cvtColor(imgA, cv2.COLOR_BGR2GRAY)
    imgB = cv2.cvtColor(imgB, cv2.COLOR_BGR2GRAY)
    
    ret, imgA = cv2.threshold(imgA, 127, 255, 0)
    ret, imgB = cv2.threshold(imgB, 127, 255, 0)
    
    # findContours returns (image, contours, hierarchy) in OpenCV 3.x but
    # (contours, hierarchy) in 4.x; taking the last two values works for both
    contoursA, hierarchyA = cv2.findContours(imgA, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
    contoursB, hierarchyB = cv2.findContours(imgB, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
    
    # Draw the contours back onto the thresholded images, then smooth the
    # result with a median blur to absorb small pixel-level differences
    imgAContours = cv2.drawContours(imgA, contoursA, -1, (0,0,0), 1)
    imgBContours = cv2.drawContours(imgB, contoursB, -1, (0,0,0), 1)
    imgAContours = cv2.medianBlur(imgAContours, 5)
    imgBContours = cv2.medianBlur(imgBContours, 5)
    
    # Alternatives I tried but did not keep:
    #s = 100 * 1/(1+cv2.matchShapes(imgAContours,imgBContours,cv2.CONTOURS_MATCH_I2,0.0))
    #s = measure.compare_ssim(imgAContours,imgBContours)
    #equality = np.equal(imgAContours,imgBContours)
    
    # Pixel-by-pixel equality: the percentage of pixels that are identical
    # in the two contour images
    total = 0.0
    matches = 0.0
    
    for x in range(len(imgAContours)):
        for y in range(len(imgAContours[x])):
            total += 1
            if imgAContours[x, y] == imgBContours[x, y]:
                matches += 1
    
    s = (matches / total) * 100
    
    print(s)
    

    Basically, I preprocess the two images as simply as possible and then find the contours. The matchShapes function from OpenCV was not giving me the results I wanted, so I create two images using the information from the contours and then apply a median blur filter.

    Currently, I am doing a simple boolean check, pixel by pixel. However, I am planning to change this in the future to make it smarter, probably with some array math. If anyone has any suggestions, they are welcome.
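    For example, the whole nested loop collapses to one NumPy expression (a sketch, reusing the variables from the script above):

        import numpy as np

        # Fraction of pixels that are equal in both contour images, as a percentage
        s = 100.0 * np.count_nonzero(imgAContours == imgBContours) / imgAContours.size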