python · algorithm · gaussian · sift

DoG images displaying more extreme values than expected in Python SIFT implementation


I am working on implementing Lowe's SIFT paper in Python. I am struggling with the difference-of-Gaussian results, which produce very extreme images, i.e. their values are not evenly distributed over the grey scale. I am constructing the scale space as follows:

import math
import cv2
import numpy as np

def L(sig, I):
    return cv2.GaussianBlur(I, (25, 25), sig)

sig0 = math.sqrt(2)
sig = sig0
k = math.sqrt(2)
o = []
Li = [L(sig, I0)]              # I0: input image, loaded earlier
for i in range(nspo):          # nspo: number of scales per octave
    Li.append(L(k * sig, I0))
    Di = np.subtract(Li[i + 1], Li[i])
    sig = k * sig
    o.append(Di)

Taking inspiration from Dr. Weitz's tutorial, I notice that my results differ from his: his resulting DoG image is evenly distributed over the grey scale, while mine tends toward the extremes. Below are the example frame used in Dr. Weitz's tutorial, the resulting upsampled DoG, and the DoG I derived using the algorithm above. Thanks in advance for any tips, suggestions, or solutions to this conundrum.

Original image
Ideal DoG example
Too extreme DoG example (algorithm above)


Solution

  • It seems your input image has type uint8, so when you call np.subtract(Li[i+1], Li[i]), you're subtracting unsigned integers. Any negative values in the difference wrap around modulo 256 (e.g. -1 becomes 255), which is why you see bright white regions like the ones you have here.

    You can cast your input image to float32 when you first load it, or you can force numpy to do the subtraction using float32, like this:

    np.subtract(Li[i+1], Li[i], dtype='float32')
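
    For illustration, here is a minimal sketch of the wraparound and the two fixes. The array values and the file name 'frame.png' are made up for the example; adapt them to your own input.

    import math
    import cv2
    import numpy as np

    # uint8 subtraction wraps negative results modulo 256
    a = np.array([[10, 200]], dtype=np.uint8)
    b = np.array([[20, 100]], dtype=np.uint8)
    print(np.subtract(a, b))                   # [[246 100]]  -> 10 - 20 wrapped to 246
    print(np.subtract(a, b, dtype='float32'))  # [[-10. 100.]] -> signed result

    # Alternatively, cast the image once when you load it, so every blur
    # and subtraction downstream is already done in float32:
    I0 = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

    Note that once the DoG values are signed floats, you'll want to rescale them into a displayable range before viewing (for example with cv2.normalize), otherwise the negative values are simply clipped.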