Tags: python, scikit-image, affinetransform

Warp with skimage too slow


(I'm taking a chance by not posting my actual code here, since I think the question is general enough that it isn't necessary; a rough sketch of the setup follows below for concreteness.)

I am using skimage.transform.warp to warp a 200x2000 image given 500 source and destination control points calculated with skimage.transform.PiecewiseAffineTransform. When I run this on a single image, it takes about 3 seconds. Is this a reasonable runtime for this calculation in everyone's experience?
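
For concreteness, the setup is roughly like this (a minimal sketch with made-up sizes and control points; only the shape of the problem matters):

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp
    
    image = np.random.rand(200, 2000)             # one 200x2000 image
    # 500 made-up control point pairs in (col, row) order
    src = np.random.rand(500, 2) * [2000, 200]
    dst = src + np.random.normal(scale=2.0, size=src.shape)
    
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    
    warped = warp(image, tform)                   # this is the ~3 s per-image step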

The reason I ask is that I have potentially hundreds of images of the same dimensions that I want to apply the same inverse transform to, and this will take waaaaay too long. If I use Python's multiprocessing module, the calculation hangs and never completes.

What I would like to do is run warp on a single image and then calculate a polynomial that defines the value of each pixel in the warped image given the values of all 400000 pixels in the input image. Mathematically:

f'(x,y) = a0_0*f(0,0) + a0_1*f(0,1) + ... + a199_1998*f(199,1998) + a199_1999*f(199,1999)

Does anyone have a recommendation on how I would go about doing this (or something similar), or on implementing something faster?

Thank you!


Solution

  • I ran into a similar issue when I had to correct some images from a spectroscopic camera. I ended up using scipy.ndimage.map_coordinates. You have to build a function that transforms your source point coordinates into destination coordinates (the dummy function in the example below). I understand from the question that this transformation is the same for a bunch of images, and that you already have this function.

    Then you generate a full grid of coordinates, and map_coordinates will map your original image onto these new coordinates through spline interpolation.

    import time
    import numpy as np
    from scipy.ndimage import map_coordinates
    
    # stack of 10 images, each 200x2000
    imgs = np.random.normal(size=[10, 200, 2000])
    x, y = np.arange(imgs.shape[1]), np.arange(imgs.shape[2])
    ini_coord = np.meshgrid(x, y)
    
    # dummy function: transforms source point coordinates into destination
    # coordinates (replace this with your real transform)
    def dummy(ini_coord):
        # the transpose puts the meshgrid arrays back into (row, col) order
        return [0.9 * c.T for c in ini_coord]
    
    out_coord = dummy(ini_coord)
    
    tt = time.perf_counter()  # time.clock() was removed in Python 3.8
    out_img = np.zeros(imgs.shape)
    for i, img in enumerate(imgs):
        # spline interpolation of img at the precomputed coordinates
        out_img[i] = map_coordinates(img, out_coord, mode='nearest')
    
    print('{:.3f} s'.format(time.perf_counter() - tt))
    

    This runs in less than one second on my computer.
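
    If the transform in the question is a skimage PiecewiseAffineTransform, one way to build that coordinate grid is skimage.transform.warp_coords, which evaluates the transform's inverse once over the full output grid. A minimal sketch, assuming an already-estimated transform called tform (the control points below are made up only so the snippet runs on its own):

    import numpy as np
    from scipy.ndimage import map_coordinates
    from skimage.transform import PiecewiseAffineTransform, warp_coords
    
    # stand-in for the transform already estimated from the real 500 control points
    src = np.random.rand(500, 2) * [2000, 200]    # (col, row) pairs
    dst = src + np.random.normal(scale=2.0, size=src.shape)
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    
    # evaluate the piecewise-affine inverse once over the whole 200x2000 grid;
    # this is the expensive step, comparable in cost to a single warp() call
    coords = warp_coords(tform.inverse, (200, 2000))
    
    # every further image only costs a cheap map_coordinates call
    imgs = np.random.normal(size=[10, 200, 2000])
    out = np.array([map_coordinates(im, coords, order=1, mode='nearest')
                    for im in imgs])

    The coordinate array is computed once and then reused for the whole stack, so the per-image cost drops to the interpolation alone.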