Tags: python, opencv, optical-flow

Optical flow by Lucas-Kanade method?


The method given in the OpenCV Python tutorials has some delay in processing; it is like playing the video at 0.5x speed. Can you suggest any other method where the optical flow feature (displacement vector fields) can be obtained with negligible delay?

import cv2
import numpy as np

cap = cv2.VideoCapture("vtest.avi")

ret, frame1 = cap.read()
prvs = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# HSV image used to visualise the flow: hue = direction, value = magnitude
hsv = np.zeros_like(frame1)
hsv[..., 1] = 255

while True:
    ret, frame2 = cap.read()
    if not ret:  # stop at the end of the video instead of crashing in cvtColor
        break
    next = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

    # Dense Farneback flow: one (dx, dy) vector per pixel
    flow = cv2.calcOpticalFlowFarneback(prvs, next, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    cv2.imshow('frame2', rgb)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    elif k == ord('s'):
        cv2.imwrite('opticalfb.png', frame2)
        cv2.imwrite('opticalhsv.png', rgb)
    prvs = next

cap.release()
cv2.destroyAllWindows()

Solution

  • First, the method you are using in your code is not Lucas-Kanade. You are using the calcOpticalFlowFarneback function, which implements the Farneback method for motion estimation.

    In general, optical flow is quite a heavy algorithm, and the right choice really depends on your needs. There are mainly two types of methods - sparse and dense:

    • calcOpticalFlowFarneback is a dense algorithm: it generates a flow matrix the size of your frame, i.e. it actually calculates the flow for every pixel.
    • calcOpticalFlowPyrLK (Lucas-Kanade) is a sparse method that takes only a specified set of pixels and calculates the flow for them.

    You might want to try the Lucas-Kanade method if you need better performance.
    Take a look at this OpenCV Optical Flow Tutorial; it has examples for both Farneback and Lucas-Kanade.

    In the Lucas-Kanade example they use the goodFeaturesToTrack method, which selects a set of points that are good to track with such motion-estimation algorithms. Depending on your needs, you can use this method or define some pixels yourself.
    Note that you can of course change the number of processed points and thereby change the processing time of the algorithm; a minimal sketch of such a sparse pipeline is shown below.
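
    As a rough sketch (the parameter values such as maxCorners, winSize and maxLevel are illustrative defaults, not tuned for your video), a sparse Lucas-Kanade loop over the same capture could look like this:

import cv2
import numpy as np

cap = cv2.VideoCapture("vtest.avi")

# Illustrative parameters: fewer corners and a smaller window mean less work per frame
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

# Pick a sparse set of points worth tracking
p0 = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track only those points from the previous frame to the current one
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    if p1 is None:
        break
    good_new = p1[st == 1]
    good_old = p0[st == 1]

    # Draw the displacement vector of each successfully tracked point
    for new, old in zip(good_new, good_old):
        a, b = new.ravel()
        c, d = old.ravel()
        cv2.line(frame, (int(c), int(d)), (int(a), int(b)), (0, 255, 0), 2)
        cv2.circle(frame, (int(a), int(b)), 3, (0, 0, 255), -1)

    cv2.imshow('sparse flow', frame)
    if cv2.waitKey(30) & 0xff == 27:
        break

    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()

    Lowering maxCorners (or defining your own, smaller set of points) directly reduces the per-frame cost, which is usually what removes the "slow-motion" feeling you describe.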

    You might want to check out this answer as well; even though it is about the DualTVL1 method, it may also apply to other methods.