The Pyramidal Lucas-Kanade (LK) Optical Flow algorithm estimates the 2D translation of sparse feature points from a previous frame to the next. Image pyramids are used to improve the performance and robustness of tracking over larger translations.
Inputs are the previous image pyramid, the next image pyramid, and the feature points on the previous image.
Outputs are the feature points on the next image and the tracking status of each feature point.
(Figure: example tracking result at frame #10 of the input sequence.)
Implementation
Each feature point is defined by its (x, y) location in the image. These points are then tracked into the next image. The tracking status indicates whether each feature point is still being tracked successfully. For more information, see [1] and [2].
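The feature-point and status representation described above can be sketched with plain Python data. This is purely illustrative; the names and values are hypothetical and not the VPI types (VPI stores points and statuses in its own array objects):

```python
# Hypothetical sketch of the data the algorithm consumes and produces.
# Each feature point is an (x, y) location; values here are made up.
features = [(120.5, 64.0), (200.0, 150.25), (33.0, 90.0)]

# After tracking, each point has an updated position plus a status flag:
# 0 means "tracked successfully", nonzero means "tracking lost".
tracked = [(121.1, 63.7), (201.4, 151.0), (33.0, 90.0)]
status  = [0, 0, 1]

# Keep only the points that are still being tracked.
alive = [pt for pt, st in zip(tracked, status) if st == 0]
```

Filtering on the status flags like this mirrors the typical pattern of discarding lost points before feeding the survivors back in as input for the next frame.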
C API functions
For the list of limitations, constraints, and backends that implement the algorithm, consult the reference documentation of the following functions:
Create the Pyramidal Optical Flow LK object, feeding it the initial frame and the VPI array with the keypoints to track. The CUDA backend will be used to execute the algorithm.
Fetch a new frame from input video sequence into a VPI image.
while inVideo.read(input)[0]:
Feed this VPI image into the OptFlow object. It returns the estimated keypoint positions in the given frame, along with a status vector that indicates whether each keypoint is still being tracked.
curFeatures, status = optflow(input)
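Under the hood, each tracking step solves a small least-squares problem per feature point. A minimal, pure-Python sketch of one single-level Lucas-Kanade step for one point is shown below; this is not the VPI implementation (which adds pyramids, iterations, and backend acceleration), just an illustration of the math the loop above delegates to the library:

```python
def lk_step(prev, nxt, x, y, half=2):
    """Estimate the (dx, dy) translation of the window centered at (x, y)
    from image `prev` to image `nxt` (2D lists/arrays of floats).
    Returns None when the gradient matrix is singular (untrackable point)."""
    gxx = gxy = gyy = bx = by = 0.0
    for j in range(y - half, y + half + 1):
        for i in range(x - half, x + half + 1):
            ix = (prev[j][i + 1] - prev[j][i - 1]) / 2.0  # spatial gradient, x
            iy = (prev[j + 1][i] - prev[j - 1][i]) / 2.0  # spatial gradient, y
            it = nxt[j][i] - prev[j][i]                   # temporal difference
            # Accumulate the 2x2 gradient matrix G and the mismatch vector b.
            gxx += ix * ix; gxy += ix * iy; gyy += iy * iy
            bx  -= ix * it; by  -= iy * it
    det = gxx * gyy - gxy * gxy
    if det == 0.0:
        return None            # no unique solution: report tracking failure
    # Solve G * d = b for the displacement d = (dx, dy).
    return ((gyy * bx - gxy * by) / det, (gxx * by - gxy * bx) / det)
```

On a synthetic image pair where the second image is the first shifted one pixel to the right, this step recovers a displacement close to (1, 0); a singular gradient matrix (e.g., a textureless window) is exactly the situation the tracking-status output reports as a lost point.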
Initialization phase
Include the header that defines the needed functions and structures.
Start of the processing loop, beginning at the second frame. The previous frame is where the algorithm fetches the feature points from; the current frame is where their new positions are estimated.
for (int idframe = 1; idframe < frame_count; ++idframe)
{
Fetch new frame from the input video.
curImage = /* new frame from video sequence */;
Generate image pyramid for the current image using the CUDA backend.
Submit the algorithm to be executed by the CUDA backend. It goes through all input feature points and finds their estimated positions and tracking status in the next image. The user decides whether to keep using the tracked feature points or to generate a new set. In this example, the tracked feature points are reused as input for the next frame.
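The pyramids generated above enable the coarse-to-fine strategy that lets the algorithm handle large translations: a displacement estimated at a coarse level is doubled when moving one level down, where it seeds a small refinement. The bookkeeping can be sketched as follows; `refine_at_level` is a hypothetical per-level solver stand-in, not a VPI function:

```python
def pyramidal_flow(num_levels, refine_at_level):
    """Coarse-to-fine displacement accumulation over a pyramid.

    `refine_at_level(level, dx, dy)` is a hypothetical callback that
    returns the small residual motion found at `level`, given the
    current estimate (dx, dy) as its starting guess."""
    dx = dy = 0.0
    for level in range(num_levels - 1, -1, -1):   # coarsest level first
        dx, dy = 2.0 * dx, 2.0 * dy               # upscale previous estimate
        rx, ry = refine_at_level(level, dx, dy)   # refine at this resolution
        dx, dy = dx + rx, dy + ry
    return dx, dy
```

Because each level only needs to resolve a residual of a pixel or so, a motion of many pixels at full resolution stays within reach of the small per-window solver, which is why the pyramid improves robustness over larger translations.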
[1] B. D. Lucas and T. Kanade (1981), "An iterative image registration technique with an application to stereo vision." Proceedings of the Imaging Understanding Workshop, pages 121–130.
[2] J.-Y. Bouguet (2000), "Pyramidal implementation of the affine Lucas-Kanade feature tracker: description of the algorithm." Intel Corporation, Microprocessor Research Labs.