Lucas–Kanade method

In computer vision, the Lucas–Kanade method (pronounced /lukɑs kɑnɑdɪ/) is a widely used differential method for optical flow estimation developed by Bruce D. Lucas and Takeo Kanade. It assumes that the flow is essentially constant in a local neighbourhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighbourhood, by the least squares criterion.[1][2]

By combining information from several nearby pixels, the Lucas–Kanade method can often resolve the inherent ambiguity of the optical flow equation. It is also less sensitive to image noise than point-wise methods. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.

Concept

The Lucas–Kanade method assumes that the displacement of the image contents between two nearby instants (frames) is small and approximately constant within a neighbourhood of the point p under consideration. Thus the optical flow equation can be assumed to hold for all pixels within a window centered at p. Namely, the local image flow (velocity) vector (V_x,V_y) must satisfy

I_x(q_1) V_x + I_y (q_1) V_y = -I_t(q_1)
I_x(q_2) V_x + I_y (q_2) V_y = -I_t(q_2)
\vdots
I_x(q_n) V_x + I_y (q_n) V_y = -I_t(q_n)

where q_1,q_2,\dots,q_n are the pixels inside the window, and I_x(q_i),I_y(q_i),I_t(q_i) are the partial derivatives of the image I with respect to position x, y and time t, evaluated at the point q_i and at the current time.

These equations can be written in matrix form A v = b, where

A = \begin{bmatrix}
I_x(q_1) & I_y(q_1) \\[10pt]
I_x(q_2) & I_y(q_2) \\[10pt]
\vdots  & \vdots  \\[10pt]
I_x(q_n) & I_y(q_n) 
\end{bmatrix},
\quad\quad
v = 
\begin{bmatrix}
V_x\\[10pt]
V_y
\end{bmatrix},
\quad \mbox{and}\quad
b = 
\begin{bmatrix}
-I_t(q_1) \\[10pt]
-I_t(q_2) \\[10pt]
\vdots  \\[10pt]
-I_t(q_n)
\end{bmatrix}

This system has more equations than unknowns and thus it is usually over-determined. The Lucas–Kanade method obtains a compromise solution by the least squares principle. Namely, it solves the 2×2 system

A^T A v = A^T b

or

v = (A^T A)^{-1} A^T b

where A^T is the transpose of matrix A. That is, it computes

\begin{bmatrix}
V_x\\[10pt]
V_y
\end{bmatrix} 
=
\begin{bmatrix}
\sum_i I_x(q_i)^2      & \sum_i I_x(q_i)I_y(q_i) \\[10pt]
\sum_i I_y(q_i)I_x(q_i) & \sum_i I_y(q_i)^2 
\end{bmatrix}^{-1}
\begin{bmatrix}
-\sum_i I_x(q_i)I_t(q_i) \\[10pt]
-\sum_i I_y(q_i)I_t(q_i)
\end{bmatrix}

where the 2×2 matrix on the right-hand side is inverted, and the sums run from i = 1 to n.
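Equivalently, the computed v is the vector that minimizes the sum of squared residuals of the optical flow equations over the window,

E(V_x,V_y) = \sum_{i=1}^{n} \left( I_x(q_i) V_x + I_y(q_i) V_y + I_t(q_i) \right)^2,

and the 2×2 system above is exactly the normal equations of this least squares problem.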

The matrix A^T A is often called the structure tensor of the image at the point p.
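As an illustration, the computation above can be carried out directly from two grayscale frames. The following is a minimal NumPy sketch; the function name, the window half-width r, and the simple finite-difference derivatives are assumptions made for the example, not prescribed by the method itself.

import numpy as np

def lucas_kanade_at_point(I0, I1, p, r=7):
    # Estimate the flow vector (V_x, V_y) at integer pixel p = (x, y) from two
    # grayscale float images I0 (time t) and I1 (time t+1), using a square
    # (2r+1) x (2r+1) window centered at p.
    x, y = p
    # Spatial derivatives by central differences, temporal derivative by a
    # forward difference (one common discretization among several).
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2.0
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2.0
    It = I1 - I0
    win = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    # Normal equations (A^T A) v = A^T b, with rows [I_x(q_i), I_y(q_i)] and b = -I_t(q_i).
    ATA = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                    [np.sum(ix * iy), np.sum(iy * iy)]])
    ATb = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(ATA, ATb)  # fails if A^T A is singular; see "Use conditions"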

Weighted window

The plain least squares solution above gives the same importance to all n pixels q_i in the window. In practice it is usually better to give more weight to the pixels that are closer to the central pixel p. For that, one uses the weighted version of the least squares equation,

A^T W A v=A^T W b

or

v = (A^T W A)^{-1} A^T W b

where W is an n×n diagonal matrix containing the weights W_{i i} = w_i to be assigned to the equation of pixel q_i. That is, it computes

\begin{bmatrix}
V_x\\[10pt]
V_y
\end{bmatrix} 
=
\begin{bmatrix}
\sum_i w_i I_x(q_i)^2      & \sum_i w_i I_x(q_i)I_y(q_i) \\[10pt]
\sum_i w_i I_x(q_i)I_y(q_i) & \sum_i w_i I_y(q_i)^2      
\end{bmatrix}^{-1}
\begin{bmatrix}
-\sum_i w_i I_x(q_i)I_t(q_i) \\[10pt]
-\sum_i w_i I_y(q_i)I_t(q_i)
\end{bmatrix}

The weight w_i is usually set to a Gaussian function of the distance between q_i and p.
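Continuing the sketch from the Concept section, the weighted sums can be formed as follows; the Gaussian parameter sigma and the helper names are illustrative assumptions.

import numpy as np

def gaussian_weights(r, sigma=2.0):
    # One weight per pixel of the (2r+1) x (2r+1) window, as a Gaussian of the
    # distance to the window center.
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)).ravel()

def weighted_lk(ix, iy, it, w):
    # ix, iy, it: flattened derivative windows as in the earlier sketch.
    # Solves (A^T W A) v = A^T W b, written out as weighted sums of derivative products.
    ATWA = np.array([[np.sum(w * ix * ix), np.sum(w * ix * iy)],
                     [np.sum(w * ix * iy), np.sum(w * iy * iy)]])
    ATWb = -np.array([np.sum(w * ix * it), np.sum(w * iy * it)])
    return np.linalg.solve(ATWA, ATWb)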

Use conditions and techniques

For the equation A^T A v = A^T b to be solvable, A^T A must be invertible; equivalently, its eigenvalues must satisfy \lambda_1 \ge \lambda_2 > 0. To be robust to noise, \lambda_2 should additionally not be too small. If \lambda_1/\lambda_2 is too large, the point p lies on an edge, and the method suffers from the aperture problem. The method therefore works properly only where \lambda_1 and \lambda_2 are both sufficiently large and of similar magnitude; this is exactly the condition used in corner detection. This observation means that one can tell which pixels are suitable for the Lucas–Kanade method simply by inspecting a single image.
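A small sketch of this suitability test, assuming the 2×2 matrix A^T A has already been accumulated (for example as in the sketch above); the thresholds tau and max_ratio are illustrative choices, not canonical values.

import numpy as np

def is_trackable(ATA, tau=1e-2, max_ratio=50.0):
    # eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
    lam2, lam1 = np.linalg.eigvalsh(ATA)              # lam1 >= lam2 >= 0
    return lam2 > tau and lam1 / max(lam2, 1e-12) < max_ratio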

One main assumption of this method is that the motion is small (for example, less than one pixel between the two images). If the motion is large and violates this assumption, one technique is to first reduce the resolution of the images and then apply the Lucas–Kanade method.[3]
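A hedged sketch of this resolution-reduction idea, reusing lucas_kanade_at_point from the Concept section: estimate the flow on downsampled images, where the displacement is proportionally smaller, then scale the estimate back up. Plain 2×2 block averaging and a fixed number of levels are simplifying assumptions; practical implementations smooth before subsampling and refine the estimate level by level.

import numpy as np

def downsample(I):
    # 2x2 block averaging; assumes even image dimensions.
    return 0.25 * (I[0::2, 0::2] + I[1::2, 0::2] + I[0::2, 1::2] + I[1::2, 1::2])

def coarse_lk(I0, I1, p, levels=2, r=7):
    scale = 2 ** levels
    I0s, I1s = I0, I1
    for _ in range(levels):
        I0s, I1s = downsample(I0s), downsample(I1s)
    ps = (p[0] // scale, p[1] // scale)             # point p at the coarse scale
    v = lucas_kanade_at_point(I0s, I1s, ps, r)      # sketch from the Concept section
    return scale * v                                # flow in full-resolution pixels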

Improvements and extensions

The least-squares approach implicitly assumes that the errors in the image data have a Gaussian distribution with zero mean. If one expects the window to contain a certain percentage of "outliers" (grossly wrong data values, that do not follow the "ordinary" Gaussian error distribution), one may use statistical analysis to detect them, and reduce their weight accordingly.

The Lucas–Kanade method per se can be used only when the image flow vector V_x,V_y between the two frames is small enough for the differential equation of the optical flow to hold, which is often less than the pixel spacing. When the flow vector may exceed this limit, such as in stereo matching or warped document registration, the Lucas–Kanade method may still be used to refine a coarse estimate obtained by other means; for example, by extrapolating the flow vectors computed for previous frames, or by running the Lucas–Kanade algorithm on reduced-scale versions of the images. Indeed, the latter approach is the basis of the popular Kanade–Lucas–Tomasi (KLT) feature matching algorithm.
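In practice, a pyramidal (coarse-to-fine) Lucas–Kanade tracker is available in OpenCV as cv2.calcOpticalFlowPyrLK; a brief usage sketch follows, with illustrative file names and parameter values.

import cv2

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # illustrative file names
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Select corner-like points to track (the eigenvalue criterion discussed above).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Track them with the pyramidal Lucas-Kanade method.
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, pts, None, winSize=(15, 15), maxLevel=3)

flow = (new_pts - pts)[status.ravel() == 1]                  # per-point displacements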

A similar technique can be used to compute differential affine deformations of the image contents.

