VPI provides functions that, together with the Remap algorithm, perform image rectification. The input image can have some level of distortion caused by the camera lens. The end result is an undistorted image that can optionally be reprojected into a second camera to allow, for instance, realignment of the input camera's optical axis. This makes it an important stage in certain computer stereo vision applications, such as depth estimation, where the two cameras must have their optical axes level and parallel.
The following types of distortion models are included:
Polynomial distortion - encompasses a broad set of common lens distortions, such as barrel, pincushion, a mix of these, etc.
Fisheye distortion - commonly found in fisheye lenses, can be seen as an exaggerated form of barrel distortion.
For other distortion models, users can always resort to creating their own output-to-input mapping, as shown here.
The following functions generate the correction mapping:
vpiWarpMapGenerateFromPolynomialLensDistortionModel: Generates a mapping that corrects an image using the polynomial lens distortion model.
vpiWarpMapGenerateFromFisheyeLensDistortionModel: Generates a mapping that corrects image distortions caused by fisheye lenses.
Implementation
The Lens Distortion Correction algorithm is implemented by warping the distorted input image into a rectified, undistorted output image. It does so by performing the inverse transformation; i.e., for every pixel \((u,v)\) in the destination image, it calculates the corresponding coordinate \((\check{u},\check{v})\) in the input image, as follows:
For each pixel \((u,v)\) in the destination image, calculate its corresponding 3D point \(\mathsf{P_{out}}\) in output camera space using the output camera's intrinsics matrix \(\mathsf{K_{out}}\).
\[ \mathsf{P_{out}} = \mathsf{K_{out}}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \]
Transform the 3D point \(\mathsf{P_{out}}\) from output camera space to input camera space using the \([\mathsf{R}|\mathsf{t}]^{-1}\) matrix.
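For reference, since \([\mathsf{R}|\mathsf{t}]\) maps points from input to output camera space (see below), this inverse transform can be written explicitly, assuming \(\mathsf{R}\) is a pure rotation so that \(\mathsf{R}^{-1} = \mathsf{R}^T\), as:
\[ \mathsf{P_{in}} = \mathsf{R}^{T}\left(\mathsf{P_{out}} - \mathsf{t}\right) \]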
Project \(\mathsf{P_{in}}\) onto the image plane to obtain the ideal (non-distorted) point \((\tilde{x},\tilde{y})\), in focal-length units, and apply the lens distortion model \(L\) to it, resulting in the distorted point \((x_d,y_d)\). \(s\) is just a scale factor.
\begin{align*} s \begin{bmatrix} \tilde{x} \\ \tilde{y} \\ 1 \end{bmatrix} &= \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \mathsf{P_{in}} \\ (x_d,y_d) &= L(\tilde{x}, \tilde{y}) \end{align*}
Project the distorted point \((x_d,y_d)\) onto the input image space using the input camera's intrinsics matrix \(\mathsf{K_{in}}\), resulting in coordinate \((\check{u},\check{v})\). Again, \(s\) is just another scale factor.
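Written out with the definitions above, this step can be expressed as:
\[ s \begin{bmatrix} \check{u} \\ \check{v} \\ 1 \end{bmatrix} = \mathsf{K_{in}} \begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix} \]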
The equations above assume a Pinhole Camera Model. In the diagram shown in the link, the input camera is assumed to be aligned with the world coordinate frame, with origin at \(O = (0,0,0)\) and optical axis collinear with the world's \(Z_w\) axis. The output camera's origin is located at \(F_c\) and its optical axis along \(Z_c\). Taken together, this makes the matrix \([R|t]\) transform points from the input's camera space into the output's.
Lens Distortion Models
The equations above assume that projection is a linear operation. In reality, this is hardly the case. Lens distortion makes straight lines in the real world appear bent in the captured image. In order to take this into account, the distortion model is applied to the ideal, distortion-free coordinates in input camera space corresponding to the output image pixel coordinate being rendered. The resulting coordinates are the actual projected position on the input image of the rendered pixel in the output image.
VPI comes with functions that handle both polynomial and fisheye distortion models. These models are characterized by distortion coefficients and, in the case of fisheye lenses, the mapping type. The coefficients are unique for each lens and can either be supplied by the manufacturer or estimated by a lens calibration process.
Polynomial Distortion Model
The polynomial distortion model, also known as the Brown-Conrady model, can represent a broad range of lens distortions, such as barrel, pincushion, mustache, etc.
Tangential distortion is defined by parameters \(p_1\) and \(p_2\) and is due to imperfect centering of the lens components and other manufacturing defects.
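For reference, a common form of this model, matching OpenCV's rational polynomial model, maps the ideal point \((\tilde{x},\tilde{y})\) to the distorted point \((x_d,y_d)\) as shown below; the exact set of radial coefficients \(k_i\) accepted by VPI is an assumption here, so consult the API reference for the definitive parameter list.
\begin{align*} r^2 &= \tilde{x}^2 + \tilde{y}^2 \\ x_d &= \tilde{x}\,\frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 \tilde{x}\tilde{y} + p_2 (r^2 + 2\tilde{x}^2) \\ y_d &= \tilde{y}\,\frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2\tilde{y}^2) + 2 p_2 \tilde{x}\tilde{y} \end{align*}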
Fisheye Distortion Model
The fisheye distortion model is defined by a mapping function \(M_f(\theta)\) that depends on the fisheye lens type, and by coefficients \(k_1,k_2,k_3\) and \(k_4\).
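A common form of this relationship, based on the widely used fisheye model popularized by OpenCV, is sketched below; treating it as the exact formulation VPI uses is an assumption, so consult the API reference.
\begin{align*} \theta_d &= \theta \left(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8\right) \\ r_d &= M_f(\theta_d) \end{align*}
where: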
\(\theta\) is the incident light angle with respect to camera's optical axis.
\(\theta_d\) is the distorted incident light angle, usually due to lens manufacturing defects.
\(r_d\) is the distance from the principal point where the incident light is recorded on the image.
Fisheye lenses can be classified depending on the relationship between the angle of incident light and where it is recorded on the image, established by the mapping function \(M_f(\theta)\).
Note
In these formulas \(f=1\) as this is the focal length related to the projected \((\tilde{x},\tilde{y})\) coordinates.
VPI supports the following mapping functions, each one with some desirable characteristics:
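The standard fisheye mapping functions are listed below for reference; treating these as exactly the set VPI supports is an assumption here, so consult the API reference for the definitive list and the corresponding enum values.
Equidistant: \(r = f\,\theta\)
Equisolid: \(r = 2f \sin(\theta/2)\)
Orthographic: \(r = f \sin(\theta)\)
Stereographic: \(r = 2f \tan(\theta/2)\)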
Usage
In the Python snippets below, input is assumed to be a vpi.Image holding the distorted frame.
Create a dense warp map for warping the distorted image into the corrected output.
import vpi
grid = vpi.WarpGrid(input.size)
Define the intrinsic and extrinsic camera parameters. The input image was recorded by an APS-C sensor and the lens has a focal length of 7.5 mm. The principal point is at the image center. Finally, since this is a monocular setup, the extrinsic parameters are the identity, meaning that the input and output cameras are at the same position with their optical axes aligned.
import numpy as np

sensorWidth = 22.2    # APS-C sensor width, in mm
focalLength = 7.5     # lens focal length, in mm
f = focalLength * input.width / sensorWidth   # focal length in pixels

# Intrinsic matrix, principal point at the image center
K = [[f, 0, input.width/2 ],
     [0, f, input.height/2]]

# Extrinsic matrix: identity, input and output cameras coincide
X = np.eye(3,4)
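For instance, for a hypothetical 1920-pixel-wide input, this gives \(f \approx 7.5 \times 1920 / 22.2 \approx 649\) pixels.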
Create the undistortion warp map from the camera parameters and fisheye lens distortion model.
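A sketch of this step is shown below. The entry point vpi.WarpMap.fisheye_correction, its keyword arguments, and the coefficient values are assumptions made for illustration; consult the VPI Python API reference for the exact signature.
coeffs = [-0.126, 0.004, 0.0, 0.0]   # hypothetical k1..k4 obtained from a prior calibration
undist_map = vpi.WarpMap.fisheye_correction(grid, K=K, X=X, coeffs=coeffs,
                                             mapping=vpi.FisheyeMapping.EQUIDISTANT)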
Execute the remap operation on the input image to undistort it. We're using a cubic interpolator for maximum quality, and mapped pixels that fall outside source image boundaries are considered black.
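A matching sketch of the remap call follows; the method name input.remap and the vpi.Interp.CATMULL_ROM and vpi.Border.ZERO enums are assumptions to be checked against the VPI Python API reference.
with vpi.Backend.CUDA:
    # Cubic (Catmull-Rom) interpolation; out-of-bounds pixels map to black
    output = input.remap(undist_map, interp=vpi.Interp.CATMULL_ROM, border=vpi.Border.ZERO)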
When using the C API, the flow is similar:
Define the intrinsic and extrinsic camera parameters as above.
Create the undistortion warp map from the camera parameters and the fisheye lens distortion model, using vpiWarpMapGenerateFromFisheyeLensDistortionModel.
Create a payload for the Remap algorithm that will perform the correction. The payload is created on the CUDA backend, which will eventually execute the algorithm.
Submit the algorithm to the stream along with all parameters. A cubic interpolator is used for maximum quality, and mapped pixels that fall outside the source image boundaries are considered black.
When finished, deallocate the warp map control points allocated by vpiWarpMapAllocData, using vpiWarpMapFreeData.
For a complete example, consult the sample application Fisheye Distortion Correction. It implements the whole process of rectifying images captured by a fisheye lens, including the calibration process.
For more information, see Lens Distortion Correction in the "C API Reference" section of VPI - Vision Programming Interface.
Performance
The main loop of Lens Distortion Correction uses Remap; its performance is therefore dominated by that algorithm. Refer to Remap's performance tables.