DriveWorks SDK Reference
4.0.0 Release
For Test and Development only

Disparity Computation Workflow

This section explains how to use NVIDIA® DriveWorks Stereo Rectifier and Stereo to rectify two images of a stereo pair and compute a disparity map from them. The stereo disparity algorithm is based on a pair of Gaussian pyramids represented by two dwPyramidImage objects that are built from a pair of rectified dwImageCUDA images.

Step 1: Obtain the intrinsic and extrinsic camera parameters for the left and right camera

Before initializing the stereo rectifier, you must obtain the intrinsic and extrinsic camera parameters for the left and right cameras.

The camera intrinsics encompass data such as focal length and principal point. The intrinsic parameters of each camera are represented by a dwCameraModelHandle_t object. They can be obtained from the rig configuration as follows:

// Initialize left and right calibrated cameras containing the camera intrinsic parameters
dwCameraModelHandle_t cameraModelLeft, cameraModelRight;
dwCameraModel_initialize(&cameraModelLeft, leftCamera_ID, rigConfigurationHandle);
dwCameraModel_initialize(&cameraModelRight, rightCamera_ID, rigConfigurationHandle);

where rigConfigurationHandle is a previously initialized dwRigHandle_t (for more information on rig configuration, see Rig Configuration), and leftCamera_ID and rightCamera_ID are the known camera IDs in the rig file.
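
If these objects are not yet available, one possible way to obtain them is sketched below. The rig file name and the sensor names are placeholders that must match your rig configuration, the name-based lookup is only one option (the IDs could equally be known constants), and contextHandle is a previously initialized dwContextHandle_t (a sketch of its creation appears in Step 2).

// Sketch: load the rig file and look up the two camera sensor IDs by name.
// "rig.json", "camera:left" and "camera:right" are placeholders for your setup.
dwRigHandle_t rigConfigurationHandle = DW_NULL_HANDLE;
dwRig_initializeFromFile(&rigConfigurationHandle, contextHandle, "rig.json");

uint32_t leftCamera_ID  = 0;
uint32_t rightCamera_ID = 0;
dwRig_findSensorByName(&leftCamera_ID, "camera:left", rigConfigurationHandle);
dwRig_findSensorByName(&rightCamera_ID, "camera:right", rigConfigurationHandle);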

The camera extrinsics encompass the misalignment of the camera with respect to the rig in terms of rotation and translation. The extrinsic parameters of each camera are represented by a dwTransformation3f object. They can be obtained from the rig configuration as follows:

// Get left and right camera extrinsic parameters from rig configuration
dwTransformation3f left2Rig, right2Rig;
dwRig_getSensorToRigTransformation(&left2Rig, leftCamera_ID, rigConfigurationHandle);
dwRig_getSensorToRigTransformation(&right2Rig, rightCamera_ID, rigConfigurationHandle);

Step 2: Initialize the stereo rectifier and the stereo handle

Once intrinsic and extrinsic parameters of both cameras are available, you can initialize the stereo rectifier handle dwStereoRectifierHandle_t as follows:

// initialize stereo rectifier with intrinsic and extrinsic camera parameters
dwStereoRectifierHandle_t stereoRectifier;
dwStereoRectifier_initialize(&stereoRectifier, cameraModelLeft, cameraModelRight, left2Rig, right2Rig, contextHandle);

where contextHandle is a previously initialized dwContextHandle_t.
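
If no context exists yet, a minimal sketch of creating one is shown below; leaving dwContextParameters at its defaults is an assumption of this example.

// Sketch: create the DriveWorks context used throughout this workflow
dwContextHandle_t contextHandle = DW_NULL_HANDLE;
dwContextParameters sdkParams{};
dwInitialize(&contextHandle, DW_VERSION, &sdkParams);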

To initialize a dwStereoHandle_t object, only a dwStereoParams structure is required. The following snippet initializes the stereo parameters with default values:

dwStereoParams stereoParams{};
dwStereo_initParams(&stereoParams);

And the following modifies the parameters according to application requirements:

// initialize some stereo parameters
stereoParams.side = side;
stereoParams.levelCount = levelCount;
stereoParams.levelStop = levelStop;

For more information regarding all available parameters, please refer to the dwStereoParams struct.

Note
The effects due to different settings of the following dwStereoParams parameters might be of special interest to the user (see the sketch after this list):
  • levelCount: A larger value leads the algorithm to work at higher pyramid levels, which implies a larger context size but possibly more matching ambiguity due to loss of detail.
  • maxDisparityRange: Smaller values can speed up the algorithm.
  • refinementLevel: A high refinement level will lead to a smoother disparity map.
  • initType: In general, it is advisable to always try several cost types, as their performance is dependent on the image content.
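
As an illustration of these trade-offs, the hedged sketch below biases the configuration toward runtime rather than quality; the numeric values are placeholders, not recommendations.

// Illustration only: placeholder values, not recommended settings
stereoParams.maxDisparityRange = 128;  // smaller search range -> faster matching
stereoParams.refinementLevel   = 2;    // higher values -> smoother disparity map
// stereoParams.initType selects the matching cost type; trying several cost types
// on representative image content is advisable, as noted above.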

After initializing the stereo parameters, initialize the module as follows:

// initialize stereo module
dwStereoHandle_t stereoAlgorithm;
dwStereo_initialize(&stereoAlgorithm, imageWidth, imageHeight, &stereoParams, contextHandle);

where contextHandle is assumed to be a previously initialized dwContextHandle_t.
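
All DriveWorks calls in this workflow return a dwStatus; in production code you would typically check it, for example with a small hypothetical helper such as the one below (not part of the SDK; assumes <cstdio> and <cstdlib> are included).

// Hypothetical helper: abort with a readable message when a DriveWorks call fails
#define CHECK_DW(call)                                                           \
    do {                                                                         \
        dwStatus status_ = (call);                                               \
        if (status_ != DW_SUCCESS) {                                             \
            fprintf(stderr, "%s failed: %s\n", #call, dwGetStatusName(status_)); \
            abort();                                                             \
        }                                                                        \
    } while (0)

// Example usage:
CHECK_DW(dwStereo_initialize(&stereoAlgorithm, imageWidth, imageHeight, &stereoParams, contextHandle));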

Step 3: Retrieve the rectangular ROI shared by both images

The rectification process applies a geometric transformation that projects the two images onto a common image plane. This plane contains a rectangular region that is shared by both images after rectification, graphically represented by the magenta boxes in the module diagram. It is represented by a dwBox2D object, which can be retrieved as follows:

// Get resolution of output rectified images
dwBox2D roi;
dwStereoRectifier_getCropROI(&roi, stereoRectifier);

Step 4: Create output images for the rectification process

Use the region roi when creating the output images that will be passed to the rectification process, as shown in the following snippet:

// Define image properties
dwImageProperties propsInput = imageProperties;
propsInput.type = DW_IMAGE_CUDA;
// Set width and height according to roi
propsInput.width = roi.width;
propsInput.height = roi.height;
// Create output images
dwImageHandle_t outputRectifiedLeft, outputRectifiedRight;
dwImage_create(&outputRectifiedLeft, propsInput, contextHandle);
dwImage_create(&outputRectifiedRight, propsInput, contextHandle);

where imageProperties are known input image properties. Furthermore, you could use roi when creating image streamers for visualization purposes.
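
For example, a streamer using the same roi-sized properties could be created as sketched below; the CPU target is only an assumption of this example (for on-screen rendering a GL target would typically be used instead).

// Sketch: create an image streamer with the roi-sized properties,
// here streaming the CUDA output to CPU memory (e.g. for inspection or encoding)
dwImageStreamerHandle_t streamerCUDA2CPU = DW_NULL_HANDLE;
dwImageStreamer_initialize(&streamerCUDA2CPU, &propsInput, DW_IMAGE_CPU, contextHandle);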

Step 5: At runtime, call the stereo rectification process

Execute the rectification process by calling dwStereoRectifier_rectify(). In the following snippet, stereoImageLeft and stereoImageRight must have been obtained from two input videos or a single stereo camera.

// CODE: Get left and right frames, stereoImageLeft and stereoImageRight, as dwImageCUDA from two input videos or from a stereo camera
// Rectify pair of images
dwStereoRectifier_rectify(outputRectifiedLeft, outputRectifiedRight, stereoImageLeft, stereoImageRight, stereoRectifier);

where outputRectifiedLeft and outputRectifiedRight now contain the output rectified images.
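
Depending on the SDK version, dwStereoRectifier_rectify() may expect dwImageCUDA pointers rather than dwImageHandle_t objects. In that case, the underlying dwImageCUDA of each output image created in Step 4 can be retrieved first; a possible sketch:

// Sketch: obtain the dwImageCUDA views backing the dwImageHandle_t outputs from Step 4
dwImageCUDA* outputRectifiedLeftCUDA  = nullptr;
dwImageCUDA* outputRectifiedRightCUDA = nullptr;
dwImage_getCUDA(&outputRectifiedLeftCUDA, outputRectifiedLeft);
dwImage_getCUDA(&outputRectifiedRightCUDA, outputRectifiedRight);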

Step 6: At runtime, fill the Gaussian pyramids with the rectified frames

At runtime, fill the left and right Gaussian pyramids with the rectified frames as follows:

dwImageFilter_computePyramid(&leftPyramid, &outputRectifiedLeft, 0, contextHandle);
dwImageFilter_computePyramid(&rightPyramid, &outputRectifiedRight, 0, contextHandle);

where leftPyramid and rightPyramid are dwPyramidImage objects that have already been initialized.
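
One possible initialization of these pyramids is sketched below; it assumes dwPyramid_create() with 8-bit pixels and the roi-sized rectified images, and the exact signature should be checked against your SDK version.

// Sketch: create both pyramids once, sized to the rectified images and using the
// level count configured in stereoParams (8-bit greyscale pixels assumed)
dwPyramidImage leftPyramid{}, rightPyramid{};
dwPyramid_create(&leftPyramid, stereoParams.levelCount, roi.width, roi.height, DW_TYPE_UINT8, contextHandle);
dwPyramid_create(&rightPyramid, stereoParams.levelCount, roi.width, roi.height, DW_TYPE_UINT8, contextHandle);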

Note
The Stereo algorithm works with greyscale images only. Therefore, outputRectifiedLeft and outputRectifiedRight must have only one plane. Otherwise, additional steps may be required before building the pyramids. For example, in the case of RGBA images, you would first have to convert to YUV and then extract the Y-plane (by calling dwImageCUDA_getPlaneAsImage()) before building the pyramids.
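
As a hedged example of that RGBA case for the left image (the right image is handled identically), the format constant and conversion calls below are assumptions to be checked against your SDK version:

// Sketch (assumes RGBA rectified outputs): convert to planar YUV420, then take the Y plane
dwImageProperties propsYuv = propsInput;
propsYuv.format = DW_IMAGE_FORMAT_YUV420_UINT8_PLANAR;  // assumed planar 8-bit format

dwImageHandle_t leftYuv = DW_NULL_HANDLE;
dwImage_create(&leftYuv, propsYuv, contextHandle);
dwImage_copyConvert(leftYuv, outputRectifiedLeft, contextHandle);

dwImageCUDA* leftYuvCUDA = nullptr;
dwImage_getCUDA(&leftYuvCUDA, leftYuv);

dwImageCUDA leftY{};
dwImageCUDA_getPlaneAsImage(&leftY, leftYuvCUDA, 0);  // plane 0 is the Y (luma) plane
// leftY (single plane, greyscale) can now be used to build the left pyramid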

Step 7: Compute the disparity map

Call dwStereo_computeDisparity() to compute the disparity map based on the parameters set in dwStereoParams.

// Compute disparity given left and right pyramid
dwStereo_computeDisparity(&leftPyramid, &rightPyramid, stereoAlgorithm);

dwStereo_getDisparity() can be called to retrieve the disparity map of the desired side.

// get disparity maps
const dwImageCUDA *disparityLeft, *disparityRight;
dwStereo_getDisparity(&disparityLeft, DW_STEREO_SIDE_LEFT, stereoAlgorithm);
dwStereo_getDisparity(&disparityRight, DW_STEREO_SIDE_RIGHT, stereoAlgorithm);

Step 8 (Optional): Get an estimation of the confidence associated with each map

In cases where further processing is performed, it can be useful to have an estimate of the confidence associated with each disparity map.

To retrieve the confidence map of the corresponding disparity map, call dwStereo_getConfidence() as follows:

// get confidence maps
const dwImageCUDA *confidenceLeft, *confidenceRight;
dwStereo_getConfidence(&confidenceLeft, DW_STEREO_SIDE_LEFT, stereoAlgorithm);
dwStereo_getConfidence(&confidenceRight, DW_STEREO_SIDE_RIGHT, stereoAlgorithm);

Confidence is defined as an 8-bit value, where:

  • 0 represents occlusion
  • 1 represents an invalid disparity
  • Other values represent a confidence score where 255 is maximum and 2 is minimum

An occlusion is a region of the image that is visible in one stereo image but not in the other. An invalid disparity is found in regions containing no information, and its value has no confidence.

To obtain a real occlusion in the confidence map, both disparity maps must be computed. If only one is computed, then the occlusion can be roughly found among low confidence pixels in the confidence map.
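
As a hedged illustration of how these values might be consumed, the sketch below copies the first row of the left confidence map to the host and counts the three categories. It assumes a single-plane, 8-bit dwImageCUDA (verify confidenceLeft->prop first) and that the CUDA runtime and standard headers (<vector>, <cstdint>) are available.

// Sketch: classify the first row of the confidence map on the host
// Assumes a single-plane 8-bit image; check confidenceLeft->prop before relying on this
std::vector<uint8_t> row(confidenceLeft->prop.width);
cudaMemcpy(row.data(), confidenceLeft->dptr[0], row.size(), cudaMemcpyDeviceToHost);

uint32_t occluded = 0, invalid = 0, scored = 0;
for (uint8_t c : row) {
    if (c == 0)      ++occluded; // visible in only one of the two images
    else if (c == 1) ++invalid;  // region with no usable information
    else             ++scored;   // confidence score in [2, 255], 255 = maximum
}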

Extracting a disparity map with a resolution lower than that of the input images can greatly improve the runtime and smoothness of the result. However, this improvement comes at the expense of edge precision and overall estimation accuracy.

Step 9: Release the stereo rectifier and stereo algorithm

Finally, release the stereo rectifier and stereo algorithm as follows:

// Release stereo rectifier
dwStereoRectifier_release(stereoRectifier);
// Release stereo module
dwStereo_release(stereoAlgorithm);
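
The other objects created in this workflow would typically be released as well. A sketch is given below; it assumes the handle-by-value release/destroy variants of this SDK version.

// Release the remaining objects created in this workflow
dwImage_destroy(outputRectifiedLeft);
dwImage_destroy(outputRectifiedRight);
dwPyramid_destroy(leftPyramid);
dwPyramid_destroy(rightPyramid);
dwCameraModel_release(cameraModelLeft);
dwCameraModel_release(cameraModelRight);
dwRig_release(rigConfigurationHandle);
dwRelease(contextHandle);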

For more details, see the Stereo Disparity Sample, which shows how to perform image rectification and disparity estimation on a video pair. In the sample, horizontal lines are rendered to show that the pixels in both rectified images lie on the same horizontal line, and the resulting disparity maps are visualized.