Overview
Given a pair of rectified images from a stereo camera, the Stereo Disparity algorithm uses high-quality dense stereo matching to produce an output image of the same resolution as the input with left-right disparity information. This allows for inferring the depth of the scene captured by the left and right images.
[Figure: Left image | Right image — input stereo pair]
[Figure: Disparity map — algorithm output]
Implementation
The stereo disparity estimator uses the semi-global matching (SGM) algorithm to compute the disparity. It deviates from the original algorithm by using as its cost function the Hamming distance between the census transforms of the stereo pair.
Usage
- Initialization phase
- Include the header that defines the needed functions and structures.
- Define the stream on which the algorithm will be executed, the input stereo pair, composed of two images, and the output disparity image.
- Create the payload that will contain all temporary buffers needed for processing.
- Processing phase
- Define the configuration parameters needed for algorithm execution.
- Submit the payload for execution on the stream associated with it.
- Optionally, wait until the processing is done.
- Cleanup phase
- Free resources held by the payload.
Consult the Stereo Disparity Sample for a complete example.
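Put together, the three phases look roughly like the sketch below. The function names follow the VPI C API, but the headers, argument lists, and parameter struct fields shown here are assumptions and differ between VPI releases, so treat this as C-flavored pseudocode rather than compilable code (the backend choice, 480x270 size, and maxDisparity of 64 are placeholder values taken from the PVA constraints below):

```c
/* Sketch only: names follow the VPI C API; signatures are assumed. */
#include <vpi/Image.h>
#include <vpi/Stream.h>
#include <vpi/algo/StereoDisparityEstimator.h>

/* --- Initialization phase --- */
VPIStream stream = NULL;
vpiStreamCreate(0, &stream);            /* stream the algorithm runs on */

VPIImage left = NULL, right = NULL, disparity = NULL;
vpiImageCreate(480, 270, VPI_IMAGE_TYPE_Y16, 0, &left);      /* input pair */
vpiImageCreate(480, 270, VPI_IMAGE_TYPE_Y16, 0, &right);
vpiImageCreate(480, 270, VPI_IMAGE_TYPE_Y16, 0, &disparity); /* output */

/* Payload holding the temporary buffers; the maximum disparity given
   here must match the one passed at submission time. */
VPIPayload payload = NULL;
vpiCreateStereoDisparityEstimator(VPI_DEVICE_TYPE_CUDA, 480, 270,
                                  VPI_IMAGE_TYPE_Y16, 64, &payload);

/* --- Processing phase --- */
VPIStereoDisparityEstimatorParams params; /* field names assumed */
params.windowSize   = 5;
params.maxDisparity = 64;  /* must match payload creation */

vpiSubmitStereoDisparityEstimator(stream, payload, left, right,
                                  disparity, &params);
vpiStreamSync(stream);     /* optional: wait until processing is done */

/* --- Cleanup phase --- */
vpiPayloadDestroy(payload);
vpiImageDestroy(left);
vpiImageDestroy(right);
vpiImageDestroy(disparity);
vpiStreamDestroy(stream);
```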
Limitations and Constraints
Constraints for specific backends supersede those specified for all backends.
All Backends
- Left and right input images must have the same type and dimensions.
- Output image dimensions must match the input's.
- Left and right input images must have type VPI_IMAGE_TYPE_Y16.
- Output disparity images must have type VPI_IMAGE_TYPE_Y16.
- The maximum disparity parameter passed at algorithm submission must match the value defined during payload creation.
- Input image dimensions must match those defined during payload creation.
PVA
- Input and output image dimensions must be 480x270.
- windowSize must be 5.
- maxDisparity must be 64.
References