Coarse-to-Fine Stereo Depth

Isaac provides coarse-to-fine stereo depth estimation, a GPU-accelerated algorithm that computes disparity from a stereo image pair in real time.

Stereo disparity refers to the difference in coordinates of similar features within two stereo images, resulting from the horizontal distance between two cameras (parallax). By comparing these two stereo images, the relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.
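
For a rectified stereo pair this inverse relationship takes a simple form: depth = focal length x baseline / disparity. The sketch below works through one pixel with made-up calibration numbers; the focal length, baseline, and disparity values are assumptions chosen for illustration, not the parameters of any particular camera.

    // Illustrative only: converts one disparity value (in pixels) to depth (in
    // meters) for a rectified stereo pair. Focal length, baseline, and disparity
    // are example values, not the calibration of any particular camera.
    #include <cstdio>

    int main() {
      const float focal_length_px = 350.0f;  // horizontal focal length in pixels
      const float baseline_m = 0.12f;        // distance between the two cameras in meters
      const float disparity_px = 8.4f;       // disparity measured for one pixel

      // Depth is inversely proportional to disparity: a larger disparity means a
      // closer point.
      const float depth_m = focal_length_px * baseline_m / disparity_px;
      std::printf("depth = %.2f m\n", depth_m);  // prints "depth = 5.00 m"
      return 0;
    }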

Coarse-to-fine stereo depth can accurately infer the depth of 90% of the pixels up to 5 m with an overall D1 score of 8.3%, tested on a dataset of 80 synthetic stereo images with 512 x 256 resolution per eye. On Jetson Xavier, it can process stereo images at VGA resolution (672 x 376 each) from the ZED camera at 27 frames per second (FPS).

The images below show depth/disparity estimates produced by coarse-to-fine stereo depth from footage captured with a ZED stereo camera.

c2f1.jpg

c2f2.jpg

c2f3.jpg

Isaac provides the coarse-to-fine stereo depth code as a static library, wrapped as an Isaac codelet that is available in the Isaac repository.

The Isaac codelet wrapping coarse-to-fine stereo depth detection takes a pair of rectified stereo images, and publishes a dense depth map (in meters) where each pixel in the map corresponds to the depth value of that pixel in the input image from the left camera. The codelet uses camera intrinsics and extrinsics to compute the stereo depth. The minimum and maximum values for the output depth can be provided as input parameters via a configuration file.
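
As a rough illustration of what the codelet computes (this is not its actual implementation), the sketch below turns a dense disparity map into a dense depth map in meters using the focal length and baseline from the calibration, clamping each value to the configured minimum and maximum depths. The function and parameter names (min_depth_m, max_depth_m, and so on) are assumptions for illustration.

    // Conceptual sketch, not the codelet's actual implementation: convert a dense
    // disparity map (pixels) into a dense depth map (meters) using the rectified
    // camera's focal length and baseline, clamping to the configured depth range.
    // Function and parameter names are illustrative.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    std::vector<float> DisparityToDepth(const std::vector<float>& disparity_px,
                                        float focal_length_px, float baseline_m,
                                        float min_depth_m, float max_depth_m) {
      std::vector<float> depth_m(disparity_px.size());
      for (std::size_t i = 0; i < disparity_px.size(); ++i) {
        // Guard against zero disparity (points at infinity), then clamp the result
        // to the minimum/maximum depth parameters mentioned above.
        const float d = std::max(disparity_px[i], 1e-6f);
        depth_m[i] = std::min(std::max(focal_length_px * baseline_m / d, min_depth_m),
                              max_depth_m);
      }
      return depth_m;
    }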

The coarse-to-fine stereo depth sample application uses a ZED stereo camera. First connect the ZED camera to the host system or the Jetson platform you are using. Then use one of the following procedures to run the included sample application.

To Run the Sample Application on the Host System

  1. Build the sample application with the following command:

    bob@desktop:~/isaac$ bazel build //apps/samples/stereo_matching_depth

  2. Run the sample application with the following command:

    bob@desktop:~/isaac$ bazel run //apps/samples/stereo_matching_depth/stereo_matching_depth

To Run the Application on Jetson

  1. Build the sample application with the following command:

    bob@desktop:~/isaac$ bazel build //apps/samples/stereo_matching_depth

  2. Deploy //apps/samples/stereo_matching_depth:stereo_matching_depth-pkg to the robot as explained in Deploying and Running on Jetson.

  3. Log on to the Jetson system and run the application with the following commands:

    bob@jetson:~/$ cd deploy/bob/stereo_matching_depth-pkg
    bob@jetson:~/deploy/bob/stereo_matching_depth-pkg$ ./apps/samples/stereo_matching_depth/stereo_matching_depth

    Where “bob” is your user name on the host system.

To View Output from the Application in Websight

While the application is running, open Isaac Sight in a browser by navigating to http://localhost:3000. If running the application on a Jetson platform, make sure to use the IP address of the Jetson system instead of localhost.

In Websight, a window called LeftCamera shows the left input image and a window called Depth shows the depth estimated by the algorithm. The depth values are normalized between the minimum and maximum depth values provided to the Depth Visualizer codelet. Each pixel in the depth map corresponds to the depth, in meters, of that pixel in the left input image.
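
That normalization is simply a linear mapping of each depth value into [0, 1] between the two configured bounds; the sketch below illustrates it under that assumption and is not the Depth Visualizer codelet's actual code.

    // Assumption-level illustration of the display normalization, not the Depth
    // Visualizer codelet's actual code: map a depth in meters to [0, 1] given the
    // configured minimum and maximum depths.
    float NormalizeForDisplay(float depth_m, float min_depth_m, float max_depth_m) {
      const float t = (depth_m - min_depth_m) / (max_depth_m - min_depth_m);
      return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);  // clamp to [0, 1]
    }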

c2f4.png


© Copyright 2018-2020, NVIDIA Corporation. Last updated on Feb 1, 2023.