Stereo Depth Estimation#
Stereo depth estimation is the task of predicting depth information from a pair of calibrated stereo images. TAO Toolkit provides advanced stereo depth estimation capabilities through the DepthNet model using the FoundationStereo architecture, which combines transformer and CNN architectures for high-accuracy disparity prediction in industrial and robotic applications.
The stereo depth estimation models in TAO support the following tasks:
train, evaluate, inference, export, and gen_trt_engine
These tasks can be invoked from the TAO Launcher using the following convention on the command-line:
SPECS=$(tao-client depth_net_stereo get-spec --action <sub_task> --job_type experiment --id $EXPERIMENT_ID)
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action <sub_task> --id $EXPERIMENT_ID --specs "$SPECS")
Required arguments:
--id: The unique identifier of the experiment on which to run the action
See also
For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.
tao model depth_net <sub_task> <args_per_subtask>
Where args_per_subtask are the command-line arguments required for a given subtask. Each subtask is explained in detail in the following sections.
Supported Model Architecture#
TAO Toolkit supports the FoundationStereo model for stereo depth estimation:
- FoundationStereo
A hybrid transformer-CNN architecture designed for stereo depth estimation. This model takes a pair of rectified stereo images (left and right) as input and produces a disparity map. The architecture combines:
Vision Transformer Encoder: Based on DepthAnythingV2 for rich feature extraction
EdgeNext CNN Encoder: Efficient convolutional feature extractor
Iterative Refinement Module: GRU-based refinement for accurate disparity prediction
Correlation Volume: Computes feature similarities between left and right images (a toy illustration follows this list)
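To make the correlation-volume idea concrete, the toy NumPy sketch below compares left-image features against right-image features shifted by each candidate disparity. It is only an illustration of the concept, not the FoundationStereo or TAO implementation; the feature maps and max_disp value are placeholders.

import numpy as np

def correlation_volume(feat_left, feat_right, max_disp):
    # feat_left, feat_right: (C, H, W) feature maps from the two views.
    # Returns a (max_disp, H, W) volume of per-pixel similarity scores.
    C, H, W = feat_left.shape
    volume = np.zeros((max_disp, H, W), dtype=np.float32)
    for d in range(max_disp):
        shifted = np.zeros_like(feat_right)
        shifted[:, :, d:] = feat_right[:, :, : W - d]      # shift right-view features by d pixels
        volume[d] = (feat_left * shifted).sum(axis=0) / C  # per-pixel dot product across channels
    return volume

# Toy usage with random "features" standing in for real network features.
left_feat = np.random.rand(16, 40, 84).astype(np.float32)
right_feat = np.random.rand(16, 40, 84).astype(np.float32)
print(correlation_volume(left_feat, right_feat, max_disp=32).shape)  # (32, 40, 84)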
FoundationStereo is optimized for:
High zero-shot accuracy on unseen domains
Real-time performance with NVIDIA® TensorRT™ optimization
Industrial and robotic 3D perception tasks
Autonomous navigation and obstacle detection
Encoder Options#
The FoundationStereo model supports multiple Vision Transformer encoder sizes:
vits (small): 22M parameters, fastest inference, suitable for edge deployment
vitl (large): 304M parameters, higher accuracy for challenging scenes
Data Input for Stereo Depth Estimation#
Dataset Preparation#
Stereo depth estimation requires stereo image pairs with disparity ground truth. The dataset should be organized as follows:
Left images: Rectified left stereo images in standard formats (PNG, JPEG, etc.)
Right images: Rectified right stereo images aligned with left images
Disparity ground truth: Disparity maps in PFM or PNG format
Data split files: Text files listing the paths to stereo pairs and disparity
Data split file format:
Each line in the data split file should contain paths to the left image, right image, and disparity map, separated by spaces:
/path/to/left/image_001.png /path/to/right/image_001.png /path/to/disp/image_001.pfm
/path/to/left/image_002.png /path/to/right/image_002.png /path/to/disp/image_002.pfm
...
For inference without ground truth:
/path/to/left/image_001.png /path/to/right/image_001.png
/path/to/left/image_002.png /path/to/right/image_002.png
...
Stereo calibration requirements:
For accurate stereo depth estimation, ensure:
Images are rectified (epipolar lines are horizontal)
Stereo baseline and focal length are known
Image pairs are temporally synchronized
Minimal lens distortion after rectification
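If your image pairs are not yet rectified, OpenCV's stereo rectification utilities can produce suitable inputs. The sketch below is a minimal example; every calibration value in it (intrinsics, distortion, rotation, translation, image size, and file names) is a placeholder that must be replaced with your own stereo calibration and data.

import cv2
import numpy as np

# Placeholder calibration: left/right intrinsics, distortion, and extrinsics.
K1 = np.array([[1998.842, 0.0, 960.0], [0.0, 1998.842, 540.0], [0.0, 0.0, 1.0]])
K2 = K1.copy()
D1 = np.zeros(5)
D2 = np.zeros(5)
R = np.eye(3)                         # rotation between the two cameras
T = np.array([-0.193001, 0.0, 0.0])   # translation (baseline along x, in meters)
image_size = (1920, 1080)             # (width, height)

# Compute rectification transforms and remap both images.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

left = cv2.imread("left_raw.png")
right = cv2.imread("right_raw.png")
cv2.imwrite("left_rect.png", cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR))
cv2.imwrite("right_rect.png", cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR))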
Supported Datasets#
TAO Toolkit supports the following stereo depth datasets:
FSD (Foundation Stereo Dataset): NVIDIA’s proprietary surround-view stereo dataset
IsaacRealDataset: NVIDIA Isaac real-world stereo data
Crestereo: Large-scale stereo dataset with diverse scenes
Middlebury: Classic stereo benchmark dataset with high-quality ground truth
Eth3d: Low-resolution gray-scale outdoor stereo evaluation dataset
KITTI: Autonomous driving stereo dataset
GenericDataset: Generic format for custom stereo datasets
For custom datasets, use the GenericDataset format by creating appropriate data split files
with the format shown above.
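If your custom data follows a simple directory layout, a small script can generate these split files. The sketch below is illustrative only: the directory names, file extensions, and the convention that left, right, and disparity files share a basename are assumptions to adapt to your data.

from pathlib import Path

# Hypothetical layout: three sibling directories with matching basenames.
left_dir = Path("/data/custom/left")
right_dir = Path("/data/custom/right")
disp_dir = Path("/data/custom/disparity")

lines = []
for left_path in sorted(left_dir.glob("*.png")):
    right_path = right_dir / left_path.name
    disp_path = disp_dir / (left_path.stem + ".pfm")
    if right_path.exists() and disp_path.exists():
        # One stereo pair per line: left, right, disparity, separated by spaces.
        lines.append(f"{left_path} {right_path} {disp_path}")

Path("train_custom.txt").write_text("\n".join(lines) + "\n")
print(f"Wrote {len(lines)} stereo pairs to train_custom.txt")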
Creating an Experiment Specification File#
The experiment specification file is a YAML configuration that defines all parameters for training, evaluation, and inference.
Configuration for FoundationStereo#
Here is an example specification file for training a FoundationStereo model:
Retrieve the specifications:
TRAIN_SPECS=$(tao-client depth_net_stereo get-spec --action train --job_type experiment --id $EXPERIMENT_ID)
The returned $TRAIN_SPECS contains the default specification; override values as needed before submitting the job.
results_dir: /results/foundation_stereo/
encryption_key: tlt_encode
dataset:
  dataset_name: StereoDataset
  baseline: 0.193001
  focal_x: 1998.842
  max_disparity: 416
  train_dataset:
    data_sources:
      - dataset_name: FSD
        data_file: /data/datasets/split/train_fsdv3.txt
      - dataset_name: FSD
        data_file: /data/datasets/split/train_fsdv2.txt
      - dataset_name: Crestereo
        data_file: /data/datasets/split/train_crestereo.txt
      - dataset_name: IsaacRealDataset
        data_file: /data/datasets/split/train_real.txt
    batch_size: 1
    workers: 8
    augmentation:
      crop_size: [320, 672]
      input_mean: [0.485, 0.456, 0.406]
      input_std: [0.229, 0.224, 0.225]
      min_scale: -0.2
      max_scale: 0.4
      yjitter_prob: 1.0
      color_aug_prob: 0.2
      eraser_aug_prob: 0.5
      spatial_aug_prob: 1.0
      stretch_prob: 0.8
      h_flip_prob: 0.5
      v_flip_prob: 0.5
      hshift_prob: 0.5
      crop_min_valid_disp_ratio: 0.0
  val_dataset:
    data_sources:
      - dataset_name: FSD
        data_file: /data/datasets/split/val_fsdv3.txt
      - dataset_name: FSD
        data_file: /data/datasets/split/val_fsdv2.txt
      - dataset_name: Crestereo
        data_file: /data/datasets/split/val_crestereo.txt
      - dataset_name: IsaacRealDataset
        data_file: /data/datasets/split/val_real.txt
    batch_size: 1
    workers: 4
    augmentation:
      crop_size: [320, 672]
  test_dataset:
    data_sources:
      - dataset_name: Middlebury
        data_file: /data/datasets/stereo_evaluation/split/middlebury_test.txt
    batch_size: 1
  infer_dataset:
    data_sources:
      - dataset_name: Eth3d
        data_file: /data/datasets/stereo_evaluation/Eth3d/test.txt
    batch_size: 1
model:
  model_type: FoundationStereo
  encoder: vits
  stereo_backbone:
    depth_anything_v2_pretrained_path: /models/depth_anything_v2_vits.pth
    edgenext_pretrained_path: null
    use_bn: False
    use_clstoken: False
  hidden_dims: [128, 128, 128]
  corr_radius: 4
  cv_group: 8
  train_iters: 22
  valid_iters: 22
  volume_dim: 32
  low_memory: 0
  mixed_precision: False
  n_gru_layers: 3
  corr_levels: 2
  n_downsample: 2
  max_disparity: 416
train:
  num_gpus: 2
  gpu_ids: [0, 1]
  num_nodes: 1
  num_epochs: 6
  seed: 1234
  checkpoint_interval: 1
  checkpoint_interval_unit: epoch
  validation_interval: 1
  resume_training_checkpoint_path: null
  pretrained_model_path: null
  clip_grad_norm: 0.1
  dataloader_visualize: True
  vis_step_interval: 500
  is_dry_run: False
  precision: fp32
  distributed_strategy: ddp
  activation_checkpoint: False
  verbose: False
  log_every_n_steps: 500
  optim:
    optimizer: AdamW
    lr: 0.00001
    momentum: 0.9
    weight_decay: 0.0001
    lr_scheduler: PolynomialLR
    lr_decay: 0.90
    warmup_steps: 20
    lr_step_size: 1000
    min_lr: 1e-07
  cudnn:
    benchmark: False
    deterministic: True
evaluate:
  num_gpus: 1
  gpu_ids: [0]
  num_nodes: 1
  checkpoint: /results/foundation_stereo/train/dn_model_latest.pth
  batch_size: 1
  input_width: 736
  input_height: 320
inference:
  num_gpus: 1
  gpu_ids: [0]
  num_nodes: 1
  checkpoint: /results/foundation_stereo/train/dn_model_latest.pth
  batch_size: -1
  input_width: 736
  input_height: 320
  save_raw_pfm: True
export:
  results_dir: /results/foundation_stereo/export
  gpu_id: 0
  checkpoint: /results/foundation_stereo/train/dn_model_latest.pth
  onnx_file: /results/foundation_stereo/export/dn_model_latest.onnx
  on_cpu: False
  input_channel: 3
  input_width: 416
  input_height: 768
  opset_version: 16
  batch_size: -1
  verbose: False
  format: onnx
  valid_iters: 22
gen_trt_engine:
  results_dir: /results/foundation_stereo/trt
  gpu_id: 0
  onnx_file: /results/foundation_stereo/export/dn_model_latest.onnx
  trt_engine: /results/foundation_stereo/trt/dn_model.engine
  timing_cache: null
  batch_size: -1
  verbose: False
  tensorrt:
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 2
    max_batch_size: 4
    data_type: FP16
Key Configuration Parameters#
The following sections provide detailed configuration tables for all parameters.
Dataset Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| dataset_name | categorical | Dataset name | StereoDataset | | | MonoDataset,StereoDataset | |
| normalize_depth | bool | Whether to normalize depth | FALSE | | | | |
| max_depth | float | Maximum depth in meters in MetricDepthAnythingV2 | 1.0 | | inf | | |
| min_depth | float | Minimum depth in meters in MetricDepthAnythingV2 | 0.0 | | inf | | |
| max_disparity | int | Maximum allowed disparity for which losses are computed during training | 416 | 1 | 416 | | |
| baseline | float | Baseline for stereo datasets | 0.193001 | 0.0 | inf | | |
| focal_x | float | Focal length along the x-axis | 1998.842 | 0.0 | inf | | |
| train_dataset | collection | Configurable parameters to construct the train dataset for a DepthNet experiment | | | | | FALSE |
| val_dataset | collection | Configurable parameters to construct the val dataset for a DepthNet experiment | | | | | FALSE |
| test_dataset | collection | Configurable parameters to construct the test dataset for a DepthNet experiment | | | | | FALSE |
| infer_dataset | collection | Configurable parameters to construct the infer dataset for a DepthNet experiment | | | | | FALSE |
Model Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| model_type | categorical | Network name | MetricDepthAnythingV2 | | | FoundationStereo,MetricDepthAnything,RelativeDepthAnything | |
| | collection | Network-defined paths for the monocular DepthNet backbone | | | | | FALSE |
| stereo_backbone | collection | Network-defined paths for the EdgeNext and DepthAnythingV2 backbones | | | | | FALSE |
| hidden_dims | list | Hidden dimensions | [128, 128, 128] | | | | FALSE |
| corr_radius | int | Width of the correlation pyramid | 4 | 1 | | | TRUE |
| cv_group | int | Number of correlation volume groups | 8 | 1 | | | TRUE |
| train_iters | int | Number of refinement iterations during training | 22 | 1 | | | TRUE |
| valid_iters | int | Number of refinement iterations during validation | 22 | 1 | | | |
| volume_dim | int | Volume dimension | 32 | 1 | | | TRUE |
| low_memory | int | Level of memory-usage reduction | 0 | 0 | 4 | | |
| mixed_precision | bool | Whether to use mixed-precision training | FALSE | | | | |
| n_gru_layers | int | Number of hidden GRU levels | 3 | 1 | 3 | | |
| corr_levels | int | Number of levels in the correlation pyramid | 2 | 1 | 2 | | |
| n_downsample | int | Resolution of the disparity field (1/2^K) | 2 | 1 | 2 | | |
| encoder | categorical | DepthAnythingV2 encoder option | vitl | | | vits,vitl | |
| max_disparity | int | Maximum disparity of the model used when training a stereo model | 416 | | | | |
Stereo Backbone Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| depth_anything_v2_pretrained_path | string | Path to load DepthAnythingV2 as an encoder for stereo DepthNet (FoundationStereo) | | | | | |
| edgenext_pretrained_path | string | Path to load the EdgeNext encoder for stereo DepthNet (FoundationStereo) | | | | | |
| use_bn | bool | Whether to use batch normalization in DepthAnythingV2 | FALSE | | | | |
| use_clstoken | bool | Whether to use the class token | FALSE | | | | |
Training Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| num_gpus | int | Number of GPUs to run the train job | 1 | 1 | | | |
| gpu_ids | list | List of GPU IDs to run the training on. The length of this list must be equal to the number of GPUs in train.num_gpus | [0] | | | | FALSE |
| num_nodes | int | Number of nodes to run the training on. If > 1, multi-node is enabled | 1 | 1 | | | |
| seed | int | Seed for the initializer in PyTorch. If < 0, the fixed seed is disabled | 1234 | -1 | inf | | |
| cudnn | collection | | | | | | FALSE |
| num_epochs | int | Number of epochs to run the training | 10 | 1 | inf | | |
| checkpoint_interval | int | Interval (in epochs) at which a checkpoint is saved; helps resume training | 1 | 1 | | | |
| checkpoint_interval_unit | categorical | Unit of the checkpoint interval | epoch | | | epoch,step | |
| validation_interval | int | Interval (in epochs) at which an evaluation is triggered on the validation dataset | 1 | 1 | | | |
| resume_training_checkpoint_path | string | Path to the checkpoint from which to resume training | | | | | |
| results_dir | string | Path to where all the assets generated from a task are stored | | | | | |
| | int | Number of steps at which to save the checkpoint | | | | | |
| pretrained_model_path | string | Path to a pretrained DepthNet model from which to initialize the current training | | | | | |
| clip_grad_norm | float | Amount to clip the gradient by the L2 norm. A value of 0.0 specifies no clipping | 0.1 | | | | |
| dataloader_visualize | bool | Whether to visualize the dataloader | FALSE | | | | TRUE |
| vis_step_interval | int | Visualization interval in steps | 10 | | | | TRUE |
| is_dry_run | bool | Whether to run the trainer in dry-run mode. This is a good way to validate the specification file and run a sanity check on the trainer without actually initializing and running it | FALSE | | | | |
| optim | collection | Hyperparameters to configure the optimizer | | | | | FALSE |
| precision | categorical | Precision at which to run the training | fp32 | | | bf16,fp32,fp16 | |
| distributed_strategy | categorical | Multi-GPU training strategy. DDP (Distributed Data Parallel) and Fully Sharded DDP are supported | ddp | | | ddp,fsdp | |
| activation_checkpoint | bool | Whether to recompute activations in the backward pass to save GPU memory (TRUE) or store them (FALSE) | TRUE | | | | |
| verbose | bool | Whether to display verbose logs to the console | FALSE | | | | |
| | bool | Whether to use tiled inference, particularly for transformers that expect a fixed sequence size | FALSE | | | | |
| | string | Tiled-inference weight type | gaussian | | | | |
| | list | Minimum overlap for tiles | [16, 16] | | | | FALSE |
| log_every_n_steps | int | Interval (in steps) for logging training results and running validation numbers within one epoch | 500 | | | | |
Optimizer Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| optimizer | categorical | Type of optimizer used to train the network | AdamW | | | AdamW,SGD | |
| | categorical | Metric value to be monitored | val_loss | | | val_loss,train_loss | |
| lr | float | Initial learning rate for training the model, excluding the backbone | 0.0001 | | | | TRUE |
| momentum | float | Momentum for the AdamW optimizer | 0.9 | | | | TRUE |
| weight_decay | float | Weight decay coefficient | 0.0001 | | | | TRUE |
| lr_scheduler | categorical | Learning rate scheduler | MultiStepLR | | | MultiStep,StepLR,CustomMultiStepLRScheduler,LambdaLR,PolynomialLR,OneCycleLR,CosineAnnealingLR | |
| | list | Steps at which the learning rate must be decreased. This is applicable only with the MultiStep LR | [1000] | | | | FALSE |
| lr_step_size | int | Number of steps to decrease the learning rate in the StepLR | 1000 | | | | TRUE |
| lr_decay | float | Decreasing factor for the learning rate scheduler | 0.1 | | | | TRUE |
| min_lr | float | Minimum learning rate value for the learning rate scheduler | 1e-07 | | | | TRUE |
| warmup_steps | int | Number of steps to perform linear learning rate warm-up before engaging a learning rate scheduler | 20 | 0 | inf | | |
Evaluation Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| num_gpus | int | Number of GPUs to run the evaluation job | 1 | 1 | | | |
| gpu_ids | list | List of GPU IDs to run the evaluation on. The length of this list must be equal to the number of GPUs in evaluate.num_gpus | [0] | | | | FALSE |
| num_nodes | int | Number of nodes to run the evaluation on. If > 1, multi-node is enabled | 1 | 1 | | | |
| checkpoint | string | Path to the checkpoint used for evaluation | ??? | | | | |
| trt_engine | string | Path to the TensorRT engine to be used for evaluation. This only works with TAO Deploy | | | | | |
| results_dir | string | Path to where all the assets generated from a task are stored | | | | | |
| batch_size | int | Batch size of the input tensor. This is important if batch_size > 1 for a large dataset | -1 | -1 | | | |
| input_width | int | Width of the input image tensor | 736 | 1 | | | |
| input_height | int | Height of the input image tensor | 320 | 1 | | | |
Inference Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| num_gpus | int | Number of GPUs to run the inference job | 1 | 1 | | | |
| gpu_ids | list | List of GPU IDs to run the inference on. The length of this list must be equal to the number of GPUs in inference.num_gpus | [0] | | | | FALSE |
| num_nodes | int | Number of nodes to run the inference on. If > 1, multi-node is enabled | 1 | 1 | | | |
| checkpoint | string | Path to the checkpoint used for inference | ??? | | | | |
| trt_engine | string | Path to the TensorRT engine to be used for inference. This only works with TAO Deploy | | | | | |
| results_dir | string | Path to where all the assets generated from a task are stored | | | | | |
| batch_size | int | Batch size of the input tensor. This is important if batch_size > 1 for a large dataset | -1 | -1 | | | |
| | float | Value of the confidence threshold to be used when filtering out the final list of boxes | 0.5 | | | | |
| input_width | int | Width of the input image tensor | 1 | | | | |
| input_height | int | Height of the input image tensor | 1 | | | | |
| save_raw_pfm | bool | Whether to save the raw PFM output during inference | FALSE | | | | |
Export Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored | | | | | |
| gpu_id | int | Index of the GPU to build the TensorRT engine | 0 | | | | |
| checkpoint | string | Path to the checkpoint file to run export | ??? | | | | |
| onnx_file | string | Path to the ONNX model file | ??? | | | | |
| on_cpu | bool | Whether to export a CPU-compatible model | FALSE | | | | |
| input_channel | ordered_int | Number of channels in the input tensor | 3 | 1 | | 1,3 | |
| input_width | int | Width of the input image tensor | 960 | 32 | | | |
| input_height | int | Height of the input image tensor | 544 | 32 | | | |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine | 17 | 1 | | | |
| batch_size | int | Batch size of the input tensor for the engine. A value of -1 implies a dynamic batch size | -1 | -1 | | | |
| verbose | bool | Whether to enable verbose TensorRT logging | FALSE | | | | |
| format | categorical | File format to export to | onnx | | | onnx,xdl | |
| valid_iters | int | Number of GRU iterations with which to export the model | 22 | 1 | | | |
TensorRT Engine Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored | | | | | |
| gpu_id | int | Index of the GPU to build the TensorRT engine | 0 | 0 | | | |
| onnx_file | string | Path to the ONNX model file | ??? | | | | |
| trt_engine | string | Path where the generated TensorRT engine is stored. This only works with TAO Deploy | ??? | | | | |
| timing_cache | string | Path to a TensorRT timing cache that speeds up engine generation. This will be created/read/updated | | | | | |
| batch_size | int | Batch size of the input tensor for the engine. A value of -1 implies a dynamic batch size | -1 | -1 | | | |
| verbose | bool | Whether to enable verbose TensorRT logging | FALSE | | | | |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder | | | | | FALSE |
Augmentation Configuration#
| Field | value_type | description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| input_mean | list | Input mean for RGB frames | [0.485, 0.456, 0.406] | | | | FALSE |
| input_std | list | Input standard deviation per pixel for RGB frames | [0.229, 0.224, 0.225] | | | | FALSE |
| crop_size | list | Crop size for input RGB images [height, width] | [518, 518] | | | | FALSE |
| min_scale | float | Minimum scale in data augmentation | -0.2 | 0.2 | 1 | | |
| max_scale | float | Maximum scale in data augmentation | 0.4 | -0.2 | 1 | | |
| | bool | Whether to perform flip in data augmentation | FALSE | | | | |
| yjitter_prob | float | Probability for y jitter | 1.0 | 0.0 | 1.0 | | TRUE |
| | list | Gamma range in data augmentation | [1, 1, 1, 1] | | | | FALSE |
| color_aug_prob | float | Probability for asymmetric color augmentation | 0.2 | 0.0 | 1.0 | | TRUE |
| | float | Color jitter brightness | 0.4 | 0.0 | 1.0 | | |
| | float | Color jitter contrast | 0.4 | 0.0 | 1.0 | | |
| | list | Color jitter saturation | [0.0, 1.4] | | | | FALSE |
| | list | Hue range in data augmentation | [-0.027777777777777776, 0.027777777777777776] | | | | FALSE |
| eraser_aug_prob | float | Probability for eraser augmentation | 0.5 | 0.0 | 1.0 | | TRUE |
| spatial_aug_prob | float | Probability for spatial augmentation | 1.0 | 0.0 | 1.0 | | TRUE |
| stretch_prob | float | Probability for stretch augmentation | 0.8 | 0.0 | 1.0 | | TRUE |
| | float | Maximum stretch augmentation | 0.2 | 0.0 | 1.0 | | |
| h_flip_prob | float | Probability for horizontal flip augmentation | 0.5 | 0.0 | 1.0 | | TRUE |
| v_flip_prob | float | Probability for vertical flip augmentation | 0.5 | 0.0 | 1.0 | | TRUE |
| hshift_prob | float | Probability for horizontal shift augmentation | 0.5 | 0.0 | 1.0 | | TRUE |
| crop_min_valid_disp_ratio | float | Minimum valid disparity ratio for a crop | 0.0 | 0.0 | 1.0 | | TRUE |
Training the Model#
To train a stereo depth estimation model:
# Get the training spec
TRAIN_SPECS=$(tao-client depth_net_stereo get-spec --action train --job_type experiment --id $EXPERIMENT_ID)
# Modify TRAIN_SPECS as needed, then run training
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action train --id $EXPERIMENT_ID --specs "$TRAIN_SPECS")
tao model depth_net train \
-e /path/to/experiment_spec.yaml \
-k $KEY \
results_dir=/path/to/results
Required arguments:
-e: Path to the experiment specification file
-k: Encryption key for model checkpoints
Optional arguments:
results_dir: Overrides the results directory from the specification file
train.num_gpus: Overrides the number of GPUs
train.num_epochs: Overrides the number of training epochs
dataset.train_dataset.batch_size: Overrides the batch size
model.train_iters: Overrides the number of refinement iterations
Training Output#
The training process generates the following outputs in the results directory:
train/dn_model_latest.pth: Latest model checkpoint
train/dn_model_epoch_XXX_step_YYY.pth: Periodic checkpoints
train/events.out.tfevents.*: TensorBoard log files
train/status.json: Training status and metrics
train/visualizations/: Sample disparity predictions (if enabled)
You can monitor training progress using TensorBoard:
tensorboard --logdir=/path/to/results/train
Evaluating the Model#
To evaluate a trained stereo depth estimation model:
EVAL_SPECS=$(tao-client depth_net_stereo get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action evaluate --id $EXPERIMENT_ID --specs "$EVAL_SPECS")
tao model depth_net evaluate \
-e /path/to/experiment_spec.yaml \
-k $KEY \
evaluate.checkpoint=/path/to/checkpoint.pth
Required arguments:
-e: Path to the experiment specification file
-k: Encryption key
Optional arguments:
evaluate.checkpoint: Path to the model checkpoint to evaluate
evaluate.batch_size: Batch size for evaluation
evaluate.input_width: Input width for evaluation
evaluate.input_height: Input height for evaluation
dataset.test_dataset.data_sources: Overrides the test dataset
Evaluation Metrics#
For stereo depth estimation, TAO computes the following metrics:
- End-Point-Error (EPE)
Mean absolute difference between predicted and ground truth disparity. Lower is better.
- D1-All Error
Percentage of pixels with disparity error > 1 pixel. Lower is better.
- Bad Pixel Rates (BP1, BP2, BP3)
Percentage of pixels with errors exceeding 1, 2, and 3 pixels respectively. Lower is better.
- Absolute Relative Error (abs_rel)
Mean of |predicted - ground_truth| / ground_truth. Lower is better.
- Squared Relative Error (sq_rel)
Mean of (predicted - ground_truth)² / ground_truth. Lower is better.
- RMSE
Root mean square error of disparity. Lower is better.
- RMSE Log
RMSE in log space. Lower is better.
These metrics are saved to a JSON file in the results directory and displayed in the console output.
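As a reference for how these disparity metrics are defined, the following NumPy sketch computes EPE, the bad-pixel rates, and RMSE from a predicted and a ground-truth disparity map. The validity-mask convention (finite, positive ground truth) is an assumption and may differ from the exact masking TAO applies.

import numpy as np

def disparity_metrics(pred, gt):
    # pred, gt: disparity arrays of the same shape, in pixels.
    valid = np.isfinite(gt) & (gt > 0)          # assumed validity mask
    err = np.abs(pred[valid] - gt[valid])
    return {
        "EPE": float(err.mean()),               # mean end-point error
        "BP1": float((err > 1.0).mean() * 100), # % of pixels with error > 1 px
        "BP2": float((err > 2.0).mean() * 100),
        "BP3": float((err > 3.0).mean() * 100),
        "RMSE": float(np.sqrt((err ** 2).mean())),
    }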
Running Inference#
To run inference on stereo image pairs using a trained model:
INFER_SPECS=$(tao-client depth_net_stereo get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS")
tao model depth_net inference \
-e /path/to/experiment_spec.yaml \
-k $KEY \
inference.checkpoint=/path/to/checkpoint.pth
Required arguments:
-e: Path to the experiment specification file
-k: Encryption key
Optional arguments:
inference.checkpoint: Path to the model checkpoint
inference.save_raw_pfm: Saves disparity maps in PFM format (default: False)
inference.batch_size: Batch size for inference
inference.input_width: Input width for inference
inference.input_height: Input height for inference
dataset.infer_dataset.data_sources: Overrides the inference dataset
Inference Output#
The inference process generates:
Disparity map visualizations (colored disparity images) in PNG format
Raw disparity values in PFM format (if save_raw_pfm is True)
Depth maps (if baseline and focal length are provided)
Inference results are saved in results_dir/inference/.
The disparity can be converted to depth (in meters) using:
depth = (baseline * focal_x) / disparity
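For example, the following NumPy sketch applies this conversion to a disparity array using the baseline and focal_x values from the example specification; loading the PFM or PNG disparity file is left to your preferred reader.

import numpy as np

baseline = 0.193001   # meters (dataset.baseline in the example spec)
focal_x = 1998.842    # pixels (dataset.focal_x in the example spec)

def disparity_to_depth(disparity):
    # disparity: array of predicted disparities in pixels.
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0                         # zero/negative disparity is treated as invalid
    depth[valid] = (baseline * focal_x) / disparity[valid]
    return depth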
Exporting the Model#
To export a trained model to ONNX format:
EXPORT_SPECS=$(tao-client depth_net_stereo get-spec --action export --job_type experiment --id $EXPERIMENT_ID)
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action export --id $EXPERIMENT_ID --specs "$EXPORT_SPECS")
tao model depth_net export \
-e /path/to/experiment_spec.yaml \
-k $KEY \
export.checkpoint=/path/to/checkpoint.pth \
export.onnx_file=/path/to/output.onnx
Required arguments:
-e: Path to the experiment specification file
-k: Encryption key
export.checkpoint: Path to the trained model checkpoint
export.onnx_file: Output path for the ONNX model
Optional arguments:
export.input_channel: Number of input channels (default: 3)
export.input_width: Input image width (default: 416)
export.input_height: Input image height (default: 768)
export.opset_version: ONNX opset version (default: 16)
export.batch_size: Batch size, -1 for dynamic (default: -1)
export.on_cpu: Exports a CPU-compatible model (default: False)
export.format: Export format, onnx or xdl (default: onnx)
export.valid_iters: Number of refinement iterations to export (default: 22)
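After export, you can optionally sanity-check the ONNX file with ONNX Runtime before building a TensorRT engine. The snippet below is a sketch: the assumption that the exported graph exposes two stereo inputs and the dummy input resolution (taken from the example export spec) should be adjusted to whatever your exported model actually defines.

import numpy as np
import onnxruntime as ort

onnx_path = "/results/foundation_stereo/export/dn_model_latest.onnx"
session = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])

# Inspect the exported input names and shapes before building feeds.
input_names = [inp.name for inp in session.get_inputs()]
print("Model inputs:", [(inp.name, inp.shape) for inp in session.get_inputs()])

# Dummy NCHW stereo pair matching the example export resolution
# (input_height: 768, input_width: 416); adjust to your export settings.
left = np.random.rand(1, 3, 768, 416).astype(np.float32)
right = np.random.rand(1, 3, 768, 416).astype(np.float32)

# Assumes two stereo inputs; adapt the feed dictionary if your model differs.
feeds = dict(zip(input_names, [left, right]))
outputs = session.run(None, feeds)
print("Output shapes:", [o.shape for o in outputs])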
Generating TensorRT Engine#
To generate a TensorRT engine from the exported ONNX model for optimized inference:
GEN_TRT_SPECS=$(tao-client depth_net_stereo get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$GEN_TRT_SPECS")
tao deploy depth_net gen_trt_engine \
-e /path/to/experiment_spec.yaml \
gen_trt_engine.onnx_file=/path/to/model.onnx \
gen_trt_engine.trt_engine=/path/to/output.engine
Required arguments:
-e: Path to the experiment specification file
gen_trt_engine.onnx_file: Path to the ONNX model
gen_trt_engine.trt_engine: Output path for the TensorRT engine
Optional arguments:
gen_trt_engine.gpu_id: GPU index for engine generation (default: 0)
gen_trt_engine.batch_size: Batch size, -1 for dynamic (default: -1)
gen_trt_engine.verbose: Enables verbose logging (default: False)
gen_trt_engine.timing_cache: Path to the timing cache file
gen_trt_engine.tensorrt.workspace_size: TensorRT workspace size in MB (default: 1024)
gen_trt_engine.tensorrt.data_type: Precision, FP32 or FP16 (default: FP16)
gen_trt_engine.tensorrt.min_batch_size: Minimum batch size (default: 1)
gen_trt_engine.tensorrt.opt_batch_size: Optimal batch size (default: 2)
gen_trt_engine.tensorrt.max_batch_size: Maximum batch size (default: 4)
TensorRT Engine Benefits#
Performance: 3-10x faster inference compared to PyTorch
Memory efficiency: Reduced memory footprint
Optimization: Layer fusion, kernel auto-tuning, and precision calibration
Deployment: Production-ready inference engine for real-time applications
For stereo depth estimation, TensorRT optimization is particularly beneficial for:
Real-time robotic vision (30+ FPS on modern GPUs)
Autonomous navigation systems
Industrial inspection and quality control
AR/VR applications requiring low latency
Model Configuration Reference#
For a complete reference to all configuration parameters, refer to the configuration tables in the TAO Toolkit documentation or the experiment specification files provided with the toolkit. Many parameters are shared with monocular depth estimation.
Best Practices#
Training Recommendations#
Dataset diversity: Mix multiple datasets (FSD, Crestereo, Isaac) for better generalization
Encoder selection:
Use vits for real-time applications (fastest, 22M parameters)
Use vitl for maximum accuracy (304M parameters)
Batch size: Start with batch size 1-2 per GPU for FoundationStereo
Learning rate: Use small learning rates (1e-5) with PolynomialLR scheduler
Multi-GPU training: Use 2-8 GPUs with DDP strategy for faster training
Activation checkpointing: Enable for larger encoders (vitl) to reduce memory
Refinement iterations:
Use 22 iterations during training for best accuracy
You can reduce to 10-15 for faster inference with minimal accuracy loss
Augmentation: Use strong augmentation for robustness across domains
Data Preparation#
Stereo rectification: Ensure images are properly rectified before training
Calibration accuracy: Accurate baseline and focal length are critical for metric depth
Disparity range: Set max_disparity based on your camera setup and scene depth
Image resolution: Higher resolution (e.g., 768x1280) improves accuracy but requires more memory
Mixed datasets: Combine indoor and outdoor datasets for domain generalization
Data quality: Filter out poorly calibrated or misaligned stereo pairs
Performance Optimization#
TensorRT deployment: Always use TensorRT engines for production (3-10x speedup)
FP16 precision: Use FP16 for TensorRT engines (2x faster with minimal accuracy loss)
Dynamic batching: Use dynamic batch sizes for variable workloads
Timing cache: Reuse timing cache to speed up subsequent engine builds
Input resolution: Balance resolution and speed based on application needs
Multi-stream inference: Use multiple CUDA streams for maximum throughput
Troubleshooting#
Common Issues#
Out of memory (OOM):
Reduce batch size to 1
Enable activation_checkpoint: True
Use a smaller encoder (vits instead of vitl)
Reduce crop_size or the input resolution
Set low_memory: 1 or higher (0-4) in the model config
Reduce train_iters to 10-15
Poor disparity quality:
Check stereo rectification - images must be properly rectified
Verify baseline and focal_x match your camera calibration
Ensure max_disparity is appropriate for your depth range
Increase training epochs (6-10 epochs recommended)
Use stronger augmentation
Mix multiple datasets for better generalization
Check for occluded regions and textureless areas in your data
Training instability:
Reduce learning rate (try 5e-6 to 1e-5)
Enable gradient clipping (clip_grad_norm: 0.1)
Use a PolynomialLR scheduler with lr_decay: 0.9
Check for NaN or inf values in the disparity ground truth
Ensure disparity maps are in the correct format (PFM or PNG)
Use cudnn.deterministic: True for reproducible training
Slow training:
Increase batch_size if memory allows
Use multiple GPUs (2-8) with the DDP strategy
Reduce log_every_n_steps and vis_step_interval
Use fp16 precision (2x speedup)
Increase the number of data loading workers (8-16)
Disable dataloader_visualize during long training runs
Use a smaller train_iters (15 instead of 22)
Poor zero-shot performance:
Train on diverse datasets (mix FSD, Crestereo, Isaac)
Use strong augmentation (color, eraser, spatial)
Increase training epochs
Use a larger encoder (vitl)
Ensure training data covers the target domain characteristics
Fine-tune on a small sample of target domain data
Inference speed issues:
Use TensorRT engine instead of PyTorch model
Enable FP16 precision in TensorRT
Reduce input resolution if acceptable
Reduce valid_iters to 10-15 for faster inference
Use the vits encoder for edge deployment
Optimize the batch size for your GPU
Additional Resources#
TAO Toolkit documentation: https://docs.nvidia.com/tao/
Sample notebooks: NVIDIA/tao_tutorials
NGC pretrained models: https://catalog.ngc.nvidia.com/
FoundationStereo paper: NVIDIA Technical Reports
For more information about monocular depth estimation, go to Monocular Depth Estimation.