Stereo Depth with TAO Deploy#

To generate an optimized NVIDIA® TensorRT engine, a FoundationStereo .onnx file, first generated with tao model depth_net export, is provided as input to tao deploy depth_net gen_trt_engine. For more information about training a FoundationStereo model, refer to the Stereo Depth Estimation documentation.

Converting ONNX File into TensorRT Engine#

To convert the .onnx file, you can reuse the specification file from the tao model depth_net export command.

gen_trt_engine#

The gen_trt_engine parameter defines TensorRT engine generation.

Use the following command to get an experiment specification file for FoundationStereo:

SPECS=$(tao-client depth_net_stereo get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
| --- | --- | --- | --- | --- | --- | --- | --- |
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | Index of the GPU to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine from gen_trt_engine is stored. This only works with tao-deploy. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | 3 | | | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | Batch size of the input tensor for the engine. A value of -1 implies dynamic tensor shapes. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder. | | | | | FALSE |

tensorrt#

The tensorrt parameter defines the hyperparameters for the TensorRT engine builder.

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
| --- | --- | --- | --- | --- | --- | --- | --- |
| data_type | string | Precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | Size in megabytes of the workspace available to TensorRT for running its optimization tactics and generating the engine. | 1024 | | | | FALSE |
| min_batch_size | int | Minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | Optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | Maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
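
The following is a minimal sample gen_trt_engine specification assembled from the parameters above. The file paths are placeholders, and the exact placement of results_dir in your generated spec file may differ:

gen_trt_engine:
  # Placeholder paths; adapt to your environment.
  onnx_file: /path/to/model.onnx
  trt_engine: /path/to/engine/file
  input_channel: 3
  batch_size: -1
  tensorrt:
    data_type: FP16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 1
    max_batch_size: 1
results_dir: /path/to/results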

Use the following command to run FoundationStereo engine generation:

GTE_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")

See also

The Export job ID is the job ID of the tao-client depth_net_stereo experiment-run-action --action export command.
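As a sketch of how that job ID is typically captured, following the same get-spec and experiment-run-action pattern used elsewhere in this section ($TRAIN_JOB_ID and $EXPORT_SPECS are hypothetical variable names, and parenting the export job to a prior training job is an assumption):

# Hypothetical sketch: assumes a prior training job ID and an export spec fetched with --action export
EXPORT_SPECS=$(tao-client depth_net_stereo get-spec --action export --job_type experiment --id $EXPERIMENT_ID)
EXPORT_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action export --id $EXPERIMENT_ID --parent_job_id $TRAIN_JOB_ID --specs "$EXPORT_SPECS")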

Running Evaluation through a TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample specification file:

evaluate:
  trt_engine: /path/to/engine/file
  input_width: 736
  input_height: 320
dataset:
  dataset_name: StereoDataset
  test_dataset:
    data_sources:
      - dataset_name: GenericDataset
        data_file: /data/depth_net/annotations_test.txt
    batch_size: 4
    workers: 4
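
As with engine generation, the evaluation specs can first be fetched for the experiment. The following is a sketch that assumes the evaluate action supports the same get-spec pattern shown earlier for gen_trt_engine:

# Assumption: evaluate specs are fetched the same way as gen_trt_engine specs
SPECS=$(tao-client depth_net_stereo get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)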

Use the following command to run FoundationStereo engine evaluation:

EVAL_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

Running Inference through a TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. This is a sample specification file:

inference:
  input_width: 736
  input_height: 320
  trt_engine: /path/to/engine/file
dataset:
  dataset_name: StereoDataset
  infer_dataset:
    data_sources:
      - dataset_name: GenericDataset
        data_file: /data/depth_net/annotations_test.txt
  workers: 4
  batch_size: 4
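
Similarly, the inference specs can be fetched before launching the job; this sketch assumes the inference action follows the same get-spec pattern:

# Assumption: inference specs are fetched the same way as gen_trt_engine specs
SPECS=$(tao-client depth_net_stereo get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)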

Use the following command to run FoundationStereo engine inference:

INFER_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

The visualizations are stored in $RESULTS_DIR/images_annotated, and the predictions are stored in $RESULTS_DIR/labels.