Monocular Depth with TAO Deploy#

To generate an optimized NVIDIA® TensorRT engine, tao deploy depth_net gen_trt_engine takes as input an NvDepthAnythingV2 .onnx file, which you first generate with tao model depth_net export. For more information about training an NvDepthAnythingV2 model, refer to the Monocular Depth Estimation section.

Converting the NvDepthAnythingV2 .onnx file into a TensorRT Engine#

To convert the .onnx file, you can reuse the specification file from the tao model depth_net export command.

gen_trt_engine#

The gen_trt_engine parameter defines TensorRT engine generation.

Use the following command to get an experiment specification file for NvDepthAnythingV2:

SPECS=$(tao-client depth_net_mono get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
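The returned $SPECS is the specification for the gen_trt_engine action, with the fields described in the tables below. If you need to override a value before submitting the job, you can edit it in place. The following is a minimal sketch using jq; the .tensorrt.data_type key path assumes the returned JSON mirrors the table layout, so inspect the output first and adjust the path as needed:

# Inspect the returned specification.
echo "$SPECS" | jq .

# Hypothetical override: build the engine in FP16 instead of the FP32 default.
SPECS=$(echo "$SPECS" | jq '.tensorrt.data_type = "FP16"')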

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|-------|------------|-------------|---------------|-----------|-----------|---------------|----------------|
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | Index of the GPU to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine from gen_trt_engine is stored. This only works with tao-deploy. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | 3 | | | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | The batch size of the input tensor for the engine. A value of -1 implies dynamic tensor shapes. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder. | | | | | FALSE |
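For reference, these fields map onto a specification like the following. This is a minimal sketch rather than a complete spec: the paths are placeholders, only parameters from the table above are shown, and the top-level gen_trt_engine key assumes the same layout as other TAO spec files:

gen_trt_engine:
  onnx_file: /path/to/model.onnx      # required; ??? in the table means no default
  trt_engine: /path/to/model.engine
  input_channel: 3
  opset_version: 17
  batch_size: -1                      # -1 builds the engine with dynamic tensor shapes
  verbose: false
  tensorrt:
    data_type: FP16                   # see the tensorrt table below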

tensorrt#

The tensorrt parameter configures the TensorRT engine builder.

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|-------|------------|-------------|---------------|-----------|-----------|---------------|----------------|
| data_type | string | The precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | The size in megabytes of the workspace TensorRT has to run its optimization tactics and generate the TensorRT engine. | 1024 | | | | FALSE |
| min_batch_size | int | The minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | The optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | The maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
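When batch_size in the parent gen_trt_engine section is -1, the engine is built with a dynamic batch dimension, and these three values define the TensorRT optimization profile: the engine accepts batches between min_batch_size and max_batch_size, with kernels tuned for opt_batch_size. A sketch of such a block, with illustrative rather than recommended values:

tensorrt:
  data_type: FP16        # valid options: FP32, FP16
  workspace_size: 1024   # megabytes available for builder tactics
  min_batch_size: 1      # smallest batch the engine must accept
  opt_batch_size: 4      # batch size the kernels are tuned for
  max_batch_size: 8      # largest batch the engine must accept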

Use the following command to run NvDepthAnythingV2 engine generation:

GTE_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")

See also

The Export job ID is the job ID of the tao-client depth_net_mono experiment-run-action --action export command.

Running Evaluation through a TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. This is a sample specification file:

evaluate:
  trt_engine: /path/to/engine/file
  input_width: 924
  input_height: 518
dataset:
  test_dataset:
    data_sources:
      - dataset_name: RelativeMonoDataset
        data_file: /data/depth_net/annotation_test.txt
    workers: 4
    batch_size: 4
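The $SPECS used in the command below should come from the evaluate action. If $SPECS still holds the gen_trt_engine specification, fetch a fresh set first, following the same get-spec pattern as above:

SPECS=$(tao-client depth_net_mono get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)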

Use the following command to run NvDepthAnythingV2 engine evaluation:

EVAL_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

Running Inference through a TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. This is a sample specification file:

inference:
  input_width: 924
  input_height: 518
  trt_engine: /path/to/engine/file
dataset:
  infer_dataset:
    data_sources:
      - dataset_name: RelativeMonoDataset
        data_file: /data/depth_net/annotation_test.txt
    workers: 4
    batch_size: 4
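As with evaluation, fetch the specification for the inference action before running it, following the same get-spec pattern:

SPECS=$(tao-client depth_net_mono get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)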

Use the following command to run NvDepthAnythingV2 engine inference:

INFER_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

The visualization is stored in $RESULTS_DIR/images_annotated, and the predictions are stored under $RESULTS_DIR/labels.