Monocular Depth with TAO Deploy#
To generate an optimized NVIDIA® TensorRT™ engine, tao deploy depth_net gen_trt_engine takes as input an NvDepthAnythingV2 .onnx file, which is first generated using tao model depth_net export.
For more information about training an NvDepthAnythingV2 model, refer to the Monocular Depth Estimation section.
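The .onnx input is produced by the export step. A typical export invocation looks like the following (a minimal sketch; the export.checkpoint and export.onnx_file override names are assumptions based on the common TAO export pattern and are not taken from this page):
tao model depth_net export -e /path/to/spec.yaml \
                           export.checkpoint=/path/to/trained/model.pth \
                           export.onnx_file=/path/to/onnx_file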
Converting the NvDepthAnythingV2 .onnx File into a TensorRT Engine#
To convert the .onnx file, you can reuse the specification file from the tao model depth_net export command.
gen_trt_engine#
The gen_trt_engine parameter defines TensorRT engine generation.
Use the following command to get an experiment specification file for NvDepthAnythingV2:
SPECS=$(tao-client depth_net_mono get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
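You can write the returned specification to a file to inspect or edit it before passing it back through --specs (a minimal sketch; the exact serialization returned by tao-client is not shown on this page):
echo "$SPECS" | tee gen_trt_engine_spec.yaml
The gen_trt_engine portion of the specification looks like the following: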
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 4
    max_batch_size: 8
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | Index of the GPU to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine from gen_trt_engine is stored. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | | | 3 | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | The batch size of the input tensor for the engine. A value of -1 implies a dynamic batch size. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder. | | | | | FALSE |
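By default, batch_size is -1, which builds an engine with a dynamic batch dimension bounded by the optimization profile in the tensorrt section below. A fixed-batch engine can instead be requested by setting batch_size explicitly (a sketch based on the fields above; it assumes a positive batch_size yields a static-batch engine, and the value 4 is arbitrary):
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  batch_size: 4
  tensorrt:
    data_type: fp16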
tensorrt#
The tensorrt parameter defines the TensorRT engine generation.
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| data_type | string | The precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | The size in megabytes of the workspace TensorRT has to run its optimization tactics and generate the TensorRT engine. | 1024 | | | | FALSE |
| min_batch_size | int | The minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | The optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | The maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
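These fields correspond to standard TensorRT engine-building options. A roughly equivalent standalone build with trtexec is sketched below for reference only (this is not what tao deploy runs internally, and the tensor name input as well as the 3x518x924 input shape are assumptions that depend on the exported ONNX model):
trtexec --onnx=$ONNX_FILE \
        --saveEngine=$ENGINE_FILE \
        --fp16 \
        --memPoolSize=workspace:1024 \
        --minShapes=input:1x3x518x924 \
        --optShapes=input:4x3x518x924 \
        --maxShapes=input:8x3x518x924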
Use the following command to run NvDepthAnythingV2 engine generation:
GTE_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")
See also
The Export job ID is the job ID of the tao-client depth_net_mono experiment-run-action --action export command.
tao deploy depth_net gen_trt_engine -e /path/to/spec.yaml \
                     gen_trt_engine.onnx_file=/path/to/onnx/file \
                     gen_trt_engine.trt_engine=/path/to/engine/file \
                     gen_trt_engine.tensorrt.data_type=<data_type>
Required arguments:
-e, --experiment_spec: Experiment specification file to set up TensorRT engine generation
Optional arguments:
gen_trt_engine.onnx_file: The .onnx model to be converted
gen_trt_engine.trt_engine: Path where the generated engine will be stored
gen_trt_engine.tensorrt.data_type: Precision to be exported
Sample usage:
This is an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:
tao deploy depth_net gen_trt_engine -e $DEFAULT_SPEC \
gen_trt_engine.onnx_file=$ONNX_FILE \
gen_trt_engine.trt_engine=$ENGINE_FILE \
gen_trt_engine.tensorrt.data_type=FP16
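Once the engine is written, you can sanity-check it outside of TAO with trtexec (a minimal sketch; only batch sizes inside the engine's min/max optimization-profile range will run, and the tensor name and shape are assumptions):
trtexec --loadEngine=$ENGINE_FILE \
        --shapes=input:4x3x518x924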
Running Evaluation through a TensorRT Engine#
You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. This is a sample specification file:
evaluate:
  trt_engine: /path/to/engine/file
  input_width: 924
  input_height: 518
dataset:
  test_dataset:
    data_sources:
      - dataset_name: RelativeMonoDataset
        data_file: /data/depth_net/annotation_test.txt
    workers: 4
    batch_size: 4
Use the following command to run NvDepthAnythingV2 engine evaluation:
EVAL_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy depth_net evaluate -e /path/to/spec.yaml \
              evaluate.trt_engine=/path/to/engine/file
Required arguments:
-e, --experiment_spec: Experiment specification file for evaluation; should be the same as the tao evaluate specification file
Optional arguments:
evaluate.trt_engine: Engine file for evaluation
Sample usage:
This is an example of using the evaluate command to run evaluation with a TensorRT engine:
tao deploy depth_net evaluate -e $DEFAULT_SPEC \
evaluate.trt_engine=$ENGINE_FILE
Running Inference through a TensorRT Engine#
You can reuse the TAO inference spec file for inference through a TensorRT engine. This is a sample specification file:
inference:
  conf_threshold: 0.5
  input_width: 960
  input_height: 544
  trt_engine: /path/to/engine/file
  color_map:
    "black cat": green
    car: red
    person: blue
dataset:
  infer_dataset:
    data_sources:
      - dataset_name: RelativeMonoDataset
        data_file:
    workers: 4
    batch_size: 4
Use the following command to run NvDepthAnythingV2 engine inference:
INFER_JOB_ID=$(tao-client depth_net_mono experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy depth_net inference -e /path/to/spec.yaml \
               inference.trt_engine=/path/to/engine/file
Required arguments:
-e, --experiment_spec: Experiment specification file for inference; should be the same as the tao inference specification file
Optional arguments:
inference.trt_engine: Engine file for inference
Sample usage:
This is an example of using the inference command to run inference with a TensorRT engine:
tao deploy depth_net inference -e $DEFAULT_SPEC \
                               results_dir=$RESULTS_DIR \
                               inference.trt_engine=$ENGINE_FILE
The visualization is stored in $RESULTS_DIR/images_annotated, and the predictions are stored
under $RESULTS_DIR/labels.
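A quick way to confirm that the run produced output is to list those directories (a trivial check; the directory names come from the note above):
ls $RESULTS_DIR/images_annotated | head
ls $RESULTS_DIR/labels | head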