Stereo Depth with TAO Deploy#
To generate an optimized NVIDIA® TensorRT™ engine, a FoundationStereo .onnx file, which is first generated using tao model depth_net export,
is taken as input by tao deploy depth_net gen_trt_engine. For more information about training a FoundationStereo model,
refer to the Stereo Depth Estimation documentation.
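The overall flow is sketched below. The export invocation is illustrative (its exact spec fields are covered in the training documentation); the gen_trt_engine invocation uses the override syntax described later on this page.

# Step 1 (illustrative): export the trained FoundationStereo model to ONNX.
tao model depth_net export -e /path/to/export_spec.yaml

# Step 2: build a TensorRT engine from the exported .onnx file.
tao deploy depth_net gen_trt_engine -e /path/to/spec.yaml \
gen_trt_engine.onnx_file=/path/to/onnx_file \
gen_trt_engine.trt_engine=/path/to/trt_engine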
Converting ONNX File into TensorRT Engine#
To convert the .onnx file, you can reuse the specification file from the tao model depth_net export command.
gen_trt_engine#
The gen_trt_engine parameter defines TensorRT engine generation.
Use the following command to get an experiment specification file for FoundationStereo:
SPECS=$(tao-client depth_net_stereo get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  batch_size: -1
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 2
    max_batch_size: 4
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | Index of the GPU to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine from gen_trt_engine is stored. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | 3 | | | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | Batch size of the input tensor for the engine. A value of -1 implies a dynamic batch size. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT Engine builder. | | | | | FALSE |
tensorrt#
The tensorrt parameter configures the TensorRT engine builder.
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| data_type | string | Precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | Size in megabytes of the workspace TensorRT has to run its optimization tactics and generate the TensorRT engine. | 1024 | | | | FALSE |
| min_batch_size | int | Minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | Optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | Maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
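The optimization profile is used when the engine is built with a dynamic batch size (gen_trt_engine.batch_size=-1): TensorRT then accepts any runtime batch size between min_batch_size and max_batch_size and tunes its tactics for opt_batch_size. The following is a minimal sketch, assuming these fields accept the same Hydra-style command-line overrides as the other parameters on this page:

tao deploy depth_net gen_trt_engine -e $DEFAULT_SPEC \
gen_trt_engine.batch_size=-1 \
gen_trt_engine.tensorrt.min_batch_size=1 \
gen_trt_engine.tensorrt.opt_batch_size=2 \
gen_trt_engine.tensorrt.max_batch_size=4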
Use the following command to run FoundationStereo engine generation:
GTE_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")
See also
The Export job ID is the job ID of the tao-client depth_net_stereo experiment-run-action --action export command.
tao deploy depth_net gen_trt_engine -e /path/to/spec.yaml \
gen_trt_engine.onnx_file=/path/to/onnx/file \
gen_trt_engine.trt_engine=/path/to/engine/file \
gen_trt_engine.tensorrt.data_type=<data_type>
Required arguments:
-e, --experiment_spec: The experiment specification file to set up TensorRT engine generation.
Optional arguments:
gen_trt_engine.onnx_file: The .onnx model to be converted.
gen_trt_engine.trt_engine: The path where the generated engine will be stored.
gen_trt_engine.tensorrt.data_type: The precision to be exported.
Sample usage:
Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:
tao deploy depth_net gen_trt_engine -e $DEFAULT_SPEC \
gen_trt_engine.onnx_file=$ONNX_FILE \
gen_trt_engine.trt_engine=$ENGINE_FILE \
gen_trt_engine.tensorrt.data_type=FP16
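If the trtexec utility that ships with TensorRT is available in your environment (an assumption; it is not required by TAO Deploy), you can smoke-test the generated engine outside of TAO:

# Deserializes the engine and runs it with random inputs; a clean exit
# indicates the engine loads and executes on the current GPU.
trtexec --loadEngine=$ENGINE_FILE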
Running Evaluation through a TensorRT Engine#
You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample specification file:
evaluate:
  trt_engine: /path/to/engine/file
  input_width: 736
  input_height: 320
dataset:
  dataset_name: StereoDataset
  test_dataset:
    data_sources:
      - dataset_name: GenericDataset
        data_file: /data/depth_net/annotations_test.txt
  batch_size: 4
  workers: 4
Use the following command to run FoundationStereo engine evaluation:
EVAL_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy depth_net evaluate -e /path/to/spec.yaml \
evaluate.trt_engine=/path/to/engine/file
Required arguments:
-e, --experiment_spec: The experiment specification file for evaluation. This should be the same as the tao evaluate specification file.
Optional arguments:
evaluate.trt_engine: The engine file for evaluation.
Sample usage:
This is an example of using the evaluate command to run evaluation with a TensorRT engine:
tao deploy depth_net evaluate -e $DEFAULT_SPEC \
evaluate.trt_engine=$ENGINE_FILE
Running Inference through a TensorRT Engine#
You can reuse the TAO inference spec file for inference through a TensorRT engine. This is a sample specification file:
inference:
  input_width: 736
  input_height: 320
  trt_engine: /path/to/engine/file
dataset:
  dataset_name: StereoDataset
  infer_dataset:
    data_sources:
      - dataset_name: GenericDataset
        data_file: /data/depth_net/annotations_test.txt
  workers: 4
  batch_size: 4
Use this command to run FoundationStereo engine inference:
INFER_JOB_ID=$(tao-client depth_net_stereo experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy depth_net inference -e /path/to/spec.yaml \
inference.trt_engine=/path/to/engine/file
Required arguments:
-e, --experiment_spec: The experiment specification file for inference. This should be the same as the tao inference specification file.
Optional arguments:
inference.trt_engine: The engine file for inference.
Sample usage:
This is an example of using the inference command to run inference with a TensorRT engine:
tao deploy depth_net inference -e $DEFAULT_SPEC \
results_dir=$RESULTS_DIR \
inference.trt_engine=$ENGINE_FILE
The visualization is stored in $RESULTS_DIR/images_annotated, and the predictions are stored
under $RESULTS_DIR/labels.
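For example, a quick way to review the outputs:

# List the rendered visualizations and the per-image prediction files.
ls $RESULTS_DIR/images_annotated
ls $RESULTS_DIR/labels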