CenterPose with TAO Deploy#
To generate an optimized TensorRT engine:

1. Generate a CenterPose .onnx file using tao model centerpose export.
2. Specify the .onnx file as the input to tao deploy centerpose gen_trt_engine.
For more information about training a CenterPose model, refer to the CenterPose training documentation.
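For example, the end-to-end flow looks like the following sketch. The export override keys (export.checkpoint and export.onnx_file) are assumptions based on the common TAO spec layout; refer to the CenterPose training documentation for the exact export spec.

# Step 1: Export a trained checkpoint to ONNX (override keys assumed)
tao model centerpose export -e /path/to/export_spec.yaml \
    export.checkpoint=/path/to/model.pth \
    export.onnx_file=/path/to/model.onnx

# Step 2: Convert the ONNX file into a TensorRT engine
tao deploy centerpose gen_trt_engine -e /path/to/spec.yaml \
    gen_trt_engine.onnx_file=/path/to/model.onnx \
    gen_trt_engine.trt_engine=/path/to/model.engine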
Converting an ONNX File into TensorRT Engine#
SPECS=$(tao-client centerpose get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
See also
For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.
To convert the .onnx file, you can reuse the spec file from the tao model centerpose export command.

The gen_trt_engine parameter defines TensorRT engine generation.
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  input_channel: 3
  input_width: 512
  input_height: 512
  tensorrt:
    data_type: fp32
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 2
    max_batch_size: 4
    calibration:
      cal_image_dir: /path/to/cal/images
      cal_cache_file: /path/to/cal.bin
      cal_batch_size: 10
      cal_batches: 1000
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| onnx_file | string | | The path to the .onnx model to be converted | |
| trt_engine | string | | The path where the generated TensorRT engine is saved | |
| input_channel | unsigned int | 3 | The input channel size. Only the value 3 is supported. | 3 |
| input_width | unsigned int | 512 | The input width | >0 |
| input_height | unsigned int | 512 | The input height | >0 |
| batch_size | unsigned int | -1 | The batch size of the ONNX model | >=-1 |
tensorrt#
The tensorrt parameter defines TensorRT engine generation.
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| data_type | string | fp32 | The precision to be used for the TensorRT engine | fp32/fp16/int8 |
| workspace_size | unsigned int | 1024 | The maximum workspace size for the TensorRT engine | >1024 |
| min_batch_size | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| opt_batch_size | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| max_batch_size | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
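min_batch_size, opt_batch_size, and max_batch_size define the TensorRT optimization profile for the engine's dynamic batch dimension: the engine accepts any batch size in the [min, max] range, and TensorRT tunes its kernels for the opt value. A sketch of a profile that serves batches of 1 through 8 and is tuned for 4:

tensorrt:
  data_type: fp16
  workspace_size: 1024
  min_batch_size: 1   # smallest batch the engine accepts
  opt_batch_size: 4   # batch size TensorRT selects kernels for
  max_batch_size: 8   # largest batch the engine accepts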
calibration#
The calibration parameter defines TensorRT engine generation with PTQ INT8 calibration.
| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| cal_image_dir | string | | The list of paths that contain images used for calibration | |
| cal_cache_file | string | | The path where the calibration cache file is written | |
| cal_batch_size | unsigned int | 1 | The batch size used for each calibration batch | >0 |
| cal_batches | unsigned int | 1 | The number of batches to run during calibration | >0 |
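Calibration takes effect only when tensorrt.data_type is set to int8. A minimal INT8 sketch, with placeholder paths, combining the precision setting with the calibration block:

tensorrt:
  data_type: int8
  workspace_size: 1024
  min_batch_size: 1
  opt_batch_size: 2
  max_batch_size: 4
  calibration:
    cal_image_dir: /path/to/cal/images   # representative images from the training domain
    cal_cache_file: /path/to/cal.bin     # calibration scale cache written here
    cal_batch_size: 10
    cal_batches: 1000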
Use the following command to run CenterPose engine generation:
GEN_TRT_ENGINE_JOB_ID=$(tao-client centerpose experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$SPECS")
See also
For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.
tao deploy centerpose gen_trt_engine -e /path/to/spec.yaml \
   results_dir=/path/to/results \
   gen_trt_engine.onnx_file=/path/to/onnx/file \
   gen_trt_engine.trt_engine=/path/to/engine/file \
   gen_trt_engine.tensorrt.data_type=<data_type>
Required Arguments

-e, --experiment_spec: The experiment spec file to set up TensorRT engine generation.

Optional Arguments

results_dir: The directory where the JSON status-log file is saved.
gen_trt_engine.onnx_file: The .onnx model to be converted.
gen_trt_engine.trt_engine: The path where the generated engine is stored.
gen_trt_engine.tensorrt.data_type: The precision to be exported.
Sample Usage
The following is an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:
tao deploy centerpose gen_trt_engine -e $DEFAULT_SPEC \
   gen_trt_engine.onnx_file=$ONNX_FILE \
   gen_trt_engine.trt_engine=$ENGINE_FILE \
   gen_trt_engine.tensorrt.data_type=fp16
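An INT8 engine additionally needs calibration data. The following sketch assumes $CAL_IMAGES_DIR and $CAL_CACHE_FILE are placeholders you define, and that the nested override keys mirror the spec structure shown above:

tao deploy centerpose gen_trt_engine -e $DEFAULT_SPEC \
   gen_trt_engine.onnx_file=$ONNX_FILE \
   gen_trt_engine.trt_engine=$ENGINE_FILE \
   gen_trt_engine.tensorrt.data_type=int8 \
   gen_trt_engine.tensorrt.calibration.cal_image_dir=$CAL_IMAGES_DIR \
   gen_trt_engine.tensorrt.calibration.cal_cache_file=$CAL_CACHE_FILE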
Running Evaluation Through a TensorRT Engine#
You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:
evaluate:
  trt_engine: /path/to/engine/file
  opencv: False
  eval_num_symmetry: 1
  results_dir: /path/to/save/results
dataset:
  test_data: /path/to/testing/images/and/json/files
  batch_size: 2
  workers: 4
Use the following command to run CenterPose engine evaluation:
EVAL_JOB_ID=$(tao-client centerpose experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")
See also
For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.
tao deploy centerpose evaluate -e /path/to/spec.yaml \
   results_dir=/path/to/results \
   evaluate.trt_engine=/path/to/engine/file
Required Arguments

-e, --experiment_spec: The experiment spec file for evaluation. This must be the same as the tao evaluate spec file.

Optional Arguments

results_dir: The directory where the JSON status-log file and evaluation results are saved.
evaluate.trt_engine: The engine file for evaluation.
Sample Usage
The following is an example of using the evaluate command to run evaluation with a TensorRT engine:
tao deploy centerpose evaluate -e $DEFAULT_SPEC \
   results_dir=$RESULTS_DIR \
   evaluate.trt_engine=$ENGINE_FILE
Running Inference Through a TensorRT Engine#
You can reuse the TAO inference spec file for inference through a TensorRT engine. The camera intrinsics in the spec (principle_point_x, principle_point_y, focal_length_x, focal_length_y, and skew) should match the camera used to capture the inference images. The following is a sample spec file:
inference:
  trt_engine: /path/to/engine/file
  visualization_threshold: 0.3
  principle_point_x: 298.3
  principle_point_y: 392.1
  focal_length_x: 651.2
  focal_length_y: 651.2
  skew: 0.0
  axis_size: 0.5
  use_pnp: True
  save_json: True
  save_visualization: True
  opencv: True
dataset:
  inference_data: /path/to/inference/files
  batch_size: 1
  workers: 4
Use the following command to run CenterPose engine inference:
INFERENCE_JOB_ID=$(tao-client centerpose experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")
See also
For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.
tao deploy centerpose inference -e /path/to/spec.yaml \
   results_dir=/path/to/results \
   inference.trt_engine=/path/to/engine/file
Required Arguments

-e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference spec file.

Optional Arguments

results_dir: The directory where the JSON status-log file and inference results are saved.
inference.trt_engine: The engine file for inference.
Sample Usage
The following is an example of using the inference command to run inference with a TensorRT engine:
tao deploy centerpose inference -e $DEFAULT_SPEC \
   results_dir=$RESULTS_DIR \
   inference.trt_engine=$ENGINE_FILE
The visualization results are stored in $RESULTS_DIR.