CenterPose with TAO Deploy#

To generate an optimized TensorRT engine:

  1. Generate a CenterPose .onnx file using tao model centerpose export.

  2. Specify the .onnx file as the input to tao deploy centerpose gen_trt_engine.

For more information about training a CenterPose model, refer to the CenterPose training documentation.

Converting an ONNX File into TensorRT Engine#

Use the following command to get the default spec file for engine generation:

```shell
SPECS=$(tao-client centerpose get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
```

See also

For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.

| Parameter | Datatype | Default | Description | Supported Values |
|-----------|----------|---------|-------------|------------------|
| `onnx_file` | string | | The path to the exported `.onnx` model file | |
| `trt_engine` | string | | The path where the generated TensorRT engine is saved | |
| `input_channel` | unsigned int | 3 | The input channel size; only the value 3 is supported | 3 |
| `input_width` | unsigned int | 512 | The input width | >0 |
| `input_height` | unsigned int | 512 | The input height | >0 |
| `batch_size` | unsigned int | -1 | The batch size of the ONNX model (-1 enables a dynamic batch size) | >=-1 |
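Taken together, these parameters form a `gen_trt_engine` spec. The following is an illustrative sketch only: the file paths are placeholders, and the top-level `gen_trt_engine` key is assumed to follow the same spec-file layout as the `evaluate` and `inference` examples in this document.

```yaml
gen_trt_engine:
  onnx_file: /path/to/model.onnx       # exported with tao model centerpose export
  trt_engine: /path/to/model.engine    # output path for the TensorRT engine
  input_channel: 3                     # only 3 is supported
  input_width: 512
  input_height: 512
  batch_size: -1                       # -1 enables a dynamic batch size
```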

tensorrt#

The tensorrt parameter defines TensorRT engine generation.

| Parameter | Datatype | Default | Description | Supported Values |
|-----------|----------|---------|-------------|------------------|
| `data_type` | string | fp32 | The precision to be used for the TensorRT engine | fp32/fp16/int8 |
| `workspace_size` | unsigned int | 1024 | The maximum workspace size (in MB) for the TensorRT engine | >1024 |
| `min_batch_size` | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| `opt_batch_size` | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| `max_batch_size` | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
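For example, an fp16 engine with a dynamic-batch optimization profile could be requested as in the following sketch (the batch-size values are illustrative, and the nesting of `tensorrt` under `gen_trt_engine` is assumed from the standard TAO spec layout):

```yaml
gen_trt_engine:
  batch_size: -1          # dynamic batch size in the ONNX model
  tensorrt:
    data_type: fp16
    workspace_size: 1024  # MB
    min_batch_size: 1     # smallest batch the engine must accept
    opt_batch_size: 4     # batch size TensorRT optimizes for
    max_batch_size: 8     # largest batch the engine must accept
```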

calibration#

The calibration parameter defines TensorRT engine generation with PTQ INT8 calibration.

| Parameter | Datatype | Default | Description | Supported Values |
|-----------|----------|---------|-------------|------------------|
| `cal_image_dir` | string list | | The list of directories containing the images used for calibration | |
| `cal_cache_file` | string | | The path where the calibration cache file is written | |
| `cal_batch_size` | unsigned int | 1 | The number of images per batch during calibration | >0 |
| `cal_batches` | unsigned int | 1 | The number of batches to use for calibration | >0 |
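For INT8 engines, the `calibration` block sits alongside `data_type` under `tensorrt`. The following is a hedged sketch: the paths are placeholders, the counts are illustrative, and with these values calibration consumes `cal_batch_size` × `cal_batches` = 800 images.

```yaml
gen_trt_engine:
  tensorrt:
    data_type: int8
    calibration:
      cal_image_dir:
        - /path/to/calibration/images   # one or more image directories
      cal_cache_file: /path/to/cal.bin  # calibration cache written here
      cal_batch_size: 8                 # images per calibration batch
      cal_batches: 100                  # 8 x 100 = 800 calibration images
```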

Use the following command to run CenterPose engine generation:

```shell
GEN_TRT_ENGINE_JOB_ID=$(tao-client centerpose experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$SPECS")
```

See also

For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.

Running Evaluation Through a TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:

```yaml
evaluate:
  trt_engine: /path/to/engine/file
  opencv: False
  eval_num_symmetry: 1
  results_dir: /path/to/save/results
dataset:
  test_data: /path/to/testing/images/and/json/files
  batch_size: 2
  workers: 4
```

Use the following command to run CenterPose engine evaluation:

```shell
EVAL_JOB_ID=$(tao-client centerpose experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")
```

See also

For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.

Running Inference Through a TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:

```yaml
inference:
  trt_engine: /path/to/engine/file
  visualization_threshold: 0.3
  principle_point_x: 298.3
  principle_point_y: 392.1
  focal_length_x: 651.2
  focal_length_y: 651.2
  skew: 0.0
  axis_size: 0.5
  use_pnp: True
  save_json: True
  save_visualization: True
  opencv: True
dataset:
  inference_data: /path/to/inference/files
  batch_size: 1
  workers: 4
```

Use the following command to run CenterPose engine inference:

```shell
INFERENCE_JOB_ID=$(tao-client centerpose experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GEN_TRT_ENGINE_JOB_ID --specs "$SPECS")
```

See also

For information on how to create an experiment using the remote client, refer to the Creating an experiment section in the Remote Client documentation.