RT-DETR with TAO Deploy#

To generate an optimized TensorRT engine, the tao deploy rtdetr gen_trt_engine command takes as input an RT-DETR ONNX file, which is first generated using tao model rtdetr export. For more information about training an RT-DETR model, refer to the RT-DETR training documentation.

Note

  • Throughout this documentation, you will see references to $EXPERIMENT_ID and $DATASET_ID in the FTMS Client sections.

    • For instructions on creating a dataset using the remote client, see the Creating a dataset section in the Remote Client documentation.

    • For instructions on creating an experiment using the remote client, see the Creating an experiment section in the Remote Client documentation.

  • The spec format is YAML for TAO Launcher and JSON for FTMS Client.

  • File-related parameters, such as dataset paths or pretrained model paths, are required only for TAO Launcher and not for FTMS Client.

Converting RT-DETR .onnx File into TensorRT Engine#

To convert the .onnx file, you can reuse the spec file from the tao model rtdetr export command.

gen_trt_engine#

The gen_trt_engine parameter defines TensorRT engine generation.

Use the following command to get an experiment spec file for RT-DETR:

SPECS=$(tao-client rtdetr get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| onnx_file | string | | The path to the exported .onnx model | |
| trt_engine | string | | The path where the generated TensorRT engine is saved | |
| input_channel | unsigned int | 3 | The input channel size. Only a value of 3 is supported. | 3 |
| input_width | unsigned int | 960 | The input width | >0 |
| input_height | unsigned int | 544 | The input height | >0 |
| batch_size | unsigned int | -1 | The batch size of the ONNX model | >=-1 |
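
For reference, the following is a minimal sketch of a TAO Launcher (YAML) spec using these parameters. The file paths are illustrative placeholders, and the comment on batch_size assumes, as in other TAO Deploy configurations, that -1 keeps the batch dimension dynamic:

gen_trt_engine:
  onnx_file: /path/to/model.onnx    # produced by tao model rtdetr export
  trt_engine: /path/to/engine/file  # where the generated engine is written
  input_channel: 3                  # only 3 is supported
  input_width: 960
  input_height: 544
  batch_size: -1                    # assumed to keep the batch dimension dynamic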

tensorrt#

The tensorrt parameter defines the TensorRT engine generation.

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| data_type | string | fp32 | The precision to be used for the TensorRT engine | fp32/fp16/int8 |
| workspace_size | unsigned int | 1024 | The maximum workspace size for the TensorRT engine | >1024 |
| min_batch_size | unsigned int | 1 | The minimum batch size used for the optimization profile shape | >0 |
| opt_batch_size | unsigned int | 1 | The optimal batch size used for the optimization profile shape | >0 |
| max_batch_size | unsigned int | 1 | The maximum batch size used for the optimization profile shape | >0 |
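
As an illustration, here is a sketch of the tensorrt sub-config that builds an FP16 engine with a dynamic-batch optimization profile. The profile values are illustrative, and the min/opt/max settings are assumed to take effect only when the ONNX batch size is dynamic (batch_size: -1 above):

gen_trt_engine:
  tensorrt:
    data_type: fp16       # one of fp32/fp16/int8
    workspace_size: 2048  # maximum TensorRT workspace
    min_batch_size: 1     # smallest batch the engine must handle
    opt_batch_size: 8     # batch size TensorRT optimizes for
    max_batch_size: 16    # largest batch the engine must handle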

calibration#

The calibration parameter defines the settings for PTQ INT8 calibration during TensorRT engine generation.

| Parameter | Datatype | Default | Description | Supported Values |
|---|---|---|---|---|
| cal_image_dir | string list | | The list of paths that contain images used for calibration | |
| cal_cache_file | string | | The path to the calibration cache file to be dumped | |
| cal_batch_size | unsigned int | 1 | The batch size per batch during calibration | >0 |
| cal_batches | unsigned int | 1 | The number of batches to calibrate | >0 |

Note

For RT-DETR, int8 calibration is only supported for the ResNet series of backbones.
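
As an example, the following hedged sketch enables INT8 with PTQ calibration for a ResNet-backbone model. It assumes the calibration block nests under tensorrt in the spec, as in other TAO Deploy configurations, and the paths and batch counts are illustrative:

gen_trt_engine:
  tensorrt:
    data_type: int8
    calibration:
      cal_image_dir:
        - /data/raw-data/train2017/  # images sampled for calibration
      cal_cache_file: /path/to/cal.bin
      cal_batch_size: 8   # images per calibration batch
      cal_batches: 100    # 8 x 100 = 800 images used in total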

Use the following command to run RT-DETR engine generation:

GTE_JOB_ID=$(tao-client rtdetr experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")

See also

The Export job ID is the job ID of the tao-client rtdetr experiment-run-action --action export command.

Running Evaluation through TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:

evaluate:
  trt_engine: /path/to/engine/file
  conf_threshold: 0.0
  input_width: 640
  input_height: 640
dataset:
  test_data_sources:
    image_dir: /data/raw-data/val2017/
    json_file: /data/raw-data/annotations/instances_val2017.json
  num_classes: 80
  batch_size: 8

Use the following commands to fetch the evaluation spec and run RT-DETR engine evaluation:

SPECS=$(tao-client rtdetr get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)

EVAL_JOB_ID=$(tao-client rtdetr experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

Running Inference through TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:

inference:
  conf_threshold: 0.5
  input_width: 640
  input_height: 640
  trt_engine: /path/to/engine/file
  color_map:
    person: green
    car: red
    cat: blue
dataset:
  infer_data_sources:
    image_dir: ["/data/raw-data/val2017/"]
    classmap: /path/to/coco/annotations/coco_classmap.txt
  num_classes: 80
  batch_size: 8

Use the following commands to fetch the inference spec and run RT-DETR engine inference:

SPECS=$(tao-client rtdetr get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)

INFERENCE_JOB_ID=$(tao-client rtdetr experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")

The visualizations are stored under $RESULTS_DIR/images_annotated, and KITTI-format predictions are stored under $RESULTS_DIR/labels.