
Deformable DETR with TAO Deploy

The Deformable DETR .etlt file generated from tao export is taken as input to tao-deploy to generate an optimized TensorRT engine. For 4.0.0, INT8 precision is not supported for Deformable DETR. For more information about training Deformable DETR, refer to the Deformable DETR training documentation.

Converting the .etlt File into a TensorRT Engine

The same spec file used with the tao deformable_detr export command can be reused here.

trt_config

The trt_config parameter provides options related to TensorRT generation.

trt_config:
  data_type: FP32
  workspace_size: 1024
  min_batch_size: 1
  opt_batch_size: 1
  max_batch_size: 1

Parameter      | Datatype     | Default | Description                                                     | Supported Values
---------------|--------------|---------|-----------------------------------------------------------------|-----------------
data_type      | string       | FP32    | The precision to be used for the TensorRT engine                | FP32/FP16
workspace_size | unsigned int | 1024    | The maximum workspace size for the TensorRT engine              | >1024
min_batch_size | unsigned int | 1       | The minimum batch size used for the optimization profile shape  | >0
opt_batch_size | unsigned int | 1       | The optimal batch size used for the optimization profile shape  | >0
max_batch_size | unsigned int | 1       | The maximum batch size used for the optimization profile shape  | >0
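The min/opt/max batch sizes define a TensorRT optimization profile over the engine's dynamic batch dimension. As a rough illustration, here is a minimal sketch of how these fields map onto the TensorRT Python builder API, assuming a plain (decrypted) ONNX model and an input tensor named "inputs" (both assumptions; tao-deploy handles all of this internally, including .etlt decryption):

# Minimal sketch, assuming a plain ONNX model. The tensor name "inputs"
# and the NCHW layout are assumptions, not part of the TAO spec.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, data_type="FP32", workspace_size=1024,
                 min_batch=1, opt_batch=1, max_batch=1):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    # workspace_size is interpreted as MB here, matching the 1024 default.
    config.max_workspace_size = workspace_size * (1 << 20)
    if data_type == "FP16":
        config.set_flag(trt.BuilderFlag.FP16)

    # min/opt/max_batch_size become an optimization profile over the
    # dynamic batch dimension of the input tensor.
    profile = builder.create_optimization_profile()
    h, w = 544, 960  # input_height and input_width from the spec file
    profile.set_shape("inputs",
                      (min_batch, 3, h, w),
                      (opt_batch, 3, h, w),
                      (max_batch, 3, h, w))
    config.add_optimization_profile(profile)
    return builder.build_serialized_network(network, config)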

Use the following command to run Deformable DETR engine generation:

tao-deploy deformable_detr gen_trt_engine -e /path/to/spec.yaml \
                                           encryption_key=<key> \
                                           model_path=/path/to/etlt/file \
                                           trt_engine=/path/to/engine/file \
                                           trt_config.data_type=<data_type>


Required Arguments

  • -e, --experiment_spec: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.

  • -k, --key: A user-specific encoding key to load a .etlt model.

  • model_path: The path to the .etlt model to be converted.

  • trt_engine: The path where the generated engine will be stored.

  • data_type: The precision of the generated engine; Deformable DETR supports only FP32 and FP16.

Sample Usage

Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:

tao-deploy deformable_detr gen_trt_engine -e $DEFAULT_SPEC \
                                           encryption_key=$YOUR_KEY \
                                           model_path=$ETLT_FILE \
                                           trt_engine=$ENGINE_FILE \
                                           trt_config.data_type=FP16
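To sanity-check the result, here is a minimal sketch, assuming TensorRT's Python bindings are available, that deserializes the generated engine and prints its bindings; the binding shapes should reflect the optimization profile configured above:

# Minimal sketch: inspect the generated engine's input/output bindings.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("/path/to/engine/file", "rb") as f:
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))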


Running Evaluation through the TensorRT Engine

The evaluation spec file is the same as the TAO evaluation spec file. Sample spec file:

num_gpus: 1
conf_threshold: 0.5
input_width: 960
input_height: 544
dataset_config:
  test_data_sources:
    image_dir: /data/raw-data/val2017/
    json_file: /data/raw-data/annotations/instances_val2017.json
  num_classes: 91
  batch_size: 8
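Before a long evaluation run, it can help to verify the dataset paths and class count in the spec. Here is a minimal sketch, assuming pycocotools is installed (the standard library for COCO-format annotations):

# Minimal sketch: sanity-check the COCO annotations and image directory.
import os
from pycocotools.coco import COCO

image_dir = "/data/raw-data/val2017/"
json_file = "/data/raw-data/annotations/instances_val2017.json"

coco = COCO(json_file)
print("images:", len(coco.getImgIds()))
# COCO 2017 category ids run up to 90, hence num_classes: 91 in the spec.
print("max category id:", max(coco.getCatIds()))

# Confirm a few annotated images actually exist on disk.
for img in coco.loadImgs(coco.getImgIds()[:5]):
    assert os.path.exists(os.path.join(image_dir, img["file_name"])), img["file_name"]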

Use the following command to run Deformable DETR engine evaluation:

tao-deploy deformable_detr evaluate -e /path/to/spec.yaml \
                                     model_path=/path/to/engine/file \
                                     output_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.

  • model_path: The engine file on which to run evaluation.

  • output_dir: The directory where evaluation results will be stored.

Sample Usage

Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:

tao-deploy deformable_detr evaluate -e $DEFAULT_SPEC \
                                     model_path=$ENGINE_FILE \
                                     output_dir=$RESULTS_DIR


Running Inference through the TensorRT Engine

The inference spec file is the same as the TAO inference spec file. Sample spec file:

conf_threshold: 0.5
input_width: 960
input_height: 544
dataset_config:
  infer_data_sources:
    image_dir: /data/raw-data/val2017/
    json_file: /data/raw-data/annotations/instances_val2017.json
  num_classes: 91
  batch_size: 8
  workers: 8
color_map:
  person: green
  car: red
  cat: blue
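The color_map assigns a drawing color to each class name in the annotated output images. As a rough illustration of the idea, here is a minimal sketch using PIL; the draw_detections helper and its detections format are hypothetical, not part of tao-deploy:

# Minimal, hypothetical sketch of what color_map controls: class names
# mapped to box-outline colors in the rendered images.
from PIL import Image, ImageDraw

color_map = {"person": "green", "car": "red", "cat": "blue"}

def draw_detections(image_path, detections, conf_threshold=0.5):
    """detections: iterable of (class_name, confidence, (x1, y1, x2, y2))."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for cls, conf, box in detections:
        if conf < conf_threshold or cls not in color_map:
            continue  # low-confidence or unmapped classes are skipped here
        draw.rectangle(box, outline=color_map[cls], width=2)
        draw.text((box[0], box[1] - 10), f"{cls} {conf:.2f}", fill=color_map[cls])
    return image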

Use the following command to run Deformable DETR engine inference:

tao-deploy deformable_detr inference -e /path/to/spec.yaml \
                                      model_path=/path/to/engine/file \
                                      output_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference specification file.

  • model_path: The engine file on which to run inference.

  • output_dir: The directory where inference results will be stored.

Sample Usage

Here’s an example of using the inference command to run inference with the TensorRT engine:

tao-deploy deformable_detr inference -e $DEFAULT_SPEC \
                                      model_path=$ENGINE_FILE \
                                      output_dir=$RESULTS_DIR

The visualizations will be stored in $RESULTS_DIR/images_annotated, and the KITTI-format predictions will be stored in $RESULTS_DIR/labels.
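If you want to post-process the predictions, here is a minimal sketch for reading the label files, assuming they follow the standard KITTI layout with the detection confidence as a trailing field:

# Minimal sketch: read KITTI-format predictions from $RESULTS_DIR/labels.
# Assumes the standard KITTI field order (class, truncation, occlusion,
# alpha, 2D bbox, 3D fields, optional score); only the class name, 2D box,
# and trailing confidence are extracted.
import os

def read_kitti_labels(label_dir):
    results = {}
    for name in sorted(os.listdir(label_dir)):
        if not name.endswith(".txt"):
            continue
        boxes = []
        with open(os.path.join(label_dir, name)) as f:
            for line in f:
                fields = line.split()
                boxes.append({
                    "class": fields[0],
                    # 2D box: left, top, right, bottom in pixels
                    "bbox": tuple(float(v) for v in fields[4:8]),
                    # a 16th field, when present, is the detection confidence
                    "score": float(fields[15]) if len(fields) > 15 else None,
                })
        results[name] = boxes
    return results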
