Mask Grounding DINO with TAO Deploy#

To generate an optimized TensorRT engine, a Mask Grounding DINO .onnx file, which is first generated using tao model mask_grounding_dino export, is taken as input by tao deploy mask_grounding_dino gen_trt_engine. For more information about training a Mask Grounding DINO model, refer to the Mask Grounding DINO training documentation.
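
For reference, the export step that produces the .onnx file typically looks like the following sketch. The spec path is a placeholder and the export.onnx_file override is assumed from the common TAO export pattern; consult the Mask Grounding DINO training documentation for the authoritative parameters.

tao model mask_grounding_dino export -e /path/to/export_spec.yaml \
           export.onnx_file=/path/to/onnx_file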

Converting ONNX File into TensorRT Engine#

To convert the .onnx file, you can reuse the default experiment spec file from the tao model mask_grounding_dino export command.

gen_trt_engine#

The gen_trt_engine parameter defines TensorRT engine generation.

gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  input_channel: 3
  input_width: 960
  input_height: 544
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 10
    max_batch_size: 10

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | The index of the GPU used to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine is stored. This only works with tao-deploy. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | 3 | | | FALSE |
| input_width | int | Width of the input image tensor. | 960 | 32 | | | FALSE |
| input_height | int | Height of the input image tensor. | 544 | 32 | | | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | The batch size of the input tensor for the engine. A value of -1 implies dynamic tensor shapes. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder. | | | | | FALSE |

tensorrt#

The tensorrt parameter defines the hyperparameters for the TensorRT engine builder.

| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| data_type | string | The precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | The size (in MB) of the workspace TensorRT has to run its optimization tactics and generate the TensorRT engine. | 1024 | | | | FALSE |
| min_batch_size | int | The minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | The optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | The maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
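
The three batch-size fields together form the engine's optimization profile and must satisfy min_batch_size <= opt_batch_size <= max_batch_size. The following is a minimal sketch of a dynamic-batch configuration; the profile values are illustrative, not recommendations.

gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  batch_size: -1          # -1 enables dynamic tensor shapes
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1     # smallest batch the engine accepts
    opt_batch_size: 8     # batch size TensorRT tunes tactics for
    max_batch_size: 16    # largest batch the engine accepts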

Use the following command to run Mask Grounding DINO engine generation:

tao deploy mask_grounding_dino gen_trt_engine -e /path/to/spec.yaml \
           gen_trt_engine.onnx_file=/path/to/onnx/file \
           gen_trt_engine.trt_engine=/path/to/engine/file \
           gen_trt_engine.tensorrt.data_type=<data_type>

Required Arguments#

  • -e, --experiment_spec: The experiment spec file to set up TensorRT engine generation

Optional Arguments#

  • gen_trt_engine.onnx_file: The .onnx model to be converted

  • gen_trt_engine.trt_engine: The path where the generated engine is stored

  • gen_trt_engine.tensorrt.data_type: The precision to be exported

Sample Usage#

The following is an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:

tao deploy mask_grounding_dino gen_trt_engine -e $DEFAULT_SPEC \
           gen_trt_engine.onnx_file=$ONNX_FILE \
           gen_trt_engine.trt_engine=$ENGINE_FILE \
           gen_trt_engine.tensorrt.data_type=FP16
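
If the trtexec utility that ships with TensorRT is available on your system, you can optionally confirm that the generated engine deserializes and runs; this is a convenience check outside the TAO workflow.

# Load the serialized engine and run a timing pass with generated inputs
trtexec --loadEngine=$ENGINE_FILE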

Running Evaluation through a TensorRT Engine#

You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:

evaluate:
  trt_engine: /path/to/engine/file
  conf_threshold: 0.0
  input_width: 960
  input_height: 544
dataset:
  test_data_sources:
    image_dir: /data/raw-data/val2017/
    json_file: /data/raw-data/annotations/instances_val2017.json
  max_labels: 80
  batch_size: 8

Use the following command to run Mask Grounding DINO engine evaluation:

tao deploy mask_grounding_dino evaluate -e /path/to/spec.yaml \
           evaluate.trt_engine=/path/to/engine/file

Required Arguments#

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate spec file.

Optional Arguments#

  • evaluate.trt_engine: The engine file for evaluation

Sample Usage#

This is an example of using the evaluate command to run evaluation with a TensorRT engine:

tao deploy mask_grounding_dino evaluate -e $DEFAULT_SPEC \
           evaluate.trt_engine=$ENGINE_FILE
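
As with engine generation, any field from the evaluation spec can be overridden on the command line using the same dotted notation. For example, to lower the evaluation batch size (the value here is illustrative):

tao deploy mask_grounding_dino evaluate -e $DEFAULT_SPEC \
           evaluate.trt_engine=$ENGINE_FILE \
           dataset.batch_size=4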

Running Inference through a TensorRT Engine#

You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:

inference:
  conf_threshold: 0.5
  input_width: 960
  input_height: 544
  trt_engine: /path/to/engine/file
  color_map:
    "black cat": green
    car: red
    person: blue
dataset:
  infer_data_sources:
    - image_dir: /path/to/coco/images/val2017/
      captions: ["black cat", "car", "person"]
  max_labels: 80
  batch_size: 8

Use the following command to run Mask Grounding DINO engine inference:

tao deploy mask_grounding_dino inference -e /path/to/spec.yaml \
           inference.trt_engine=/path/to/engine/file

Required Arguments#

  • -e, --experiment_spec: The experiment spec file for inference. This must be the same as the tao inference spec file.

Optional Arguments#

  • inference.trt_engine: The engine file for inference

Sample Usage#

The following is an example of using the inference command to run inference with a TensorRT engine:

tao deploy mask_grounding_dino inference -e $DEFAULT_SPEC \
           results_dir=$RESULTS_DIR \
           inference.trt_engine=$ENGINE_FILE

The visualizations are stored in $RESULTS_DIR/images_annotated, and the KITTI-format predictions are stored under $RESULTS_DIR/labels.
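
Assuming the default results layout, the output directory then looks roughly like this:

$RESULTS_DIR/
├── images_annotated/   # annotated input images
└── labels/             # per-image predictions in KITTI format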