Grounding DINO with TAO Deploy#
To generate an optimized TensorRT engine, a Grounding DINO .onnx file, which is first generated using tao model grounding_dino export,
is taken as input to tao deploy grounding_dino gen_trt_engine. For more information about training a Grounding DINO model,
refer to the Grounding DINO training documentation.
Converting ONNX File into TensorRT Engine#
To convert the .onnx file, you can reuse the spec file from the tao model grounding_dino export command.
gen_trt_engine#
The gen_trt_engine parameter defines TensorRT engine generation.
Use the following command to get an experiment spec file for Grounding DINO:
SPECS=$(tao-client grounding_dino get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID)
gen_trt_engine:
  onnx_file: /path/to/onnx_file
  trt_engine: /path/to/trt_engine
  input_channel: 3
  input_width: 960
  input_height: 544
  tensorrt:
    data_type: fp16
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 10
    max_batch_size: 10
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| results_dir | string | Path to where all the assets generated from a task are stored. | | | | | FALSE |
| gpu_id | int | The index of the GPU to build the TensorRT engine. | 0 | | | | FALSE |
| onnx_file | string | Path to the ONNX model file. | ??? | | | | FALSE |
| trt_engine | string | Path where the generated TensorRT engine from gen_trt_engine is stored. This only works with tao-deploy. | | | | | FALSE |
| input_channel | int | Number of channels in the input tensor. | 3 | 3 | | | FALSE |
| input_width | int | Width of the input image tensor. | 960 | 32 | | | FALSE |
| input_height | int | Height of the input image tensor. | 544 | 32 | | | FALSE |
| opset_version | int | Operator set version of the ONNX model used to generate the TensorRT engine. | 17 | 1 | | | FALSE |
| batch_size | int | The batch size of the input tensor for the engine. A value of -1 implies dynamic tensor shapes. | -1 | -1 | | | FALSE |
| verbose | bool | Flag to enable verbose TensorRT logging. | False | | | | FALSE |
| tensorrt | collection | Hyperparameters to configure the TensorRT engine builder. | | | | | FALSE |
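Because onnx_file, input_channel, input_width, and input_height must agree with the exported model, it can help to confirm the input shapes recorded in the ONNX graph before building the engine. The following is a minimal sketch, assuming the onnx Python package is installed; the tensor names and dimensions printed depend on how the model was exported:

import onnx

# Minimal sanity check: print every graph input and its shape so the
# values can be compared against input_channel, input_width, and
# input_height in the spec above.
model = onnx.load("/path/to/onnx_file")
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)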
tensorrt#
The tensorrt parameter configures the TensorRT engine builder.
| Field | value_type | Description | default_value | valid_min | valid_max | valid_options | automl_enabled |
|---|---|---|---|---|---|---|---|
| data_type | string | The precision to be set for building the TensorRT engine. | FP32 | | | FP32,FP16 | FALSE |
| workspace_size | int | The size (in MB) of the workspace TensorRT has to run its optimization tactics and generate the TensorRT engine. | 1024 | | | | FALSE |
| min_batch_size | int | The minimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| opt_batch_size | int | The optimum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
| max_batch_size | int | The maximum batch size in the optimization profile for the input tensor of the TensorRT engine. | 1 | | | | FALSE |
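To make the min/opt/max settings concrete: at the TensorRT level they correspond to a single optimization profile over the dynamic batch dimension. The sketch below, written against the TensorRT Python API, shows the rough equivalent of the spec above; the tensor name "inputs" and the NCHW layout are placeholders for illustration, not the model's actual binding names:

import tensorrt as trt

# Rough TensorRT-level equivalent of min/opt/max_batch_size: one
# optimization profile over the batch dimension of a hypothetical
# NCHW image input. Grounding DINO also has text inputs, which would
# need shape ranges of their own.
builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
config = builder.create_builder_config()
profile = builder.create_optimization_profile()
profile.set_shape("inputs",
                  min=(1, 3, 544, 960),   # min_batch_size
                  opt=(10, 3, 544, 960),  # opt_batch_size
                  max=(10, 3, 544, 960))  # max_batch_size
config.add_optimization_profile(profile)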
Use the following command to run Grounding DINO engine generation:
GTE_JOB_ID=$(tao-client grounding_dino experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID --specs "$SPECS")
See also
The Export job ID is the job ID of the tao-client grounding_dino experiment-run-action --action export command.
tao deploy grounding_dino gen_trt_engine -e /path/to/spec.yaml \
            gen_trt_engine.onnx_file=/path/to/onnx/file \
            gen_trt_engine.trt_engine=/path/to/engine/file \
            gen_trt_engine.tensorrt.data_type=<data_type>
Required Arguments
- -e, --experiment_spec: The experiment spec file to set up TensorRT engine generation.
Optional Arguments
- gen_trt_engine.onnx_file: The .onnx model to be converted
- gen_trt_engine.trt_engine: The path where the generated engine will be stored
- gen_trt_engine.tensorrt.data_type: The precision to be exported
Sample Usage
Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:
tao deploy grounding_dino gen_trt_engine -e $DEFAULT_SPEC \
            gen_trt_engine.onnx_file=$ONNX_FILE \
            gen_trt_engine.trt_engine=$ENGINE_FILE \
            gen_trt_engine.tensorrt.data_type=FP16
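After generation, you can quickly inspect the engine's I/O bindings to confirm that the expected shapes and tensors are present. This is a minimal sketch assuming the TensorRT 8.5+ Python bindings (which expose the I/O tensor API); the engine path is a placeholder:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Deserialize the engine produced by gen_trt_engine and list its
# I/O tensors (name, direction, and shape).
with open("/path/to/engine/file", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))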
Running Evaluation through a TensorRT Engine#
You can reuse the TAO evaluation spec file for evaluation through a TensorRT engine. The following is a sample spec file:
evaluate:
  trt_engine: /path/to/engine/file
  conf_threshold: 0.0
  input_width: 960
  input_height: 544
dataset:
  test_data_sources:
    image_dir: /data/raw-data/val2017/
    json_file: /data/raw-data/annotations/instances_val2017.json
  max_labels: 80
  batch_size: 8
Use the following command to run Grounding DINO engine evaluation:
EVAL_JOB_ID=$(tao-client grounding_dino experiment-run-action --action evaluate --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy grounding_dino evaluate -e /path/to/spec.yaml \
            evaluate.trt_engine=/path/to/engine/file
Required Arguments
- -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate spec file.
Optional Arguments
- evaluate.trt_engine: The engine file for evaluation.
Sample Usage
The following is an example of using the evaluate command to run evaluation with a TensorRT engine:
tao deploy grounding_dino evaluate -e $DEFAULT_SPEC \
            evaluate.trt_engine=$ENGINE_FILE
Running Inference through a TensorRT Engine#
You can reuse the TAO inference spec file for inference through a TensorRT engine. The following is a sample spec file:
inference:
  conf_threshold: 0.5
  input_width: 960
  input_height: 544
  trt_engine: /path/to/engine/file
  color_map:
    "black cat": green
    car: red
    person: blue
dataset:
  infer_data_sources:
    - image_dir: /path/to/coco/images/val2017/
      captions: ["black cat", "car", "person"]
  max_labels: 80
  batch_size: 8
Use the following command to run Grounding DINO engine inference:
INFER_JOB_ID=$(tao-client grounding_dino experiment-run-action --action inference --id $EXPERIMENT_ID --parent_job_id $GTE_JOB_ID --specs "$SPECS")
tao deploy grounding_dino inference -e /path/to/spec.yaml \
            inference.trt_engine=/path/to/engine/file
Required Arguments
- -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference spec file.
Optional Arguments
- inference.trt_engine: The engine file for inference.
Sample Usage
The following is an example of using the inference command to run inference with a TensorRT engine:
tao deploy grounding_dino inference -e $DEFAULT_SPEC \
            results_dir=$RESULTS_DIR \
            inference.trt_engine=$ENGINE_FILE
The visualization is stored in $RESULTS_DIR/images_annotated, and the KITTI-format predictions are stored
under $RESULTS_DIR/labels.
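If you want to post-process the predictions, the label files can be read back with a few lines of Python. This is a minimal sketch assuming the standard KITTI 2D layout (class name in the first column, bounding-box corners x1 y1 x2 y2 in columns 5-8); adjust the indices if your TAO version appends extra columns such as a confidence score:

import glob
import os

# Minimal parsing sketch for the KITTI-style label files written by
# inference, under the column-layout assumption described above.
# Note: multi-word captions (e.g. "black cat") may need smarter
# splitting than whitespace.
results_dir = "/path/to/results"  # i.e. $RESULTS_DIR
for label_path in sorted(glob.glob(os.path.join(results_dir, "labels", "*.txt"))):
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            cls, bbox = fields[0], [float(v) for v in fields[4:8]]
            print(os.path.basename(label_path), cls, bbox)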