NVIDIA TAO Toolkit v5.2.0

TRTEXEC with Deformable-DETR

The trtexec tool is a command-line wrapper included as part of the TensorRT samples. TAO 5.0.0 exposes the trtexec tool in the TAO Deploy container (or task group when run via the launcher) for deploying models on x86-based systems with discrete GPUs. To run trtexec on other platforms, such as Jetson devices, or with versions of TensorRT that the TAO containers do not use by default, follow the official TensorRT documentation on how to get trtexec.
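As a rough sketch of how this is typically invoked, the commands below show trtexec run through the TAO launcher's deploy task group and directly inside the TAO Deploy container. The launcher task name and the container image tag shown here are assumptions for illustration only; check the launcher's --help output and NGC for the exact names and tags that match your installation.

# Assumed launcher invocation: trtexec exposed as a task in the deploy task group
tao deploy trtexec --onnx=/path/to/model.onnx --saveEngine=/path/to/model.engine

# Assumed direct invocation inside the TAO Deploy container (image tag is illustrative)
docker run --rm --gpus all \
    -v /local/workspace:/workspace \
    nvcr.io/nvidia/tao/tao-toolkit:5.2.0-deploy \
    trtexec --onnx=/workspace/model.onnx --saveEngine=/workspace/model.engine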

This section describes how to generate a TensorRT engine using trtexec, which allows you to deploy TAO-trained models on TensorRT, Triton, and DeepStream.

To generate a .onnx file for Deformable DETR (D-DETR), refer to the Deformable DETR documentation. For instructions on generating an INT8 calibration file, refer to the Deformable DETR TAO Deploy documentation.

trtexec --onnx=/path/to/model.onnx \
        --minShapes=inputs:1x3x544x960 \
        --optShapes=inputs:8x3x544x960 \
        --maxShapes=inputs:16x3x544x960 \
        --calib=/path/to/int8/calib.txt \
        --fp16 \
        --int8 \
        --saveEngine=/path/to/save/trt/model.engine
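Once the engine is built, you can do a quick sanity check and latency benchmark by loading it back into trtexec. The example below is a minimal sketch that assumes the engine path and input tensor name (inputs) from the command above, and picks a batch size inside the min/max range used at build time.

trtexec --loadEngine=/path/to/save/trt/model.engine \
        --shapes=inputs:8x3x544x960

trtexec reports throughput and per-inference latency for the chosen shape, which is a convenient check before handing the engine to Triton or DeepStream.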
