TAO Converter with YOLOv3
The tao-converter tool is provided with the TAO Toolkit to facilitate the deployment of TAO trained models on TensorRT and/or DeepStream. This section elaborates on how to generate a TensorRT engine using tao-converter.
For deployment platforms with an x86-based CPU and discrete GPUs, the tao-converter is distributed within the TAO docker. Therefore, we suggest using the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TAO docker includes TensorRT version 8.0.

For an x86 platform with discrete GPUs, the default TAO package includes the tao-converter built for TensorRT 8.2.5.1 with CUDA 11.4 and cuDNN 8.2. For any other version of CUDA and TensorRT, refer to the overview section for download links. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.
Unzip the zip file on the target machine.
Install the OpenSSL package using the command:
sudo apt-get install libssl-dev
Export the following environment variables:
$ export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"
Run the tao-converter using the sample command below and generate the engine. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.
Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.
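Before running a full conversion, it can help to confirm that the binary is executable and can locate the TensorRT libraries. A quick check, assuming you are in the directory where the zip file was extracted, is to print the usage text:

# Sanity check: should print the tao-converter usage text
# without any missing-library errors
chmod +x tao-converter
./tao-converter -h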
For the Jetson platform, the tao-converter is available to download in the NVIDIA developer zone. You may choose the version you wish to download as listed in the overview section. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.
Unzip the zip file on the target machine.
Install the OpenSSL package using the command:
sudo apt-get install libssl-dev
Export the following environment variables:
$ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"
For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older version of JetPack, upgrade to JetPack 5.0DP.
Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.
Run the tao-converter using the sample command below and generate the engine.

Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.
tao-converter [-h] -k <encryption_key>
-d <input_dimensions>
-o <comma separated output nodes>
[-c <path to calibration cache file>]
[-e <path to output engine>]
[-b <calibration batch size>]
[-m <maximum batch size of the TRT engine>]
[-t <engine datatype>]
[-w <maximum workspace size of the TRT Engine>]
[-i <input dimension ordering>]
[-p <optimization_profiles>]
[-s]
[-u <DLA_core>]
input_file
Required Arguments
input_file: The path to the .etlt model exported using tao yolo_v3 export.
-k: The key used to encode the .tlt model when doing the training.
-d: A comma-separated list of input dimensions that should match the dimensions used for tao yolo_v3 export.
-o: A comma-separated list of output blob names that should match the output configuration used for tao yolo_v3 export. For YOLOv3, set this argument to BatchedNMS.
-p: Optimization profiles for .etlt models with dynamic shape. Use a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. The input name for YOLOv3 is Input.
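For illustration, a minimal invocation that uses only the required arguments might look like the following sketch. It assumes a static-shape .etlt model with the 3x384x1248 input used in the sample at the end of this section; the file name and key are placeholders.

# Minimal sketch: static-shape model, required arguments only
tao-converter -k $KEY \
              -d 3,384,1248 \
              -o BatchedNMS \
              yolov3_resnet18_epoch_100.etlt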
Optional Arguments
-e: The path to save the engine to. The default path is ./saved.engine.
-t: The desired engine data type. The options are fp32, fp16, or int8. Selecting INT8 mode will generate a calibration cache.
-w: The maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).
-i: The input-dimension ordering. All other TAO commands use NCHW. The options are nchw, nhwc, and nc. The default value is nchw, so you can omit this argument for YOLOv3.
-s: A Boolean flag specifying whether to apply TensorRT strict-type constraints when building the TensorRT engine.
-u: Specifies the DLA core index when building the TensorRT engine on Jetson devices. Only needed if you want to use a DLA core.
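As an example of the Jetson-specific options, the sketch below builds an FP16 engine on DLA core 0. The engine path and core index are illustrative, not prescribed by the tool.

# Sketch: FP16 engine targeting DLA core 0 (Jetson devices only)
tao-converter -k $KEY \
              -d 3,384,1248 \
              -o BatchedNMS \
              -t fp16 \
              -u 0 \
              -e trt.fp16.dla0.engine \
              yolov3_resnet18_epoch_100.etlt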
INT8 Mode Arguments
-c: The path to the calibration cache file (only used in INT8 mode). The default value is ./cal.bin.
-b: The batch size used during the export step for INT8 calibration cache generation (default: 8).
-m: The maximum batch size for the TensorRT engine. The default value is 16. If out-of-memory issues occur, decrease the batch size accordingly. This parameter is only useful for .etlt models generated with static shape.
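Putting the INT8 arguments together, a conversion of a static-shape model might look like this sketch. It assumes a calibration cache named cal.bin was produced during tao yolo_v3 export; all paths are placeholders.

# Sketch: INT8 engine for a static-shape model using a calibration cache
tao-converter -k $KEY \
              -d 3,384,1248 \
              -o BatchedNMS \
              -c cal.bin \
              -t int8 \
              -b 8 \
              -m 16 \
              -e trt.int8.engine \
              yolov3_resnet18_epoch_100.etlt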
Sample Command
Here is a sample command for converting an exported YOLOv3 model.
tao-converter -k $KEY \
-p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
-e /export/trt.fp16.engine \
-t fp16 \
/ws/yolov3_resnet18_epoch_100.etlt
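After the engine is written, you can optionally sanity-check it with trtexec, which ships with TensorRT (this assumes trtexec is on your PATH; it is not part of the TAO package):

# Load the generated engine and run a quick timing pass
trtexec --loadEngine=/export/trt.fp16.engine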