TAO Converter with FasterRCNN
The `tao-converter` tool is provided with the TAO Toolkit to facilitate the deployment of TAO-trained models on TensorRT and/or DeepStream. This section elaborates on how to generate a TensorRT engine using `tao-converter`.
For deployment platforms with an x86-based CPU and discrete GPUs, the `tao-converter` is distributed within the TAO docker. Therefore, we suggest using the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TAO docker includes TensorRT version 8.0.
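If you generate the engine inside the TAO docker, you can start an interactive session along the following lines. This is a minimal sketch, not the verbatim procedure: the image name, tag, and mounted path are assumptions, so substitute the image that matches your TAO release and your own experiment directory.

# Hypothetical session; replace the image tag and mount path for your setup.
docker run --runtime=nvidia -it --rm \
    -v /path/to/tao-experiments:/workspace/tao-experiments \
    nvcr.io/nvidia/tao/tao-toolkit-tf:<tag> /bin/bash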
1. Copy `/opt/nvidia/tools/tao-converter` to the target machine.
2. Install TensorRT for the respective target machine.
3. For FasterRCNN, you need to build TensorRT open source software (OSS) on the machine. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo. A sketch of installing the rebuilt plugin library follows this list.
4. Run `tao-converter` using the sample command below and generate the engine.
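After the TensorRT OSS build completes, the stock TensorRT plugin library is typically replaced with the freshly built one. The following is a minimal sketch, assuming the docker's TensorRT 8.0 install: the build output directory, library version suffix, and install path are assumptions, so check them against your system.

# Sketch only: paths and the version suffix are assumptions for illustration.
# Back up the stock plugin, swap in the OSS build, then refresh the linker cache.
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.0.1 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.0.1.bak
sudo cp <TensorRT-OSS-build-dir>/libnvinfer_plugin.so.8.0.1 \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.0.1
sudo ldconfig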
For the Jetson platform, the `tao-converter` is available to download in the dev zone. Once the `tao-converter` is downloaded, follow the instructions below to generate a TensorRT engine.
1. Unzip `tao-converter-trt7.1.zip` on the target machine.
2. Install the OpenSSL package using the command:
   sudo apt-get install libssl-dev
3. Export the following environment variables:
   $ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
   $ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"
4. For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older JetPack, upgrade to the latest one that `tao-converter` can support.
5. For FasterRCNN, instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.
6. Run `tao-converter` using the sample command below and generate the engine (see the sketch after this list for invoking the downloaded binary).
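Depending on how the archive unpacks, the downloaded binary may need its executable bit set before it can run. A minimal sketch, assuming the binary unpacks as `tao-converter` in the current directory (the exact layout may differ by package version):

chmod +x tao-converter
./tao-converter -h    # prints the usage shown below, confirming the binary runs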
Make sure to follow the output node names as mentioned in Exporting the Model.
tao-converter [-h] -k <encryption_key>
-d <input_dimensions>
-o <comma separated output nodes>
[-c <path to calibration cache file>]
[-e <path to output engine>]
[-b <calibration batch size>]
[-m <maximum batch size of the TRT engine>]
[-t <engine datatype>]
[-w <maximum workspace size of the TRT Engine>]
[-i <input dimension ordering>]
[-p <optimization_profiles>]
[-s]
[-u <DLA_core>]
input_file
Required Arguments
* `input_file`: The path to the `.etlt` model exported using `export`.
* `-k`: The key used to encode the `.tlt` model when doing the training.
* `-d`: A comma-separated list of input dimensions that should match the dimensions used for `export`. Unlike `export`, this cannot be inferred from calibration data. This parameter is not required for new models introduced in TAO Toolkit 3.0-21.08 (e.g., LPRNet, UNet, GazeNet, etc.).
* `-o`: A comma-separated list of output blob names that should match the output configuration used for `export`. This parameter is not required for new models introduced in TAO Toolkit 3.0 (e.g., LPRNet, UNet, GazeNet, etc.). For FasterRCNN, set this argument to `NMS`.
Optional Arguments
* `-e`: The path to save the engine to. The default value is `./saved.engine`.
* `-t`: The desired engine data type. This generates a calibration cache if in INT8 mode. The default value is `fp32`. The options are {`fp32`, `fp16`, `int8`}.
* `-w`: The maximum workspace size for the TensorRT engine. The default value is `1073741824` (`1<<30`).
* `-i`: The input dimension ordering; all other TAO commands use NCHW. The default value is `nchw`. The options are {`nchw`, `nhwc`, `nc`}. For FasterRCNN, this argument can be omitted (it defaults to `nchw`).
* `-p`: Optimization profiles for `.etlt` models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format `<input_name>,<min_shape>,<opt_shape>,<max_shape>`, where each shape has the format `<n>x<c>x<h>x<w>`. It can be specified multiple times if there are multiple input tensors for the model. This is only useful for new models introduced in TAO Toolkit 3.21.08; it is not required for models that already existed in version 2.0.
* `-s`: A Boolean to apply TensorRT strict type constraints when building the TensorRT engine.
* `-u`: Specifies the DLA core index when building the TensorRT engine on Jetson devices.
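As an illustration, a conversion that also sets the engine data type and output path might look like the following. This is a sketch based on the sample at the end of this section; the engine path is a placeholder, and the key and model path are taken from that sample.

# Hypothetical FP16 conversion; -e points at a placeholder output path.
tao-converter -k nvidia_tlt \
              -d 3,544,960 \
              -o NMS \
              -t fp16 \
              -e /workspace/tao-experiments/faster_rcnn/trt.fp16.engine \
              /workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt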
INT8 Mode Arguments
* `-c`: The path to the calibration cache file; used only in INT8 mode. The default value is `./cal.bin`.
* `-b`: The batch size used during the export step for INT8 calibration cache generation. The default value is `8`.
* `-m`: The maximum batch size for the TensorRT engine. The default value is `16`. If you run into an out-of-memory issue, decrease the batch size accordingly. This parameter is not required for `.etlt` models generated with dynamic shape (this is only possible for new models introduced in TAO Toolkit 3.21.08).
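Putting these together, an INT8 conversion might look like the following sketch. The calibration cache is assumed to have been generated during the `export` step, and the cache path, engine path, and model path are placeholders.

# Hypothetical INT8 conversion; -c, -e, and the .etlt path are placeholders.
tao-converter -k nvidia_tlt \
              -d 3,544,960 \
              -o NMS \
              -t int8 \
              -c /workspace/tao-experiments/faster_rcnn/cal.bin \
              -b 8 \
              -m 16 \
              -e /workspace/tao-experiments/faster_rcnn/trt.int8.engine \
              /workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt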
Sample Output Log
Here is a sample log for converting a FasterRCNN model.
tao-converter -d 3,544,960 \
-k nvidia_tlt \
-o NMS \
/workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt
..
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.