TensorRT
NVIDIA TensorRT is an SDK for high-performance deep learning inference. TensorRT provides APIs and parsers to import trained models from all major deep learning frameworks, and then generates optimized runtime engines deployable in the datacenter as well as in automotive and embedded environments. To understand TensorRT and its capabilities better, refer to the official TensorRT documentation.
The models trained in TLT are deployed to NVIDIA's inference SDKs, such as DeepStream and Riva, via TensorRT. While the conversational AI models trained using TLT can be consumed by TensorRT only through Riva, the computer vision models trained by TLT can be consumed by TensorRT via the tlt-converter tool. The tlt-converter parses the exported .etlt model file and generates an optimized TensorRT engine. These engines can be generated to support inference at low precision, such as FP16 or INT8.
While most of the TLT models support direct integration of the .etlt files into DeepStream 5.1, DeepStream can also consume the optimized engine generated by the tlt-converter.
The TensorRT engines generated by the tlt-converter are specific to the GPU that they were generated on. So, based on the platform that the model is being deployed to, you will need to download the specific version of the tlt-converter and generate the engine there.
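For example, before choosing which tlt-converter build to download, you can check which CUDA, cuDNN, and TensorRT versions are installed on the deployment machine. The commands below are a minimal sketch for a Debian-based system; the exact package names may vary between installations:
$ nvcc --version
$ dpkg -l | grep -i cudnn
$ dpkg -l | grep -i nvinfer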
The TLT models have been verified to integrate with TensorRT versions 7.0, 7.1, and 7.2.
Even though TensorRT contains optimized implementations for several common operations used in deep neural networks (DNNs), Deep Learning is a quickly evolving discipline, so TensorRT provides a way for users to bring new operations into the model graph via custom TensorRT plugins. Several samples of these custom plugins are hosted on GitHub in the TensorRT OSS repository.
Instructions to build and install TensorRT OSS can be found in this repository.
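As a rough orientation, building TensorRT OSS on an x86 machine typically follows the flow sketched below; the branch name and build options are assumptions that depend on your TensorRT version, so follow the repository's own build instructions for the exact steps:
$ git clone -b release/7.2 https://github.com/NVIDIA/TensorRT.git
$ cd TensorRT && git submodule update --init --recursive
$ mkdir -p build && cd build
$ cmake ..    # pass the GPU architecture and TensorRT library paths required by your setup
$ make -j$(nproc) nvinfer_plugin
The resulting libnvinfer_plugin library is then used in place of the stock plugin library so that the custom plugins are available at runtime.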
The TLT applications that require TensorRT OSS are:
FasterRCNN
SSD
DSSD
YOLOv3
YOLOv4
RetinaNet
MaskRCNN
The tlt-converter is distributed as a separate binary for x86 and Jetson platforms. The following tables list the CUDA/cuDNN, TensorRT, and JetPack versions for which tlt-converter builds are available for download.
For x86 platforms with discrete GPUs, builds are available for the following CUDA/cuDNN and TensorRT combinations:

CUDA/cuDNN | TensorRT
---|---
10.2/8.0 | 7.2
11.0/8.0 | 7.2
11.1/8.0 | 7.2
11.2/8.0 | 7.2
10.2/8.0 | 7.1
11.0/8.0 | 7.1

For Jetson platforms, builds are available for JetPack 4.4 and JetPack 4.5.
Installing on an x86 platform
For an x86 platform with discrete GPUs, the default TLT package includes the tlt-converter built for TensorRT 7.2 with CUDA 11.1 and cuDNN 8.0. For any other version of CUDA and TensorRT, refer to the overview section for the download. Once the tlt-converter is downloaded, follow the instructions below to generate a TensorRT engine.
Unzip the zip file on the target machine.
Install the OpenSSL package using the command:
sudo apt-get install libssl-dev
Export the following environment variables:
$ export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"
Run the tlt-converter using the sample command below and generate the engine.
Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.
Make sure to use the output node names mentioned in the Exporting the Model section of the respective model.
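For instance, a hedged example of such a command for generating an FP16 engine might look like the following; the key, file paths, input dimensions, and output node names are placeholders (the node names shown are typical of a DetectNet_v2 model) and must be replaced with the values from your own export step:
$ tlt-converter -k $KEY \
                -d 3,544,960 \
                -o output_cov/Sigmoid,output_bbox/BiasAdd \
                -t fp16 \
                -m 16 \
                -e /workspace/export/detectnet_v2.fp16.engine \
                /workspace/export/detectnet_v2_resnet18.etlt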
Installing on a Jetson platform
For the Jetson platform, the tlt-converter is available to download in the NVIDIA Developer zone. You may choose the version you wish to download as listed in the overview section. Once the tlt-converter is downloaded, follow the instructions below to generate a TensorRT engine.
Unzip the zip file on the target machine.
Install the OpenSSL package using the command:
sudo apt-get install libssl-dev
Export the following environment variables:
$ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"
For Jetson devices, TensorRT 7.1 comes pre-installed with JetPack. If you are using an older JetPack version, upgrade to JetPack 4.4 or JetPack 4.5.
Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.
Run the
tlt-converter
using the sample command below and generate the engine.
Make sure to use the output node names mentioned in the Exporting the Model section of the respective model.
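For instance, a hedged example on Jetson that generates an FP16 engine and assigns it to DLA core 0 might look like the following; again, the key, paths, dimensions, and output node names are placeholders to be replaced with the values from your own export step:
$ tlt-converter -k $KEY \
                -d 3,384,1248 \
                -o output_cov/Sigmoid,output_bbox/BiasAdd \
                -t fp16 \
                -u 0 \
                -e /home/nvidia/export/detectnet_v2.fp16.engine \
                /home/nvidia/export/detectnet_v2_resnet18.etlt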
Using the tlt-converter
tlt-converter [-h] -k <encryption_key>
-d <input_dimensions>
-o <comma separated output nodes>
[-c <path to calibration cache file>]
[-e <path to output engine>]
[-b <calibration batch size>]
[-m <maximum batch size of the TRT engine>]
[-t <engine datatype>]
[-w <maximum workspace size of the TRT Engine>]
[-i <input dimension ordering>]
[-p <optimization_profiles>]
[-s]
[-u <DLA_core>]
input_file
Required Arguments
input_file: The path to the .etlt model exported using tlt <model> export.
-k: The key used to encode the .tlt model when doing the training.
-d: A comma-separated list of input dimensions that should match the dimensions used for tlt <model> export.
-o: A comma-separated list of output blob names that should match the output configuration used for tlt <model> export.
Optional Arguments
-e: The path to save the engine to (default: ./saved.engine).
-t: The desired engine data type. This generates a calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.
-w: The maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).
-i: The input dimension ordering; all other TLT commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}.
-p: Optimization profiles for .etlt models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. It can be specified multiple times if there are multiple input tensors for the model. This is only useful for new models introduced in TLT 3.0 and is not required for models that already existed in TLT 2.0.
-s: TensorRT strict type constraints. A Boolean to apply TensorRT strict type constraints when building the TensorRT engine.
-u: Use DLA core. Specifies the DLA core index when building the TensorRT engine on Jetson devices.
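As an illustration of the -p option, a model with dynamic shape and a single input tensor (here assumed to be named input_1, with assumed shapes, dimensions, and output node name) could be converted as follows:
$ tlt-converter -k $KEY \
                -d 3,224,224 \
                -o predictions/Softmax \
                -p input_1,1x3x224x224,8x3x224x224,16x3x224x224 \
                -t fp16 \
                -e /workspace/export/model.dynamic.engine \
                /workspace/export/model.etlt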
INT8 Mode Arguments
-c: The path to the calibration cache file; only used in INT8 mode. The default value is ./cal.bin.
-b: The batch size used during the export step for INT8 calibration cache generation (default: 8).
-m: The maximum batch size for the TensorRT engine (default: 16). If you run into out-of-memory issues, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape (which is only possible for new models introduced in TLT 3.0).
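Putting the INT8 arguments together, a hedged example that builds an INT8 engine from a calibration cache produced during export might look like this; the paths, dimensions, and output node names are placeholders:
$ tlt-converter -k $KEY \
                -d 3,544,960 \
                -o output_cov/Sigmoid,output_bbox/BiasAdd \
                -t int8 \
                -c /workspace/export/cal.bin \
                -b 8 \
                -m 16 \
                -e /workspace/export/detectnet_v2.int8.engine \
                /workspace/export/detectnet_v2_resnet18.etlt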
The usage of the tlt-converter for each TLT computer vision model is explained in the respective model's chapter.