Deploying to DeepStream for Multitask Classification
The deep learning and computer vision models that you’ve trained can be deployed on edge devices, such as a Jetson Xavier or Jetson Nano, a discrete GPU, or in the cloud with NVIDIA GPUs. TAO Toolkit has been designed to integrate with DeepStream SDK, so models trained with TAO Toolkit will work out of the box with DeepStream SDK.
DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. This section describes how to deploy your trained model to DeepStream SDK.
To deploy a model trained by TAO Toolkit to DeepStream, you have the following options:
Option 1: Integrate the .etlt model directly in the DeepStream app. The model file is generated by the export step.
Option 2: Generate a device-specific optimized TensorRT engine using TAO Deploy. The generated TensorRT engine file can also be ingested by DeepStream.
Option 3 (Deprecated for x86 devices): Generate a device-specific optimized TensorRT engine using TAO Converter.
Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
Option 1 is very straightforward. The
.etlt file and calibration cache are directly
used by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run
inference. TensorRT engine generation can take some time depending on the size of the model
and the type of hardware.
Engine generation can be done ahead of time with Option 2: TAO Deploy is used to convert the exported
model file to a TensorRT engine, which is then provided directly to DeepStream. The TAO Deploy workflow is similar to that of
TAO Converter, which is deprecated for x86 devices in TAO version 4.0.1 but is still required for
deployment to Jetson devices.
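For Jetson deployments, a tao-converter invocation along the following lines generates the engine. This is a minimal sketch: the encryption key, input dimensions, output node names, and file names are placeholders for illustration, not values prescribed by this guide; substitute the ones that match your exported multitask classification model.

    tao-converter -k $ENCRYPTION_KEY \
        -d 3,80,60 \
        -o task1_head/Softmax,task2_head/Softmax \
        -t fp16 \
        -e multitask_cls.engine \
        multitask_cls.etlt

Run this on the target device, since the resulting engine is optimized for the hardware it is generated on.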
See the Exporting the Model section for more details on how to export a TAO model.
There are two options to integrate TAO models with DeepStream:
Option 1: Integrate the model (.etlt) with the encrypted key directly in the DeepStream app. The model file is generated by tao multitask_classification export (a sketch of this command follows this list).
Option 2: Generate a device-specific optimized TensorRT engine using tao-converter. The TensorRT engine file can also be ingested by DeepStream.
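For reference, a minimal sketch of the export command from Option 1 is shown below; the model path, key, and output path are placeholders, and additional options (such as INT8 calibration settings) depend on your TAO Toolkit version.

    tao multitask_classification export \
        -m /workspace/results/weights/multitask_cls.tlt \
        -k $ENCRYPTION_KEY \
        -o /workspace/export/multitask_cls.etlt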
To integrate the models with DeepStream, you need the following:
The .etlt model file and an optional calibration cache for INT8 precision.
A labels.txt file containing the labels for the classes in the order in which the network produces outputs.
A config_infer_*.txt file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream (a minimal example follows this list).
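The snippet below is a minimal sketch of such a config_infer_*.txt file for a multitask classification model running as a secondary classifier. The model file names, key, inference dimensions, input/output blob names, and threshold are assumptions for illustration; check your exported model and the nvinfer documentation for the correct values.

    [property]
    gpu-id=0
    # Exported model, key, labels, and optional INT8 calibration cache (placeholders)
    tlt-encoded-model=multitask_cls.etlt
    tlt-model-key=<your_encryption_key>
    labelfile-path=labels.txt
    int8-calib-file=calibration.bin
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    # 1 = classifier network
    network-type=1
    # 2 = run as a secondary inference on detected objects
    process-mode=2
    batch-size=16
    # Input and output tensor settings (assumed names and dimensions)
    infer-dims=3;80;60
    uff-input-blob-name=input_1
    output-blob-names=task1_head/Softmax;task2_head/Softmax
    classifier-threshold=0.5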
DeepStream SDK ships with an end-to-end reference application that is fully configurable. You
can configure input sources, the inference model, and output sinks. The app requires a primary
object-detection model, followed by an optional secondary classification model. The reference
application is installed as
deepstream-app. The graphic below shows the architecture of the reference application.
Typically, two or more configuration files are used with this app. In the install
directory, the config files are located in
samples/configs/tlt_pretrained_models. The main config file configures all the high-level
parameters in the pipeline above. This will set the input source and resolution, number of
inferences, tracker, and output sinks. The other supporting config files are for each individual
inference engine. The inference-specific configuration files are used to specify the models,
inference resolution, batch size, number of classes, and other customizations. The main
configuration file will call all the supporting configuration files.
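As an illustration of how the main configuration file calls a supporting file, a secondary-gie section along the following lines points deepstream-app at the classifier configuration; the file name, batch size, and IDs are placeholders for your own setup.

    [secondary-gie0]
    enable=1
    gpu-id=0
    # Run this classifier on objects produced by the primary detector (gie-unique-id 1)
    operate-on-gie-id=1
    gie-unique-id=4
    batch-size=16
    config-file=config_infer_secondary_multitask.txt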
Here are some configuration files in
samples/configs/deepstream-app for reference:
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: The main configuration file
config_infer_primary.txt: The supporting configuration file for the primary detector in the pipeline above
config_infer_secondary_*.txt: The supporting configuration file for the secondary classifier in the pipeline above
deepstream-app will only work with the main config file. This file will most likely
remain the same for all models and can be used directly from the DeepStream SDK with little to no
change. You will only need to modify or create config_infer_primary.txt and the config_infer_secondary_*.txt files for your own models.
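Once the configuration files are in place, the reference app is launched by pointing it at the main configuration file, for example with the sample file listed above:

    deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt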
Integrating a Multitask Image Classification Model
See Exporting the Model for more details on how to export a TAO model. After the model has been generated, you can use the DeepStream sample app provided in the GitHub repository to integrate the exported model. The GitHub repository also provides a README file describing the adjustments needed to integrate a custom model trained on your own dataset.