Deploying to DeepStream for FasterRCNN

The deep learning and computer vision models that you’ve trained can be deployed on edge devices, such as a Jetson Xavier or Jetson Nano, a discrete GPU, or in the cloud with NVIDIA GPUs. TAO Toolkit has been designed to integrate with DeepStream SDK, so models trained with TAO Toolkit will work out of the box with DeepStream SDK.

DeepStream SDK is a streaming analytics toolkit that accelerates the building of AI-based video analytics applications. This section describes how to deploy your trained model to DeepStream SDK.

To deploy a model trained by TAO Toolkit to DeepStream, you have three options:

  • Option 1: Integrate the .etlt model directly in the DeepStream app. The model file is generated by export.

  • Option 2: Generate a device-specific optimized TensorRT engine using tao-deploy. The generated TensorRT engine file can also be ingested by DeepStream.

  • Option 3: (Deprecated) Generate a device-specific optimized TensorRT engine using tao-converter.

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
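
Because engines are tied to the exact TensorRT and CUDA versions, it is worth recording both versions on each deployment machine before generating an engine. Here is a minimal sketch for Debian-based systems:

# TensorRT packages and version (Debian-based installs)
dpkg -l | grep nvinfer
# CUDA toolkit version
nvcc --version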

Option 1 is very straightforward. The .etlt file and calibration cache are used directly by DeepStream, which automatically generates the TensorRT engine file and then runs inference. TensorRT engine generation can take some time depending on the size of the model and the type of hardware, so engine generation can be done ahead of time with Option 2: tao-deploy is used to convert the .etlt file to a TensorRT engine, and the engine file is then provided directly to DeepStream. The tao-converter (Option 3) follows a similar workflow to tao-deploy, but it is deprecated in 4.0.0 and will not be available in future releases.
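
As a rough sketch of Option 2, engine generation with tao-deploy typically looks like the following. The subcommand and flags can vary by TAO version, and the model path, key, spec file, and engine path below are placeholders, so consult the tao-deploy documentation for your release:

# Hypothetical paths and key; adjust for your environment and TAO version
tao-deploy faster_rcnn gen_trt_engine -m /workspace/faster_rcnn/resnet18.epoch45.etlt \
                                      -k $KEY \
                                      -e /workspace/faster_rcnn/experiment_spec.txt \
                                      --engine_file /workspace/faster_rcnn/frcnn.engine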

See the Exporting the Model section for more details on how to export a TAO model.

Important

As of 4.0.0, tao-converter is deprecated and may not be available in future releases. The tao-converter material below is only applicable if you are still using tao-converter for legacy workflows. For tao-deploy, jump to Integrating a FasterRCNN Model.

TensorRT OSS build is required for FasterRCNN models because several TensorRT plugins required by these models are only available in the TensorRT open source repo and not in the general TensorRT release. Specifically, FasterRCNN requires the cropAndResizePlugin and the proposalPlugin.

If the deployment platform is x86 with an NVIDIA GPU, follow the instructions for x86; if your deployment is on an NVIDIA Jetson platform, follow the instructions for Jetson.

TensorRT OSS on x86

Building TensorRT OSS on x86:

  1. Install CMake (>= 3.13).

    Note

    TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.


    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake


  2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:


    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery

    If /usr/local/cuda/samples doesn't exist on your system, download deviceQuery.cpp from this GitHub repo, then compile and run deviceQuery:


    nvcc deviceQuery.cpp -o deviceQuery
    ./deviceQuery

    This command will output something like the following, which indicates that GPU_ARCHS is 75, based on the CUDA Capability major/minor version of 7.5:


    Detected 2 CUDA Capable device(s)

    Device 0: "Tesla T4"
      CUDA Driver Version / Runtime Version          10.2 / 10.2
      CUDA Capability Major/Minor version number:    7.5
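
    Alternatively, on recent NVIDIA drivers that expose the compute_cap query field, you can read the compute capability directly with nvidia-smi and drop the dot to obtain GPU_ARCHS. This is a convenience sketch, not part of the official procedure:

    # Prints e.g. "7.5", which corresponds to GPU_ARCHS 75
    nvidia-smi --query-gpu=compute_cap --format=csv,noheader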

  3. Build TensorRT OSS:


    git clone -b 21.08 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build

    Note

    Make sure the GPU_ARCHS value from step 2 is in the TensorRT OSS CMakeLists.txt. If it is not, add -DGPU_ARCHS=<VER> as shown below, where <VER> represents the GPU_ARCHS value from step 2.


    /usr/local/bin/cmake .. -DGPU_ARCHS=xy \
                            -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ \
                            -DCMAKE_C_COMPILER=/usr/bin/gcc \
                            -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)

    After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

  4. Replace the original libnvinfer_plugin.so*:


    # Back up the original libnvinfer_plugin.so.x.y
    sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak
    sudo cp $TRT_SOURCE/build/out/libnvinfer_plugin.so.8.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y
    sudo ldconfig
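
    To confirm that the dynamic linker now resolves the rebuilt plugin library (a quick sanity check, not part of the official steps):

    ldconfig -p | grep libnvinfer_plugin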

TensorRT OSS on Jetson (ARM64)

  1. Install CMake (>= 3.13).

    Note

    TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2.

    Upgrade cmake using:


    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

  2. Get the GPU architecture for your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table:

    Jetson Platform          GPU_ARCHS
    ----------------------   ---------
    Nano/Tx1                 53
    Tx2                      62
    AGX Xavier/Xavier NX     72

  3. Build TensorRT OSS:


    git clone -b 21.03 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build

    Note

    The -DGPU_ARCHS=72 below is for Xavier or NX. For other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.


    /usr/local/bin/cmake .. -DGPU_ARCHS=72 \
                            -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
                            -DCMAKE_C_COMPILER=/usr/bin/gcc \
                            -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)

    After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

  4. Replace the original libnvinfer_plugin.so* with the newly generated one:


    # Back up the original libnvinfer_plugin.so.x.y
    sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak
    sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y
    sudo ldconfig

The tao-converter tool is provided with the TAO Toolkit to facilitate the deployment of TAO trained models on TensorRT and/or DeepStream. This section elaborates on how to generate a TensorRT engine using tao-converter.

For deployment platforms with an x86-based CPU and discrete GPUs, the tao-converter is distributed within the TAO docker. Therefore, we suggest using the docker to generate the engine. However, this requires that the user adhere to the same minor version of TensorRT as distributed with the docker. The TAO docker includes TensorRT version 8.0.

Instructions for x86

  1. Copy /opt/nvidia/tools/tao-converter to the target machine.

  2. Install TensorRT for the respective target machine.

  3. For FasterRCNN, you need to build the TensorRT open source software on the machine. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.

  4. Run tao-converter using the sample command below and generate the engine.

Instructions for Jetson

For the Jetson platform, the tao-converter is available to download in the dev zone. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

  1. Unzip tao-converter-trt7.1.zip on the target machine.

  2. Install the OpenSSL package using the following command:


    sudo apt-get install libssl-dev

  3. Export the following environment variables:


export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

  4. For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older JetPack version, upgrade to the latest release that tao-converter supports.

  5. For FasterRCNN, instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.

  6. Run the tao-converter using the sample command below and generate the engine.

Note

Make sure to follow the output node names as mentioned in Exporting the Model.


Using the tao-converter


tao-converter [-h] -k <encryption_key>
                   -d <input_dimensions>
                   -o <comma separated output nodes>
                   [-c <path to calibration cache file>]
                   [-e <path to output engine>]
                   [-b <calibration batch size>]
                   [-m <maximum batch size of the TRT engine>]
                   [-t <engine datatype>]
                   [-w <maximum workspace size of the TRT Engine>]
                   [-i <input dimension ordering>]
                   [-p <optimization_profiles>]
                   [-s]
                   [-u <DLA_core>]
                   input_file

Required Arguments

  • input_file: Path to the .etlt model exported using export.

  • -k: The key used to encode the .tlt model during training.

  • -d: A comma-separated list of input dimensions that should match the dimensions used for export. Unlike export, this cannot be inferred from calibration data. This parameter is not required for new models introduced in TAO Toolkit 3.0-21.08 (e.g., LPRNet, UNet, GazeNet, etc.).

  • -o: A comma-separated list of output blob names that should match the output configuration used for export. This parameter is not required for new models introduced in TAO Toolkit 3.0 (e.g., LPRNet, UNet, GazeNet, etc.). For FasterRCNN, set this argument to NMS.

Optional Arguments

  • -e: Path to save the engine to. (default: ./saved.engine)

  • -t: Desired engine data type, generates calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.

  • -w: Maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).

  • -i: Input dimension ordering; all other TAO commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}. For FasterRCNN, you can omit this argument, since the default of nchw applies.

  • -p: Optimization profiles for .etlt models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. It can be specified multiple times if the model has multiple input tensors. This is only useful for new models introduced in TAO Toolkit 3.21.08 and is not required for models that already existed in version 2.0. See the example after this list.

  • -s: TensorRT strict type constraints. A Boolean to apply TensorRT strict type constraints when building the TensorRT engine.

  • -u: Use DLA core. Specifies the DLA core index when building the TensorRT engine on Jetson devices.
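
For example, if a hypothetical dynamic-shape model has a single input tensor named input_image taking 3x544x960 images, one profile covering batch sizes 1 through 16 could be specified as:

-p input_image,1x3x544x960,8x3x544x960,16x3x544x960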

INT8 Mode Arguments

  • -c: Path to calibration cache file, only used in INT8 mode. The default value is ./cal.bin.

  • -b: Batch size used during the export step for INT8 calibration cache generation (default: 8).

  • -m: Maximum batch size for the TensorRT engine (default: 16). If you hit out-of-memory issues, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape (only possible for new models introduced in TAO Toolkit 3.21.08).

Sample Output Log

Here is a sample command and log for converting a FasterRCNN model.


tao-converter -d 3,544,960 \
              -k nvidia_tlt \
              -o NMS \
              /workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt
..
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.
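
To build an INT8 engine instead, a minimal sketch reusing the -t, -c, -b, -m, and -e flags documented above (the calibration cache name and engine path are placeholders) could look like:

tao-converter -d 3,544,960 \
              -k nvidia_tlt \
              -o NMS \
              -t int8 \
              -c cal.bin \
              -b 8 \
              -m 16 \
              -e frcnn_int8.engine \
              /workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt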

There are two options to integrate models from TAO with DeepStream:

  • Option 1: Integrate the model (.etlt) with the encryption key directly in the DeepStream app. The model file is generated by export.

  • Option 2: Generate a device-specific optimized TensorRT engine using tao-converter. The TensorRT engine file can also be ingested by DeepStream.

For FasterRCNN, you will need to build the TensorRT open source plugins and a custom bounding box parser. The instructions are provided in the TensorRT OSS sections above, and the required code can be found in this GitHub repo.

In order to integrate the models with DeepStream, you need the following:

  1. Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide.

  2. An exported .etlt model file and optional calibration cache for INT8 precision.

  3. TensorRT OSS plugins.

  4. A labels.txt file containing the labels for the classes in the order in which the network produces outputs.

  5. A sample config_infer_*.txt file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream.

DeepStream SDK ships with an end-to-end reference application which is fully configurable. Users can configure input sources, inference model, and output sinks. The app requires a primary object detection model, followed by an optional secondary classification model. The reference application is installed as deepstream-app. The graphic below shows the architecture of the reference application.

[Figure: architecture of the DeepStream reference application (arch_ref_appl.png)]


There are typically two or more configuration files used with this app. In the install directory, the config files are located in samples/configs/deepstream-app or samples/configs/tlt_pretrained_models. The main config file configures all the high-level parameters in the pipeline above: it sets the input source and resolution, number of inferences, tracker, and output sinks. The other supporting config files are for each individual inference engine; these inference-specific config files are used to specify models, inference resolution, batch size, number of classes, and other customization. The main config file references all the supporting config files. Here are some config files in samples/configs/deepstream-app for your reference.

  • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: Main config file

  • config_infer_primary.txt: Supporting config file for primary detector in the pipeline above

  • config_infer_secondary_*.txt: Supporting config file for secondary classifier in the pipeline above

The deepstream-app is launched with only the main config file. This file will most likely remain the same for all models and can be used directly from the DeepStream SDK with little to no change. Users will only have to modify or create config_infer_primary.txt and config_infer_secondary_*.txt.
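
For example, the reference app can be launched against one of the sample main config files shipped with the SDK (run from the DeepStream install directory):

deepstream-app -c samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt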

Integrating a FasterRCNN Model

To run a FasterRCNN model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT open source software and the FasterRCNN bounding box parser for DeepStream.

A DeepStream sample with documentation on how to run inference using the trained FasterRCNN models from TAO Toolkit is provided on GitHub here.

Prerequisite for FasterRCNN Model

  1. FasterRCNN requires the cropAndResizePlugin and the proposalPlugin. These plugins are available in the TensorRT open source repo. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS).

  2. FasterRCNN requires custom bounding box parsers that are not built into the DeepStream SDK. The source code to build custom bounding box parsers for FasterRCNN is available here. The following instructions can be used to build the bounding box parser:

Step 1: Install git-lfs (git >= 1.8.2)


curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

Step 2: Download Source Code with SSH or HTTPS


git clone -b release/tlt3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Step 3: Build


# or Path for DS installation
export CUDA_VER=10.2   # CUDA version, e.g. 10.2
make

This generates libnvds_infercustomparser_tlt.so in the post_processor directory.
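
As a quick check before wiring the parser into the DeepStream config, verify that the library was produced:

ls post_processor/libnvds_infercustomparser_tlt.so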

Label File

The label file is a text file containing the names of the classes that the FasterRCNN model is trained to detect. The order in which the classes are listed must match the order in which the model predicts its outputs. This order is derived from the order in which the objects are instantiated in the target_class_mapping field of the FasterRCNN experiment specification file. During training, TAO FasterRCNN converts all class names to lower case and sorts them in alphabetical order. For example, if the target_class_mapping in the spec file is:


target_class_mapping {
  key: "car"
  value: "car"
}
target_class_mapping {
  key: "person"
  value: "person"
}
target_class_mapping {
  key: "bicycle"
  value: "bicycle"
}

The actual class name list is bicycle, car, person. The corresponding label_file_frcnn.txt then looks like the following (a background class is always appended at the end):


bicycle
car
person
background

Note

If --gen_ds_config is provided during TAO export of a FasterRCNN model, a label file named labels.txt is generated automatically. This labels.txt file can be used directly in DeepStream inference without knowing the details above.


DeepStream Configuration File

The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model as well as the custom parser.

[Figure: DeepStream deployment options (dstream_deploy_options3.png)]


Option 1: Integrate the model (.etlt) directly in the DeepStream app.

For this option, users need to add the following parameters to the configuration file. The int8-calib-file is only required for INT8 precision.


tlt-encoded-model=<TLT exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>

The tlt-encoded-model parameter points to the exported model (.etlt) from TAO Toolkit. The tlt-model-key is the encryption key used during model export.

Option 2: Integrate TensorRT engine file with DeepStream app.

Step 1: Generate the TensorRT engine using tao-deploy.

Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream.


model-engine-file=<PATH to generated TensorRT engine>

All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in the [property] section of the primary infer configuration file:


parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<PATH to libnvds_infercustomparser_tlt.so>

Add the label file generated above using:


labelfile-path=<Classification labels>

For all the options, see the sample configuration file below. To learn what each parameter is used for, refer to the DeepStream Development Guide.

Here’s a sample config file, config_infer_primary.txt:


[property]
gpu-id=0
net-scale-factor=1.0
offsets=<image mean values as in the training spec file>   # e.g.: 103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to frcnn_labels.txt>
tlt-encoded-model=<Path to FasterRCNN model>
tlt-model-key=<Key to decrypt the model>
infer-dims=<c;h;w>   # e.g.: 3;544;960, where c = number of channels, h = height of the model input, w = width of the model input
uff-input-order=0
uff-input-blob-name=<input_blob_name>   # e.g.: input_image
batch-size=<batch size>   # e.g.: 1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=<number of classes to detect (including background)>   # e.g.: 5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=<output_blob_names>   # e.g.: NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<PATH to libnvds_infercustomparser_tlt.so>

[class-attrs-all]
pre-cluster-threshold=0.6
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Note

If --gen_ds_config is provided during TAO export of a FasterRCNN model, a config file named nvinfer_config.txt is generated automatically. This file is an incomplete config file for DeepStream inference; copy the available fields from this partial config file into your own complete config file.
