Deploying to DeepStream for Segformer#

To deploy a TAO-trained Segformer model to DeepStream, you need to use TAO Deploy to generate a device-specific optimized TensorRT engine, which can then be ingested by DeepStream.

Machine-specific optimizations are performed as part of the engine creation process, so you should generate a distinct engine for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, you must generate a new engine. Running an engine that was built with a different version of TensorRT or CUDA is not supported: it causes unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
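
The version-matching rule above can be made concrete with a small sketch. Note that this is purely illustrative: `engine_is_usable` and the version strings are hypothetical, not part of any TAO or TensorRT tool.

```shell
# Illustrative sketch of the rule above: an engine is usable only when the
# runtime's TensorRT and CUDA versions exactly match the versions it was
# built with (including minor versions). The function name and version
# strings are hypothetical.
engine_is_usable() {
    # $1/$2: TensorRT and CUDA versions at engine build time
    # $3/$4: TensorRT and CUDA versions in the inference environment
    [ "$1" = "$3" ] && [ "$2" = "$4" ]
}

engine_is_usable 8.5.2 11.8 8.5.2 11.8 && echo "ok to run"
engine_is_usable 8.5.2 11.8 8.5.3 11.8 || echo "regenerate the engine"
```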

See the Exporting the Model documentation for SegFormer for more details on how to export a TAO model.

TensorRT Open Source Software (OSS)#

Segformer models require the TensorRT OSS build because several prerequisite TensorRT plugins are only available in the TensorRT open source repo.

If your deployment platform is an x86 PC with an NVIDIA GPU, follow the TensorRT OSS on x86 instructions; if your deployment platform is NVIDIA Jetson, follow the TensorRT OSS on Jetson (ARM64) instructions.

TensorRT OSS on x86#

Building TensorRT OSS on x86:

  1. Install Cmake (>=3.13).

    Note

    TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.

    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
    
  2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:

    cd /usr/local/cuda/samples/1_Utilities/deviceQuery
    sudo make
    ./deviceQuery
    

    If /usr/local/cuda/samples doesn’t exist on your system, you can download deviceQuery.cpp from this GitHub repo, then compile and run it:

    nvcc deviceQuery.cpp -o deviceQuery
    ./deviceQuery
    

    This command outputs something like the following, which indicates that GPU_ARCHS is 75, based on the CUDA Capability major/minor version of 7.5.

    Detected 2 CUDA Capable device(s)
    
    Device 0: "Tesla T4"
      CUDA Driver Version / Runtime Version          10.2 / 10.2
      CUDA Capability Major/Minor version number:    7.5
    
  3. Build TensorRT OSS:

    git clone -b 21.08 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build
    

    Note

    Make sure that the GPU_ARCHS value from step 2 is in the TensorRT OSS CMakeLists.txt. If it is not, add -DGPU_ARCHS=<VER> as shown below, where <VER> is the GPU_ARCHS value from step 2.

    /usr/local/bin/cmake .. -DGPU_ARCHS=xy  -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)
    

    After the build completes successfully, libnvinfer_plugin.so* is generated under `pwd`/out/.

  4. Replace the original libnvinfer_plugin.so*:

    sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.8.x.y
    sudo cp $TRT_SOURCE/build/out/libnvinfer_plugin.so.8.m.n  /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y
    sudo ldconfig
    
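
As a quick sanity check of steps 2 and 4: the GPU_ARCHS value is simply the CUDA Capability string with the dot removed, and after the replacement the dynamic linker should resolve the new plugin. The helper name below is illustrative:

```shell
# Derive GPU_ARCHS from a CUDA Capability string, e.g. "7.5" -> "75".
arch_from_capability() {
    echo "$1" | tr -d '.'
}

arch_from_capability "7.5"    # prints 75

# After step 4, confirm the linker cache picks up the rebuilt plugin
# (prints nothing if the library is not installed on this machine).
ldconfig -p | grep libnvinfer_plugin || echo "libnvinfer_plugin not found"
```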

TensorRT OSS on Jetson (ARM64)#

  1. Install Cmake (>=3.13)

    Note

    TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2.

    Upgrade cmake using:

    sudo apt remove --purge --auto-remove cmake
    wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
    tar xvf cmake-3.13.5.tar.gz
    cd cmake-3.13.5/
    ./configure
    make -j$(nproc)
    sudo make install
    sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
    
  2. Get the GPU architecture for your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table:

    Jetson Platform          GPU_ARCHS
    Nano/TX1                 53
    TX2                      62
    AGX Xavier/Xavier NX     72

  3. Build TensorRT OSS:

    git clone -b 21.03 https://github.com/nvidia/TensorRT
    cd TensorRT/
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
    cd $TRT_SOURCE
    mkdir -p build && cd build
    

    Note

    The -DGPU_ARCHS=72 below is for Xavier or NX; for other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.

    /usr/local/bin/cmake .. -DGPU_ARCHS=72  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
    make nvinfer_plugin -j$(nproc)
    

    After the build completes successfully, libnvinfer_plugin.so* is generated under `pwd`/out/.

  4. Replace the original libnvinfer_plugin.so* with the newly generated library:

    sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.8.x.y
    sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n  /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y
    sudo ldconfig
    
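
For build scripts, the table from step 2 can be encoded as a small lookup; the platform labels used here are informal shorthands, not official identifiers:

```shell
# Map a Jetson platform to its GPU_ARCHS value (table from step 2).
gpu_archs_for() {
    case "$1" in
        nano|tx1)             echo 53 ;;
        tx2)                  echo 62 ;;
        agx-xavier|xavier-nx) echo 72 ;;
        *) echo "unknown Jetson platform: $1" >&2; return 1 ;;
    esac
}

gpu_archs_for xavier-nx    # prints 72
```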

Label File#

The label file is a text file containing the names of the classes that the Segformer model is trained to segment. The order in which the classes are listed must match the order in which the model predicts the output. This order is derived from the target_class_id_mapping.json file, which is saved in the results directory after training. Here is an example of the target_class_id_mapping.json file:

{"0": ["foreground"], "1": ["background"]}

Here is an example of the corresponding segformer_labels.txt file. The order in the segformer_labels.txt should match the order of the target_class_id_mapping.json keys:

foreground
background
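
Generating the label file by hand is error-prone; below is a minimal sketch that writes segformer_labels.txt in the key order of the example mapping above (hard-coding the two class names rather than parsing the JSON):

```shell
# Class names in the numeric key order of target_class_id_mapping.json:
# key "0" -> foreground, key "1" -> background.
classes_in_order="foreground background"

# Write one name per line; line N corresponds to class ID N-1.
printf '%s\n' $classes_in_order > segformer_labels.txt
cat segformer_labels.txt
```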

Integrating the model with DeepStream#

The segmentation model is typically used as a primary inference engine, but it can also be used as a secondary inference engine. Download ds-tlt from the deepstream_tao_apps repo.

Follow these steps to use the TensorRT engine file with the ds-tlt:

  1. Generate the TensorRT engine using TAO Deploy.

  2. Once the engine file has been generated successfully, set up ds-tlt with DeepStream 6.1 as described below.

DeepStream Configuration File#

To run this model with the sample ds-tao-segmentation, you must modify the existing pgie_citysemsegformer_tao_config.txt file here to point to this model. For all options, see the configuration file below. To learn more about the parameters, refer to the DeepStream Development Guide.

From TAO 5.0.0, .etlt is deprecated. To integrate an .etlt file directly in the DeepStream app, you need the following parameters in the configuration file:

tlt-encoded-model=<TAO exported .etlt>
tlt-model-key=<Model export key>
int8-calib-file=<Calibration cache file>
[property]
gpu-id=0
net-scale-factor=0.01735207357279195
offsets=123.675;116.28;103.53
labelfile-path=../../models/citysemsegformer_vdeployable_v1.0/labels.txt
model-engine-file=../../models/citysemsegformer_vdeployable_v1.0/citysemsegformer.onnx_b1_gpu0_fp16.engine
# tlt-encoded-model=../../models/citysemsegformer_vdeployable_v1.0/citysemsegformer.etlt # If it is an etlt file
# tlt-model-key=tlt_encode # This is needed if etlt file is used.
onnx-file=../../models/citysemsegformer_vdeployable_v1.0/citysemsegformer.onnx
infer-dims=3;1024;1024
model-color-format=0
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
## workspace-size default to 1024 x 1024 MB
workspace-size=1048576
interval=0
gie-unique-id=1
cluster-mode=2
## 0=Detector, 1=Classifier, 2=Semantic Segmentation, 3=Instance Segmentation, 100=Other
network-type=100 # Skip nvinfer post-processing, use pgie_pad_buffer_probe_network_type100() instead.
## num-detected-classes= is required to set NvDsInferSegmentationMeta::classes.
num-detected-classes=19
## Allow post-processing to access output tensors.
output-tensor-meta=1
##specify the output tensor order, 0(default value) for CHW and 1 for HWC
segmentation-output-order=1
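
The preprocessing constants in this config appear to follow standard ImageNet normalization: offsets holds the per-channel means, and net-scale-factor looks like the reciprocal of the per-channel standard deviations averaged into one scalar (nvinfer accepts only a single scale factor). A quick check of that reading:

```shell
# offsets=123.675;116.28;103.53 matches the ImageNet per-channel means.
# net-scale-factor appears to be 1 / mean(ImageNet per-channel stds):
awk 'BEGIN {
    avg_std = (58.395 + 57.12 + 57.375) / 3;   # averaged stds = 57.63
    printf "%.17f\n", 1 / avg_std;             # ~0.01735207357279195
}'
```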

The following is an example of a modified config file for a 3-channel model trained on the ISBI dataset:

[property]
gpu-id=0
net-scale-factor=0.007843
# Since the model input channel is 3, and pre-processing of SegFormer TAO requires BGR format, set the color format to BGR.
# 0-RGB, 1-BGR, 2-Gray
model-color-format=1 # For grayscale, this should be set to 2
offsets=127.5;127.5;127.5
labelfile-path=/home/nvidia/deepstream_tlt_apps/configs/segformer_tlt/segformer_labels.txt
## Replace the following path with the path to your model file
# Argument to use if you are using a TensorRT engine
model-engine-file=/home/nvidia/deepstream_tlt_apps/models/segformer/segformer_isbi.engine
infer-dims=3;512;512
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1

## 0=Detector, 1=Classifier, 2=Semantic Segmentation (sigmoid activation), 3=Instance Segmentation, 100=skip nvinfer postprocessing
network-type=100
output-tensor-meta=1 # Set this to 1 when network-type is 100
output-blob-names=argmax_1 # If you used softmax in the segmentation model, TAO replaces it with argmax for optimization.
                           # Hence, you need to provide argmax_1.
segmentation-threshold=0.0
##specify the output tensor order, 0(default value) for CHW and 1 for HWC
segmentation-output-order=1

Note

Currently, Segformer supports only TensorRT engine input in the DS configuration file. Convert the .onnx model to a TensorRT engine using tao deploy.
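
Below is a sketch of how that engine is typically produced with TAO Deploy. The spec section is an illustrative assumption, not a verified schema; the exact fields and command-line invocation depend on your TAO Deploy version, so consult the TAO Deploy documentation for SegFormer.

```yaml
# Illustrative gen_trt_engine section of a TAO Deploy experiment spec
# (paths and field names are assumptions for this example).
gen_trt_engine:
  onnx_file: /workspace/models/segformer_isbi.onnx     # exported ONNX model
  trt_engine: /workspace/models/segformer_isbi.engine
  tensorrt:
    data_type: fp16        # matches network-mode=2 in the config above
    workspace_size: 1024
```

The path written to trt_engine is then the value that model-engine-file in the DeepStream configuration should point to.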

Below is a sample ds-tao-segmentation command for inference on a single image:

ds-tao-segmentation -c pgie_config_file -i image_isbi_rgb.jpg