# Deploying to DeepStream

To deploy a TAO-trained UNet model to DeepStream, use tao-converter to generate a device-specific, optimized TensorRT engine, which DeepStream can then ingest. Download the device-specific tao-converter from the TAO converter matrix.

Machine-specific optimizations are performed as part of the engine creation process, so you should generate a distinct engine for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, you will need to generate new engines. Running an engine that was generated with a different version of TensorRT and CUDA is not supported: it causes unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
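
Because engines are tied to the exact TensorRT and CUDA versions, it can help to confirm what the deployment machine has installed before converting. The following is a minimal sketch assuming a Debian-based system; package names may differ with other install methods:

```
# Check installed TensorRT packages and the CUDA toolkit version
# (assumes a Debian-based system with dpkg; adjust for other install methods)
dpkg -l | grep -E "nvinfer|tensorrt"
nvcc --version
```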

See the Exporting the Model documentation for UNet for more details on how to export a TAO model.

## TensorRT Open Source Software (OSS)

UNet models require the TensorRT OSS build because several prerequisite TensorRT plugins are only available in the TensorRT open source repo.

If your deployment platform is an x86 PC with an NVIDIA GPU, follow the TensorRT OSS on x86 instructions; if your deployment platform is NVIDIA Jetson, follow the TensorRT OSS on Jetson (ARM64) instructions.

### TensorRT OSS on x86

Building TensorRT OSS on x86:

1. Install CMake (>= 3.13).

Note

TensorRT OSS requires CMake >= v3.13, so install CMake 3.13 if your CMake version is lower than 3.13.

```
sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
```

2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:

```
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
```

If /usr/local/cuda/samples doesn't exist on your system, you can download deviceQuery.cpp from this GitHub repo, then compile and run it:

```
nvcc deviceQuery.cpp -o deviceQuery
./deviceQuery
```

The output looks similar to the following, which indicates that GPU_ARCHS is 75 based on the CUDA Capability major/minor version:

```
Detected 2 CUDA Capable device(s)
Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5
```

3. Build TensorRT OSS:

```
git clone -b 21.08 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
```


Note

Make sure your GPU_ARCHS from step 2 is in TensorRT OSS CMakeLists.txt. If GPU_ARCHS is not in TensorRT OSS CMakeLists.txt, add -DGPU_ARCHS=<VER> as below, where <VER> represents GPU_ARCHS from step 2.

```
/usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
```

After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so*:

```
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.x.y
sudo ldconfig
```

### TensorRT OSS on Jetson (ARM64)

1. Install CMake (>= 3.13).

Note

TensorRT OSS requires CMake >= v3.13, while the default CMake on Jetson/Ubuntu 18.04 is CMake 3.10.2. Upgrade CMake using:

```
sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
```


2. Get the GPU architecture based on your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table.

| Jetson Platform | GPU_ARCHS |
|---|---|
| Nano/Tx1 | 53 |
| Tx2 | 62 |
| AGX Xavier/Xavier NX | 72 |
3. Build TensorRT OSS:

```
git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
```

Note

The -DGPU_ARCHS=72 below is for Xavier or NX. For other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.

```
/usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
```


After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so* with the newly generated one:

```
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y ${HOME}/libnvinfer_plugin.so.8.x.y.bak   # back up the original libnvinfer_plugin.so.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.8.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.x.y
sudo ldconfig
```

## Generating an Engine Using tao-converter

### Instructions for x86

Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

1. Unzip the zip file on the target machine.

2. Install the OpenSSL package using the command:

```
sudo apt-get install libssl-dev
```

3. Export the following environment variables:

```
export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"
```

4. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.

5. Run the tao-converter using the sample command below and generate the engine.

Note

Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.

### Instructions for Jetson

For the Jetson platform, the tao-converter is available to download in the NVIDIA developer zone. You may choose the version you wish to download as listed in the overview section. Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine.

1. Unzip the zip file on the target machine.

2. Install the OpenSSL package using the command:

```
sudo apt-get install libssl-dev
```

3. Export the following environment variables:

```
export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"
```

4. For Jetson devices, TensorRT comes pre-installed with JetPack. If you are using an older JetPack, upgrade to JetPack 5.0DP.

5. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.

6. Run the tao-converter using the sample command below and generate the engine.

Note

Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.

### Using the tao-converter

```
tao-converter [-h] -k <encryption_key>
              -p <optimization_profiles>
              [-d <input_dimensions>]
              [-o <comma separated output nodes>]
              [-c </path/to/calibration/cache_file>]
              [-e </path/to/output/engine>]
              [-b <calibration batch size>]
              [-m <maximum batch size of the TRT engine>]
              [-t <engine datatype>]
              [-w <maximum workspace size of the TRT Engine>]
              [-i <input dimension ordering>]
              [-s]
              [-u <DLA_core>]
              input_file
```

#### Required Arguments

- input_file: The path to the .etlt model exported using export.
- -p: Optimization profiles for .etlt models with dynamic shape. Use a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. This argument can be specified multiple times if the model has multiple input tensors.
- -k: The key used to encode the .tlt model during training.

#### Optional Arguments

- -e: The path to save the engine to. The default path is ./saved.engine. Use .engine or .trt as the extension for the engine path.
- -t: The desired engine data type. This option generates a calibration cache if in INT8 mode. The default value is fp32. The options are fp32, fp16, and int8.
- -w: The maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).
- -i: The input dimension ordering. The default value is nchw. The options are nchw, nhwc, and nc. For UNet, you can omit this argument.
- -s: A Boolean value specifying whether to apply TensorRT strict type constraints when building the TensorRT engine.
- -u: Specifies the DLA core index when building the TensorRT engine on Jetson devices.
- -d: A comma-separated list of input dimensions that should match the dimensions used for export.
- -o: A comma-separated list of output blob names that should match the output configuration used for export.

#### INT8 Mode Arguments

- -c: The path to the calibration cache file for INT8 mode. The default path is ./cal.bin.
- -b: The batch size used during the export step for INT8 calibration cache generation (default: 8).
- -m: The maximum batch size for the TensorRT engine. The default value is 16. If you encounter out-of-memory issues, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape (which is only possible for new models introduced in TAO Toolkit 3.21.08).

#### Sample Output Log

Here is a sample log for exporting a UNet model:

```
tao-converter -k $KEY \
              -c $USER_EXPERIMENT_DIR/export/isbi_cal.bin \
              -e $USER_EXPERIMENT_DIR/export/trt.int8.tlt.isbi.engine \
              -t int8 \
              -p input_1,1x1x572x572,4x1x572x572,16x1x572x572 \
              /workspace/tao-experiments/faster_rcnn/resnet18_pruned.epoch45.etlt
..
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 2 output network tensors.
```
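
The sample above builds an INT8 engine with a calibration cache. For an FP16 engine, the calibration arguments can be dropped. The following is a sketch with placeholder paths and a hypothetical 3-channel 320x320 resnet18 UNet export (matching the DeepStream example later in this page); adjust the key, -p shapes, and file names to your own export:

```
# Hypothetical FP16 conversion; paths, key, and input shapes are placeholders
tao-converter -k $KEY \
              -e $USER_EXPERIMENT_DIR/export/trt.fp16.unet.engine \
              -t fp16 \
              -p input_1,1x3x320x320,4x3x320x320,16x3x320x320 \
              $USER_EXPERIMENT_DIR/export/unet_resnet18.etlt
```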


Note

To use the default tao-converter available in the TAO Toolkit package, append tao to the sample usage of tao-converter as mentioned here.

Note

The input name to use for the shufflenet backbone is input_2:0. For all other backbones, the input name to use is input_1:0.

Once the model and/or TensorRT engine file have been generated, two additional files are required:

## Label File

The label file is a text file containing the names of the classes that the UNet model is trained to segment. The order in which the classes are listed here must match the order in which the model predicts the output. This order is derived from the target_class_id_mapping.json file that is saved in the results directory after training. Here is an example of the target_class_id_mapping.json file:

```
{"0": ["foreground"], "1": ["background"]}
```


Here is an example of the corresponding unet_labels.txt file. The order in unet_labels.txt should match the order of the target_class_id_mapping.json keys:

```
foreground
background
```
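
If you prefer to generate the label file directly from target_class_id_mapping.json rather than writing it by hand, a small shell one-liner can do it. This is a sketch assuming the jq utility is installed; the file names match the examples above:

```
# Emit one class name per line, ordered by numeric class ID (assumes jq is installed)
jq -r 'to_entries | sort_by(.key | tonumber) | .[].value[0]' target_class_id_mapping.json > unet_labels.txt
```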


## Integrating the model with DeepStream

The segmentation model is typically used as the primary inference engine, but it can also be used as a secondary inference engine. Download ds-tlt from the deepstream_tao_apps repo.

Follow these steps to use the TensorRT engine file with the ds-tlt:

1. Generate the TensorRT engine using tao-converter. Detailed instructions are provided in the Generating an engine using tao-converter section.

2. Once the engine file is generated successfully, set up ds-tlt with DeepStream 6.1 as described below.
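
The exact build steps for the sample apps live in the deepstream_tao_apps README; roughly, you export the CUDA version that matches your DeepStream 6.1 install and run make. The following is a sketch under those assumptions (the CUDA_VER value shown is an example, not a requirement):

```
# Sketch only: clone and build the TAO sample apps (see the repo README for authoritative steps)
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
export CUDA_VER=11.6   # assumption: the CUDA version shipped with your DeepStream install
make
```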

## DeepStream Configuration File

To run this model with the sample ds-tao-segmentation app, modify the existing pgie_unet_tlt_config.txt file from the deepstream_tao_apps repo to point to this model. For all options, see the configuration file below. To learn more about the parameters, refer to the DeepStream Development Guide.

```
[property]
gpu-id=0
net-scale-factor=0.007843
# 0-RGB, 1-BGR, 2-Gray
model-color-format=1 # For grayscale, this should be set to 2
offsets=127.5;127.5;127.5
labelfile-path=</Path/to/unet_labels.txt>
## Replace the following path with the path to your model file
# You can provide the model as etlt file or convert it to tensorrt engine offline using tao-converter and
# provide it in the config file. If you are providing the etlt model, do not forget to provide the model key.
tlt-encoded-model=/path/to/etlt file
tlt-model-key=tlt_encode

# If you provide the model as etlt file, you need to provide the calibration cache and text file here
labelfile-path=/path/to/labels.txt
int8-calib-file=/path/to/calibration cache text file
# Argument to be used if you are using a TensorRT engine
# model-engine-file=<path to TensorRT engine generated by tao-converter>
infer-dims=c;h;w # where c = number of channels, h = height of the model input, w = width of model input.
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1

## 0=Detector, 1=Classifier, 2=Semantic Segmentation (sigmoid activation), 3=Instance Segmentation, 100=skip nvinfer postprocessing
network-type=100 # set this to 2 if sigmoid activation was used for semantic segmentation

output-tensor-meta=1 # Set this to 1 when network-type is 100
output-blob-names=argmax_1 # If you had used softmax for the segmentation model, it would have been replaced with argmax by TAO for optimization. Hence, you need to provide argmax_1
segmentation-threshold=0.0
##specify the output tensor order, 0(default value) for CHW and 1 for HWC
segmentation-output-order=1

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
```


The following is an example of a modified config file for a resnet18 3-channel model trained on the ISBI dataset:

```
[property]
gpu-id=0
net-scale-factor=0.007843
# Since the model input has 3 channels and UNet TAO pre-processing requires the BGR format, set the color format to BGR.
# 0-RGB, 1-BGR, 2-Gray
model-color-format=1 # For grayscale, this should be set to 2
offsets=127.5;127.5;127.5
labelfile-path=/home/nvidia/deepstream_tlt_apps/configs/unet_tlt/unet_labels.txt
## Replace the following path with the path to your model file
# You can provide the model as etlt file or convert it to tensorrt engine offline using tao-converter and
# provide it in the config file. If you are providing the etlt model, do not forget to provide the model key.
tlt-encoded-model=/path/to/unet_resnet18.etlt
tlt-model-key=tlt_encode
# Argument to be used if you are using a TensorRT engine
# model-engine-file=/home/nvidia/deepstream_tlt_apps/models/unet/unet_resnet18_isbi.engine
infer-dims=3;320;320
batch-size=1

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1

## 0=Detector, 1=Classifier, 2=Semantic Segmentation (sigmoid activation), 3=Instance Segmentation, 100=skip nvinfer postprocessing
network-type=100

output-tensor-meta=1 # Set this to 1 when network-type is 100

output-blob-names=argmax_1 # If you had used softmax for segmentation model, it would have been replaced with argmax by TAO for optimization.
# Hence, you need to provide argmax_1
segmentation-threshold=0.0
##specify the output tensor order, 0(default value) for CHW and 1 for HWC
segmentation-output-order=1

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
```


Below is a sample ds-tao-segmentation command for inference on a single image:

```
ds-tao-segmentation -c pgie_config_file -i image_isbi_rgb.jpg
```
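
Depending on the version of deepstream_tao_apps you built, the same app can also consume compressed video streams; check the repo README for the inputs your build supports. A hedged example, assuming the -c and -i flags behave as in the image example above:

```
# Hypothetical run on an H.264 elementary stream; flags assumed to match the image example
ds-tao-segmentation -c pgie_config_file -i sample_720p.h264
```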


Note

The .png image format is not supported by DeepStream, so inference images need to be converted to .jpg. If model_input_channels is set to 3, ensure that grayscale images are converted to three-channel images.
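
One way to perform both conversions is with ImageMagick, if it is available on your system; the file names below are placeholders:

```
# Convert a PNG to JPEG (placeholder file names; assumes ImageMagick's convert is installed)
convert image_isbi.png image_isbi_rgb.jpg

# Force a grayscale image to three channels while converting
convert image_gray.png -type TrueColor image_isbi_rgb.jpg
```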