9.25. DeepStream App Operator

The DeepStream App Operator is a containerized application developed for use as an operator in Clara Deploy pipelines. It is built on top of the DeepStream SDK, which provides a reference application for deploying models created with the NVIDIA Transfer Learning Toolkit or other deep learning frameworks.

9.25.1. Quick Start

Sample input data (for app_ds_torso) is available from the DeepStream Batch Pipeline:

clara pull clara_deepstream_batch_pipeline
cd clara_deepstream_batch_pipeline
# Unzip source code
unzip source.zip
# Build local image
./build.sh
# Unzip app_ds_torso-model_v1.zip and app_ds_torso-input_v1.zip into `input/app_ds_torso` folder
./download_input.sh
# Launch the docker container with the sample input at `input/app_ds_torso`
./run.sh app_ds_torso

9.25.2. Inputs & Outputs

This application, in the form of a Docker container, expects an input folder (/input by default), which can be mapped to a host volume when the Docker container is started.

The input folder’s structure looks like this:

input
└── app_ds_torso              # <required>: app folder
    ├── configs               # <required>: folder for configuration (name depends on `${DS_CONFIG}`)
    │   ├── config.txt        # <required>: a configuration file (name depends on `${DS_CONFIG}`)
    │   └── dslhs_nvinfer_config.txt
    ├── models                # <required>: folder for models/labels (used by the configuration file)
    │   ├── calibration.bin
    │   ├── labels.txt
    │   ├── resnet18_detector.etlt                 # device-independent model file
    │   └── resnet18_detector.etlt_b4_fp16.engine  # device-specific model file
    └── test_input.mp4        # <required>: input video file (.mp4 file in H.264 format)
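
The required layout above can be sanity-checked before launching the container. The following is a minimal sketch; the helper name and the exact file set are illustrative and not part of the operator itself:

```python
import os

# Required entries relative to the app folder (per the tree above);
# the configuration file name actually depends on ${DS_CONFIG}.
REQUIRED = [
    "configs/config.txt",
    "models",
]

def validate_app_folder(app_dir):
    """Return a list of required entries missing from app_dir (hypothetical helper)."""
    missing = [p for p in REQUIRED if not os.path.exists(os.path.join(app_dir, p))]
    # At least one .mp4 input file must be present at the top level.
    if not any(f.endswith(".mp4") for f in os.listdir(app_dir)):
        missing.append("*.mp4")
    return missing
```

Running this against input/app_ds_torso before ./run.sh can catch a missing model or input file early, instead of inside the container.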

The sample DeepStream app (app_ds_torso) accepts a .mp4 file in H.264 format and outputs a .mp4 file in the same format.

The configuration’s input/output folders are set to /input and /output inside the Docker container. ${DS_CONFIG}, ${DS_INPUT}, and ${DS_OUTPUT} are replaced with the environment variables provided to the Docker container:

  • DS_CONFIG: Path to the DeepStream app’s configuration file. (default: configs/config.txt)

  • DS_INPUT: Input file name in the /input folder. If empty, any .mp4 file in the /input folder is used. (default: '')

  • DS_OUTPUT: Output file name in the /output folder. (default: output.mp4)
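
The substitution behavior can be sketched as follows, assuming the operator performs a simple environment-variable expansion over the configuration text (the function shown is illustrative, not the operator’s actual code):

```python
import glob
import os
from string import Template

def resolve_config(config_text, input_dir="/input", env=os.environ):
    """Expand ${DS_INPUT}/${DS_OUTPUT} in a DeepStream config (illustrative sketch)."""
    ds_input = env.get("DS_INPUT", "")
    if not ds_input:
        # If DS_INPUT is empty, fall back to any .mp4 file in the input folder.
        candidates = sorted(glob.glob(os.path.join(input_dir, "*.mp4")))
        ds_input = os.path.basename(candidates[0]) if candidates else ""
    ds_output = env.get("DS_OUTPUT", "output.mp4")
    return Template(config_text).safe_substitute(DS_INPUT=ds_input, DS_OUTPUT=ds_output)
```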

configs/config.txt

...
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///input/${DS_INPUT}
...
config-file=/input/configs/dslhs_nvinfer_config.txt
...

[sink1]
...
output-file=/output/${DS_OUTPUT}
...
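
After substitution, the file is a standard INI-style document, so its properties can be inspected with Python’s configparser. This is a sketch for inspection only; DeepStream reads the file natively:

```python
import configparser

def read_ds_config(text):
    """Parse an INI-style DeepStream config into nested dicts."""
    parser = configparser.ConfigParser(strict=False)
    parser.read_string(text)
    return {section: dict(parser.items(section)) for section in parser.sections()}

# Example fragment mirroring the config above (values already substituted):
cfg = read_ds_config("""
[source0]
enable=1
type=3
uri=file:///input/test_input.mp4

[sink1]
output-file=/output/output.mp4
""")
```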

Please see this document to understand what each property in configs/config.txt means.

configs/dslhs_nvinfer_config.txt

...
tlt-model-key=nvidia_tlt
tlt-encoded-model=/input/models/resnet18_detector.etlt
int8-calib-file=/input/models/calibration.bin
labelfile-path=/input/models/labels.txt
model-engine-file=/input/models/resnet18_detector.etlt_b4_fp16.engine
...

Please see this document to understand what each property in configs/dslhs_nvinfer_config.txt means.

The sample DeepStream app (app_ds_torso) outputs the rendered video (bounding boxes with labels overlaid on the original video) in H.264 format (output.mp4), as well as the primary detector output in a modified KITTI metadata format (.txt files).
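
A KITTI-style detection line can be parsed as sketched below. This assumes the standard KITTI field order (class, truncation, occlusion, alpha, then the 2D bounding box as left, top, right, bottom); the operator’s “modified” variant may differ, so adjust the indices to match your files:

```python
def parse_kitti_line(line):
    """Parse one KITTI-style detection line into (label, bbox).

    Assumes standard KITTI field order; the bbox is (left, top, right, bottom).
    """
    fields = line.split()
    label = fields[0]
    left, top, right, bottom = (float(v) for v in fields[4:8])
    return label, (left, top, right, bottom)
```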

The device-specific model (resnet18_detector.etlt_b4_fp16.engine) is optimized from the device-independent model (resnet18_detector.etlt) on a GV100 Volta GPU (32 GB). If loading the device-specific model file fails, the application generates an optimized device-specific engine from the device-independent model at runtime, which can add some delay to startup.
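
The load-or-rebuild behavior described above follows a common caching pattern, sketched here with hypothetical helper functions (the real work is done by DeepStream/TensorRT, not this code):

```python
import os

def load_or_build_engine(engine_path, etlt_path, build_fn, load_fn):
    """Use the device-specific engine if it loads; otherwise rebuild it
    from the device-independent .etlt model (slower first start)."""
    if os.path.exists(engine_path):
        try:
            return load_fn(engine_path)
        except Exception:
            pass  # engine was built for a different GPU/precision; fall through
    # Optimize the portable model for the current device instead.
    return build_fn(etlt_path)
```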

You can view the output video with the Google Chrome web browser:

google-chrome output/output.mp4

9.25.3. Sample AI model (Organ Object Detector)

The sample model (app_ds_torso) bundled with the pipeline comes from the NVIDIA Clara AGX Developer Kit (as part of Clara-AGX-TLT) and detects the stomach and intestines in the input video. Instructions for training the model are available in the Clara AGX Developer Kit; the model is trained and deployed with the NVIDIA Transfer Learning Toolkit.