12.19. DeepStream Batch Pipeline

The DeepStream Batch pipeline is one of the reference pipelines provided with Clara Deploy SDK.

The pipeline is bundled with an organ detection model running on top of the DeepStream SDK's reference application. It accepts an H.264-encoded .mp4 file and performs object detection, locating the stomach and intestines in the input video. The output of the pipeline is a rendered video (bounding boxes with labels overlaid on the original video) in H.264 format (output.mp4), as well as the primary detector output in a modified KITTI metadata format (.txt files).
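
For reference, each line of the detector's KITTI-format .txt output describes one detected object: a class label, bounding-box coordinates, and zero-filled fields that are not populated. The class name and numbers below are purely illustrative, and the "modified" format referenced above may differ slightly (for example, by appending a confidence value):

stomach 0.00 0 0.00 512.31 204.88 768.02 433.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00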

The DeepStream Batch pipeline is defined in the Clara Deploy pipeline definition language. This pipeline consists of a single operator: deepstream.

The following is the pipeline definition, with comments describing the operator's function as well as its input and output.

api-version: 0.4.0
name: deepstream-batch-pipeline
parameters:
  DS_CONFIG: configs/config.txt
  DS_INPUT:   # if empty, any .mp4 file in /input folder would be used.
  DS_OUTPUT: output.mp4
operators:
- name: deepstream
  description: DeepStream Operator
  container:
    image: clara/app-deepstream
    tag: latest
    command: ["/workspace/launch_deepstream.sh"]
  variables:
    DS_CONFIG: ${{DS_CONFIG}}
    DS_INPUT: ${{DS_INPUT}}
    DS_OUTPUT: ${{DS_OUTPUT}}
  requests:
    gpu: 1
    memory: 8192
  input:
  - path: /input/
  output:
  - path: /output/

Please refer to Run Reference Pipelines using Local Input Files in the How to Run a Reference Pipeline section to learn how to register the pipeline and execute it using local input files.

Example:

clara pull clara_deepstream_batch_pipeline
cd clara_deepstream_batch_pipeline

# Unzip source code
unzip source.zip

# Unzip app_ds_torso-model_v1.zip and app_ds_torso-input_v1.zip into `input/app_ds_torso` folder
./download_input.sh

clara create pipeline -p deepstream-batch-pipeline.yaml
clara create jobs -n <JOB NAME> -p <PIPELINE ID> -f input/app_ds_torso
clara start job -j <JOB ID>

The input requires a folder containing the following folders/files:

.
├── configs              # folder for configuration (name depends on `${DS_CONFIG}`)
│   └── config.txt       # a configuration file (name depends on `${DS_CONFIG}`)
├── models               # folder for models/labels (used by the configuration file)
│   ├── calibration.bin  # calibration data needed for the model
│   ├── labels.txt       # label data
│   ├── model.etlt       # device-independent model file
│   └── model.engine     # device-specific model file
└── test_input.mp4       # input video file (.mp4 file in H.264 format)
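
The configuration files tie these folders together by absolute path. The excerpt below is purely illustrative and not the verbatim bundled configuration; it shows the style of an nvinfer inference configuration that references the model, label, and calibration files under /input:

# Illustrative nvinfer-style inference configuration (not the bundled file verbatim)
[property]
# device-independent model and device-specific engine
tlt-encoded-model=/input/models/model.etlt
model-engine-file=/input/models/model.engine
# labels and INT8 calibration data
labelfile-path=/input/models/labels.txt
int8-calib-file=/input/models/calibration.bin
batch-size=4
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2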

The bundled model (app_ds_torso-model_v1.zip) includes the configs and models folders, so only an input video file (.mp4 in H.264 format) needs to be added to the input folder.

Note that the bundled input (app_ds_torso-input_v1.zip) includes a sample input video file that you can use for testing.

With the bundled model (app_ds_torso), the output is the rendered video (bounding boxes with labels overlaid on the original video) in H.264 format (output.mp4), as well as the primary detector output in a modified KITTI metadata format (.txt files).

Once the job has completed successfully, you can download the output payload by using the clara download command.

Output payload files are located under the /operators/deepstream/ folder. Use the following command to download the output files:

clara download <JOB ID>:/operators/deepstream/* result  # download files into the result folder

Then, you can view the output video file using the following command:

google-chrome result/output.mp4  # View the video output using Chrome, which has a built-in viewer for H.264 files.
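
If Chrome is not available, any player with H.264 support can be used instead; for example, assuming ffmpeg (ffplay) or VLC is installed:

ffplay result/output.mp4   # or: vlc result/output.mp4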

If you are working on the server machine, the payload folder is directly accessible at the default payload location /clara/payloads/<Payload ID>/.

The DeepStream App Operator is a wrapper around the DeepStream SDK's reference application, and the container image used by the operator doesn't include the models or configurations for the application. Instead, the model and configuration files are uploaded along with the input video file as part of the input payload every time a job is triggered, which can cause performance degradation.

In addition, the bundled TRT model (resnet18_detector.etlt_b4_fp16.engine) is a device-specific model, optimized from the device-independent model (resnet18_detector.etlt) on a GV100 Volta GPU (32GB). If loading the device-specific model file fails, the application creates the optimized device-specific model from the device-independent model at runtime, which can delay startup.
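
If your target GPU differs from the one the bundled engine was built for, the engine can also be regenerated ahead of time from the device-independent .etlt model using NVIDIA's tlt-converter tool, avoiding the runtime rebuild. The invocation below is only a sketch: the encryption key, input dimensions, and output node names are placeholders and must match the actual model.

# Sketch only: <KEY>, <HEIGHT>/<WIDTH>, and <OUTPUT_NODES> are placeholders
# that must match the bundled .etlt model.
tlt-converter -k <KEY> \
              -d 3,<HEIGHT>,<WIDTH> \
              -o <OUTPUT_NODES> \
              -m 4 -t fp16 \
              -e models/model.engine \
              models/model.etlt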

To mitigate the performance degradation, build a custom Docker image from the source code by copying the models/configs folders into the container and updating the paths, image, and tag.

clara pull clara_deepstream_batch_pipeline
cd clara_deepstream_batch_pipeline

# Unzip source code
unzip source.zip

# Unzip app_ds_torso-model_v1.zip and app_ds_torso-input_v1.zip into `input/app_ds_torso` folder
./download_input.sh

# Update Dockerfile to add configs/models folder to the container
echo "COPY ./input/app_ds_torso/configs /configs
COPY ./input/app_ds_torso/models /models" >> Dockerfile

# Convert '/input/configs/' to '/configs/'
sed -i -e 's#/input/configs/#/configs/#' input/app_ds_torso/configs/config.txt

# Convert '/input/models/' to '/models/'
sed -i -e 's#/input/models/#/models/#' input/app_ds_torso/configs/dslhs_nvinfer_config.txt

# Convert 'configs/config.txt' to '/configs/config.txt'
sed -i -e 's#configs/config.txt#/configs/config.txt#' deepstream-batch-pipeline.yaml

# Update the image used in the pipeline definition to 'clara/app_deepstream:latest'
sed -i -e 's#image: .*#image: clara/app_deepstream#' deepstream-batch-pipeline.yaml
sed -i -e 's#tag: .*#tag: latest#' deepstream-batch-pipeline.yaml

# Build local image: clara/app_deepstream:latest
./build.sh

# Now you can create/trigger pipeline using 'deepstream-batch-pipeline.yaml' ...
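
Once the custom image is built, the pipeline can be registered and triggered with the same commands as before. Because the models and configs are now baked into the image, the input payload folder should only need to contain the H.264 .mp4 input video (the input folder below is a placeholder):

clara create pipeline -p deepstream-batch-pipeline.yaml
clara create jobs -n <JOB NAME> -p <PIPELINE ID> -f <INPUT FOLDER>
clara start job -j <JOB ID>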
