Configuration
For more context on configuration when the microservice is used:

In the Multi-Camera Tracking app, please refer to its Operation Parameters section.
In the Occupancy Analytics app, please refer to its Operation Parameters section.
In the Few-Shot Product Recognition app, please refer to its Operation Parameters section.
As a standalone microservice, refer to the README.md in its respective directory within metropolis-apps-standalone-deployment/modules/.
Pipeline Configuration Overview
Here are a few ways to configure the provided DeepStream pipeline/application inside the microservice:
To change input source type and number of channels, refer to:
Source Group for offline configuration.
Perception microservice API for dynamic configuration.
To change AI model, batch size, and model parameters, refer to Primary and Secondary GIE Group and the Gst-nvinfer plugin.
To change Multi-Object Tracker parameters, refer to the Gst-nvtracker and the NvMultiObjectTracker Parameter Tuning Guide.
To add your custom TAO models in DeepStream and use that with Metropolis Microservices, refer to TAO-DeepStream integration.
In the K8s deployment, you can change the configuration by overriding config.properties as per your requirements:
configs:
  config.properties:
    sink1:
      enable: 1
      # msg-broker-conn-str: kafka1.data.nvidiagrid.net;9092;metromind-test-ds4
      # topic: metromind-test-ds4
      msg-broker-conn-str: mdx-kafka-cluster-kafka-brokers;9092;mdx-raw
      topic: mdx-raw
      msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
      msg-conv-config: dstest5_msgconv_sample_config.txt
      # msg-conv-frame-interval: 30
      msg-conv-msg2p-new-api: 0
      msg-conv-payload-type: 2
      msg-conv-frame-interval: 1
      type: 6
    primary-gie:
      enable: 1
      gpu-id: 0
      # Required to display the PGIE labels, should be added even when using config-file
      # property
      batch-size: 1
      # Required by the app for OSD, not a plugin property
      bbox-border-color0: 1;0;0;1
      bbox-border-color1: 0;1;1;1
      bbox-border-color2: 0;1;1;1
      bbox-border-color3: 0;1;0;1
      interval: 0
      # Required by the app for SGIE, when used along with config-file property
      gie-unique-id: 1
      nvbuf-memory-type: 0
      config-file: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-fewshot-learning-app/configs/mtmc/mtmc_pgie_config.txt
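As a hedged sketch of how such an override is typically applied (the release name and override file name below are hypothetical placeholders; use the chart version from Deploy Perception (WDM-DeepStream) Microservice):

# Save the override above to a file (e.g., override-values.yaml), then apply it.
# Release and chart names are illustrative; substitute your actual chart version.
helm upgrade --install mdx-wdm-ds-app mdx-wdm-ds-app-0.0.x.tgz -f override-values.yaml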
Source/Input Update
Users can add new cameras (RTSP/file input) to the pipeline.
For Docker Compose Deployments
Addition of cameras/sources is controlled via:

Static DeepStream configuration file
The most important config-keys to change are num-source-bins, list, sensor-id-list, sensor-name-list, and max-batch-size in the [source-list] config group. More information on these DeepStream configuration parameters can be found here.
Dynamic REST APIs
If the configuration file is configured to use the DeepStream SDK’s nvmultiurisrcbin plugin, users can add new cameras dynamically to the deployment. Please refer to the REST APIs to use here.
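As an illustrative sketch, a new sensor can be added with a POST to the nvmultiurisrcbin REST endpoint. The host and port below match the http-ip/http-port values in the sample [source-list] config that follows; the payload fields follow the DeepStream stream-add schema, but please verify both against your DeepStream version's REST API documentation:

# Illustrative request; verify the endpoint and payload fields for your DeepStream version
curl -XPOST 'http://localhost:9000/api/v1/stream/add' -d '{
  "key": "sensor",
  "value": {
    "camera_id": "UniqueSensorId2",
    "camera_name": "UniqueSensorName2",
    "camera_url": "rtsp://<ip>:<port>/stream",
    "change": "camera_add",
    "metadata": {
      "resolution": "1920x1080",
      "codec": "h264",
      "framerate": 30
    }
  },
  "headers": {
    "source": "vst",
    "created_at": "2024-01-01T00:00:00.000Z"
  }
}'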
Sample for the [source-list] config group:
# Sources
[source-list]
num-source-bins=1
list=file:///opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-fewshot-learning-app/samples/Building_K_Cam7.mp4
#sensor-id-list vector is one to one mapped with the uri-list
#identifies each sensor by a unique ID
#sensor-id-list=UniqueSensorId1;UniqueSensorId2
sensor-id-list=UniqueSensorId1
#Optional sensor-name-list vector is one to one mapped with the uri-list
sensor-name-list=UniqueSensorName1
# Set use-nvmultiurisrcbin to 1 to enable sensor provisioning/update feature
use-nvmultiurisrcbin=1
max-batch-size=1
http-ip=localhost
http-port=9000
#sgie batch size is number of sources * fair fraction of number of objects detected per frame per source
#the fair fraction of number of object detected is assumed to be 4
sgie-batch-size=40
[source-attr-all]
enable=1
type=3
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=100
rtsp-reconnect-interval-sec=0
For Kubernetes Deployments
Addition of cameras is controlled using the Video Storage Toolkit (VST), which takes streaming sources as input; SDR then adds them to the perception pipeline for dynamic inferencing using the Metropolis apps.
Model Update
For Docker Compose Deployments
Model Addition
Navigate to the metropolis-apps-standalone-deployment/docker-compose/<app-name> directory, downloaded & extracted in the steps from Getting Started. Copy the custom model (either an ONNX file or an engine file) here or into a sub-directory.

Modify & add the lines below - lines 10-12, which copy the custom model - in the DeepStream Dockerfile used, which is ./Dockerfiles/deepstream-cnn.Dockerfile by default. This is to include the desired models in the Docker Compose app build.

# set base image (host OS)
FROM nvcr.io/nfgnkvuikvjm/mdx-v2-0/mdx-perception:2.1

# set the working directory in the container
WORKDIR /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-fewshot-learning-app

# copy the dependencies file to the working directory
COPY ./deepstream/configs/cnn-models/* ./

# Copy the local custom model engine files to the working directory (the example below is for the default CNN-based models).
# Please note this engine file path will be used for the parameter `model-engine-file` in the config file called `ds-<app-name>-pgie-config.yml`.
COPY ./deepstream/configs/cnn-models/* ./

# copy the start script
COPY ./deepstream/init-scripts/ds-start.sh ./
For PGIE:
If you have an engine file, update the model-engine-file parameter in the ./deepstream/configs/<model-type>/ds-<app-name>-pgie-config.yml file, where model-type is cnn-models by default.
If you have an ONNX model, update the onnx-file parameter in the ./deepstream/configs/<model-type>/ds-<app-name>-pgie-config.yml file, where model-type is cnn-models by default.
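For illustration, the relevant keys in the PGIE config would look like the following; the model paths here are hypothetical placeholders:

property:
  ...
  # Hypothetical paths; use whichever of the two keys applies to your model artifact
  onnx-file: ../../models/custom-detector/model.onnx
  model-engine-file: ../../models/custom-detector/model.onnx_b4_gpu0_fp16.engine
  ...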
For the ReID model:
If you have an engine file, update the modelEngineFile parameter in the ./deepstream/configs/<model-type>/ds-nvdcf-accuracy-tracker-config.yml file, where model-type is cnn-models by default.
If you have an ONNX model, update the onnxFile parameter in the ./deepstream/configs/<model-type>/ds-nvdcf-accuracy-tracker-config.yml file, where model-type is cnn-models by default.
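Correspondingly, a hedged sketch of the ReID keys in the tracker config, again with hypothetical paths:

ReID:
  ...
  # Hypothetical paths; set whichever key matches your model artifact
  onnxFile: ../../models/custom-reid/model.onnx
  modelEngineFile: ../../models/custom-reid/model.onnx_b100_gpu0_fp16.engine
  ...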
Note that for transformer models, <model-type> is transformer-models.
Model Configuration
Changes in model configuration are typically needed for the DeepStream config files used in the Perception Docker container.
The files are at /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-fewshot-learning-app/, inside the container.
The files and config groups to change are noted below.
| File name | Config group to change |
|---|---|
| configs/mtmc/mtmc_config.txt (DeepStream config) | [primary-gie] |
| configs/mtmc/mtmc_pgie_config.txt (PGIE config) | [property] |
| configs/mtmc/config_tracker_NvDCF_accuracy.yml (ReID config) | [property] |
For the configuration keys to update, please refer to the sample configs in YAML format in the PGIE config group changes section for the K8s deployment. These config files are also available in the Docker Compose package, where users can edit them; they are copied into the container during the Docker Compose app build.
Once the Dockerfile and configs are updated, please deploy using Docker Compose - Occupancy Analytics Pipeline / Multi-Camera Tracking Pipeline.
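As a hedged example of redeploying after these edits (the compose file name is a placeholder; use the compose file for your app from the deployment package):

# Rebuild the Perception image so the new model and configs are included, then restart.
# <app-name>.yaml is a placeholder for your app's compose file.
docker compose -f <app-name>.yaml build
docker compose -f <app-name>.yaml up -d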
For Kubernetes Deployments
Prerequisites
Perception Microservice already running - required to copy the model to storage.
NGC_API_KEY - to pull the latest chart from the NGC Prod Registry.
MODEL_LOCAL_PATH - relative path of the model file present locally on the system.
WDM_DS_CHART_VER - chart to be deployed; please use the latest chart from Deploy Perception (WDM-DeepStream) Microservice. Note that only the chart version needs to be noted, e.g. mdx-wdm-ds-app-0.0.x.tgz.
Update DS Configs - update the relevant DS configs for the new model. Refer to Update Microservice Configuration.
DS_CONFIG_PATH - relative path of the updated config file for the new model; an example config file is packaged in the script resource ds_configs_helm_values.yaml.
Prepare Update Scripts
Setting up NGC CLI:
Set up the NGC CLI tool on an Ubuntu 20.04 machine by following the instructions from this page. Select the ‘AMD64 Linux Install’ tab for Ubuntu installation. During the ngc config set command, select nfgnkvuikvjm as the org and mdx-v2-0 as the team.
Download perception model update scripts:
Using the commands below, download and extract the contents of the artifact:

# Download the artifact
$ ngc registry resource download-version "nfgnkvuikvjm/mdx-v2-0/ds-model-update-script:0.0.2"
$ cd ds-model-update-script_v0.0.2/

# Verify necessary files to update model for DS
$ ls
README.md  ds_configs_helm_values.yaml  ds_model_update.sh
Update Microservice Configuration
For the detection model (PGIE)
In the Helm override values YAML file ds_configs_helm_values.yaml, please update the configuration keys for:
mdx-ds:
  configs:
    primary-gie:
      <change configs for config-keys like batch-size, memory buffer type, etc. here.>
    mtmc_pgie_config.yml:
      property:
        <Model parameters could also be changed as needed. Offsets, net-scale-factor, infer-dims, num-detected-classes, output-blob-names, cluster-mode, parse-bbox-func-name, pre-cluster-threshold and other parameters can be different for a new model.>
        labelfile-path: <Provide the new labelfile>
        onnx-file: <Provide the PATH to model file inside the DeepStream container on deployment (could be volume mount)>
        model-engine-file: <Follow the sample to rename model engine file name>
Detailed documentation on primary-gie configuration parameters can be found here. Detailed documentation on the nvinfer config file (mtmc_pgie_config.yml) configuration parameters can be found here.
Note
The above example uses the TensorRT-based nvinfer plugin to do the inference.
If you want to leverage NVIDIA Triton via the DeepStream SDK nvinferserver plugin, please check the documentation for that plugin along with sample/reference configs here.
Note
Users may also change the SGIE (secondary inference) model, where applicable, in a similar fashion.
The changes go in the secondary-gie<index>: section of the Helm override values.yaml file (detailed documentation here). The SGIE config-file configuration parameters are similar to PGIE and also leverage the nvinfer plugin.
Update app launch params
Changing the model may require changes to the launch command-line parameters of the Perception app.
For PeopleNet CNN model, the command to use is:
./deepstream-fewshot-learning-app -c ds-main-config.txt -m 1 -t 0 -l 5 --message-rate 1 --tracker-reid 1 --reid-store-age 1
For PeopleNet Transformer model (DINO), the command to use is:
./deepstream-fewshot-learning-app -c ds-main-config.txt -m 1 -t 1 -l 5 --message-rate 1 --tracker-reid 1 --reid-store-age 1
Note that the -t option identifies the class-id for which metadata is generated and sent over IoT.
Currently this is the Person class. The class-id varies between models and is the index at which the particular class appears in the labels file.
The label file ds-detector-labels.txt is packaged for both the CNN and transformer models in metropolis-apps-standalone-deployment/docker-compose/<app-name>/deepstream/configs/<model-arch>/, downloaded here.
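For illustration, assume a PeopleNet-style label file (the contents below are an assumption; check the packaged file for your model, and note the index annotations are not part of the file):

person    <- index 0, so pass -t 0
bag       <- index 1
face      <- index 2

If a model lists the Person class at a different position, adjust -t accordingly, as the two launch commands above do.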
In the Helm override values YAML file ds_configs_helm_values.yaml, please update the configuration keys for DS_PARAMS.
Advanced users can refer to the code in deepstream_fewshot_learning_app.c in metropolis-apps-standalone-deployment/modules/perception/deepstream-fewshot-learning-app/, downloaded here.
A brief description of all the command-line arguments is provided below.

| argument | shorter arg | Description |
|---|---|---|
| version | v | Print DeepStreamSDK version |
| version-all | NA | Print DeepStreamSDK and dependencies version |
| cfg-file | c | Set the config file |
| override-cfg-file | o | Set the override config file, used for the on-the-fly model update feature |
| input-file | i | Set the input file |
| playback-utc | p | Playback UTC; default=false (base UTC from file-URL or RTCP Sender Report), =true (base UTC from file/rtsp URL) |
| pgie-model-used | m | PGIE model used; {0: FSL}, {1: MTMC}, {2: Resnet 4-class [Car, Bicycle, Person, Roadsign]}, {3: Unknown [DEFAULT]} |
| no-force-tcp | NA | Do not force TCP for RTP transport |
| log-level | l | Log level for prints, default=0 |
| message-rate | r | Message rate for broker |
| target-class | t | Target class for MTMC |
| tracker-reid | NA | Use tracker re-identification as embedding |
| reid-store-age | NA | Tracker reid storage |
For the ReID model (Tracker)
In the Helm override values YAML file ds_configs_helm_values.yaml, please update the configuration keys for:
mdx-ds:
  configs:
    tracker:
      <change configs like tracker operation resolution here.>
    config_tracker_NvDCF_accuracy.yml:
      ReID:
        <change configs like path to model file here.>
        <Model parameters could also be changed when needed. Offsets, netScaleFactor, inferDims, reidFeatureSize and other parameters can be different for a new model.>
        onnxFile: <Provide the PATH to model file inside the DeepStream container on deployment (could be volume mount)>
        modelEngineFile: <Follow the sample to rename model engine file name>
Detailed documentation on tracker configuration parameters can be found here. Detailed documentation on the NvDCF tracker config file (config_tracker_NvDCF_accuracy.yml) configuration parameters can be found here.
Prepare the Environment
export MODEL_LOCAL_PATH=<absolute path of the model present locally on the system>
export NGC_API_KEY=<your_api_key>
export DS_CONFIG_PATH=<absolute path of updated ds configs file with new model>
export WDM_DS_CHART_VER=<Chart version to be deployed, please use the latest chart from the docs, e.g. mdx-wdm-ds-app-0.0.x.tgz>

Note

The DS config file is part of the tar package; alternatively, refer to ds_configs_helm_values.yaml.
Run the Script
$ bash ds_model_update.sh

Note

The script will copy the model into the running deployment; the deployment is then restarted after the model copy to run inferencing with the updated configs and model.
3D Tracking (SV3DT)
The default configuration for the Multi-Object Tracker is designed for 2D tracking within the camera’s image coordinate system. However, by activating Single-View 3D Tracking (SV3DT) in the DS-based perception pipeline, it becomes possible to estimate and relay additional data for each object.
This data includes foot locations in both 2D camera and 3D world coordinate systems, visibility (indicating the extent of occlusion), and the convex hull (representing the outline of the projected 3D object model). This information is transmitted via Kafka messages, significantly enhancing Multi-Target Multi-Camera (MTMC) tracking accuracy.
For this feature to function correctly, users must supply a specific camera matrix configuration file for each camera. This file should include a standard 3x4 projection matrix and pertinent 3D model information. For human figures, default values for height and radius are provided for convenience. Detailed information and guidelines on SV3DT can be found in the ‘Single View 3D Tracking for Gst-nvtracker’ documentation.
Upon completion of these steps, 3D tracking results will be integrated into Kafka messages. The Multi-Camera Tracking system will automatically employ these results, thereby improving overall tracking precision. The schema for the 3D tracking information is defined in both JSON schema and Protobuf schema.
For Kubernetes (K8s) deployments, the following steps are required to incorporate 3D tracking capabilities:
Modify the ‘config_tracker_NvDCF_accuracy.yml’ file as described below to adjust the configuration.
Include the matrix configuration file for each camera in the setup.
configs:
  config_tracker_NvDCF_accuracy.yml:
    # change StateEstimator as below
    StateEstimator:
      stateEstimatorType: 3
      processNoiseVar4Loc: 6810.866
      processNoiseVar4Vel: 1348.487
      measurementNoiseVar4Detector: 100.000
      measurementNoiseVar4Tracker: 293.323
    # add ObjectModelProjection group
    ObjectModelProjection:
      # output below info to Kafka message
      outputVisibility: 1
      outputFootLocation: 1
      outputConvexHull: 1
      # please replace with the actual path of camera matrix config files
      cameraModelFilepath:
        - /opt/configs/cam01_matrix_config.yml
        - /opt/configs/cam02_matrix_config.yml
        ...
    # keep other 2D tracking parameters
    ...
  # add matrix config for each camera
  cam01_matrix_config.yml:
    # in row major order. Please replace with the values of your actual camera
    projectionMatrix_3x4:
      - 996.229
      - -202.405
      - -9.121
      - -1.185
      - 105.309
      - 478.174
      - 890.944
      - 1.743
      - -0.170
      - -0.859
      - 0.481
      - -1085.484
    # cylinder model for people
    modelInfo:
      height: 250.0
      radius: 30.0
  cam02_matrix_config.yml:
    ...
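Once redeployed, one hedged way to confirm the extra fields are flowing is to inspect the raw Kafka topic. The broker and topic names below follow the config.properties sample earlier, and the convexHull field name is an assumption based on the schema description; check the JSON/Protobuf schema docs for the exact field names:

# Consume a few raw perception messages and look for the 3D tracking fields
# (broker/topic per the earlier sample; "convexhull" is an assumed field name)
kafka-console-consumer.sh --bootstrap-server mdx-kafka-cluster-kafka-brokers:9092 \
    --topic mdx-raw --max-messages 10 | grep -i "convexhull"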
Runtime Performance Tuning
The perception pipeline uses raw tensor output for the SGIE, and the optimal value for tensor-meta-pool-size varies by use case, as it correlates with CPU/GPU utilization.
So, to obtain ideal performance, parameters such as the PGIE/SGIE batch sizes and tensor-meta-pool-size have to be tuned depending on the number of cameras, the expected number of objects across cameras, the GPU, etc.
For example, in MTMC, assuming 16 cameras with an expected maximum of 500 objects detected across all cameras on an A30, users can set the PGIE batch size to 16, the SGIE batch size to 500, and tensor-meta-pool-size to 60-90.
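As a hedged sketch of where these knobs live for that example (group names follow the sample configs above and the secondary-gie<index> convention noted earlier; tensor-meta-pool-size is a Gst-nvinfer property set in the SGIE nvinfer config file - verify the exact files in your deployment):

# In the DeepStream app config / Helm override, per the MTMC example:
primary-gie:
  batch-size: 16           # roughly the number of cameras

secondary-gie0:
  batch-size: 500          # expected max objects across all cameras

# In the SGIE nvinfer config file's property group:
property:
  tensor-meta-pool-size: 75   # within the suggested 60-90 range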