Configuration
Below are different ways to configure the provided reference application for your environment.
Cameras
As mentioned in Quickstart, the reference app operates on live camera feeds, and NVStreamer is used to simulate them.
To add a new camera: if you have a live camera, you can add its live URL directly to VST; if you have a video file, you can leverage NVStreamer to simulate a live camera feed, the same way the reference app does.
Briefly, you need to do the following steps to make a new camera work end-to-end in the system:
Add new camera live feed in VST
Add new camera info in Perception (DeepStream) config
Add new camera info in calibration config
Redeploy the end-to-end system
Simulate Cameras From Videos (Optional)
You can create simulated cameras from your own videos by either:
Uploading your video files via the NVStreamer UI, which is accessible on port 31000. For details, please refer to NVStreamer. Or
Copying your video files to metropolis-apps-data/videos/rtls-app/
Then you can add the live URL of the new camera from NVStreamer in VST.
Live URLs from NVStreamer can be found via NVStreamer API: http://<IP>:31000/api/v1/sensor/streams
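For example, the list of simulated streams can be retrieved with a quick curl call (a minimal sketch; the response schema may vary across NVStreamer versions):

  # List the streams served by NVStreamer; replace <IP> with the host address.
  curl -s http://<IP>:31000/api/v1/sensor/streams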
Add Cameras
Once you have the URL of your cameras, you can add them in VST via VST UI > Camera Management > Add device manually.
VST UI is accessible via port 30000. For details please refer to Video Storage Toolkit (VST).
Cameras can also be added using the rtsp_streams.json config of VST:
For Docker Compose Deployments
Manually (Deployment-Time)
Modify the metropolis-apps-standalone-deployment/docker-compose/rtls-app/vst/configs/rtsp-streams.json file with the RTSP streams of the new set of cameras.
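Based on the structure of the Kubernetes example later in this section, each stream entry carries the enabled, stream_in, and name parameters. A minimal sketch of the file (field layout inferred from that example; verify against the shipped rtsp-streams.json):

  {
    "streams": [
      {
        "enabled": true,
        "stream_in": "rtsp://<new_camera_rtsp_url>",
        "name": "<camera_name>"
      }
    ]
  }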
Dynamically (Runtime)
The Perception microservice can accept new streams dynamically via REST API calls. curl can be used to dynamically add streams and has to be run locally on the system where the Real-Time Location System app is running. Sample curl commands to add a camera can be found here.
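For orientation only, a dynamic stream-add call generally POSTs the sensor name and RTSP URL to the Perception REST endpoint. The sketch below is an assumption (the endpoint path, port, and payload fields are illustrative, not the documented API); use the sample commands linked above for the authoritative form:

  # Illustrative sketch only: endpoint, port, and payload fields are placeholders.
  curl -X POST http://localhost:<perception_rest_port>/stream/add \
    -H "Content-Type: application/json" \
    -d '{"sensor_id": "<camera_name>", "rtsp_url": "rtsp://<camera_rtsp_url>"}'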
For Kubernetes Deployments
Manually
Modify the rtsp_streams.json section in the application-helm-configs/RTLS/vst-app-with-ingress-values.yaml file with the RTSP streams of the new set of cameras.
The example below can be used to update rtsp_streams.json:
rtsp_streams.json:
  streams:
    - enabled: true
      stream_in: rtsp://<new_scene_live_camera_rtsp_url>  # <<<==== Change Me. Eg: rtsp://vms-vms-svc:30554/live/<camera_name>
      name: <new_scene_name>  # <<<==== Change Me. Eg: Endeavor_Cafeteria
Note
If you have more sources to add for VST, please add them under streams, providing values for the parameters enabled, stream_in, and name. Once the override file is updated, install the VST app using the Helm installation steps found in the deployment document, Deploy NVStreamer and VST Microservices.
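For reference, the install typically passes the override file via -f. This is a sketch only; the release and chart names below are placeholders, so use the exact command from the deployment document:

  # Placeholder release/chart names; see "Deploy NVStreamer and VST Microservices".
  helm install vst-app <vst-chart> -f application-helm-configs/RTLS/vst-app-with-ingress-values.yaml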
Dynamically (Recommended)
Add the camera using the VST UI for perception inferencing; details on adding a camera can be found in Add Cameras.
Configure Perception (DeepStream) Microservice to Use Added Cameras
Next, add the new stream info (IDs & URLs) provided by VST, not NVStreamer, to DeepStream config files.
For Docker Compose Deployments
Manually
In the metropolis-apps-standalone-deployment/docker-compose/rtls-app/deepstream/configs/ directory:
Please select the appropriate model type config file and modify cnn-models/ds-main-config.txt or transformer-models/ds-main-config.txt to match the new set of cameras. The key parameters are list, sensor-id-list, and sensor-name-list under each [source-list] section. The list parameter defines the sensor RTSP URLs used for inferencing by the Perception pipeline. The sensor-id-list and sensor-name-list need to match the id values in the calibration file to make sure analytics works as expected for sensor names.
Modify max-batch-size in the [source-list] section of ds-main-config.txt to match the number of sensors in list. In addition, modify batch-size under the [streammux] and [primary-gie] sections to match the number of input streams (see the sketch after this list).
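As an illustration for two cameras, the relevant sections might look like the following. This is a sketch: the semicolon-separated layout is an assumption based on typical DeepStream config files, so verify against the shipped ds-main-config.txt.

  [source-list]
  # RTSP URLs from VST, one per camera
  list=rtsp://<vst_rtsp_url_1>;rtsp://<vst_rtsp_url_2>
  # Must match the id values in the calibration file
  sensor-id-list=<sensor_id_1>;<sensor_id_2>
  sensor-name-list=<sensor_name_1>;<sensor_name_2>
  # Match the number of entries in list
  max-batch-size=2

  [streammux]
  batch-size=2

  [primary-gie]
  batch-size=2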
Dynamically
Manual configuration of the Perception microservice is not needed with dynamic stream addition.
For Kubernetes Deployments
Manually
In the application-helm-configs/RTLS/ directory:
Modify the wl_data section in the wdm-deepstream-mtmc-values.yaml file under metropolis-wdm to match the new set of cameras. The key parameter is id under each sensor section. This id defines the sensor name and is the sole identifier of the sensor. This id needs to match the id in the calibration file.
Modify batch-size under the streammux and primary-gie sections to match the number of input streams.
The example below can be used to update wl_data:
workloadSpecs:
  workload:
    wl_data: |
      [
        {
          "alert_type": "camera_status_change",
          "created_at": "2023-05-16T21:50:36Z",  #### Can be any value ####
          "event": {
            "camera_id": "<sensor_name>",  #### Sensor Name used in Calibration.json file as ``id`` ####
            "camera_name": "<sensor_name>",  #### Sensor Name used in Calibration.json file as ``id`` ####
            "camera_url": "rtsp://<sensor_rtsp_url>",  #### Sensor RTSP URL to be UPDATED ####
            "change": "camera_streaming"
          },
          "source": "preload"
        },
        {
          "alert_type": "camera_status_change",
          "created_at": "2023-05-16T21:50:36Z",
          "event": {
            "camera_id": "<sensor_name>",  #### Sensor Name used in Calibration.json file as ``id`` ####
            "camera_name": "<sensor_name>",  #### Sensor Name used in Calibration.json file as ``id`` ####
            "camera_url": "rtsp://<sensor_rtsp_url>",  #### Sensor RTSP URL to be UPDATED ####
            "change": "camera_streaming"
          },
          "source": "preload"
        }
      ]
Note
If you have more sources to add for the perception pipeline, please make sure the above snippet is replicated once for each stream to be consumed.
For each wl_data field snippet like the one above, please make sure to update camera_id, camera_name, and camera_url with the correct values under the event section, and id: <any-name> under sensor-<index>. The available stream URLs can be found via the VST API: http://<IP>:30554/api/v1/sensor/streams. It is recommended to use the K8s service name for VST, which is vms-vms-svc, rather than the IP, in the RTSP URL. Once the override file is updated, install Perception using the Helm installation steps found in the deployment document, Deploy Perception (WDM-DeepStream) Microservice.
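For example, the RTSP URLs to plug into camera_url can be listed with the call below (a sketch; the response format may vary by VST version):

  # Query VST for the registered camera streams (port 30554).
  curl -s http://<IP>:30554/api/v1/sensor/streams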
Dynamically (Recommended)
Add the camera using the VST UI for perception inferencing; details on adding a camera can be found in Add Cameras.
Camera Calibration
You’ll need to calibrate the added cameras, so the app can map between pixel space and physical space.
A browser-based tool is provided to help with this task. Refer to Camera Calibration for more details.
For Docker Compose Deployments
In the metropolis-apps-standalone-deployment/docker-compose/rtls-app directory:
Replace the building plan map calibration/sample-data/images/building=Retail-Store-Map.png with the one for the new set of cameras, and modify calibration/sample-data/images/imagesMetadata.json accordingly. To keep the config change minimal, you may keep the image file name unchanged. Otherwise, you can modify Dockerfiles/import-calibration.Dockerfile and import-calibration/init-scripts/calibration-import.sh accordingly to insert the correct file.
Use the Calibration Toolkit provided within the deployment at http://[deployment-machine-IP-address]:8003/. Create/modify calibration.json for the new set of cameras (a minimal sketch follows this list).
Place the modified calibration.json inside metropolis-apps-standalone-deployment/docker-compose/rtls-app/calibration/sample-data.
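For reference, below is a minimal calibration.json sketch showing the id field that the Perception config must match. The surrounding schema is an assumption; the Calibration Toolkit generates the authoritative file.

  {
    "sensors": [
      {
        "type": "camera",  #### Assumed field; the toolkit emits the full schema ####
        "id": "<sensor_name>"  #### Must match the sensor IDs/names in the Perception config ####
      }
    ]
  }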
For Kubernetes Deployments
In the application-helm-configs/RTLS/ directory:
Replace the building plan map images/building=Retail-Store-Map.png with the one for the new set of cameras, and modify images/imagesMetadata.json accordingly. To keep the config change minimal, you may keep the image file name unchanged.
Use the Calibration Toolkit. Create/modify calibration.json for the new set of cameras.
Update the calibration file download URL for downstream microservices. Under application-helm-configs/RTLS/, within the file rtls-app-override-values.yaml, for mdx-rtls, update the variable calibrationJsonDownloadURL with the correct Google Drive or HTTP(S) URL for the new calibration file, as sketched below.
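A sketch of that override follows; only the mdx-rtls key and the calibrationJsonDownloadURL variable are named in this document, so the exact nesting is an assumption to verify against the shipped file.

  mdx-rtls:
    calibrationJsonDownloadURL: https://<host>/<path>/calibration.json  # Google Drive or HTTP(S) URL for the new file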
Note
The Calibration Toolkit needs to be installed separately, since Kubernetes deployments don't include the toolkit by default.
If you face any issues with Docker Compose or Kubernetes, please refer to this FAQ section.
Re-Deployment
Once you have added the new cameras, configured DeepStream, and performed calibration, you can stop the app and do a fresh re-deployment to see results on this new set of cameras.
Example: Change to processing 7 BuildingK streams instead of 8 Retail_Synthetic streams
In metropolis-apps-data/videos/rtls-app/, we provide a couple of sets of sample videos. By default, the RTLS app is configured to process the 8 Retail_Synthetic videos, and the Multi-Camera Tracking app is configured to process the 7 buildingK videos.
As a concrete example of changing cameras, to switch the RTLS app to operate on the buildingK videos, here are the configuration changes you need to make:
Change the camera live feed in VST: Replace docker-compose/rtls-app/vst/configs/rtsp-streams.json with docker-compose/mtmc-app/vst/configs/rtsp-streams.json.
Change the camera info in the Perception (DeepStream) config: Replace docker-compose/rtls-app/deepstream/configs/<type>-models/ds-main-config.txt with docker-compose/mtmc-app/deepstream/configs/<type>-models/ds-main-config.txt.
Change the camera info in the calibration config: Replace all content of the folder docker-compose/rtls-app/calibration/sample-data with that of docker-compose/mtmc-app/calibration/sample-data.
Redeploy the end-to-end system (a scripted sketch of these steps follows).
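These replacements can be scripted. Below is a sketch, assuming it is run from the metropolis-apps-standalone-deployment/docker-compose/ directory and that <type> is cnn or transformer depending on your model choice:

  # 1. Camera live feed in VST
  cp mtmc-app/vst/configs/rtsp-streams.json rtls-app/vst/configs/rtsp-streams.json
  # 2. Camera info in the Perception (DeepStream) config
  cp mtmc-app/deepstream/configs/<type>-models/ds-main-config.txt rtls-app/deepstream/configs/<type>-models/ds-main-config.txt
  # 3. Calibration config folder content
  rm -rf rtls-app/calibration/sample-data/*
  cp -r mtmc-app/calibration/sample-data/* rtls-app/calibration/sample-data/
  # Then redeploy the end-to-end system.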
Partial Deployment
The reference application provides an example of deploying all provided modules as a whole, but users may have more complex deployment needs or environments.
Using Docker Compose deployment as an example:
To deploy only part of the provided modules instead of all, users can modify the foundational/mdx-foundational.yml and/or rtls-app/mdx-rtls-app.yml files by commenting out one or more services.
To deploy only one module, users can specify the service name at the end of the deployment command.
For instance, consider a use case where the user wants to deploy all back-end services on a cloud machine and the UI on a local machine using the Docker Compose option. The process can be briefly described as follows:
In the cloud VM, all backend services need to connect via localhost:
Set HOST_IP='localhost' in foundational/.env.
Then deploy all services except the UI with the provided command: $ docker compose -f foundational/mdx-foundational.yml -f rtls-app/mdx-rtls-app.yml --profile e2e up -d --scale web-ui=0 --pull always --build --force-recreate
On the local machine, the UI service needs to connect to the backend services via the cloud machine's external IP:
Set HOST_IP='<VM's external IP>' in foundational/.env.
Then deploy only the UI service with: $ docker compose -f foundational/mdx-foundational.yml -f rtls-app/mdx-rtls-app.yml up -d --no-deps web-ui
Accuracy Evaluation
There are evaluation scripts to help quantify the E2E tracking accuracy of the application (in batch processing mode) on your custom dataset.
Please refer to this section for more details.
Operation Parameters
The configuration files for different modules are provided as below. You can inspect and make changes to a corresponding module if needed.
To recall what each component does at a high-level, please refer back to the Components section.
For more details, refer to the Configuration page of each microservice.
For Docker Compose Deployments
In the metropolis-apps-standalone-deployment/docker-compose/ directory:
Foundational system configs are provided under foundational/. This includes NVStreamer, the Kafka message broker, ELK stacks, and other supporting modules. In most cases you don't need to change anything in there.
For app-specific configuration, in rtls-app/, configs are provided under different directories and files:
VST: vst/configs/
Perception: deepstream/configs/
Behavior Analytics: behavior-analytics/configs/
RTLS Specialized Multi-Camera Tracking: rtls/configs/
Web API: analytics-and-tracking-api/configs/
Web UI: analytics-and-tracking-ui/configs/
For Kubernetes Deployments
In the application-helm-configs/ directory:
Foundational system configs are provided under foundational-sys/. This mostly includes changes for the monitoring chart, like changing the admin password for the Grafana dashboard. In most cases you don't need to change anything in there.
For microservice-specific configuration, in RTLS/, configs are provided as different files:
VST: vst-app-with-ingress-values.yaml, or vst-app-edge-with-ingress-values.yaml (for edge-to-cloud deployment, more here)
Perception: wdm-deepstream-rtls-values.yaml
And as different sections within rtls-app-override-values.yaml:
Multi-Camera Tracking: mdx-rtls
Web API: mdx-web-api
Web UI: mdx-ui