Production Deployment
Refer back to the app overview if you need a refresher on the application.
Deployment Using Helm Charts
Deployment Architecture

Order of Deployment
Some of the microservice pods may not utilize a GPU to its full extent. If the system has fewer GPUs available, these pods can share a GPU; the DeepStream pipeline, however, requires a dedicated GPU allocation.
GPU sharing can be enabled by using NVIDIA_VISIBLE_DEVICES to assign the same GPU ID to different pods, making sure it does not overlap with the DeepStream GPU_ID. GPU Operator allocation of GPUs to pods (which dynamically assigns a GPU to a pod) must also be disabled.
The following example time-slices a GPU and disables the GPU Operator assignment:
resources:
  limits:
    nvidia.com/gpu: 1

can be changed to:

env:
  - name: NVIDIA_VISIBLE_DEVICES
    value: "<UNIQUE_GPU_ID>"
resources:
  limits:
    nvidia.com/gpu: 0
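To find the GPU ID to plug into NVIDIA_VISIBLE_DEVICES, nvidia-smi can list the available devices (a minimal sketch; assumes the NVIDIA driver is installed on the node):

# List GPUs with their indices and UUIDs; either the index (e.g. "1")
# or the UUID (e.g. "GPU-...") can be used as NVIDIA_VISIBLE_DEVICES
nvidia-smi -L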
Deploy Analytics Microservices
In the file application-helm-configs/people-analytics/people-analytics-app-override-values.yaml (under the application-helm-configs/ folder), update "REACT_APP_GOOGLE_MAPS_API_KEY = <google_api_key>". This is needed because the Occupancy Analytics Web UI uses the Google Maps API to render some components. Once the file is updated, run the app install command below.
Install the chart for the analytics microservices of the Occupancy Analytics application with the provided config file:
helm install mdx-people-analytics-app https://helm.ngc.nvidia.com/nfgnkvuikvjm/mdx-v2-0/charts/mdx-people-analytics-app-1.0.54.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/people-analytics/people-analytics-app-override-values.yaml
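After the install completes, it can be worth confirming that the release and its pods came up before moving on (a minimal sketch; the people-analytics pod-name pattern is an assumption based on the release name):

# Confirm the Helm release is deployed
helm ls | grep mdx-people-analytics-app
# Check that the analytics pods are starting (name pattern is an assumption)
kubectl get pods | grep people-analytics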
Deploy Coturn Server for NVStreamer and VST Microservices
Install
The Coturn server runs as a Docker container on a separate machine/server. On that machine, first capture its external IP:

export TURN_SERVER_EXTERNAL_IP=`hostname -I | awk '{print $1}'` && echo $TURN_SERVER_EXTERNAL_IP

Then run the command below to spin up the container.
docker run --name mdx-coturn -d --network=host instrumentisto/coturn \
  -n --log-file=stdout \
  --external-ip=$TURN_SERVER_EXTERNAL_IP \
  --listening-ip=$TURN_SERVER_EXTERNAL_IP \
  --lt-cred-mech --fingerprint \
  --user=admin:admin \
  --no-multicast-peers \
  --realm=nvstreamer.realm.org
Verify that the container is running with docker ps --filter name=mdx-coturn.
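Optionally, from the K8s node you can check that the TURN port is reachable (a minimal sketch; assumes netcat is installed and Coturn is listening on its default port 3478):

# TCP probe of the default TURN port
nc -zv <TURN_SERVER_EXTERNAL_IP> 3478
# UDP probe (UDP checks can be inconclusive; the TCP probe is more reliable)
nc -zvu <TURN_SERVER_EXTERNAL_IP> 3478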
TURN_SERVER_EXTERNAL_IP needs to be configured for NVStreamer and VST via the static_turnurl_list parameter.
Note
Coturn needs to run on a separate machine/server from the K8s cluster nodes.
The example uses the default admin/admin credentials; you can change them to specific credentials in the docker run command.
Make sure to configure the same credentials for NVStreamer/VST.
Configure
In the application-helm-configs.tar.gz archive, the static_turnurl_list parameter must be configured in the files nvstreamer-with-ingress-values.yaml and vst-app-with-ingress-values.yaml in order to play WebRTC streams.
Example:
static_turnurl_list: ["user:pass@<TURN_SERVER_EXTERNAL_IP>:3478"].
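To avoid editing the files by hand, a one-liner can patch both (a minimal sketch; assumes static_turnurl_list sits on a single line in each file, that the admin:admin credentials match your Coturn setup, and that TURN_SERVER_EXTERNAL_IP is set in this shell to the Coturn machine's IP):

# Substitute the TURN server entry in both override files
for f in application-helm-configs/people-analytics/nvstreamer-with-ingress-values.yaml \
         application-helm-configs/people-analytics/vst-app-with-ingress-values.yaml ; do
  sed -i "s|static_turnurl_list:.*|static_turnurl_list: [\"admin:admin@${TURN_SERVER_EXTERNAL_IP}:3478\"]|" "$f"
done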
Note
Make sure to use the correct IP and credentials for TURN_SERVER_EXTERNAL_IP.
TURN_SERVER_EXTERNAL_IP should be a host IP separate from the K8s host IP.
Deploy NVStreamer and VST Microservices
NVStreamer
Note
Install NVStreamer if your application is operating on video files.
helm install nvstreamer-app https://helm.ngc.nvidia.com/rxczgrvsg8nx/vst-1-0/charts/nvstreamer-0.2.32.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/people-analytics/nvstreamer-with-ingress-values.yaml
Videos need to be uploaded through the NVStreamer UI at http://<K8s_node_IP>:31000/; the videos are available in the metropolis-apps-data tar downloaded in the previous steps.
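Once the NVStreamer pods are ready, a quick reachability check on the UI port can confirm the ingress is serving (a minimal sketch; assumes the default NodePort 31000 from the override values):

# Expect an HTTP status code such as 200 once the NVStreamer UI is up
curl -s -o /dev/null -w "%{http_code}\n" http://<K8s_node_IP>:31000/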
VST
helm install vst-app https://helm.ngc.nvidia.com/rxczgrvsg8nx/vst-1-0/charts/vst-1.0.30.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/people-analytics/vst-app-with-ingress-values.yaml
NVStreamer sources are fed to VST and treated as simulated live streams.
You can add live cameras directly from the VST UI (http://<K8s_node_IP>:30000/). For running VST in an edge-to-cloud setup, use application-helm-configs/people-analytics/vst-app-edge-with-ingress-values.yaml for the Helm configs; details on setting up the edge can be found in Edge-to-Cloud Deployment Using Scripts.
Deploy Perception (WDM-DeepStream) Microservice
helm install mdx-wdm-ds-app https://helm.ngc.nvidia.com/nfgnkvuikvjm/mdx-v2-0/charts/mdx-wdm-ds-app-0.0.33.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/people-analytics/wdm-deepstream-ppl-values.yaml
Make sure to update application-helm-configs/people-analytics/wdm-deepstream-ppl-values.yaml. You can also add or remove sources/inputs for the perception microservice directly using Add cameras / Remove cameras or the DeepStream API.
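The perception pod can take a while on first start (for example, while building model engines), so tailing its logs is a quick way to confirm the pipeline comes up (a minimal sketch; the wdm-ds pod-name pattern is an assumption based on the release name):

# Find the WDM-DeepStream pod and follow its logs
POD=$(kubectl get pods -o name | grep wdm-ds | head -n1)
kubectl logs -f "$POD"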
Deploy Monitoring Exporter - Metrics Collector Microservices
helm install mdx-mon-exporter-app https://helm.ngc.nvidia.com/nfgnkvuikvjm/mdx-v2-0/charts/mdx-mon-exporter-app-0.0.2.tgz --username='$oauthtoken' --password=<NGC API KEY>
The exporter chart collects Elasticsearch and Kafka metrics for monitoring in the Grafana dashboard, using Prometheus as a data source.
Elasticsearch Metrics will be available under Grafana UI > General > Elasticsearch Exporter Quickstart and Dashboard.
Kafka Metrics will be available under Grafana UI > General > Strimzi Kafka Exporter.
This chart is optional; you can skip it if monitoring for Kafka/Elasticsearch is not needed in your environment.
Deploy Calibration App - Camera Calibration Toolkit
helm install mdx-calibration-app https://helm.ngc.nvidia.com/nfgnkvuikvjm/mdx-v2-0/charts/mdx-calibration-app-0.0.2.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/calibration-and-notebook-app/calibration-toolkit-app-values.yaml
The Camera Calibration Toolkit is a service app for generating calibration.json.
The Calibration App UI will be available at http://<K8s_node_IP>:31080/calibration/. This chart is optional; skip it if you do not need to perform camera calibration.
Deploy Notebook App - Metropolis Jupyter Notebook
helm install mdx-notebook-app https://helm.ngc.nvidia.com/nfgnkvuikvjm/mdx-v2-0/charts/mdx-notebook-app-0.0.2.tgz --username='$oauthtoken' --password=<NGC API KEY> -f application-helm-configs/calibration-and-notebook-app/notebook-app-values.yaml
A notebook-as-a-service app for running Jupyter notebooks.
The Notebook App UI will be available at http://<K8s_node_IP>:31080/notebook/. This chart is optional; skip it if you do not need to run the Metropolis Jupyter notebooks.
Verify Deployment Installation
kubectl get pods -owide
kubectl get pods -owide | grep -v 'Compl\|Runn'

The second command lists any pod that is not in the Completed or Running state, i.e., pods that are failing or still starting.
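To keep re-checking until the cluster settles, watch can re-run the filter on an interval (a minimal sketch; the 10-second interval is an arbitrary choice):

# Refresh every 10 seconds; an empty list (besides the header) means all
# pods are either Running or Completed
watch -n 10 "kubectl get pods | grep -v 'Compl\|Runn'"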
Note
Once all pods are up and running, the Occupancy Analytics UI will be available at
http://<K8s_node_IP>:31080/ui/people-analytics/
If there are any failed pods, debug the issue using the troubleshooting steps described below.
Troubleshoot Pod/Deployment Failures
Check the events for the failed/crashed pods: kubectl describe pod <Failed_Pod_Name>.
View the logs of a failed pod to find the failure error: kubectl logs -f <Failed_Pod_Name>.
View the logs of a specific container inside a pod: kubectl logs -f <Failed_Pod_Name> -c <failed_pod_container_name> (use kubectl describe pod <Failed_Pod_Name> to list all the containers running in a pod).
If a pod is not running due to K8s scheduling, the events will show the failure errors; if a pod is crashing, the pod/container logs will show why it failed to start.
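Two more generic kubectl commands often help here (a minimal sketch, not specific to this application):

# Show recent cluster events in chronological order
kubectl get events --sort-by=.metadata.creationTimestamp
# For a crash-looping pod, read the logs of the previous (crashed) container instance
kubectl logs --previous <Failed_Pod_Name>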
Explore Application Reference UI
Once the application deployment is complete, open a browser and access the UI at http://<K8s_host_IP>:31080/ui/people-analytics/ (replace with the Kubernetes master node IP of the deployment).
Before trying out the UI, you will need to perform the following two steps to upload the required files for the specific camera views.
Upload Calibration File
Select the Config File Actions option from the top-right corner.
Select application-helm-configs/people-analytics/calibration.json from Choose Files. This file is present under the application-helm-configs folder created in an earlier stage. Click SUBMIT.
Upload Floor Plan Image and Image Metadata
Select the Upload Images option from the top-right corner.
Select application-helm-configs/people-analytics/images/Endeavor_Cafeteria.png, application-helm-configs/people-analytics/images/Nth_Street_Cafe_Entrance.png, and application-helm-configs/people-analytics/images/imagesMetadata.json, present under the application-helm-configs folder created in an earlier stage. Click Upload (you can select and upload both images at once).
Refresh UI
Now you should be able to see the UI contents.
Campus/Building Level View
Room/Cameras Level View
Sensor Level View
For in-depth documentation of the component, refer to the Occupancy Analytics UI section.
Import Kibana Dashboard & Index Patterns
The application-helm-configs.tar.gz tar downloaded in the steps above contains the file people_analytics_kibana_objects.ndjson in the directory application-helm-configs/people-analytics.
This file needs to be uploaded via Kibana to view the dashboard.
Steps to upload the Kibana dashboard from the Kibana UI:
Go to the Kibana UI at http://<K8s_host_IP>:31560/.
From the menu, go to Stack Management > Saved Objects > Import > upload the dashboard file people_analytics_kibana_objects.ndjson > Done.
The dashboard can then be viewed from the menu option > Dashboard.
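If you prefer the command line, the same file can be uploaded through Kibana's saved-objects import API (a minimal sketch; assumes Kibana is reachable on the NodePort above and has no authentication enabled):

# Import the dashboard and index patterns via the Kibana API
curl -X POST "http://<K8s_host_IP>:31560/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@application-helm-configs/people-analytics/people_analytics_kibana_objects.ndjson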
Remove App Deployment
Delete a specific installed Helm chart
Get the name of the chart to delete by running the helm ls command, then execute:
helm delete <chart_name>
Delete all the installed Helm charts
for helm_chart in `helm ls -q` ; do helm delete $helm_chart ; done
Note
By default, helm delete will not clean up the Kafka CRDs, so the Kafka cluster will keep running. The next steps clean up the Kafka cluster.
Delete the Kafka CRD to cleanup Kafka cluster
for kafka_crd in `kubectl get crd | grep strimzi | awk '{print $1}'` ; do echo $kafka_crd ; kubectl delete crd $kafka_crd; done
Clean up old PVCs and PVs to remove data
kubectl delete pvc --all --force && kubectl delete pv --all --force
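If a PV or PVC gets stuck in the Terminating state, clearing its finalizers usually unblocks the deletion (a minimal sketch of a generic Kubernetes workaround; <stuck_pv_name> is a placeholder):

# Remove finalizers from a stuck PV so the delete can complete
kubectl patch pv <stuck_pv_name> -p '{"metadata":{"finalizers":null}}'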
Verify the cleanup
helm ls
kubectl get pods,pvc,pv
Clean up old PVCs data from machine filesystem
sudo rm -rf /opt/mdx-localpath /opt/mdx-local-nfs-path /opt/hostpath