Configuration

Below are different ways to configure the provided reference application for your environment.


Cameras

As mentioned in Quickstart, the reference app operates on live camera feeds and NVStreamer is used to simulate the input live camera feeds.

To add a new camera: if you have a live camera, you can add its live URL directly to VST; if you have a video file, you can leverage NVStreamer to simulate a live camera feed the same way the reference app does.

Briefly, the following steps are needed to make a new camera work end-to-end in the system:

  • Add new camera live feed in VST

  • Add new camera info in perception (DeepStream) config

  • Add new camera info in calibration config

  • Redeploy the end-to-end system


Simulate Cameras From Videos (Optional)

You can create simulated cameras from your own videos in one of the following ways:

  • Uploading your video files from NVStreamer UI. NVStreamer UI is accessible via port 31000. For details please refer to NVStreamer.

  • Copying your video files to metropolis-apps-data/videos/people-analytics-app/.

Then you can add the live URL of the new camera from NVStreamer in VST.

The live URL from NVStreamer can be found via the NVStreamer API: http://<IP>:31000/api/v1/sensor/streams
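For example, the stream list can be fetched and filtered on the command line. Note that the response shape assumed below (a streams array of objects with name and url fields) is an illustration only, not the documented schema; inspect the actual JSON returned by your deployment and adapt the parsing.

```shell
# Live call on your deployment (substitute your NVStreamer host for <IP>):
#   curl -s "http://<IP>:31000/api/v1/sensor/streams"
#
# Parsing sketch against a sample payload of the ASSUMED shape
# ("streams" -> objects with "name"/"url"); adjust to the real response.
SAMPLE='{"streams":[{"name":"Endeavor_Cafeteria","url":"rtsp://vms-vms-svc:30554/live/Endeavor_Cafeteria"}]}'
echo "$SAMPLE" | python3 -c '
import json, sys
for s in json.load(sys.stdin).get("streams", []):
    print(s["name"], s["url"])
'
```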


Add Cameras

Once you have the URL of your cameras, you can add them in VST via VST UI > Camera Management > Add device manually.

Cameras can also be added using rtsp_streams.json config of VST:

For Docker Compose Deployments

Manually (Deployment-Time)

Modify the metropolis-apps-standalone-deployment/docker-compose/people-analytics-app/vst/configs/rtsp-streams.json file with the RTSP streams of the new set of cameras.
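As a sketch, the file follows the same general shape as the rtsp_streams.json section used for Kubernetes deployments: a streams array with enabled, stream_in, and name per camera. The exact field set here is an assumption, so compare against the shipped file before editing:

```json
{
  "streams": [
    {
      "enabled": true,
      "stream_in": "rtsp://<camera_rtsp_url>",
      "name": "<camera_name>"
    }
  ]
}
```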

Dynamically (Runtime)

Perception microservice can accept new streams dynamically via REST API calls.

  • curl can be used to dynamically add streams; it has to be run locally on the system where the app is running. Sample curl commands to add a camera can be found here.
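As a sketch only, a dynamic stream-add request has the general shape below. The endpoint path and payload field names are hypothetical placeholders, not the actual VST API; use the linked sample curl commands for the real call.

```shell
# Build a JSON payload and POST it to the stream-add endpoint.
# "<add-stream-endpoint>", "<port>", and the field names are hypothetical
# placeholders -- substitute the values from the sample curl commands above.
PAYLOAD='{"sensorUrl": "rtsp://<camera_rtsp_url>", "name": "<camera_name>"}'
echo "$PAYLOAD" | python3 -m json.tool   # sanity-check the JSON before sending
# curl -s -X POST "http://localhost:<port>/<add-stream-endpoint>" \
#      -H "Content-Type: application/json" -d "$PAYLOAD"
```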

For Kubernetes Deployments

Manually

Modify the rtsp_streams.json section in the application-helm-configs/people-analytics/vst-app-with-ingress-values.yaml file with the RTSP streams of the new set of cameras.

The example below can be used to update rtsp_streams.json:

rtsp_streams.json:
  streams:
    - enabled: true
      stream_in: rtsp://<new_scene_live_camera_rtsp_url>   <<<==== Change Me Eg: rtsp://vms-vms-svc:30554/live/<camera_name>
      name: <new_scene_name> <<<==== Change Me    Eg: Endeavor_Cafeteria

Note

  • If you have more sources to add for VST, add them under streams, each with the parameters enabled, stream_in, and name.

  • Once the override file is updated, install the VST app using the Helm installation steps found in the deployment document Deploy NVStreamer and VST Microservices.
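The reinstall step can be sketched as a helm upgrade with the override file. The release and chart names below are assumptions; use the exact command from the Deploy NVStreamer and VST Microservices deployment document.

```shell
# Sketch only: the release name and chart reference are placeholders.
helm upgrade --install vst-app <repo>/<vst-chart> \
  -f application-helm-configs/people-analytics/vst-app-with-ingress-values.yaml
```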

Configure Perception (DeepStream) Microservice to Use Added Cameras

Next, add the new stream info (IDs & URLs) provided by VST, not NVStreamer, to DeepStream config files.

For Docker Compose Deployments

Manually

In the metropolis-apps-standalone-deployment/docker-compose/people-analytics-app/deepstream/configs/ directory:

  1. Select the config file appropriate for your model type and modify cnn-models/ds-main-config.txt or transformer-models/ds-main-config.txt to match the new set of cameras. The key parameters are list, sensor-id-list, and sensor-name-list under each [source-list] section. list defines the sensor RTSP URLs used for inferencing by the perception pipeline; sensor-id-list and sensor-name-list need to match the id values in the calibration file so that analytics works as expected for the sensor names.

  2. In the [source-list] section of ds-main-config.txt, modify max-batch-size to match the number of sensors in list. In addition, modify batch-size under the [streammux] and [primary-gie] sections to match the number of input streams.
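Putting the two steps together, the relevant keys look like the following fragment for a two-camera setup. Only the parameters named above are shown; the rest of the file, the exact values, and the semicolon list separator are illustrative assumptions, so follow the shipped config file's conventions.

```ini
# ds-main-config.txt fragment -- two cameras, so max-batch-size and the
# batch-size values are both 2. Sensor IDs/names must match the "id"
# values in the calibration file.
[source-list]
list=rtsp://<camera_1_rtsp_url>;rtsp://<camera_2_rtsp_url>
sensor-id-list=Endeavor_Cafeteria;Nth_Street_Cafe_Entrance
sensor-name-list=Endeavor_Cafeteria;Nth_Street_Cafe_Entrance
max-batch-size=2

[streammux]
batch-size=2

[primary-gie]
batch-size=2
```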

Dynamically

Manual configuration of the Perception microservice is not needed with dynamic stream addition.

For Kubernetes Deployments

Manually

In the application-helm-configs/people-analytics/ directory:

  1. Modify the wl_data section in wdm-deepstream-ppl-values.yaml under metropolis-wdm to match the new set of cameras. The key parameter is id under each sensor section. This id defines the sensor name and is the sole identifier of the sensor; it needs to match the id in the calibration file.

  2. Modify batch-size under streammux and primary-gie sections to match with the number of input streams.

The example below can be used to update wl_data:

workloadSpecs:
  workload:
    wl_data: |
       [
          {
            "alert_type": "camera_status_change",
            "created_at": "2023-05-16T21:50:36Z",  #### Can be any value  ####
            "event": {
                "camera_id": "<sensor_name>",             #### Sensor Name used in Calibration.json file as ``id``   ####
                "camera_name": "<sensor_name>",           #### Sensor Name used in Calibration.json file as ``id``   ####
                "camera_url": "rtsp://<sensor_rtsp_url>", #### Sensor RTSP URL to be UPDATED ####
                "change": "camera_streaming"
            },
            "source": "preload"
          },
          {
            "alert_type": "camera_status_change",
            "created_at": "2023-05-16T21:50:36Z",
            "event": {
                "camera_id": "<sensor_name>",             #### Sensor Name used in Calibration.json file as ``id``   ####
                "camera_name": "<sensor_name>",           #### Sensor Name used in Calibration.json file as ``id``   ####
                "camera_url": "rtsp://<sensor_rtsp_url>", #### Sensor RTSP URL to be UPDATED   ####
                "change": "camera_streaming"
            },
            "source": "preload"
          }
        ]

Note

  • If you have more sources to add for the perception pipeline, replicate the snippet above once per stream to be consumed.

  • For each wl_data snippet like the one above, make sure to set the correct values for camera_id, camera_name, and camera_url under the event section, and id: <any-name> under sensor-<index>. The stream URLs can be looked up at http://<IP>:30554/api/v1/sensor/streams. It is recommended to use the Kubernetes service name for VST, vms-vms-svc, rather than the IP in the RTSP URL.

  • Once the override file is updated, install Perception using the Helm installation steps found in the deployment document Deploy Perception (WDM-DeepStream) Microservice.

Dynamically (Recommended)

Add cameras using the VST UI for perception inferencing; details can be found in Add Cameras.


Camera Calibration

You’ll need to calibrate the added cameras, so the app can map between pixel space and physical space.

A browser-based tool is provided to help with this task. Please refer to Camera Calibration for more details.

For Docker Compose Deployments

In the metropolis-apps-standalone-deployment/docker-compose/people-analytics-app/ directory:

  1. Replace the building plan maps calibration/sample-data/images/Endeavor_Cafeteria.png and calibration/sample-data/images/Nth_Street_Cafe_Entrance.png with the ones for the new set of cameras, and modify calibration/sample-data/images/imagesMetadata.json accordingly. To keep the config change minimal, you may keep the image file names unchanged; otherwise, modify Dockerfiles/import-calibration.Dockerfile and import-calibration/init-scripts/calibration-import.sh accordingly to insert the correct files.

  2. Use the Calibration Toolkit provided within the deployment at http://[deployment-machine-IP-address]:8003/. Create/modify calibration.json for the new set of cameras.

  3. Place the modified calibration.json inside metropolis-apps-standalone-deployment/docker-compose/people-analytics-app/calibration/sample-data.

For Kubernetes Deployments

In the application-helm-configs/people-analytics/ directory:

  1. Replace the building plan maps images/Endeavor_Cafeteria.png and images/Nth_Street_Cafe_Entrance.png with the ones for the new set of cameras, and modify images/imagesMetadata.json accordingly. To keep the config change minimal, you may keep the image file names unchanged.

  2. Use the Calibration Toolkit. Create/modify calibration.json for the new set of cameras.

  3. Once there’s a new calibration.json, update the calibration file download URL in downstream microservices. Under application-helm-configs/people-analytics/, within file people-analytics-app-override-values.yaml, for microservice section mdx-analytic-stream, update variable calibrationJsonDownloadURL with the correct Google Drive or HTTP(S) URL for the new calibration file.
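For example, the override fragment could look like the following. The exact nesting around mdx-analytic-stream is an assumption; locate the existing calibrationJsonDownloadURL key in the file and edit it in place.

```yaml
# people-analytics-app-override-values.yaml (fragment; nesting assumed)
mdx-analytic-stream:
  calibrationJsonDownloadURL: "https://<host>/<path>/calibration.json"
```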

Note

  • The Calibration Toolkit needs to be installed separately, since Kubernetes deployments don’t include the toolkit by default.

  • If you face any issues in Docker Compose or Kubernetes, please refer to this FAQ section.


Re-Deployment

Once you have added the new cameras, configured Perception (DeepStream), and performed camera calibration, you can stop the app and do a fresh re-deployment to see results on this new set of cameras.


Partial Deployment

The reference application provides an example of deploying all provided modules as a whole, but users may have more complex deployment needs or environments.

Using Docker Compose deployment as an example:

  • To deploy only part of the provided modules instead of all, users can modify foundational/mdx-foundational.yml and/or people-analytics-app/mdx-people-analytics-app.yml files by commenting out one or more services.

  • To deploy only one module, users can specify the service name at the end of the deployment command.

For instance, consider a use case where the user wants to deploy all back-end services on a cloud machine and the UI on a local machine using the Docker Compose option. The process can be briefly described as follows:

  • On the cloud machine, all back-end services need to connect via localhost:

    • Set HOST_IP='localhost' in foundational/.env,

    • Then deploy all services except UI with the provided command $ docker compose -f foundational/mdx-foundational.yml -f people-analytics-app/mdx-people-analytics-app.yml --profile e2e up -d --scale web-ui=0 --pull always --build --force-recreate.

  • On the local machine, the UI service needs to connect to the back-end services via the cloud machine’s external IP:

    • Set HOST_IP='<VM's external IP>' in foundational/.env,

    • Then deploy only the UI service with $ docker compose -f foundational/mdx-foundational.yml -f people-analytics-app/mdx-people-analytics-app.yml up -d --no-deps web-ui.


Operation Parameters

The configuration files for the different modules are listed below. You can inspect and make changes to a corresponding module if needed.

To recall what each component does at a high-level, please refer back to the Components section.

For more details, refer to the Configuration page of each microservice.

For Docker Compose Deployments

In the metropolis-apps-standalone-deployment/docker-compose/ directory:

  • Foundational system configs are provided under foundational/. It includes NVStreamer, the Kafka message broker, the ELK stack, and other supporting modules. In most cases you don’t need to change anything there.

For app-specific configuration, in people-analytics-app/, configs are provided under different directories and files:

  • VST: vst/configs/

  • Perception: deepstream/configs/

  • Behavior Analytics: behavior-analytics/configs/

  • Behavior Learning: behavior-learning/configs/

  • Web API: analytics-and-tracking-api/configs/

  • Web UI: analytics-and-tracking-ui/configs/

For Kubernetes Deployments

In the application-helm-configs/ directory:

  • Foundational system configs are provided under foundational-sys/. It mostly includes changes for the monitoring chart, such as changing the admin password for the Grafana dashboard. In most cases you don’t need to change anything there.

For microservice-specific configuration, in people-analytics/, configs are provided as different files:

  • VST: vst-app-with-ingress-values.yaml, or vst-app-edge-with-ingress-values.yaml (for edge-to-cloud deployment, more here)

  • Perception: wdm-deepstream-ppl-values.yaml

And as different sections within people-analytics-app-override-values.yaml:

  • Behavior Analytics: mdx-analytic-stream

  • Behavior Learning: mdx-behavior-learning

  • Web API: mdx-web-api

  • Web UI: mdx-ui