Quickstart Setup

This guide shows the steps to download the Metropolis data and script packages for the quickstart (Docker Compose) deployment of the reference applications.

The steps described below are required before deploying any of the reference applications.

Prerequisites

Developer Preview Sign-up

  • Go to Metropolis Microservices product page.

  • Select Enterprise GPU (x86 platform) option.

  • Fill out a short form.

  • Review the Developer Preview approval and NGC invitation emails (typically sent within a week) for instructions.

The sample apps come with Docker Compose files that can start all the components of an application with a single command. But before we start our sample deployment, make sure your system meets the prerequisites listed below.

Hardware Recommendations

  • RAM: 120GB

  • CPU: 18 cores

  • GPU: A100, L40, L4, A6000, H100 etc.

  • Storage: 512GB (SSD)

Note

  • Multi-Camera Tracking app processes 7 streams at 30 FPS. For reference, with 1 RTX A6000 and using the default CNN-based models, VRAM usage is ~5275MiB / 49140MiB and Volatile GPU-Util is ~47% - 52%.

  • Real Time Location System app processes 8 streams at 30 FPS. For reference, with 1 RTX A6000 and using the default CNN-based models, VRAM usage is ~3836MiB / 49140MiB and Volatile GPU-Util is ~31%.

  • Occupancy Analytics app processes 2 streams at 30 FPS. For reference, with 1 RTX A6000 and using the default CNN-based models, VRAM usage is ~4671MiB / 49140MiB and Volatile GPU-Util is ~23% - 27%.

  • The transformer-based models are heavier, so you may see lower FPS when running the same number of streams as with the CNN-based models. To reach 30 FPS with transformer-based models, you may need to reduce the number of streams processed.

  • Aside from running the reference apps end-to-end, a playback mode is provided that lets you run the reference apps without the heavy GPU-dependent modules, using pre-extracted metadata as input. This is a reduced mode for quickly exploring the core of the reference apps with no GPU requirement.

  • If you have more than one GPU on your system, you may split the load. Refer to this FAQ section.
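
  • The VRAM usage and Volatile GPU-Util figures above come from nvidia-smi. To check them on your own system while an app is running, you can use, for example:

    # report per-GPU memory usage and utilization once
    nvidia-smi

    # or refresh the report every 2 seconds while the streams are processed
    nvidia-smi -l 2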

Software Requirements

Note

  • The entire software package is fully tested on native Ubuntu 22.04.

  • Ubuntu 22.04 under Windows WSL2 does not work for the end-to-end pipeline of the provided sample applications because a decoder library that DeepStream requires is not available there.
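
  • To confirm that you are on native Ubuntu 22.04 rather than WSL2, you can check the release and kernel strings (on WSL2 the kernel version string typically contains "microsoft"):

    # should report VERSION_ID="22.04"
    cat /etc/os-release

    # should not contain "microsoft" on a native install
    uname -r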

Getting Started

Attention

If you’re migrating from a previous version, refer to the Upgrade Guide for additional instructions & suggestions.

  1. To download the latest packages for Standalone Deployment and Sample Input Data, there are two approaches.

    • Approach 1: NGC User Interface (UI)

    Log into the NGC portal and select the nv-mdx Org & mdx-v2-0 Team.

    1. Download the latest Standalone Deployment package: metropolis-apps-standalone-deployment-<version>.tar.gz.

    2. Download the latest Sample Input Data package: metropolis-apps-data-<version>.tar.gz (place it in the same folder as the Standalone Deployment package).

    • Approach 2: Download and install the NGC CLI tool and execute the following commands:

    When running ngc config set, use the nv-mdx Org & mdx-v2-0 Team (see the sketch after the commands below). Also, obtain the latest versions of the Standalone Deployment and Sample Input Data tars from the NGC UI and update the commands accordingly.

    1. For the latest Standalone Deployment package:

    $ ngc registry resource download-version "nfgnkvuikvjm/mdx-v2-0/metropolis-apps-standalone-deployment:<version>-<mmddyyyy>"
    
    2. For the latest Sample Input Data package:

    $ ngc registry resource download-version "nfgnkvuikvjm/mdx-v2-0/metropolis-apps-sample-input-data:<version>-<mmddyyyy>"
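
    A minimal sketch of the NGC CLI configuration referenced above; the exact prompts depend on your CLI version, and the API key is your own:

    $ ngc config set
    # when prompted, enter your NGC API key, then set:
    #   org:  nv-mdx
    #   team: mdx-v2-0
    # leave the remaining prompts (e.g. ace, output format) at their defaults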
    
  2. Extract the contents of the metropolis-apps-standalone-deployment-<version>.tar.gz file by running:

    $ tar xvf metropolis-apps-standalone-deployment-<version>.tar.gz
    

    This will create the following metropolis-apps-standalone-deployment directory:

    metropolis-apps-standalone-deployment
    ├── docker-compose
    │   ├── foundational
    │   ├── mtmc-app
    │   ├── rtls-app
    │   ├── people-analytics-app
    │   └── heatmap-app
    ├── images
    ├── modules
    │   ├── analytics-tracking-web-api
    │   ├── analytics-tracking-web-ui
    │   ├── behavior-analytics
    │   ├── behavior-learning
    │   ├── detect-track-embedding
    │   ├── embedding-generation
    │   ├── heatmap-overlay
    │   ├── multi-camera-tracking
    │   ├── recognition-evaluator
    │   ├── retail-loss-prevention-web-api
    │   └── similarity-search
    └── notebooks
        ├── heatmap
        └── v2.1_reference_apps
    

    The metropolis-apps-standalone-deployment directory contains three main directories:

    • docker-compose: The docker-compose folder contains all resources needed to deploy the provided reference applications. Each reference app has a corresponding folder containing config and compose files to configure and deploy the app. The foundational folder contains config and compose files for some foundational components that are common to all the apps.

    • modules: The modules folder contains the app level source code for components deployed in the sample apps. This is provided to the user for reference.

    • notebooks: The notebooks folder contains sample python notebooks that showcase additional analytics that can be generated from the data.

  3. Extract the contents of the metropolis-apps-data-<version>.tar.gz file by running:

    $ tar xvf metropolis-apps-data-<version>.tar.gz
    

    This will create the following metropolis-apps-data directory:

    metropolis-apps-data
    ├── data_log
    │   ├── behavior_learning_data
    │   ├── calibration_toolkit
    │   ├── elastic
    │   ├── kafka
    │   └── zookeeper
    │       ├── data
    │       └── log
    ├── playback
    └── videos
        ├── mtmc-app
        ├── rtls-app
        ├── people-analytics-app
        └── heatmap-app
    

    The metropolis-apps-data directory contains short videos and sensor metadata files which will serve as input to the reference apps.

    Make sure the system has read and write permissions on the metropolis-apps-data folder and its subfolders. You can grant full permissions by executing the following command:

    $ sudo chmod -R 777 path/to/metropolis-apps-data
    
  4. To run the sample apps, update the following variables in the .env file in the metropolis-apps-standalone-deployment/docker-compose/foundational folder. An example snippet is sketched after this list.

    • Update the MDX_SAMPLE_APPS_DIR variable with the absolute path of the metropolis-apps-standalone-deployment/docker-compose folder.

    • Update the MDX_DATA_DIR variable with the absolute path of the metropolis-apps-data folder.

    • Update the HOST_IP variable with the IP address of the deployment machine.

    • Update the GOOGLE_MAPS_API_KEY variable with a Google Maps API key (only required by the Occupancy Analytics application, specifically its reference Web UI). You can obtain a Google API key by following the create API key instructions.

    • Update the MODEL_TYPE variable with one of the following values: cnn or transformer. The default value is cnn. For the RTLS app, we recommend using the transformer model. Refer to the model combination guide for more details.
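
    For reference, a hypothetical example of how these lines might look in the .env file; the paths, IP address, and key below are placeholders, so substitute your own values:

    # absolute paths to the extracted docker-compose and data folders (placeholders)
    MDX_SAMPLE_APPS_DIR=/home/<user>/metropolis-apps-standalone-deployment/docker-compose
    MDX_DATA_DIR=/home/<user>/metropolis-apps-data
    # IP address of the deployment machine (e.g. from `hostname -I`)
    HOST_IP=192.168.1.100
    # only required for the Occupancy Analytics reference Web UI
    GOOGLE_MAPS_API_KEY=<your Google Maps API key>
    # cnn (default) or transformer (recommended for the RTLS app)
    MODEL_TYPE=cnn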

You are now ready to deploy the reference apps.

Continue to the next section to get started with a reference app.

Appendix: Software Pre-Requisites Installation Guide

Install Docker

Docker version 24.0.7+ is recommended.

  1. Uninstall old versions:

    for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
    
  2. Set up the repository:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
    # Add the repository to Apt sources:
    echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    
  3. Install Docker Engine:

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    

For more information, see Docker's official instructions.
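
Optionally, you can confirm the installed version and verify that the engine runs containers:

    # confirm the installed Docker version (24.0.7+ recommended above)
    docker --version

    # run Docker's standard test image to verify the installation
    sudo docker run --rm hello-world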

If the default docker0 bridge has IP conflicts with your network, you can change the docker0 bridge subnet:

  1. Create /etc/docker/daemon.json with the following content:

    {
      "bip": "192.168.5.1/24"
    }
    
  2. Then restart docker by running:

    sudo systemctl restart docker
    
  3. Check and verify the new docker0 bridge IP with ifconfig (or ip addr show docker0 if net-tools is not installed):

    ifconfig
    

For more information, refer to the Use bridge networks section in the docker documentation.

Install NVIDIA Container Toolkit

  1. Set up the repository and the GPG key:

    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
         && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
         && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
              sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
              sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    
  2. Install the nvidia-container-toolkit package:

    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    
  3. Configure the Docker daemon to recognize the NVIDIA Container Runtime:

    sudo nvidia-ctk runtime configure --runtime=docker
    
  4. Restart the Docker daemon:

    sudo systemctl restart docker
    
  5. Verify the installation (make sure Docker can be run as a non-sudo user; see the sketch after this command):

    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu20.04 nvidia-smi
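
    If Docker currently requires sudo on your system, the standard post-install steps are:

    # allow running docker without sudo
    sudo groupadd docker          # may report that the group already exists
    sudo usermod -aG docker $USER
    # log out and back in (or run `newgrp docker`) for the group change to apply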
    

For more information, see NVIDIA’s Official Installation Guide.

Setup NGC

  1. Create an NGC account (if you don’t have one already). Go to the NGC website and register with an email account.

  2. Generate the API key. After logging into the NGC website, go to the dropdown menu in the top-right corner and select “Setup”, then “Generate API Key”.

  3. Log into nvcr.io on your system:

    docker login nvcr.io
    Username: $oauthtoken
    Password: <your ngc API key>
    

Now you can pull prebuilt containers from NGC onto your system with docker pull.
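
The general form of a pull from the NGC registry is shown below; the org, team, image name, and tag are placeholders, so use the exact container paths given in each reference app's deployment instructions:

    docker pull nvcr.io/<org>/<team>/<image>:<tag>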

Disable IPv6

IPv6 must be disabled in order to run VST and DeepStream without issues in the Docker Compose environment. Use the following command to disable IPv6:

sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
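
Note that settings applied with sysctl -w do not persist across reboots. To make the change permanent, one option is to append it to /etc/sysctl.conf and reload:

# persist the IPv6 setting across reboots
echo "net.ipv6.conf.lo.disable_ipv6=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p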