We provide an AI-infused Network Video Recorder (AI-NVR) as a sample application built on the Metropolis Microservices for Jetson stack. The application is downloaded as a Docker Compose package (within a compressed tar file) and can be installed using the instructions in the setup section.

AI NVR Production Feature Summary

While the Quick Start Guide provided instructions for a functional but limited installation, this section describes additional functionality available in the Metropolis Microservices for Jetson stack that is instrumental in building a production-quality system. This includes:

  • Remote access of system APIs through reference cloud

  • User authentication and authorization supported by reference cloud

  • Device security through use of firewall and encrypted storage

  • System monitoring support

These features are available as platform services presented to the user as Linux services, and can be enabled as relevant to their use case.

Software Configuration

The AI NVR application illustrates best practices for configuring and instantiating various pieces in the Metropolis Microservices for Jetson stack, including:

  • VST for camera discovery, stream ingestion, storage, and streaming. Notable VST configuration includes: the Ethernet interface on which camera discovery should occur, the ‘aging policy’ for storage governing the watermark level at which video will be deleted, and the number of concurrent WebRTC streams to be supported (based on the hardware decoder, aka nvdec, limit)

  • DeepStream for real-time perception using PeopleNet 2.6. Inference is run on the DLA to free the GPU for other purposes (tracking, inference in analytics, etc.). Further, the inference interval is set to ‘1’ to support a larger stream count within the available DLA compute.

  • Analytics deployment configuration. Configuration parameters specify the spatial and temporal buffers used in the implementation of line crossing and region of interest. These define the tradeoff between latency and accuracy that the user can make based on their use case.

As part of platform services:

  • Ingress has been configured with routes for various microservices. As a user brings in their own custom microservices into their application, this configuration can be extended accordingly.

  • Redis has been configured with snapshotting enabled so that state is preserved across restarts.

  • Firewall has been configured to allow outgoing traffic from the IoT Gateway microservice and for webRTC streaming.

  • Storage has been configured with disk quotas for each microservice so that no single microservice can monopolize the available storage. Users can further modify or extend this file depending on their software stack.
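As an illustration of the Redis snapshot configuration mentioned above: RDB snapshotting in Redis is controlled by `save` directives in redis.conf. The fragment below is a generic sketch of how such snapshotting is enabled; the values and file paths are illustrative, not the settings actually shipped with the platform service.

```conf
# redis.conf (illustrative values, not the shipped configuration)
# Write an RDB snapshot if at least 1 key changed in 900 s,
# 10 keys in 300 s, or 10000 keys in 60 s.
save 900 1
save 300 10
save 60 10000
# Location of the snapshot file reloaded on restart
dir /var/lib/redis
dbfilename dump.rdb
```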

Docker Compose

To deploy Metropolis Microservices based systems on a Jetson device, we use Docker Compose. Compose is a tool for defining and running multi-container Docker applications. It allows you to manage and orchestrate your entire application stack as a single unit of deployment. Docker Compose reads one or more YAML files containing the infrastructure configuration, then spins up your container environments.

Microservice config in Docker Compose: YAML configuration files are at the core of Docker Compose. A deployment should contain one or more configuration files. Each block in a configuration file represents a microservice deployment setup. Here is a sample service configuration.

emdx-analytics-02:
  image: <emdx-analytics-image>    # image reference not shown in original
  user: "0:0"
  network_mode: "host"
  logging:
    driver: "json-file"
    options:
      max-size: "8192m"
      max-file: "3"
  environment:
    CONFIG_LOCATION: "/config"
    PORT: 6002
    INSTANCE_ID: emdx-analytics-02
  volumes:
    - ./config/emdx-analytics:/config
    - /data/emdx-volume:/data/emdx-volume
    - /data/logging-volume:/log
  restart: always
  container_name: emdx-analytics-02
  command: sh -c 'gunicorn --worker-class uvicorn.workers.UvicornWorker --workers 1 --bind main:app 2>&1 | tee -a /log/emdx_02.log'
  depends_on:
    moj-http-based-init-emdx-analytics:
      condition: service_completed_successfully
  deploy:
    resources:
      limits:
        memory: 512M
    restart_policy:
      condition: always

Main attributes

  • image: refers to the registry location/repository of your container image

  • environment: this is the section where you define the environment variables to inject into the container

  • volumes: you can mount a file or folder into the container using this attribute. The mounted asset will be accessible both on the host machine and inside the container

  • restart: this option defines the behavior if the container exits with an error

  • command: this is the command to start your application, if any

  • depends_on: this attribute is used to define the startup sequence. Any service that must start before this service should be listed here

  • network_mode: this is the network configuration of the container. In our reference deployments, we run all containers on the host network. With host networking, all containers run in the same network namespace as the host Jetson system, and hence are able to access each other. Note that for security, when going to production this has to be used in conjunction with the firewall platform service to prevent unauthorized access to device APIs

init containers & startup sequence

As mentioned in the previous section, it is sometimes important for a container to wait for certain conditions, such as an available database connection, before starting. An init container can be used to achieve this: it runs first, repeatedly checking the database connection (or other defined condition), and exits only once the condition is met, allowing the main service to start. In the sample configuration we shared, emdx-analytics depends on another service. To make sure that service is up before emdx-analytics starts, we have created an init container called moj-http-based-init-emdx-analytics.

moj-http-based-init-emdx-analytics:
  image: ubuntu
  network_mode: "host"
  volumes:
    - ./config/init-container/
  command: sh -c "/"
  environment:
    ENDPOINTS: "5000/api/core/healthz"
  deploy:
    restart_policy:
      condition: on-failure

This moj-http-based-init-emdx-analytics container makes an API call to an endpoint (localhost:5000/api/core/healthz) in a loop until the request is successful, then exits. This init container is meant to run only once in the lifetime of the deployment.
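The polling behavior of such an init container can be sketched as a small shell loop. This is an illustrative sketch, not the script shipped in the image; the endpoint, timeout, and retry count are assumptions.

```shell
#!/bin/sh
# Illustrative init-container health check: poll an endpoint until it
# answers successfully, then exit so the dependent service can start.
wait_for_endpoint() {
  url="$1"
  retries="${2:-60}"   # assumed default; the real image may differ
  i=0
  while [ "$i" -lt "$retries" ]; do
    # --fail turns HTTP errors (e.g. 503) into a non-zero exit code
    if curl --silent --fail --max-time 2 "$url" >/dev/null 2>&1; then
      echo "endpoint is up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "endpoint never became healthy: $url" >&2
  return 1
}

# In the init container this would be something like:
# wait_for_endpoint "http://localhost:5000/api/core/healthz"
```

Because the compose file marks the init container with `condition: service_completed_successfully`, the main service starts only after this loop exits with status 0.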

Creating your custom docker compose package

Let’s go through a simple example of integrating a new app service into the microservices stack.

Step 1: Create a work environment and compose.yaml file or use an existing one. This compose.yaml will contain our service configuration.

sh-3.2$ mkdir test-app
sh-3.2$ cd test-app
sh-3.2$ touch compose.yaml

Step 2: Edit the compose.yaml to add your service configuration.

version: '2'
services:
  test-app:
    image: remove-repository-image:v1.0
    user: "0:0"
    network_mode: "host"
    environment:
      APP-PORT: 8080
    volumes:
      - ./config/test-app/app.cfg:/opt/test-app/config/app.cfg
    restart: always
    container_name: test-app
    command: sh -c '/opt/test-app/'
    deploy:
      restart_policy:
        condition: always

Step 3: Mount the configuration into the container. If your service requires a configuration, you can define your configuration and mount it into the container as we did.

- ./config/test-app/app.cfg:/opt/test-app/config/app.cfg

Step 4: To expose your service through ingress, create an ingress config file: test-app-nginx.cfg.

sh-3.2$ cd test-app
sh-3.2$ mkdir -p config
sh-3.2$ touch config/test-app-nginx.cfg

Edit the test-app-nginx.cfg file, making sure that no other service on the host is listening on the same port.

location /app-prefix/ {
  rewrite ^/app-prefix/?(.*)$ /$1 break;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  access_log /var/log/nginx/access.log timed_combined;
  proxy_pass http://localhost:8080;
}

  • location: the prefix for your upstream API. This prefix must be unique across all configurations.

  • rewrite: this directive is used to rewrite the URL

  • proxy_set_header: sets request headers to forward to the upstream server

  • access_log: where access should be logged

  • proxy_pass: specifies the destination host and port of the upstream server
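To see what the rewrite directive does, the same prefix-stripping transformation can be reproduced with sed (illustration only; nginx applies the regex itself):

```shell
# The nginx rewrite ^/app-prefix/?(.*)$ /$1 strips the route prefix
# before the request is proxied upstream. Equivalent sed expression:
echo "/app-prefix/api/v1/status" | sed -E 's#^/app-prefix/?(.*)$#/\1#'
# -> /api/v1/status
```

So a request for /app-prefix/api/v1/status is forwarded to the upstream server as /api/v1/status on localhost:8080.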

Step 5: Copy your ingress config from the config directory to the ingress config folder. You can find more instructions on how to use this config in the ingress section of the platform services documentation.

sh-3.2$ cd test-app
sh-3.2$ cp config/test-app-nginx.cfg /opt/nvidia/jetson/services/ingress/config

Step 6: Start the app service and start ingress.

sh-3.2$ cd test-app
sh-3.2$ sudo docker compose -f compose.yaml up -d --force-recreate
sh-3.2$ sudo systemctl restart jetson-ingress

Folder structure:
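The work environment built up in the steps above can be recreated and listed as follows (a sketch using only the paths introduced in Steps 1–5):

```shell
# Recreate the example work environment from Steps 1-5 and list it.
mkdir -p test-app/config/test-app
touch test-app/compose.yaml                 # Step 1: service configuration
touch test-app/config/test-app-nginx.cfg    # Step 4: ingress route config
touch test-app/config/test-app/app.cfg      # Step 3: app config mounted into the container
find test-app -type f | sort
```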


AI-NVR full setup


Before you get started, go through the following to acquire the necessary hardware components and get access to the software components.

Required Hardware

  • Orin AGX devkit or

  • Orin NX16 devkit (self-built) with 128 GB (min) NVMe drive

  • SATA drive: 2 TB or more, up to 10 TB for Orin AGX and 6 TB for Orin NX16

  • SATA power supply and a power cable to connect the drive to it

  • SATA PCIe controller suitable for Orin AGX. If not available, a USB to SATA controller may be used

  • USB to SATA controller for Orin NX16

  • An NVMe drive may be used instead of SATA for Orin AGX. Samsung 980 PRO MZ-V8P2T0B - SSD - 2 TB or more - PCIe 4.0 x4

  • Cat 6 Ethernet Cable x 4

  • USB to Ethernet Adapter, CableCreation USB 3.0 to 10/100/1000 Gigabit Wired LAN Network Adapter Compatible with Windows PC and more

  • IP Camera: Suggested camera to use is Amcrest UltraHD 4K (8MP) Outdoor Bullet POE IP Camera, 3840x2160, 98ft NightVision, 2.8mm Lens, IP67 Weatherproof, MicroSD Recording, White (IP8M-2496EW-V2)

  • TP-Link AC1200 Gigabit WiFi Router (Archer A6) - Dual Band MU-MIMO Wireless Internet Router, 4 x Antennas, OneMesh and AP mode, Long Range Coverage

  • PoE switch: TP-Link TL-SG1210MPE V2 - switch - 10 ports - smart

  • Ubuntu 20.04 or 22.04 Desktop/Laptop

  • USB-C flashing cable

  • Monitor, Keyboard, Mouse (for Jetson)

  • The mobile app is recommended to be run on Android phones running Android version 13, but it should work on Android version 8 and later

The Orin AGX hardware setup is shown in the image below:


The Orin NX16 hardware setup is shown in the image below:



Get access to NGC and obtain the API key through steps documented in the Quick Start Guide.

Hardware Setup

Now proceed to setting up the Jetson device hardware. This involves a series of steps described here.

Connecting SATA drive

Orin AGX Devkit: Ensure the device is powered off. Insert the PCIe controller card into the PCIe slot (located inside the magnetic black side panel), checking for proper placement of the card in the slot. Connect it to the SATA drive with the data cable. Connect the power supply to the SATA drive using the power connector cable. Switch on the power supply to the drive, then power on the device.

Orin NX16 device: Ensure the device is powered off. Attach the SATA drive to the power adapter, aligning the power and data ports. Connect the USB cable from the power adapter to one of the device’s USB ports. Switch on the power supply to the drive, then power on the device.

Connecting NVMe drive (option for AGX)

If you do not have SATA components available, an NVMe drive may be used for storage on Orin AGX. Power off the system, remove the retaining screw, insert the NVMe drive into the NVMe slot, and replace the screw.

Networking setup

The system needs two separate Ethernet ports. The Ethernet port available on the device is connected to the external network. A USB-based Ethernet adapter is used for connecting to the PoE switch: the adapter’s Ethernet port is connected to the PoE switch uplink port.

For video streaming to the mobile app, the mobile device should be connected to the same local network as the Jetson. If there is no Wi-Fi available on the network you are connecting the Jetson to, use a Wi-Fi router to connect to that network, and then connect the mobile device to the router.

Connecting cameras

Connect the cameras to the available PoE ports. Note that while AI NVR supports connecting up to 16 H.265 cameras on AGX and 4 H.265 cameras on NX16, the number of ports on the PoE switch may limit the number of cameras you can connect.

Refer to Quick Start Guide for information about supported stream count and system resource utilization.

Refer to VST for list of supported cameras.

Connect monitor, keyboard & mouse to Jetson

Connect a monitor to the DP port: use a DP-to-DP cable for DisplayPort, or a DP-to-HDMI dongle for HDMI. Attach a USB hub to a free USB port on the device and connect the keyboard and mouse to it.

Connect Jetson to Host

Connect the host (Ubuntu Desktop/Laptop) to the Jetson devkit USB-C flashing port using the USB cable.

Software Setup


Follow the steps in the Quick Start Guide to get NGC access, install the BSP R36.2, install platform services, and install application bundle on the Jetson devkit.

Some additional setup steps needed are:

  • Copy the storage quota file from app bundle directory to jetson-configs directory:

sudo cp ai_nvr/config/storage-quota.json /opt/nvidia/jetson-configs

  • If you want to use monitoring services, then uncomment all the lines in the ingress config file (otherwise keep them commented out):


  • NVStreamer App

Set up NVStreamer (optional; use it if you want to stream video files in addition to, or in lieu of, camera feeds). Refer to the Quick Start Guide for setting up NVStreamer on your Jetson Orin device.

  • Mobile App

Install the AI-NVR mobile app from the Google Play Store. There are two ways to use the mobile app: one is direct access and the other is through the cloud. To get started you can use the simpler direct access option, but if you would like to set up and use the reference cloud, follow the steps below to install it.

  • Reference Cloud

The Reference Cloud for Jetson devices running Metropolis Microservices is an optional component providing product-grade features such as remote access, security, and user accounts. Our recommendation is to first install AI-NVR without the cloud, and then incorporate the cloud as the system matures.

Deploy the cloud using instructions in Installation and Setup.

Once the cloud is deployed, the next step is to deploy the iotgateway service, which connects the device to the cloud. First configure the iotgateway service on your device with an OTP (One Time Passcode), which is needed to connect the device to your cloud. The OTP is downloaded from the provisioning server running in the cloud; the steps below assume your cloud is ready. If your cloud is not deployed yet, you can skip this OTP section and come back later.

  1. Get the admin API key value from the secret manager entry named admin-api-key-prov-server-admin-api.

aws secretsmanager get-secret-value --secret-id admin-api-key-prov-server-admin-api

  2. Use it in a curl command to download the OTP as follows:

    curl -H 'Authorization: PROV-ADMIN <your-api-key>' -k https://prov-admin.<your-domain-name>/admin/api/otp

    You should get an OTP value as a response.

  3. On your device, update the value in /opt/nvidia/jetson-configs/.envfile as the root user.

    sudo vi /opt/nvidia/jetson-configs/.envfile

    The content will look like the below; update it with your values as shown:

  4. On your device, update the value in /opt/nvidia/jetson/services/iotgateway/compose.yaml as the root user:

    sudo vi /opt/nvidia/jetson/services/iotgateway/compose.yaml

    The content will look like the below; update the tcpmux-client environment variable with your value as shown:

  5. Restart iotgateway:

    sudo systemctl enable jetson-iot-gateway --now

Run AI-NVR Application

The AI NVR reference application depends on various platform services, which need to be enabled and running before the AI NVR application is started. For details of each of the platform services, please see the Platform Services documentation. As a quick reference, the steps are outlined below.

Start Services

Log in to the NGC container registry using your NGC API key (as password):

sudo docker login -u "\$oauthtoken" -p <NGC-API-KEY>

Start the required services:

sudo systemctl enable jetson-storage --now

sudo systemctl enable jetson-networking --now

Reboot the system, then start Redis:

sudo systemctl enable jetson-redis --now

If using the cloud for connectivity, then also start iotgateway (after configuring OTP, as described earlier):

sudo systemctl enable jetson-iot-gateway --now

You may also start other optional services like monitoring, if you want to:

sudo systemctl enable jetson-monitoring --now

sudo systemctl enable jetson-sys-monitoring --now

sudo systemctl enable jetson-gpu-monitoring --now

After all of the required platform services have been started, start the ingress service:

sudo systemctl enable jetson-ingress --now
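
The startup sequence above can be collected into a small convenience script. This is a sketch: by default it only prints the commands (set APPLY=1 to run them as root), and the reboot required after starting the storage and networking services is not automated here.

```shell
#!/bin/sh
# Enable the required platform services in order. Prints the commands by
# default; set APPLY=1 to execute them (requires root). The reboot after
# jetson-storage/jetson-networking must still be performed manually, and
# optional services (iotgateway, monitoring) are not included.
SERVICES="jetson-storage jetson-networking jetson-redis jetson-ingress"
for svc in $SERVICES; do
  if [ "${APPLY:-0}" = "1" ]; then
    systemctl enable "$svc" --now
  else
    echo "sudo systemctl enable $svc --now"
  fi
done
```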


Any of these services can be disabled & stopped as follows:

sudo systemctl disable <service-name> --now

Start Application

Launch the application from the downloaded bundle. Note that the docker compose launch command depends on the device it is running on:

cd ai_nvr

If on Orin AGX: sudo docker compose -f compose_agx.yaml up -d --force-recreate

If on Orin NX16: sudo docker compose -f compose_nx.yaml up -d --force-recreate

You can check that containers are running as expected by running the docker ps command.

Note: if needed, the application services can be stopped as follows:

cd ai_nvr

If on Orin AGX: sudo docker compose -f compose_agx.yaml down --remove-orphans

If on Orin NX16: sudo docker compose -f compose_nx.yaml down --remove-orphans

Add streams to VST

Add camera or NVStreamer video stream to VST. Ensure that the stream can be viewed from the VST Reference Web App through the Live Streams tab. Refer to the Usage & Configuration for the detailed steps.

Exposed Ports

View video overlay & analytics

Processed video output can be viewed as an RTSP stream accessible at rtsp://<JETSON-DEVICE-IP>:8555/ds-test or rtsp://<JETSON-DEVICE-IP>:8556/ds-test, depending on the pipeline the input stream was placed in. Use a media player such as VLC to open and view the stream. The output stream shows the processed video live, with bounding boxes around people. Alternatively, the stream can be added to VST as a new stream, to be viewed via the VST web UI. If adding to VST, ensure that the name includes the word “overlay”; otherwise SDR will not ignore the stream and will add it back to DeepStream, causing a circular loop.

A sample screenshot from the overlay stream can be seen below:


Use mobile app

Refer to the AI-NVR Mobile Application User Guide for a rich Android-based client to interact with the AI-NVR system and view video, analytics, and alerts generated by the Metropolis Microservices.

The AI-NVR mobile app enables users to access the full AI-NVR device functionality. The mobile app is distributed by NVIDIA through the Play Store. The mobile app has two launcher entry points. One is for direct access to the AI-NVR device, and one is for remote access to the device through the cloud.

Option A: Direct access

The device is accessed directly from the mobile app using the device IP address. You will be presented with a dialog window to enter the IP address of the device. Once the IP address is submitted, the app will launch the UI with the list of cameras connected to the device.

Option B: Cloud based access

The device is accessed remotely from the mobile app through the cloud. You will be presented with a web-based login based on AWS Cognito. Follow the prompts to create an account. Once you receive an email confirming account creation, you will be redirected back to the mobile application, and a UI with a list of devices will be shown.