Prerequisites#

This section covers the prerequisites for deploying and using the Smart City Blueprint. Prerequisites are organized into common requirements that apply to all deployments and requirements specific to Docker Compose deployments.

Common Prerequisites#

These prerequisites are required for all deployments, including Docker Compose.

Hardware Requirements#

The Smart City Blueprint has been validated and tested on the following NVIDIA GPUs:

  • NVIDIA H100

  • NVIDIA L40S

  • NVIDIA RTX PRO 6000 Blackwell

Minimum System Requirements#

  • x86-64 architecture

  • 18 core CPU (x86 systems)

  • 128 GB RAM

  • 1 TB SSD

  • 1 x 1 Gbps network interface

Operating System and Base Software#

  • Ubuntu 24.04

  • NGC CLI 4.10.0 or later

Runtime Environment Settings#

The following Linux kernel settings are required:

# Ensure the sysctl.d directory exists
sudo mkdir -p /etc/sysctl.d

# Write VSS kernel settings to 99-vss.conf (persistent across reboots)
sudo bash -c "printf '%s\n' \
  'net.ipv6.conf.all.disable_ipv6 = 1' \
  'net.ipv6.conf.default.disable_ipv6 = 1' \
  'net.ipv6.conf.lo.disable_ipv6 = 1' \
  'net.core.rmem_max = 5242880' \
  'net.core.wmem_max = 5242880' \
  'net.ipv4.tcp_rmem = 4096 87380 16777216' \
  'net.ipv4.tcp_wmem = 4096 65536 16777216' \
  > /etc/sysctl.d/99-vss.conf"

# Reload sysctl to apply the new settings
sudo sysctl --system
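After reloading, you can read a few of the values back directly from /proc (equivalent to `sysctl -n`) to confirm they took effect; the IPv6 key may be absent on kernels built without IPv6:

```shell
# Read back a few of the applied values; they should match 99-vss.conf
cat /proc/sys/net/core/rmem_max      # 5242880 once the settings are applied
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "IPv6 keys not present"
```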

Docker Compose-Specific Prerequisites#

The following prerequisites are required only for Docker Compose deployments:

  • NVIDIA Driver 580.105.08

  • Docker 27.2.0 or later

  • Docker Compose v2.29.0 or later

  • NVIDIA Container Toolkit 1.17.8
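If you want to script a check of installed versions against these requirements, a small `sort -V` helper works for all of them; the function name below is illustrative, and the version strings shown would normally come from `docker --version`, `docker compose version`, and similar commands:

```shell
# Succeeds when the installed version ($1) is >= the required version ($2).
# sort -V orders version strings, so the required version must sort first.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the versions listed above
version_ge "27.3.1" "27.2.0" && echo "Docker version OK"
version_ge "2.29.0" "2.29.0" && echo "Docker Compose version OK"
```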

Note

For detailed installation instructions for all of the software above, see the Appendix: Software Prerequisites Installation Guide.

NGC API Key Access#

To deploy the Smart City Blueprint, you need an NGC API key with access to the required resources.

Create an NVIDIA NGC Account#

  1. Go to https://ngc.nvidia.com

  2. Click Sign up if you don’t have an account, or Sign in if you do

  3. Complete the registration process with your email address

Generate an API Key#

  1. Once logged in, click on your username in the top right corner

  2. Select Setup from the dropdown menu

  3. Navigate to API Keys under the Keys/Secrets section

  4. Click Generate API Key

  5. Click Generate Personal Key (available at both the top-center and top-right of the page)

  6. Provide a descriptive name for your API key (e.g., “Smart City Blueprint Development”)

  7. Select NGC Catalog for Key Permissions

  8. Click Generate Personal Key

  9. Copy the generated API key immediately; you won’t be able to view it again

  10. Store it securely in a password manager or encrypted file
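To keep the key out of your shell history, one option is to store the export line in a file readable only by your user and source it when needed; the file path below is illustrative, and `your_ngc_api_key` is a placeholder:

```shell
# Write the key to a user-only file instead of typing it into the shell
umask 077
printf '%s\n' "export NGC_CLI_API_KEY='your_ngc_api_key'" > "$HOME/.ngc_api_key"
chmod 600 "$HOME/.ngc_api_key"

# Load the key into the current shell when needed
. "$HOME/.ngc_api_key"
```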

Note

Keep your NGC API key secure and never commit it to version control systems.

Verify Your NGC Access#

To verify your NGC key has the correct permissions:

# Set your NGC API key
export NGC_CLI_API_KEY='your_ngc_api_key'

# Test access to the required resources
ngc registry resource list "nvidia/vss-smartcities/*"
ngc registry resource list "nvidia/vss-core/*"

# Test access to the required images
ngc registry image list "nvidia/vss-core/*"

# Test access to the required models
ngc registry model list "nvidia/tao/*"

If you encounter permission errors, contact NVIDIA support for assistance.

Google Maps API Key#

The Video-Analytics-UI map view functionality requires a Google Maps API key.

You can obtain a Google Maps API key by following the “create API key” instructions in the Google Maps Platform documentation.

Note

For production deployments, ensure you configure appropriate API key restrictions and usage limits.

Appendix: Software Prerequisites Installation Guide#

This appendix provides installation instructions for the software prerequisites. Instructions are organized by common software (required for all deployments) and deployment-specific software.

Common Software Installation#

Install NVIDIA Driver#

NVIDIA Driver version 580.105.08 is required for Docker Compose deployments.

You can download the driver directly from: https://www.nvidia.com/en-us/drivers/details/257738/

For detailed installation instructions, refer to the official NVIDIA Driver Installation Guide: https://docs.nvidia.com/datacenter/tesla/driver-installation-guide/index.html

You can also browse drivers for your specific GPU and platform from the NVIDIA Driver Downloads page: https://www.nvidia.com/Download/index.aspx

Note

After installation, verify the driver is correctly installed by running nvidia-smi to confirm the driver version and GPU detection.

Note

NVIDIA Fabric Manager Requirement

NVIDIA Fabric Manager is required on systems with multiple GPUs that are connected using NVLink or NVSwitch technology. This typically applies to:

  • Multi-GPU systems with NVLink bridges (e.g., DGX systems, HGX platforms)

  • Systems with NVSwitch fabric interconnects

  • Hosts running NVIDIA H100, A100, V100, or other datacenter GPUs with NVLink

Fabric Manager is not required for:

  • Single GPU systems

  • Multi-GPU systems without NVLink/NVSwitch (PCIe-only configurations)

For installation instructions, refer to the official NVIDIA Fabric Manager documentation.

Verify Fabric Manager status after installation:

sudo systemctl status nvidia-fabricmanager

Install NGC CLI#

NGC CLI version 4.10.0 or later is required to download blueprints and authenticate with NGC resources.

Download and install NGC CLI:

For ARM64 Linux:

curl -sLo "/tmp/ngccli.zip" https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/4.10.0/files/ngccli_arm64.zip

For AMD64 Linux:

curl -sLo "/tmp/ngccli.zip" https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/4.10.0/files/ngccli_linux.zip
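If you script the download, the correct archive can be selected from `uname -m`; the variable names below are illustrative, and the actual `curl` step is left commented out:

```shell
# Pick the NGC CLI archive that matches the host architecture
case "$(uname -m)" in
  aarch64|arm64) NGC_PKG="ngccli_arm64.zip" ;;
  x86_64)        NGC_PKG="ngccli_linux.zip" ;;
  *) echo "Unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac

NGC_URL="https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/4.10.0/files/${NGC_PKG}"
echo "Would download: ${NGC_URL}"
# curl -sLo /tmp/ngccli.zip "${NGC_URL}"
```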

After downloading, install NGC CLI:

sudo mkdir -p /usr/local/bin
sudo unzip -qo /tmp/ngccli.zip -d /usr/local/lib
sudo chmod +x /usr/local/lib/ngc-cli/ngc
sudo ln -sfn /usr/local/lib/ngc-cli/ngc /usr/local/bin/ngc

Verify installation:

ngc --version

Configure NGC CLI with your API key:

ngc config set

When prompted, enter your NGC API key. For information on obtaining an NGC API key, see the NGC API Key Access section.

Note

For the latest version of NGC CLI, visit the NGC CLI downloads page: https://ngc.nvidia.com/setup/installers/cli

Docker Compose Software Installation#

These installation instructions are only required for Docker Compose deployments.

Install Docker#

Docker version 27.2.0 or later is required. Follow the official guide to install Docker Engine on Ubuntu: https://docs.docker.com/engine/install/ubuntu/

After installation, complete the Linux post-installation steps so that Docker can run without sudo. See Linux post-installation steps for Docker Engine.

Configure Docker#

After installing Docker, you must configure it to use the cgroupfs cgroup driver.

Edit the Docker daemon configuration file:

Add or verify the following entry in /etc/docker/daemon.json:

"exec-opts": ["native.cgroupdriver=cgroupfs"]

This configuration must be included within the JSON object. For example:

{
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

If your /etc/docker/daemon.json already contains other settings, ensure this entry is added to the existing configuration.
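If you prefer not to edit the JSON by hand, a small helper can merge the entry while preserving existing keys. This is a sketch: it requires `python3`, the function name is illustrative, and it assumes the file, if present, contains valid JSON:

```shell
# Merge the cgroup driver entry into a daemon.json file, keeping other keys
merge_cgroup_driver() {
  python3 - "$1" <<'EOF'
import json, os, sys

path = sys.argv[1]
cfg = {}
if os.path.exists(path):
    with open(path) as f:
        cfg = json.load(f)
# Add or overwrite only the exec-opts entry; all other settings are kept
cfg["exec-opts"] = ["native.cgroupdriver=cgroupfs"]
with open(path, "w") as f:
    json.dump(cfg, f, indent=4)
EOF
}

# Apply it to the real configuration file (run in a root shell):
# merge_cgroup_driver /etc/docker/daemon.json
```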

Apply the configuration:

After editing the daemon configuration, restart Docker to apply the changes:

sudo systemctl daemon-reload
sudo systemctl restart docker

Note

Restarting Docker will temporarily stop all running containers. Plan accordingly if you have containers running.

Install Docker Compose#

Docker Compose v2.29.0 or later is required. Docker Compose v2 is typically installed as a plugin with modern Docker installations.

Verify if Docker Compose is already installed:

docker compose version

If not installed, install Docker Compose plugin:

sudo apt update
sudo apt install docker-compose-plugin

Verify installation:

docker compose version

Note

Docker Compose v2 uses the command docker compose (without hyphen) instead of the older docker-compose command.

Install NVIDIA Container Toolkit#

NVIDIA Container Toolkit 1.17.8 is required to run GPU-accelerated NVIDIA containers. Follow the official installation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Note

This blueprint uses NodePort services for external access to application endpoints.