Docker Compose Deployment Prerequisites#

This guide covers the prerequisites for deploying CDS using Docker Compose. This deployment method is best for local development, testing, and small-scale deployments.

System Requirements#

Hardware Requirements#

Minimum Requirements#

  • CPU: 8+ cores

  • RAM: 32GB system memory

  • GPU: NVIDIA GPU with 16GB+ VRAM

  • Storage: 100GB+ available disk space
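
You can quickly confirm that a host meets these minimums with standard Linux tools (the exact values you need depend on your dataset and workload):

# Check CPU cores, memory, free disk space, and GPU presence
nproc
free -h
df -h /
nvidia-smi -L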

GPU Requirements#

CDS requires an NVIDIA GPU for running the Cosmos-embed NIM service. Supported GPUs include the following:

| GPU | GPU Memory | Support Level |
|-----|------------|---------------|
| H100 | 80GB | Preferred |
| A100, L40S, L4, H20, L20 | 24GB+ | Optimized |
| Other Ampere+ GPUs | 16GB+ | Functional |

Support Level Definitions

  • Preferred: Best performance with full TensorRT-LLM optimization

  • Optimized: Full TensorRT-LLM optimization with excellent performance

  • Functional: Runs end-to-end with fallback paths; lower throughput expected

Cosmos-embed NIM Requirements#

  • GPU Memory: Minimum 16GB; 24GB+ recommended for optimal performance

  • CUDA: Compatible with CUDA 11.8+ runtime

  • Refer to the Cosmos-embed NIM Prerequisites for detailed hardware requirements
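
To check which GPU, how much GPU memory, and which driver your system reports, you can query nvidia-smi (available once the NVIDIA driver is installed); the banner of plain nvidia-smi also shows the maximum CUDA version the driver supports:

# Report GPU model, total memory, and driver version
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
nvidia-smi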

Software Requirements#

Operating System#

CDS has been tested on the following operating systems:

  • Ubuntu 22.04 LTS

  • Ubuntu 24.04 LTS

Required Software#

| Package | Version | Purpose |
|---------|---------|---------|
| Docker | 20.10+ | Container runtime |
| Docker Compose | 2.0+ | Multi-container orchestration |
| Python | 3.10 | Development and CLI tools |
| Git LFS | 3.0+ | Model weights and large files |
| UV | 0.8.17+ | Python dependency management |
| NVIDIA Drivers | 525+ | GPU driver (CUDA 11.8+ support) |
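
If some of these packages are already installed, you can compare their versions against the table using each tool's own version command (anything that is missing will simply error):

# Check installed versions against the requirements above
docker --version
docker compose version
python3 --version
git lfs version
uv --version
nvidia-smi --query-gpu=driver_version --format=csv,noheader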

Required Licenses#

  • NVIDIA AI Enterprise (NVAIE) License or NIM Developer License - Required to pull and deploy Cosmos-embed NIM. Contact your NVIDIA account team or visit NVIDIA AI Enterprise for license information.

Pre-Installation Setup#

Install Docker and Docker Compose#

The following commands install Docker and Docker Compose on Ubuntu/Debian. For other platforms, refer to the Docker installation guide.

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group
sudo usermod -aG docker $USER
newgrp docker

# Verify installation
docker --version
docker compose version
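
As an additional sanity check that the daemon is running and that your user can reach it without sudo, you can run a throwaway container (hello-world is a small public test image):

# Confirm non-root access to the Docker daemon
docker run --rm hello-world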

Install NVIDIA Container Toolkit#

The NVIDIA Container Toolkit enables Docker containers to access the system GPU. The following commands install the NVIDIA Container Toolkit on Ubuntu/Debian.

# Configure the repository
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify GPU access
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

For more detailed instructions, refer to the NVIDIA Container Toolkit installation guide.
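
If the nvidia-smi test above fails, one thing to check is whether Docker registered the NVIDIA runtime after the restart; docker info lists the available runtimes:

# The output should include "nvidia" among the registered runtimes
docker info | grep -i runtimes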

Install Python and UV Package Manager#

The following commands install Python and the UV package manager on Ubuntu/Debian.

# Install Python 3.10 (if not already installed)
sudo apt-get update
sudo apt-get install -y python3.10 python3-pip

# Install UV package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify installations
python --version
# Should show Python 3.10.x

# Note: On older distributions, python may not be linked to python3
# If the above command fails, create an alias:
# sudo apt-get install -y python-is-python3
# Or manually create a symlink:
# sudo ln -s /usr/bin/python3 /usr/bin/python

# uv requires a shell restart or sourcing the following file before it is on PATH.
source $HOME/.local/bin/env
uv --version
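
If your distribution does not package Python 3.10 (newer Ubuntu releases ship a later default Python), uv can also download and manage a Python 3.10 interpreter for you. This is an optional alternative to the apt-based install above:

# Optional: let uv install and pin Python 3.10
uv python install 3.10
uv venv --python 3.10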

Install Git LFS#

Git LFS is required for downloading model weights and large files.

# Ubuntu/Debian
sudo apt-get install git-lfs
git lfs install

# Verify installation
git lfs version
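
Note that git lfs install only configures the Git LFS filters for your user. When you later clone a repository that tracks large files, the LFS objects are fetched automatically; if any were skipped, you can retrieve them explicitly:

# Fetch LFS-tracked files in an already-cloned repository
git lfs pull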

NGC Configuration#

Access to NGC (NVIDIA GPU Cloud) is required for pulling the Cosmos-embed NIM container and models.

Create NGC Account and API Key#

  1. Create an account at NGC

  2. Generate an API Key

  3. Ensure your NGC account has access to the following:

    • The nvidia/cosmos-embed model

    • The nvcr.io container registry

    • A valid NVAIE or NIM Developer license entitlement

Authenticate Docker with NGC#

Use the following command to authenticate Docker with NGC.

docker login nvcr.io
Username: $oauthtoken
Password: <your-NGC-API-key>
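
For scripted or CI setups, you can log in non-interactively by piping the key from an environment variable (NGC_API_KEY here is just a placeholder for wherever you store your key):

# Non-interactive login; NGC_API_KEY is a placeholder environment variable
export NGC_API_KEY=<your-NGC-API-key>
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin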

Verify NGC Access#

Test your NGC authentication and license access by pulling the Cosmos-embed NIM container (optional):

# Optional: Test pulling the Cosmos-embed NIM image
# Note: This image is large (~20GB) and will take time to download
docker pull nvcr.io/nim/nvidia/cosmos-embed1:latest

If the pull succeeds, your NGC authentication and NVAIE/NIM Dev license have been configured correctly. If you encounter authentication or permission errors, verify your NGC API key and license access with your NVIDIA account team.

Network Requirements#

LocalStack Hostname Mapping#

Important

Docker Compose deployment requires a hostname mapping for LocalStack (S3-compatible storage).

To ensure that your system recognizes “localstack” as an alias for localhost, you must add a hostname mapping to your /etc/hosts file. The following command appends the required entry to /etc/hosts (this requires sudo privileges):

echo "127.0.0.1   localstack" | sudo tee -a /etc/hosts

Verify the mapping:

grep localstack /etc/hosts
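
You can also confirm that the name actually resolves (getent queries the same resolver order that applications use):

getent hosts localstack
# Expected output: 127.0.0.1   localstack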

Important

Without this mapping, data ingestion and storage operations will fail.

Firewall and Port Requirements#

CDS requires the following ports to be available. Any port conflicts must be resolved before deployment.

| Port | Service | Purpose |
|------|---------|---------|
| 8888 | CDS API | REST API endpoint |
| 9000 | Cosmos-embed NIM | Embedding service |
| 19530 | Milvus | Vector database |
| 4566 | LocalStack | S3-compatible storage |
| 8080 | React UI | Web user interface |

Use the following command to check for port conflicts:

# Verify ports are available
ss -tuln | grep -E ':(8888|9000|19530|4566|8080)'

If no output is returned, all required ports are available. If any output is returned, those ports are already in use. In this case, you can do either of the following:

  • Stop the services using those ports (see the example below for identifying the owning processes).

  • (Advanced) Modify the CDS configuration to use different ports. Refer to the Docker Compose Deployment Guide for more details.
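
To identify which processes hold the conflicting ports, add the -p flag to ss (sudo is needed to see processes owned by other users):

# Show the processes listening on the required ports
sudo ss -tulnp | grep -E ':(8888|9000|19530|4566|8080)'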

Storage Requirements#

Disk Space#

200GB+ free disk space is recommended for development and testing with sample datasets. This includes the following:

  • Base installation: ~50GB for Docker images and model cache

  • Model cache: ~20GB for Cosmos-embed NIM models, which are downloaded on first run

  • Data storage: The amount required is based on the size of the dataset:

    • Video storage: Matches your dataset size

    • Embeddings: ~1.3KB per video frame/segment

    • Metadata: Minimal (in the MB range)
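
To see how much of this budget Docker itself is consuming (images, containers, volumes) and how much space remains on the filesystem backing Docker's default data root, you can run:

# Docker's own disk usage breakdown
docker system df
# Free space on the filesystem backing /var/lib/docker (Docker's default data root)
df -h /var/lib/docker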

Storage Performance#

  • High-performance SSDs are recommended for the model cache and vector database.

  • For production workloads, consider using NVMe storage.

Environment Variables#

CDS uses a .env file to manage all required environment variables. The .env file is located in the deploy/standalone/ directory and is automatically loaded by Docker Compose during deployment.

Environment configuration, including setting your NGC API key and data directory, is covered in detail in the Docker Compose Deployment Guide.
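
As a rough illustration only, a .env file for this kind of deployment pairs the NGC API key with a data location. The authoritative variable names and the full set of entries are defined in the Docker Compose Deployment Guide; the names below (NGC_API_KEY, DATA_DIR) are placeholders, not the actual configuration:

# deploy/standalone/.env -- illustrative sketch only; see the Deployment Guide for the real variable names
NGC_API_KEY=<your-NGC-API-key>   # placeholder name: key used to pull the Cosmos-embed NIM
DATA_DIR=/path/to/your/data      # placeholder name: host directory for datasets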

Next Steps#

After completing all prerequisites, proceed to the Docker Compose Deployment Guide to deploy CDS.

For troubleshooting issues during prerequisite setup or deployment, refer to Docker Compose Troubleshooting.