Getting Started#

This guide will help you access and start using the VSS Auto Calibration Microservice and User Interface.

Prerequisites#

Before using the microservice and UI, ensure you have:

System Requirements

  • x86_64 system

  • Ubuntu 24.04

  • NVIDIA GPU with hardware encoder (NVENC)

  • NVIDIA driver 580

  • Docker (set up to run without sudo privileges)

  • NVIDIA Container Toolkit (refer to the Prerequisites section)
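
Before proceeding, you can sanity-check some of these requirements from a terminal. This is an optional sketch: it only verifies that the relevant commands exist and that Docker is reachable without sudo; it does not check exact driver or OS versions.

```shell
# Report whether a required command is on PATH.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

check_cmd docker        # Docker CLI
check_cmd nvidia-smi    # NVIDIA driver / GPU visibility

# Docker usable without sudo? Prints the daemon version on success.
docker info --format '{{.ServerVersion}}' 2>/dev/null || echo "Docker daemon not accessible without sudo"

# GPU visible inside containers (requires the NVIDIA Container Toolkit); uncomment to run:
# docker run --rm --gpus all ubuntu:24.04 nvidia-smi
```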

Required

  • At least 2 camera video files (MP4, AVI, MOV, or MKV format)

  • Layout/map image (PNG, JPG, or JPEG format)

Optional

  • Ground truth data (ZIP file) for calibration evaluation

  • Pre-existing alignment data (JSON file)

  • Focal length values for cameras

  • Dataset-specific configuration parameters, if any

Deployment Steps (Docker Compose)

Deploy the UI and backend microservice using Docker Compose. The VSS Auto Calibration deployment resources are currently provided as part of the Warehouse Blueprint. Refer to the Warehouse Blueprint Introduction for more details.

  1. Set up NGC access:

    # Set up NGC access
    export NGC_CLI_API_KEY=<NGC_CLI_API_KEY>
    export NGC_CLI_ORG='nvidia'
    
  2. Download the deployment resources. Refer to the Prerequisites section for the NGC CLI installation guide.

    ngc \
     registry \
     resource \
     download-version \
     "nvidia/vss-warehouse/vss-warehouse-compose:3.1.0"
    
     # Or manually download the tar file from NGC:
     # https://catalog.ngc.nvidia.com/orgs/nvidia/teams/vss-warehouse/resources/vss-warehouse-compose?version=3.1.0
    
     # Extract the package
     cd vss-warehouse-compose_v3.1.0
     tar -xvf deploy-warehouse-compose.tar.gz
    
  3. Navigate to the VSS Auto Calibration directory:

    cd deployments/auto-calib
    

    Your directory structure should be:

    ├── compose.yml
    ├── ms
    │   └── compose.yml
    └── ui
        └── compose.yml
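
As an optional sanity check, the snippet below confirms that the compose files from the tree above are present in the current directory (run it from deployments/auto-calib):

```shell
# Check for the three compose files shown in the directory tree.
for f in compose.yml ms/compose.yml ui/compose.yml; do
  if [ -f "$f" ]; then echo "found: $f"; else echo "missing: $f"; fi
done
```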
    
  4. Create a .env file in your current directory with the following environment variables:

    VSS_AUTO_CALIBRATION_PORT=8000
    VSS_AUTO_CALIBRATION_UI_PORT=5000
    MDX_SAMPLE_APPS_DIR=/path/to/your/sample_apps_dir
    MDX_DATA_DIR=/path/to/your/data_dir
    HOST_IP=<HOST_IP_ADDRESS>
    

    Replace the paths, ports and IP address with your actual values.
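
If you prefer to script this step, the .env file can be generated with a heredoc. The values below are the same placeholders as above, not working settings; substitute your own paths, ports, and host IP.

```shell
# Demo dir only; in practice run this inside deployments/auto-calib.
cd "$(mktemp -d)"

# Write the .env file with placeholder values (replace before deploying).
cat > .env <<'EOF'
VSS_AUTO_CALIBRATION_PORT=8000
VSS_AUTO_CALIBRATION_UI_PORT=5000
MDX_SAMPLE_APPS_DIR=/path/to/your/sample_apps_dir
MDX_DATA_DIR=/path/to/your/data_dir
HOST_IP=<HOST_IP_ADDRESS>
EOF

echo "Wrote $(grep -c '=' .env) variables to $(pwd)/.env"
```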

  5. Download and set up the VGGT model and create the projects directory:

    1. Download the VGGT commercial model from HuggingFace.

      Note

      You need to sign up for a HuggingFace account and accept the model license to download.

    2. Move the downloaded model file (vggt_1B_commercial.pt) to the VGGT model directory:

      mkdir -p ${MDX_DATA_DIR}/auto-calib/vggt
      mv vggt_1B_commercial.pt ${MDX_DATA_DIR}/auto-calib/vggt/
      
    3. Create projects directory:

      mkdir -p ${MDX_SAMPLE_APPS_DIR}/auto-calib/projects
      

    Note

    Projects will be saved in: ${MDX_SAMPLE_APPS_DIR}/auto-calib/projects

  6. Change the ownership of the directories to UID 1000 and GID 1000:

    sudo chown -R 1000:1000 ${MDX_DATA_DIR}/auto-calib
    sudo chown -R 1000:1000 ${MDX_SAMPLE_APPS_DIR}/auto-calib
    
  7. Ensure you have NGC access so that Docker can pull the containers.
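
A minimal sketch of this step, assuming your NGC API key is exported as NGC_CLI_API_KEY (as in step 1): log Docker in to NVIDIA's container registry, nvcr.io. Per NGC convention the username is the literal string $oauthtoken and the API key is the password.

```shell
# Log Docker in to nvcr.io using the NGC API key from the environment.
ngc_docker_login() {
  if [ -z "${NGC_CLI_API_KEY:-}" ]; then
    echo "NGC_CLI_API_KEY is not set"
    return 1
  fi
  # $oauthtoken is quoted so the shell does not expand it; it is a literal username.
  printf '%s' "$NGC_CLI_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
}

ngc_docker_login || true
```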

  8. Start both the microservice and UI servers:

    docker compose --profile "auto-calib" up -d
    
  9. Open your browser and navigate to:

    http://<HOST_IP>:<VSS_AUTO_CALIBRATION_UI_PORT>
    

    For example, with the default settings: http://<HOST_IP>:5000
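
Before opening a browser, you can optionally check that the UI answers over HTTP. This sketch reuses the HOST_IP and VSS_AUTO_CALIBRATION_UI_PORT values from your .env; the defaults below are illustrative fallbacks.

```shell
# Probe the UI root URL; report success or a hint for troubleshooting.
check_ui() {
  host="${HOST_IP:-localhost}"
  port="${VSS_AUTO_CALIBRATION_UI_PORT:-5000}"
  if curl -sf -o /dev/null --max-time 5 "http://${host}:${port}/"; then
    echo "UI reachable at http://${host}:${port}"
  else
    echo "UI not reachable at http://${host}:${port} - check 'docker compose ps' and container logs"
  fi
}

check_ui
```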

  10. To stop the containers:

    docker compose --profile "auto-calib" down
    

First Time Setup#

When you first access the UI, you’ll see the main interface with a stepper showing 6 workflow steps.

First Launch Screen

Interface Overview

The interface consists of:

  1. Header Bar

    • Application name and version

    • Theme toggle button (light/dark mode)

    • Settings button (visible only when you are on the Parameters step)

  2. Stepper Navigation

    • Visual progress indicator

    • Click on steps to navigate (after selecting a project)

    • Current step is highlighted

  3. Main Content Area

    • Step-specific content and controls

    • Forms, file uploads, and interactive tools

  4. Navigation Buttons

    • “Previous” button to go back

    • “Next” button to proceed

    • Disabled when requirements aren’t met

  5. Footer

    • Copyright information

    • Application version

  6. Notifications

    • Success/error messages appear in bottom-right corner

    • Auto-dismiss after 6 seconds

Quick Start Guide#

Follow these steps to perform your first calibration:

Step 0: Deploy the UI (If Not Already Running)

If you’re deploying via Docker Compose, follow the steps in the Deployment Steps (Docker Compose) section above. Once deployed, access the UI at http://<HOST_IP>:<VSS_AUTO_CALIBRATION_UI_PORT> (default: port 5000).

Step 1: Create a Project

  1. On the Project Setup page, enter a project name (e.g., warehouse_calibration)

  2. Click “Create” button

  3. Your new project appears in the list below

  4. Click “Select” on your project card

Project Setup

Step 2: Upload Files

  1. Click “Next” to go to Video Configuration

  2. Upload at least 2 video files:

    • Click “Select Videos” button

    • Select video files named cam_00.mp4, cam_01.mp4, etc.

    • Reorder videos by dragging, if needed

    • Click “Upload Videos” to upload them

  3. Upload layout image:

    • Click “Upload Layout” button

    • Select your PNG/JPG layout/map image

    • Confirm upload success

Video Configuration

Step 3: Configure Parameters

  1. Click “Next” to go to Parameters

  2. Select a camera from the dropdown

  3. Draw ROIs (optional):

    • Click “Draw ROI” button

    • Click on the video frame to add points (minimum 3)

    • Press ‘F’ or double-click to finish

    • ROI is saved automatically

  4. Draw tripwires (optional):

    • Click “Draw Tripwire” button

    • Click twice to define start and end points

    • Tripwire is saved automatically

  5. Add focal lengths (optional):

    • Enter comma-separated values (one per camera)

    • Click “Save Focal Length”

Parameters Configuration

Step 4: Create Alignment Data

  1. Click “Next” to go to Manual Alignment

  2. Choose one of two options:

    Option A: Upload Existing Alignment

    • Click “Upload alignment_data.json”

    • Select your JSON file

    • Wait for upload confirmation

    Option B: Create Alignment Interactively

    • Click “Open Alignment Tool”

    • Click the same physical point on Camera 0, Camera 1, and Layout (in order)

    • Repeat for at least 4 different points

    • Click “Save Alignment” when complete

Manual Alignment

Step 5: Run Calibration

  1. Click “Next” to go to Execute

  2. Review the requirements checklist

  3. Click “Verify Project” button

  4. Once verified, click “Start Calibration”

  5. Monitor the progress (status updates every 3 seconds)

  6. Wait for “Calibration completed successfully” message

Execute Calibration

Step 6: View Results

  1. Click “Next” to go to Results

  2. View the overlay image showing the calibration results. Click “Download” to save the overlay image

  3. Review camera parameters for each camera

  4. Export calibration data:

    • Click “Full Export AMC” for the complete calibration data. This loads a JSON file containing the calibration data for all cameras in the project, which you can edit to suit your use case.

    • Click “MV3DT ZIP AMC” for MV3DT-compatible format

Calibration Results

Settings#

Click the settings icon in the top-right corner to access application settings. Note: The settings icon is only visible when you are on the Parameters step.

Settings Dialog

Available Settings

  • Theme: Switch between light and dark themes

  • Version Information: View current application version

Note: Most settings are configured during deployment and cannot be changed from the UI.

Tips for Success#

Video Ordering and Synchronization

  • Ensure videos are synchronized in time

  • When uploading videos, maintain an order based on their overlapping fields of view (FOV)

Input Video

  • Use high-resolution videos for better calibration accuracy

  • AMC depends heavily on moving people appearing in the videos. Ensure the videos contain enough clearly visible moving people.

Alignment Points

  • Choose points on the ground plane visible in all cameras

  • Select points at different depths and locations

  • Avoid points on moving objects

  • Use distinct features (corners, markings, etc.)

ROI and Tripwire Drawing

  • Draw ROIs to cover areas of interest

  • Place tripwires perpendicular to expected motion

  • Set the tripwire direction to match the expected direction of motion

  • Test with different zoom levels for precision

Keyboard Shortcuts#

Parameters Step (Drawing)

  • F key: Finish current ROI

  • Esc key: Cancel current drawing

  • Scroll wheel: Zoom in/out on canvas

  • Click + Drag: Pan around zoomed canvas

Manual Alignment Step

  • Scroll wheel: Zoom in/out on alignment canvas

  • Click + Drag: Pan around zoomed canvas (when zoomed)

Next Steps#

Now that you’re familiar with the basics, explore: