Custom Container Base Image Labels#
Overview#
AI Workbench uses Docker labels to read metadata from custom base images. These labels tell AI Workbench about the operating system, installed software, package managers, and available applications in your base image.
When you create a project with a custom base image, AI Workbench reads these labels to:
- Build the project container correctly on top of your base image
- Enable features like the package manager widget
- Display metadata about the environment in the Desktop App
- Configure user management and permissions
- Integrate applications with the Application Launcher
Labels are added to your base image Dockerfile using the LABEL instruction. AI Workbench inherits labels from parent images, but you can override them in child images.
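Label inheritance works through `FROM`: a child image automatically carries every `com.nvidia.workbench.*` label set by its parent, and redefining a key replaces the inherited value. A minimal sketch (the image name and values here are illustrative, not a real published image):

```dockerfile
# Child image; inherits all com.nvidia.workbench.* labels from the parent
FROM my-registry/python-base:1.0

# Redefining a key overrides the value inherited from the parent
LABEL com.nvidia.workbench.name="Python Base (team build)"
LABEL com.nvidia.workbench.image-version="1.1.0"
```

All other parent labels remain in effect unchanged on the child image.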
Label Schema Convention#
All AI Workbench labels follow the convention com.nvidia.workbench.<field-name>.
For example:
LABEL com.nvidia.workbench.programming-languages="python3"
LABEL com.nvidia.workbench.os-distro="ubuntu"
Each label key must be unique. Labels in parent images are inherited but can be overridden in child images. For more information about Docker labels, see Docker object labels.
Label Requirements#
AI Workbench labels fall into three categories based on their importance and functionality.
Required Labels#
Five labels are required for AI Workbench to recognize and use a custom base image:
| Label | Purpose |
|---|---|
| `com.nvidia.workbench.name` | Display name shown in the UI |
| `com.nvidia.workbench.description` | Brief description shown in the UI |
| `com.nvidia.workbench.schema-version` | Must be “v2”, the current label schema version |
| `com.nvidia.workbench.image-version` | Semantic version for sorting multiple image versions |
| `com.nvidia.workbench.cuda-version` | CUDA version for driver compatibility checks (empty string if no CUDA) |
Without these five labels, AI Workbench cannot use the container as a base image.
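Taken together, a minimal required-label set looks like this (the name, description, and version values are illustrative):

```dockerfile
LABEL com.nvidia.workbench.name="PyTorch with CUDA 12.2"
LABEL com.nvidia.workbench.description="PyTorch 2.1 environment with CUDA 12.2 and JupyterLab"
LABEL com.nvidia.workbench.schema-version="v2"
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.cuda-version="12.2"
```

Each label is documented in detail in the sections that follow.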
Recommended Labels#
These labels enable important AI Workbench features and should be included for the best experience:
| Label | Purpose |
|---|---|
| `com.nvidia.workbench.os` | Operating system name (typically “linux”) |
| `com.nvidia.workbench.os-distro` | OS distribution name (e.g., “ubuntu”) |
| `com.nvidia.workbench.os-distro-release` | OS distribution version (e.g., “22.04”) |
| `com.nvidia.workbench.programming-languages` | Installed programming languages (e.g., “python3”) |
| `com.nvidia.workbench.package-manager.<name>.binary` | Path to package manager binary (required for package manager widget) |
| `com.nvidia.workbench.package-manager.<name>.installed-packages` | List of installed packages for display |
| `com.nvidia.workbench.package-manager-environment.type` | Virtual environment type (“conda” or “venv”) |
| `com.nvidia.workbench.package-manager-environment.target` | Path to virtual environment |
The package manager labels are essential for the package manager widget in the Desktop App to function.
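For an Ubuntu-based Python image without a virtual environment, a typical set of recommended labels might look like this (binary paths and package lists are illustrative and should match what is actually installed in your image):

```dockerfile
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="ubuntu"
LABEL com.nvidia.workbench.os-distro-release="22.04"
LABEL com.nvidia.workbench.programming-languages="python3"
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git vim"
LABEL com.nvidia.workbench.package-manager.pip.binary="/usr/bin/pip"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab"
```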
Optional Labels#
These labels provide additional functionality but are not required:
| Label | Purpose |
|---|---|
| `com.nvidia.workbench.user.uid` | UID of existing user to preserve |
| `com.nvidia.workbench.user.gid` | GID of existing user to preserve |
| `com.nvidia.workbench.user.username` | Username of existing user to preserve |
| `com.nvidia.workbench.labels` | Comma-separated search keywords/tags |
| `com.nvidia.workbench.icon` | URL to icon image for display |
| `com.nvidia.workbench.build-timestamp` | Build timestamp (auto-updated by AI Workbench) |
| `com.nvidia.workbench.entrypoint-script` | Path to base image entrypoint script to preserve |
| `com.nvidia.workbench.application.<name>.*` | Application integration labels (see Application Labels section) |
Core Metadata Labels#
name#
Label: com.nvidia.workbench.name
Required: Yes
Description: Display name for the base image shown in the AI Workbench UI.
Example:
LABEL com.nvidia.workbench.name="PyTorch with CUDA 12.2"
description#
Label: com.nvidia.workbench.description
Required: Yes
Description: Brief description of the base image shown in the AI Workbench UI. Should summarize key features or use cases.
Example:
LABEL com.nvidia.workbench.description="PyTorch 2.1 environment with CUDA 12.2 and JupyterLab"
schema-version#
Label: com.nvidia.workbench.schema-version
Required: Yes
Description: Indicates the version of the AI Workbench label schema. Must be set to “v2” for current versions of AI Workbench.
Example:
LABEL com.nvidia.workbench.schema-version="v2"
image-version#
Label: com.nvidia.workbench.image-version
Required: Yes
Description: Semantic version number for the base image. AI Workbench uses this to sort and order multiple versions of the same image in the UI and CLI. If not specified, image tags are string-sorted instead.
Example:
LABEL com.nvidia.workbench.image-version="1.0.5"
cuda-version#
Label: com.nvidia.workbench.cuda-version
Required: Yes
Description: Version of CUDA installed in the base image. AI Workbench uses this to verify that the host NVIDIA driver is compatible when GPUs are requested. Set to an empty string if CUDA is not installed.
Example with CUDA:
LABEL com.nvidia.workbench.cuda-version="12.2"
Example without CUDA:
LABEL com.nvidia.workbench.cuda-version=""
Operating System Labels#
os#
Label: com.nvidia.workbench.os
Recommended: Yes
Description: Operating system name. Typically “linux” for container images.
Example:
LABEL com.nvidia.workbench.os="linux"
os-distro#
Label: com.nvidia.workbench.os-distro
Recommended: Yes
Description: Operating system distribution name. Common values include “ubuntu”, “debian”, “centos”, “rocky”.
Example:
LABEL com.nvidia.workbench.os-distro="ubuntu"
os-distro-release#
Label: com.nvidia.workbench.os-distro-release
Recommended: Yes
Description: Operating system distribution version number.
Example:
LABEL com.nvidia.workbench.os-distro-release="22.04"
programming-languages#
Label: com.nvidia.workbench.programming-languages
Recommended: Yes
Description: Programming languages installed in the base image. Use space-separated values for multiple languages.
Example:
LABEL com.nvidia.workbench.programming-languages="python3"
User Management Labels#
AI Workbench automatically handles user management when building the project container. The behavior depends on whether user labels are present in the base image.
Without user labels: AI Workbench creates a default workbench user with UID/GID 1000 and sudo access.
With user labels: AI Workbench adjusts the specified user to UID 1000 while preserving the username. This is useful when the base image has an existing user with files and permissions already configured.
user.uid#
Label: com.nvidia.workbench.user.uid
Optional: Yes
Description: User ID of an existing user in the base image. AI Workbench will adjust this user to UID 1000 during project container build.
Example:
LABEL com.nvidia.workbench.user.uid="1001"
user.gid#
Label: com.nvidia.workbench.user.gid
Optional: Yes
Description: Group ID of an existing user in the base image.
Example:
LABEL com.nvidia.workbench.user.gid="1000"
user.username#
Label: com.nvidia.workbench.user.username
Optional: Yes
Description: Username of an existing user in the base image. This username is preserved when AI Workbench adjusts the user to UID 1000.
Example:
LABEL com.nvidia.workbench.user.username="rapids"
Complete user labels example:
# Preserve the rapids user from the base image
LABEL com.nvidia.workbench.user.uid="1001"
LABEL com.nvidia.workbench.user.gid="1000"
LABEL com.nvidia.workbench.user.username="rapids"
Package Manager Labels#
Package manager labels are essential for the AI Workbench package manager widget to function. These labels tell AI Workbench where package manager binaries are located and what packages are installed.
package-manager.<name>.binary#
Label: com.nvidia.workbench.package-manager.<name>.binary
Recommended: Yes
Description: Absolute path to the package manager binary. Replace <name> with the package manager name (apt, pip, conda, conda3, etc.). Required for the package manager widget to add packages.
Examples:
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.pip.binary="/usr/bin/pip"
LABEL com.nvidia.workbench.package-manager.conda.binary="/opt/conda/bin/conda"
package-manager.<name>.installed-packages#
Label: com.nvidia.workbench.package-manager.<name>.installed-packages
Recommended: Yes
Description: Space-separated list of packages installed by this package manager, used for display in the Desktop App. You can list just the key packages rather than every installed package.
Examples:
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs python3 vim"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.3.5 pandas numpy"
LABEL com.nvidia.workbench.package-manager.conda.installed-packages="pytorch cudatoolkit"
package-manager-environment.type#
Label: com.nvidia.workbench.package-manager-environment.type
Recommended for conda/venv: Yes
Description: Type of package manager environment. Valid values are “conda” or “venv”. Used when the base image has a virtual environment or Conda environment that should be activated before installing packages.
Example:
LABEL com.nvidia.workbench.package-manager-environment.type="conda"
package-manager-environment.target#
Label: com.nvidia.workbench.package-manager-environment.target
Recommended for conda/venv: Yes
Description: Absolute path to the package manager environment. For Conda, this is typically the Conda installation directory. For venv, this is the virtual environment directory.
Examples:
# For Conda
LABEL com.nvidia.workbench.package-manager-environment.target="/opt/conda"
# For Python venv
LABEL com.nvidia.workbench.package-manager-environment.target="/opt/venv"
Complete package manager example with Conda:
# Package managers
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git vim"
LABEL com.nvidia.workbench.package-manager.conda.binary="/opt/conda/bin/conda"
LABEL com.nvidia.workbench.package-manager.conda.installed-packages="python=3.11 pytorch cudatoolkit"
LABEL com.nvidia.workbench.package-manager.pip.binary="/opt/conda/bin/pip"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab"
# Conda environment
LABEL com.nvidia.workbench.package-manager-environment.type="conda"
LABEL com.nvidia.workbench.package-manager-environment.target="/opt/conda"
Application Labels#
Application labels integrate applications like JupyterLab, TensorBoard, or custom applications with the AI Workbench Application Launcher. These labels are completely optional but enable automatic application startup, health monitoring, and URL discovery.
All application labels follow the pattern com.nvidia.workbench.application.<application-name>.<property>.
Application Types#
AI Workbench supports three application classes:
- webapp: Web applications accessed through a browser (e.g., JupyterLab, TensorBoard)
- process: Background processes without a UI (e.g., training scripts, data processing)
- native: Applications that launch natively on the host (e.g., VS Code)
Common Application Labels#
These labels apply to all application types:
application.<name>.type#
Label: com.nvidia.workbench.application.<name>.type
Description: Type identifier for the application. Can be a standard type like “jupyterlab” or “tensorboard”, or a custom type name.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
LABEL com.nvidia.workbench.application.myapp.type="custom"
application.<name>.class#
Label: com.nvidia.workbench.application.<name>.class
Description: Application class. Valid values: “webapp”, “process”, “native”.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
application.<name>.start-cmd#
Label: com.nvidia.workbench.application.<name>.start-cmd
Description: Shell command to start the application. Must not be a blocking command.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port 8888 --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
application.<name>.stop-cmd#
Label: com.nvidia.workbench.application.<name>.stop-cmd
Description: Shell command to stop the application gracefully.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop 8888"
application.<name>.health-check-cmd#
Label: com.nvidia.workbench.application.<name>.health-check-cmd
Description: Shell command to check if the application is running and healthy. Should return 0 if healthy, non-zero otherwise.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \$(echo url=\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
application.<name>.timeout-seconds#
Label: com.nvidia.workbench.application.<name>.timeout-seconds
Description: Number of seconds to wait for health check to complete. Valid values are greater than 0 and less than 3600. Default is 60.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
application.<name>.user-msg#
Label: com.nvidia.workbench.application.<name>.user-msg
Description: Optional message displayed to the user when the application starts. For webapps, you can use the placeholder {{.URL}} which will be replaced with the application URL.
Example:
LABEL com.nvidia.workbench.application.myapp.user-msg="Application is running at {{.URL}}"
application.<name>.icon-url#
Label: com.nvidia.workbench.application.<name>.icon-url
Description: Optional URL to an icon image for the application.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.icon-url="https://example.com/jupyter-icon.png"
Webapp-Specific Labels#
These labels are required for applications with class="webapp":
application.<name>.webapp.port#
Label: com.nvidia.workbench.application.<name>.webapp.port
Description: Port number the web application listens on.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="8888"
application.<name>.webapp.autolaunch#
Label: com.nvidia.workbench.application.<name>.webapp.autolaunch
Description: Whether to automatically open the application URL in a browser. Valid values: “true” or “false”.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
application.<name>.webapp.url#
Label: com.nvidia.workbench.application.<name>.webapp.url
Description: Static URL for the application. Use this if the URL is predictable and doesn’t change.
Example:
LABEL com.nvidia.workbench.application.tensorboard.webapp.url="http://localhost:6006"
application.<name>.webapp.url-cmd#
Label: com.nvidia.workbench.application.<name>.webapp.url-cmd
Description: Shell command to discover the application URL dynamically. Use this when the URL changes (e.g., includes a token). The command output is treated as the URL.
Example:
LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
application.<name>.webapp.proxy.trim-prefix#
Label: com.nvidia.workbench.application.<name>.webapp.proxy.trim-prefix
Description: Whether the AI Workbench reverse proxy should remove the application-specific URL prefix before forwarding requests. Valid values: “true” or “false”.
Example:
LABEL com.nvidia.workbench.application.myapp.webapp.proxy.trim-prefix="true"
Process-Specific Labels#
These labels are for applications with class="process":
application.<name>.process.wait-until-finished#
Label: com.nvidia.workbench.application.<name>.process.wait-until-finished
Description: Whether the Desktop App should wait for the process to complete. If true, the app notifies you when the process finishes. The CLI always waits. Valid values: “true” or “false”.
Example:
LABEL com.nvidia.workbench.application.trainer.process.wait-until-finished="true"
Complete Application Examples#
JupyterLab webapp example:
# JupyterLab application
LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port 8888 --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop 8888"
LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \$(echo url=\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="8888"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
TensorBoard webapp example:
# TensorBoard application
LABEL com.nvidia.workbench.application.tensorboard.type="tensorboard"
LABEL com.nvidia.workbench.application.tensorboard.class="webapp"
LABEL com.nvidia.workbench.application.tensorboard.start-cmd="tensorboard --logdir \$TENSORBOARD_LOGS_DIRECTORY --path_prefix=\$PROXY_PREFIX --bind_all"
LABEL com.nvidia.workbench.application.tensorboard.health-check-cmd="[ \$(curl -o /dev/null -s -w '%{http_code}' http://localhost:\$TENSORBOARD_PORT\$PROXY_PREFIX/) == '200' ]"
LABEL com.nvidia.workbench.application.tensorboard.timeout-seconds="90"
LABEL com.nvidia.workbench.application.tensorboard.webapp.port="6006"
LABEL com.nvidia.workbench.application.tensorboard.webapp.autolaunch="true"
LABEL com.nvidia.workbench.application.tensorboard.webapp.url="http://localhost:6006"
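There is no complete example above for the process class, so here is a hypothetical one: a background training script launched from the Application Launcher. The application name (`trainer`), script path, and commands are all illustrative assumptions, not values from any real image:

```dockerfile
# Hypothetical background process application (not from an official image)
LABEL com.nvidia.workbench.application.trainer.type="custom"
LABEL com.nvidia.workbench.application.trainer.class="process"
# start-cmd must not block, so the script is backgrounded
LABEL com.nvidia.workbench.application.trainer.start-cmd="nohup python /project/code/train.py > /tmp/train.log 2>&1 &"
LABEL com.nvidia.workbench.application.trainer.stop-cmd="pkill -f train.py"
# pgrep exits 0 while the process is running, non-zero otherwise
LABEL com.nvidia.workbench.application.trainer.health-check-cmd="pgrep -f train.py"
LABEL com.nvidia.workbench.application.trainer.timeout-seconds="60"
LABEL com.nvidia.workbench.application.trainer.process.wait-until-finished="true"
```

With `wait-until-finished="true"`, the Desktop App notifies you when `train.py` exits rather than treating the launch as fire-and-forget.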
Display and Search Labels#
These labels control how the base image appears in the AI Workbench UI and enable search functionality.
labels#
Label: com.nvidia.workbench.labels
Optional: Yes
Description: Comma-separated list of search keywords or tags for the base image. Used for filtering and searching in the Desktop App.
Example:
LABEL com.nvidia.workbench.labels="cuda12.2, pytorch2.1, python3, jupyterlab"
icon#
Label: com.nvidia.workbench.icon
Optional: Yes
Description: URL to an icon image for the base image. Displayed in the Desktop App when selecting base images.
Example:
LABEL com.nvidia.workbench.icon="https://example.com/my-base-image-icon.png"
build-timestamp#
Label: com.nvidia.workbench.build-timestamp
Optional: Yes
Description: Timestamp of when the base image was built, in YYYYMMDDHHMMSS format. AI Workbench automatically updates this when building project containers, so you typically don’t need to set it manually.
Example:
LABEL com.nvidia.workbench.build-timestamp="20241109153045"
Container Behavior Labels#
entrypoint-script#
Label: com.nvidia.workbench.entrypoint-script
Optional: Yes
Description: Path to an entrypoint script in the base image that should be preserved. When building the project container, AI Workbench wraps this base entrypoint with its own entrypoint, ensuring both run correctly.
This is useful when your base image has initialization logic that needs to run on container startup.
Example:
LABEL com.nvidia.workbench.entrypoint-script="/home/rapids/entrypoint.sh"
How it works: AI Workbench sets the environment variable NVWB_BASE_ENV_ENTRYPOINT to this path in the project container, and its entrypoint wrapper calls it before executing the main command.
Registry Support#
AI Workbench supports pulling base images from the following container registries:
Supported Registries#
- NVIDIA GPU Cloud (NGC): nvcr.io
- Docker Hub: docker.io (default registry)
- GitHub Container Registry: ghcr.io
- GitLab Container Registry: registry.gitlab.com
- Self-hosted GitLab: Custom GitLab registry URLs
Authentication#
For private registries, configure authentication in AI Workbench before using the base image:
1. Open the AI Workbench Desktop App
2. Go to Settings (gear icon in top right)
3. Select Integrations
4. Connect to your registry:
   - NGC: Sign in with your NVIDIA account
   - GitHub: Authenticate with GitHub OAuth
   - GitLab: Authenticate with GitLab OAuth
   - Self-hosted GitLab: Add custom GitLab URL and authenticate
Once authenticated, AI Workbench can pull private images from that registry when creating projects.