Use a Custom Container Image#

Overview#

You can use any public Debian image as the starting point for a project environment.

NVIDIA provides default base images, but you can use other images through the Custom Container feature. This feature only works with images that include specific AI Workbench metadata fields.

You must attach the metadata to the image through “image labels”.

If an image lacks those fields, you can add them by rebuilding it with a containerfile. Then you push the container to a registry and use it when you create a new project.

You don’t need to add every possible image label, but the more labels you add, the more AI Workbench can do with the image.

AI Workbench has a small set of required image labels just to pull and build a base image. The recommended and optional labels let AI Workbench add to and manage the base image.

  • Required: Pull and build the labeled image

  • Recommended environment: Enables adding image layers through scripts and package lists

  • Recommended application: Enables managing applications already in the image

  • Optional environment: Enables package-manager environments such as conda and Python venv

Key Concepts#

Container Registry

A service that stores and distributes container images. Examples include NVIDIA NGC, GitHub Container Registry, and Docker Hub. AI Workbench downloads base images from a registry.

Image Labels

Metadata key-value pairs added to a container image with the LABEL command. AI Workbench reads these labels to know what is in the base image.
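To see which labels an image already carries, you can run `docker inspect --format '{{ json .Config.Labels }}' <image:tag>`. A sketch of checking that result for a required label, using a hypothetical JSON excerpt in place of real `docker inspect` output:

```shell
# Hypothetical excerpt of the label JSON returned by docker inspect
labels='{"com.nvidia.workbench.name":"my-image","com.nvidia.workbench.schema-version":"v2"}'

# Check that a required AI Workbench label is present
echo "$labels" | grep -o '"com.nvidia.workbench.schema-version":"v2"'
```

If the grep prints nothing, the image needs to be rebuilt with the missing labels.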

Containerfile or Dockerfile

A script that provides instructions to build a container image. You use it to add labels with the LABEL command.
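For example, a minimal Containerfile that adds one label to an existing image (the image name and label value are placeholders):

```dockerfile
FROM registry.example.com/namespace/image:tag
LABEL com.nvidia.workbench.name="my-custom-image"
```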

Use a Custom Container#

Prerequisites: A URL for a public container image with at least the required image labels.

If the image lacks these labels, AI Workbench cannot pull and build it.

Step One: Create a new project.
  1. Select Location Manager > Location Card

  2. Select Location Window > New Project

Step Two: Fill in project details.
  1. Enter New Project > Name (lowercase alphanumeric and hyphens)

  2. (optional) Enter New Project > Description

  3. (optional) Alter the default path New Project > Path

  4. Click New Project > Next

Step Three: Select your custom container.
  1. Select Custom Container

  2. Enter the full container URL: registry.example.com/namespace/image:labeled

  3. Select Create

Success: The project tab opens and the container starts building.

Custom Build Labels#

Step One: Pull the unlabeled image and start a terminal in the container.
  1. Pull the image from the registry and namespace with the command:

    docker pull <registry.example.com/namespace/image:tag>
    
  2. Start a bash shell in a container from the image with the command:

    docker run -it --rm <registry.example.com/namespace/image:tag> bash
    
Step Two: Collect Linux distribution and environment information from the container.
  1. Find the distribution and release number with the command:

    cat /etc/os-release
    
  2. Verify Python is installed in the container with the command:

    python3 --version
    
  3. Find the path to pip with the command:

    which pip3
    
  4. Find the path to apt with the command:

    which apt
    
  5. Detect the installed CUDA version with the command:

    nvcc --version
    
  6. Record the values as <distro>, <distro-release>, <pip-path>, <apt-path> and <cuda-version> for use in the Dockerfile
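The probes above can be sketched as a single script run inside the container (a sketch, assuming a standard `/etc/os-release` as on Debian/Ubuntu images; the `not-installed` fallback text is my addition):

```shell
# Collect the values needed for the AI Workbench image labels
. /etc/os-release
echo "distro=$ID"                  # maps to <distro>
echo "distro-release=$VERSION_ID"  # maps to <distro-release>
echo "pip-path=$(command -v pip3 || echo not-installed)"
echo "apt-path=$(command -v apt || echo not-installed)"
# Prints e.g. "release 12.2" when CUDA is installed, empty otherwise
echo "cuda-version=$(nvcc --version 2>/dev/null | grep -o 'release [0-9.]*')"
```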

Step Three: Create the Dockerfile.
  1. Exit the container shell to return to your local system

  2. Create a local folder and create an empty Dockerfile in it with the commands:

    mkdir <your-folder-name>
    cd <your-folder-name>
    touch Dockerfile
    
Step Four: Enter the base image, required labels and the recommended labels in the Dockerfile.
# Reference to original image
FROM <image:tag>

# Required Labels - Minimum for AI Workbench to pull the base image.
LABEL com.nvidia.workbench.name="<your-short-container-name>"
LABEL com.nvidia.workbench.description="<your-short-container-description>"
LABEL com.nvidia.workbench.schema-version="v2"
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="<distro>"
LABEL com.nvidia.workbench.os-distro-release="<distro-release>"

# Enter the CUDA version if CUDA is installed, or "" if it is not.
LABEL com.nvidia.workbench.cuda-version="<cuda-version>"

# Minimum recommended labels for managed build

# Recommended package manager labels - Needed for pip/apt installs during build and the package manager UI
LABEL com.nvidia.workbench.package-manager.pip.binary="<pip-path>"
LABEL com.nvidia.workbench.package-manager.apt.binary="<apt-path>"

# Recommended programming language labels - Needed to recognize *requirements.txt* files and other build inputs
LABEL com.nvidia.workbench.programming-languages="python3"
Step Five: Rebuild the container using the Dockerfile.
docker build --provenance=false -t <image:tag> .
Step Six: Tag the rebuilt image and push it to the public repository.
  1. Add a tag to indicate the image is labeled with the command:

    docker tag <image:tag> <registry.example.com/namespace/image>:labeled
    
  2. Push the image to the repository with the command:

    docker push <registry.example.com/namespace/image>:labeled
    

Application Labels#

Add these labels to make an application that already exists in the image available in the Application Launcher.

You don’t need to add this information to the image because you can configure it through the Desktop App after the project is created. However, image labels ensure the application is set up automatically each time you use the image, with no manual configuration.

Step One: Verify the application is in the image and add the basic application labels.
  1. Specify the application name, type, and class

  2. Add the command to start the application

    # Application - JupyterLab Example
    LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
    LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
    LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port 8888 --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
    
Step Two: Add webapp-specific labels.
  1. For web applications, specify the port and URL configuration

  2. Set autolaunch to true if the app should open automatically

    # Webapp Configuration
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="8888"
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
    
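The `url-cmd` pipeline above extracts the server URL from the output of `jupyter lab list`. You can trace it on a hypothetical sample of that output:

```shell
# Hypothetical two-line output of `jupyter lab list`
sample='Currently running servers:
http://0.0.0.0:8888/?token=abc123 :: /project'

# Same pipeline as the url-cmd label: keep line 2, take the first
# space-separated field, and drop any line containing "Currently"
echo "$sample" | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'
```

With this sample, the pipeline prints `http://0.0.0.0:8888/?token=abc123`, which AI Workbench uses as the application URL.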
Step Three: (Optional) Add health check and stop commands.
  1. Add a command to verify the application is running

  2. Add a command to stop the application gracefully

    # Health Check and Stop Command
    LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \$(echo url=\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
    LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop 8888"
    LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
    
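The health check rewrites the listed URL to point at `/lab` before requesting it with curl and asserting an HTTP 200. The `sed` rewrite can be traced on a hypothetical URL:

```shell
# Rewrite "/?" to "/lab?" exactly as the health-check-cmd does
echo "http://0.0.0.0:8888/?token=abc" | sed "s@/?@/lab?@g"
```

With this input, the command prints `http://0.0.0.0:8888/lab?token=abc`.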

Success: Your application will appear in the AI Workbench Application Launcher.

Optional Labels#

Add these labels to record additional details about the image, such as an existing user, notable installed packages, or a package-manager environment.

You don’t need to add this information to the image because you can configure it through the Desktop App after the project is created. However, image labels ensure the environment is set up consistently each time you use the image.

(optional) Add user information labels to use a specific user during the build or at runtime.
  1. Add these labels to the Dockerfile if there is a user you want available in the running container

    # Optional User Information - Only needed if your base image has an existing user to preserve
    LABEL com.nvidia.workbench.user.uid="1001"
    LABEL com.nvidia.workbench.user.gid="1000"
    LABEL com.nvidia.workbench.user.username="<user-name>"
    
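If you aren’t sure which values to use, you can discover them by running `id` inside the container as the user you want to preserve:

```shell
# Inside the container, as the user you want to preserve:
id -un   # username for the username label
id -u    # numeric uid for the uid label
id -g    # numeric gid for the gid label
```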
(optional) Add notable packages already installed by a package manager.
  1. Add these labels so the packages are displayed in the project environment in the Desktop App

    # Optional Package Lists - Broken down by package manager
    LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs python3 gcc python3-dev python3-pip vim"
    LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.3.5"
    
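The pip label value is a single space-separated string. A sketch of producing it from `pip3 list --format=freeze` output, shown here on a hypothetical two-package sample:

```shell
# Hypothetical `pip3 list --format=freeze` output
freeze='jupyterlab==4.3.5
numpy==1.26.4'

# Collapse to the single-line, space-separated form the label expects
echo "$freeze" | tr '\n' ' '
```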

(optional) Add package-manager-environment labels for conda or venv environments.
  1. Add these labels if your container uses a conda environment or a Python virtual environment

# For Conda environments
LABEL com.nvidia.workbench.package-manager-environment.type="conda"
LABEL com.nvidia.workbench.package-manager-environment.target="/opt/conda"

# For Python venv
LABEL com.nvidia.workbench.package-manager-environment.type="venv"
LABEL com.nvidia.workbench.package-manager-environment.target="/opt/venv"
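The `target` label should point at the environment's install prefix. One way to find that prefix inside the container (a sketch, assuming the environment is active and `python3` is on the PATH):

```shell
# Print the active environment's prefix, e.g. /opt/conda or /opt/venv
python3 -c "import sys; print(sys.prefix)"
```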