Use Your Own Base Container

When you create a new Workbench Project, you typically choose one of the NVIDIA-provided base container environments available from the NVIDIA NGC Catalog.

If you want to use one of the pre-built containers and make simple customizations, such as adding a package, see Customize Your Environment Quickstart.

Use this documentation in the following cases:

  • You want to create a fully-custom container for reuse — If you want to create a fully-custom container that you can use for your own projects, or that you can publish and share with other AI Workbench users, see the section Create a fully-custom container for AI Workbench. This is an advanced scenario.

  • You want to customize the base container for a single project — If you want to change the behavior of the base container for a single project, see the section Customize a base container.

To create a fully-custom container to use as a base environment for AI Workbench projects, you build and expose the metadata for the image by creating Docker labels for your image.

Warning

Before you create a custom container for AI Workbench, understand the requirements for the container that you want to use.

An image can have multiple labels, however each key in a label must be unique. Labels in parent images are inherited but can be overridden in child images. For more information, see Docker object labels.

To allow for easy parsing of label keys and values, while avoiding inadvertent overriding of values, a strict label schema convention is defined. The convention is <domain-name>.<spec-field>, where domain-name is com.nvidia.workbench and spec-field is the base environment specification’s field name. For example:

  • com.nvidia.workbench.programming_languages

As the base image creator, it is your responsibility to gather the required information from the parent images and accumulate it while defining these labels. The string defined in the com.nvidia.workbench.image-version label is used to sort multiple images from newest to oldest. These strings represent the semantic versions of the image and are used to order image versions in the UI and CLI. If this label is not specified, the image tags are string-sorted to determine the display order.
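The effect of this ordering can be shown with a short sketch. This is not AI Workbench's actual sorting code; it only illustrates why the fallback string sort can misorder versions such as 1.10.0, while a semantic sort compares numeric components:

```python
# Illustration only: semantic sort of image-version strings vs. plain
# string sort of tags, newest first.
versions = ["1.9.0", "1.10.0", "2.0.1"]

def semver_key(version):
    """Split 'major.minor.patch' into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

string_sorted = sorted(versions, reverse=True)
semantic_sorted = sorted(versions, key=semver_key, reverse=True)

print(string_sorted)    # ['2.0.1', '1.9.0', '1.10.0'] -- 1.10.0 sorts last
print(semantic_sorted)  # ['2.0.1', '1.10.0', '1.9.0'] -- newest to oldest
```

The string sort places "1.10.0" after "1.9.0" because the character "1" compares less than "9", which is why specifying the image-version label is preferable to relying on tag names.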

Use the following information to create the Docker labels for your custom container image. For a description of each field, see Modify your spec.yaml file. Labels that include a <name> or <app-name> segment are repeated once per package manager or application.

  • com.nvidia.workbench.build-timestamp
    Example: com.nvidia.workbench.build-timestamp = "20221206090342"

  • com.nvidia.workbench.name
    Example: com.nvidia.workbench.name = "Pytorch with CUDA"

  • com.nvidia.workbench.cuda-version
    Example: com.nvidia.workbench.cuda-version = "11.2"

  • com.nvidia.workbench.description
    Example: com.nvidia.workbench.description = "A minimal Base containing Python 2.7 and JupyterLab."

  • com.nvidia.workbench.entrypoint-script
    Example: com.nvidia.workbench.entrypoint-script = "/home/workbench/entrypoint.sh"

  • com.nvidia.workbench.labels
    Example: com.nvidia.workbench.labels = "<comma separated list of labels>"

  • com.nvidia.workbench.programming-languages
    Example: com.nvidia.workbench.programming-languages = "python3"

  • com.nvidia.workbench.icon-url
    Example: com.nvidia.workbench.icon-url = "https://assets.nvidiagrid.net/ngc/logos/img.png"

  • com.nvidia.workbench.image-version
    Example: com.nvidia.workbench.image-version = "1.0.0"

  • com.nvidia.workbench.os
    Example: com.nvidia.workbench.os = "linux"

  • com.nvidia.workbench.os-distro
    Example: com.nvidia.workbench.os-distro = "ubuntu"

  • com.nvidia.workbench.os-distro-release
    Example: com.nvidia.workbench.os-distro-release = "16.04"

  • com.nvidia.workbench.schema-version
    Example: com.nvidia.workbench.schema-version = "v2"

  • com.nvidia.workbench.user.uid
    Example: com.nvidia.workbench.user.uid = "1001"

  • com.nvidia.workbench.user.gid
    Example: com.nvidia.workbench.user.gid = "1001"

  • com.nvidia.workbench.user.username
    Example: com.nvidia.workbench.user.username = "appuser"

  • com.nvidia.workbench.package-manager.<name>.binary
    Example: com.nvidia.workbench.package-manager.pip.binary = "/usr/local/bin/pip"

  • com.nvidia.workbench.package-manager.<name>.installed-packages
    Example: com.nvidia.workbench.package-manager.conda3.installed-packages = "scipy=1.3* sympy=1.4* numpy=1.16*"

  • com.nvidia.workbench.package-manager-environment.type
    Example: com.nvidia.workbench.package-manager-environment.type = "conda" (or "venv")

  • com.nvidia.workbench.package-manager-environment.target
    Example: com.nvidia.workbench.package-manager-environment.target = "/opt/conda"

  • com.nvidia.workbench.application.<app-name>.type
    Example: com.nvidia.workbench.application.jupyterlab.type = "jupyter"

  • com.nvidia.workbench.application.<app-name>.class
    Example: com.nvidia.workbench.application.jupyterlab.class = "webapp"

  • com.nvidia.workbench.application.<app-name>.start-cmd
    Example: com.nvidia.workbench.application.jupyterlab.start-cmd = "jupyter notebook --allow-root --port 8888 --ip 0.0.0.0 --no-browser"

  • com.nvidia.workbench.application.<app-name>.health-check-cmd
    Example: com.nvidia.workbench.application.jupyterlab.health-check-cmd = "<your command>"

  • com.nvidia.workbench.application.<app-name>.stop-command
    Example: com.nvidia.workbench.application.jupyterlab.stop-command = "jupyter notebook stop 8888"

  • com.nvidia.workbench.application.<app-name>.user-msg
    Example: com.nvidia.workbench.application.jupyterlab.user-msg = "Application {{.Name}} is running at {{.URL}}"

  • com.nvidia.workbench.application.<app-name>.icon-url
    Example: com.nvidia.workbench.application.jupyterlab.icon-url = "https://assets.nvidiagrid.net/ngc/logos/jupyterlab.png"

  • com.nvidia.workbench.application.<app-name>.webapp.autolaunch
    Example: com.nvidia.workbench.application.jupyterlab.webapp.autolaunch = true

  • com.nvidia.workbench.application.<app-name>.webapp.port
    Example: com.nvidia.workbench.application.jupyterlab.webapp.port = "8888"

  • com.nvidia.workbench.application.<app-name>.webapp.proxy.trim-prefix
    Example: com.nvidia.workbench.application.myapp.webapp.proxy.trim-prefix = true

  • com.nvidia.workbench.application.<app-name>.webapp.url
    Example: com.nvidia.workbench.application.jupyterlab.webapp.url = "http://localhost:6006"

  • com.nvidia.workbench.application.<app-name>.webapp.url-cmd
    Example: com.nvidia.workbench.application.jupyterlab.webapp.url-cmd = "jupyter notebook list | head -n 2 | tail -n 1 | cut -f1 -d' '"

  • com.nvidia.workbench.application.<app-name>.process.wait-until-finished
    Example: com.nvidia.workbench.application.jupyterlab.process.wait-until-finished = true
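These labels are applied with LABEL instructions in the Dockerfile for your image. The following sketch is illustrative only; the base image and every label value are placeholders, not a supported configuration:

```dockerfile
# Illustrative only: a custom base image that sets AI Workbench metadata.
# Labels inherited from the parent image can be overridden by re-declaring
# the same key here.
FROM ubuntu:22.04

LABEL com.nvidia.workbench.name="My Custom Base"
LABEL com.nvidia.workbench.schema-version="v2"
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="ubuntu"
LABEL com.nvidia.workbench.os-distro-release="22.04"
LABEL com.nvidia.workbench.programming-languages="python3"
```

After building, you can confirm the labels were applied, including any inherited from parent images, with docker inspect --format '{{json .Config.Labels}}' <image>.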

If you want to change the behavior of the base container for a single project, you can manually edit the metadata contained in the spec.yaml file. For an example spec.yaml file, see Example spec file.

First you modify your spec.yaml file, and then you rebuild your project environment.

Modify your spec.yaml file

To use your own container, navigate to the .project/spec.yaml file of your project, and scroll to the environment section. For an example spec.yaml file, see Example spec file.

Use the following list to determine what changes you must make to your spec file.

  • Change Required — Changes that must occur to customize the project container. Without these changes, your project does not build correctly.

  • Change Recommended — Best-practice changes that you should make to the project container. These are fields that are used by the NVIDIA AI Workbench client software. Your project might still build, and components might still function; however, your experience working on the project inside the AI Workbench UI might be negatively impacted.

  • Change Optional — Change these fields only if they are relevant to your project.

Use the following information to edit your spec file.

  • registry (Required)
    The container registry containing the base image. AI Workbench uses this as part of the container build process. Specify the container registry for your container of interest; if you only have the Dockerfile, you might need to build the container and push it to a registry first.
    Example: registry: nvcr.io

  • image (Required)
    The container image on top of which the project container is built. Specify the container image (and tag, if any) for your container of interest. Include the namespace if needed, but do not include the registry.
    Example: image: nvidia/pytorch:23.12-py3

  • build_timestamp (Optional)
    A timestamp of the last image build in Y/m/d/H/M/S format. No manual updates are necessary; AI Workbench updates this field when you build your environment.
    Example: build_timestamp: "20231212000523"

  • name (Required)
    The name of the base container. Specify a name for the container; AI Workbench displays the old container information in the UI if this is left unchanged.
    Example: name: PyTorch

  • supported_architectures (Recommended)
    A list of architectures that the base environment image is built to be compatible with, used for metadata purposes. Specify the supported architectures for your container, or leave as an empty list if not applicable.
    Example:

        supported_architectures:
          - "amd64"
          - "arm64"

  • cuda_version (Required)
    The version of CUDA installed in the base environment, if applicable. Specify the version of CUDA installed in this container, or leave blank if not applicable. AI Workbench matches drivers incorrectly if this is not updated.
    Example: cuda_version: "12.2"

  • description (Required)
    A description of the base container. Specify a brief, informative description; AI Workbench displays the old container information in the UI if this is left unchanged.
    Example: description: A Pytorch 2.1 Base with CUDA 12.2

  • entrypoint_script (Optional)
    The path to a script that runs when the project container starts. If your project requires a custom action at container startup, specify the location of the entrypoint script here.
    Example: entrypoint_script: /path/to/script.sh

  • labels (Recommended)
    A list of labels for the base environment. Consider these search-term keywords or descriptors to attach to your container.
    Example:

        labels:
          - cuda12.2
          - pytorch2.1

  • apps (Recommended)
    A list of objects describing the applications installed in the base environment. For details, see Specify apps in your container.
    Example:

        apps:
          - name: jupyterlab
            type: jupyterlab
            ...

  • programming_languages (Recommended)
    A list of programming languages installed in the base environment, used for metadata purposes. Specify the languages installed in this container, or leave blank if not applicable.
    Example:

        programming_languages:
          - python3

  • icon_url (Optional)
    A link to the icon or image for the base environment. If you want AI Workbench to display an icon for this base environment, specify the URL of the icon image here, or leave it blank for defaults.
    Example: icon_url: https://my-website.com/my-image.png

  • image_version (Recommended)
    The version number of the container image, used by AI Workbench for metadata purposes. Specify the version number of the container image, if any.
    Example: image_version: 1.0.3

  • os (Recommended)
    The name of the base environment operating system, used by AI Workbench for metadata purposes.
    Example: os: linux

  • os_distro (Recommended)
    The name of the base environment operating system distribution, used by AI Workbench for metadata purposes.
    Example: os_distro: ubuntu

  • os_distro_release (Recommended)
    The release version of the base environment operating system distribution, used by AI Workbench for metadata purposes.
    Example: os_distro_release: "22.04"

  • schema_version (Optional)
    The version of the base environment container label schema currently read by AI Workbench. There is no need to update this field; however, if it is incorrect, the project breaks.
    Example: schema_version: v2

  • user_info (Recommended)
    Information about the user that the container processes should run as. AI Workbench automatically provisions a user when you run the container, but this field overrides that user. For example, depending on the base container, you might need to run the project as the root user: set uid to "0", gid to "0", and username to "root" to force the container to run as root.
    Example:

        user_info:
          uid: "0"
          gid: "0"
          username: "root"

  • package_managers (Recommended)
    A list of objects with information about the installed package managers. AI Workbench uses this information to track package installation in the base container image. For each package manager in the container, specify its name, the complete path to the manager binary, and a list of the packages it has installed. If this remains unchanged, the package manager widget likely does not work, especially if you are working with a venv or conda environment.
    Example:

        package_managers:
          - name: conda
            binary_path: /opt/conda/bin/conda
            installed_packages:
              - python=3.9.18
              - pip
          - name: apt
            binary_path: /usr/bin/apt
            installed_packages:
              - ca-certificates
              - curl
          - name: pip
            binary_path: /opt/conda/bin/pip
            installed_packages: []
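As a sanity check before rebuilding, the Required fields above can be verified programmatically. This is a minimal sketch, not an AI Workbench feature; the base dict is a hand-written stand-in for the environment.base mapping you would get by loading .project/spec.yaml with a YAML parser:

```python
# Hypothetical helper: report Required fields that are absent or empty in
# the environment.base mapping of a spec.yaml. cuda_version is omitted
# because the spec allows leaving it blank when CUDA is not applicable.
REQUIRED_FIELDS = ["registry", "image", "name", "description"]

def missing_required(base):
    """Return the Required fields that are missing or empty, in order."""
    return [field for field in REQUIRED_FIELDS if not base.get(field)]

# Stand-in for a parsed spec; values come from the examples above.
base = {
    "registry": "nvcr.io",
    "image": "nvidia/pytorch:23.12-py3",
    "name": "PyTorch",
    "description": "A Pytorch 2.1 Base with CUDA 12.2",
}
print(missing_required(base))                     # []
print(missing_required({"registry": "nvcr.io"}))  # ['image', 'name', 'description']
```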

Specify apps in your container

If your custom base container has applications pre-installed, use the following information to specify them in your spec file.

  • name
    The name of the application. This name appears in the user interface.
    Example: name: jupyterlab

  • type
    The type of application, used to determine what application-specific automation runs.
    Example: type: jupyterlab

  • class
    The class of application, used to determine what optional configuration options are available, e.g. webapp, process, or native.
    Example: class: webapp

  • start_command
    The shell command used to start the application. Must not be a blocking command.
    Example: start_command: jupyter lab --allow-root --port 8888 --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'

  • health_check_command
    The shell command used to check the health or status of the application. A return code of zero means the application is running and healthy; a non-zero return code means the application is not running or is unhealthy.
    Example: health_check_command: '[ \$(echo url=\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d'' '' | grep -v ''Currently'' | sed "s@/?@/lab?@g") | curl -o /dev/null -s -w ''%{http_code}'' --config -) == ''200'' ]'

  • stop_command
    The shell command used to stop the application.
    Example: stop_command: jupyter lab stop 8888

  • user_msg
    An optional message that appears to the user when the application is running.
    Example: user_msg: ""

  • icon_url
    An optional link to the icon or image used for the application.
    Example: icon_url: ""

  • webapp_options
    If class is specified as webapp, the following options are available:
      • autolaunch - True if AI Workbench should automatically open the application URL for the user; otherwise, false.
      • port - The port that the application runs on.
      • proxy - If specified, includes trim_prefix: true if the AI Workbench reverse proxy should remove the application-specific URL prefix before forwarding the request to the application; otherwise, false.
      • url - The static URL used to access the application, or
      • url_command - The shell command used to get the URL for the application. The output from this command is treated as the URL.

    Example with url_command:

        webapp_options:
          autolaunch: true
          port: "8888"
          proxy:
            trim_prefix: false
          url_command: <your command>

    Example with a static url:

        webapp_options:
          autolaunch: true
          port: "6006"
          proxy:
            trim_prefix: false
          url: http://localhost:6006
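The health_check_command contract described above (a zero exit status means healthy) can be sketched as follows. This is an illustration of the convention, not AI Workbench's actual polling code; the shell commands true and false stand in for a real health check:

```python
import subprocess

def is_healthy(health_check_command):
    """Run the app's health check; exit status 0 means running and healthy."""
    result = subprocess.run(health_check_command, shell=True)
    return result.returncode == 0

print(is_healthy("true"))   # exit status 0 -> healthy
print(is_healthy("false"))  # non-zero exit status -> not running or unhealthy
```

When writing your own health_check_command, make sure it terminates quickly and reflects its result purely through the exit status, as in the curl-based JupyterLab example above.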

Rebuild your project environment

Whenever you directly change the spec.yaml file, you must perform a complete rebuild of the project container before you can access the changes.

Rebuild by using the desktop application

To rebuild your environment by using the AI Workbench desktop application, do the following.

  1. In AI Workbench, on the Environment page, if your environment is running, click Stop Environment.

    Wait until you see Build Required next to Environment.

  2. Click Start Build.

    The project builds. Wait until you see Build Ready in the status bar.

  3. Click Start Environment.

Rebuild by using the CLI

To rebuild your environment by using the AI Workbench CLI, do the following.

  1. Run the following command to stop your container environment. If the container is already stopped, a message appears that the container is not running; continue to the next step.


    nvwb stop --container

  2. Run the following command to rebuild your project.


    nvwb build

    The project builds. Wait until you see Container build complete, and then continue to the next step.

  3. Run the following command to start your container environment.


    nvwb start --container

Example spec file

The following is one example of a spec file for a Workbench Project.


specVersion: ...
meta: ...
layout: ...
environment:
  base:
    registry: nvcr.io
    image: vjrv0zpbsw9c/internal/pytorch:1.0.3
    build_timestamp: "20231212000523"
    name: PyTorch
    supported_architectures: []
    cuda_version: "12.2"
    description: A Pytorch 2.1 Base with CUDA 12.2
    entrypoint_script: ""
    labels:
      - cuda12.2
      - pytorch2.1
    apps:
      - name: jupyterlab
        type: jupyterlab
        class: webapp
        start_command: jupyter lab --allow-root --port 8888 --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'
        health_check_command: '[ \$(echo url=\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d'' '' | grep -v ''Currently'' | sed "s@/?@/lab?@g") | curl -o /dev/null -s -w ''%{http_code}'' --config -) == ''200'' ]'
        stop_command: jupyter lab stop 8888
        user_msg: ""
        icon_url: ""
        webapp_options:
          autolaunch: true
          port: "8888"
          proxy:
            trim_prefix: false
          url_command: jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'
      - name: tensorboard
        type: tensorboard
        class: webapp
        start_command: tensorboard --logdir \$TENSORBOARD_LOGS_DIRECTORY --path_prefix=\$PROXY_PREFIX --bind_all
        health_check_command: '[ \$(curl -o /dev/null -s -w ''%{http_code}'' http://localhost:\$TENSORBOARD_PORT\$PROXY_PREFIX/) == ''200'' ]'
        stop_command: ""
        user_msg: ""
        icon_url: ""
        webapp_options:
          autolaunch: true
          port: "6006"
          proxy:
            trim_prefix: false
          url: http://localhost:6006
    programming_languages:
      - python3
    icon_url: ""
    image_version: 1.0.3
    os: linux
    os_distro: ubuntu
    os_distro_release: "22.04"
    schema_version: v2
    user_info:
      uid: ""
      gid: ""
      username: ""
    package_managers:
      - name: apt
        binary_path: /usr/bin/apt
        installed_packages:
          - curl
          - git
          - git-lfs
          - vim
      - name: pip
        binary_path: /usr/local/bin/pip
        installed_packages:
          - jupyterlab==4.0.7
    package_manager_environment:
      name: ""
      target: ""
execution: ...

© 2024, NVIDIA Corporation. Last updated on Jun 10, 2024.