User Guide (Latest)

Use Your Own Base Container

When you create a new AI Workbench project, you typically choose one of the NVIDIA-provided base container environments available from the NVIDIA NGC Catalog.

Use this documentation when you want to create a fully custom container that you can use for your own projects, or that you can publish and share with other AI Workbench users. This is an advanced scenario. For documentation that walks you through this process, see Advanced Walkthrough: Use Your Own Container.

Note

If you want to use one of the pre-built containers and make simple customizations, such as adding packages, see Walkthrough: Customize Your Environment and Environment Configuration instead.

If you want to change the behavior of the base container for a single project, see Customize Your Container instead.


To create a fully custom container to use as a base environment for AI Workbench projects, you build the image and expose its metadata by defining Docker labels on it.

Warning

Before you create a custom container for AI Workbench, understand the requirements for the container that you want to use.

An image can have multiple labels; however, each key in a label must be unique. Labels in parent images are inherited but can be overridden in child images. For more information, see Docker object labels.

To enable parsing of label keys and values, while avoiding inadvertent overriding of values, a strict label schema convention is defined. The convention is <domain-name>.<spec-field>, where domain-name is com.nvidia.workbench and spec-field is the base environment specification’s field name. For example:

  • com.nvidia.workbench.programming-languages
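For example, a Dockerfile can declare labels that follow this convention with the LABEL instruction. The values below are illustrative only; replace them with the actual properties of your image:

```dockerfile
# Hypothetical metadata values -- substitute the real properties of your image.
LABEL com.nvidia.workbench.name="My Custom Base" \
      com.nvidia.workbench.programming-languages="python3" \
      com.nvidia.workbench.image-version="1.0.0"
```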

As the base image creator, you are responsible for gathering the required information from the parent images and accumulating it when you define these labels. The string defined in the com.nvidia.workbench.image-version label represents the semantic version of the image and is used to sort multiple images from newest to oldest in the UI and CLI. If this label is not specified, the image tags are string-sorted to determine the display order.
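To see why semantic-version sorting matters, compare a plain string sort with GNU sort's version-aware sort. This is only a sketch of the ordering difference, not the AI Workbench implementation:

```shell
# String sort compares character by character, so "10.0.0" lands before "3.0.0".
printf '3.0.0\n10.0.0\n1.2.0\n' | sort
# Output: 1.2.0, 10.0.0, 3.0.0

# Version-aware sort compares numeric components, so "10.0.0" is newest.
printf '3.0.0\n10.0.0\n1.2.0\n' | sort -V
# Output: 1.2.0, 3.0.0, 10.0.0
```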

After you apply labels to your image, you can publish it to a supported container registry. After your image is published, you can create a new project and specify your image on the Custom Container tab.

Use the information in the following table to create the Docker labels for your custom container image. For a description of each field, see AI Workbench Project Spec Definition.

In the label keys below, <name> is a placeholder for the package manager or application that the label describes, such as pip or jupyterlab.

com.nvidia.workbench.build-timestamp
  Example: com.nvidia.workbench.build-timestamp = "20221206090342"
com.nvidia.workbench.name
  Example: com.nvidia.workbench.name = "Pytorch with CUDA"
com.nvidia.workbench.cuda-version
  Example: com.nvidia.workbench.cuda-version = "11.2"
com.nvidia.workbench.description
  Example: com.nvidia.workbench.description = "A minimal Base containing Python 2.7 and JupyterLab."
com.nvidia.workbench.entrypoint-script
  Example: com.nvidia.workbench.entrypoint-script = "/home/workbench/entrypoint.sh"
com.nvidia.workbench.labels
  Example: com.nvidia.workbench.labels = "<comma separated list of labels>"
com.nvidia.workbench.programming-languages
  Example: com.nvidia.workbench.programming-languages = "python3"
com.nvidia.workbench.icon-url
  Example: com.nvidia.workbench.icon-url = "https://assets.nvidiagrid.net/ngc/logos/img.png"
com.nvidia.workbench.image-version
  Example: com.nvidia.workbench.image-version = "1.0.0"
com.nvidia.workbench.os
  Example: com.nvidia.workbench.os = "linux"
com.nvidia.workbench.os-distro
  Example: com.nvidia.workbench.os-distro = "ubuntu"
com.nvidia.workbench.os-distro-release
  Example: com.nvidia.workbench.os-distro-release = "16.04"
com.nvidia.workbench.schema-version
  Example: com.nvidia.workbench.schema-version = "v2"
com.nvidia.workbench.user.uid
  Example: com.nvidia.workbench.user.uid = "1001"
com.nvidia.workbench.user.gid
  Example: com.nvidia.workbench.user.gid = "1001"
com.nvidia.workbench.user.username
  Example: com.nvidia.workbench.user.username = "appuser"
com.nvidia.workbench.package-manager.<name>.binary
  Example: com.nvidia.workbench.package-manager.pip.binary = "/usr/local/bin/pip"
com.nvidia.workbench.package-manager.<name>.installed-packages
  Example: com.nvidia.workbench.package-manager.pip.installed-packages = "jupyterlab==4.1.2"
com.nvidia.workbench.package-manager-environment.type
  Example: com.nvidia.workbench.package-manager-environment.type = "conda" or "venv"
com.nvidia.workbench.package-manager-environment.target
  Example: com.nvidia.workbench.package-manager-environment.target = "/opt/conda"
com.nvidia.workbench.application.<name>.type
  Example: com.nvidia.workbench.application.jupyterlab.type = "jupyter"
com.nvidia.workbench.application.<name>.class
  Example: com.nvidia.workbench.application.jupyterlab.class = "webapp"
com.nvidia.workbench.application.<name>.start-cmd
  Example: com.nvidia.workbench.application.jupyterlab.start-cmd = "jupyter notebook --allow-root --port 8888 --ip 0.0.0.0 --no-browser"
com.nvidia.workbench.application.<name>.health-check-cmd
  Example: com.nvidia.workbench.application.jupyterlab.health-check-cmd = "<your command>"
com.nvidia.workbench.application.<name>.timeout-seconds
  Example: com.nvidia.workbench.application.jupyterlab.timeout-seconds = "90"
com.nvidia.workbench.application.<name>.stop-command
  Example: com.nvidia.workbench.application.jupyterlab.stop-command = "jupyter notebook stop 8888"
com.nvidia.workbench.application.<name>.user-msg
  Example: com.nvidia.workbench.application.jupyterlab.user-msg = "Application {{.Name}} is running at {{.URL}}"
com.nvidia.workbench.application.<name>.icon-url
  Example: com.nvidia.workbench.application.jupyterlab.icon-url = "https://assets.nvidiagrid.net/ngc/logos/jupyterlab.png"
com.nvidia.workbench.application.<name>.webapp.autolaunch
  Example: com.nvidia.workbench.application.jupyterlab.webapp.autolaunch = true
com.nvidia.workbench.application.<name>.webapp.port
  Example: com.nvidia.workbench.application.jupyterlab.webapp.port = "8888"
com.nvidia.workbench.application.<name>.webapp.proxy.trim-prefix
  Example: com.nvidia.workbench.application.myapp.webapp.proxy.trim-prefix = true
com.nvidia.workbench.application.<name>.webapp.url
  Example: com.nvidia.workbench.application.jupyterlab.webapp.url = "http://localhost:6006"
com.nvidia.workbench.application.<name>.webapp.url-cmd
  Example: com.nvidia.workbench.application.jupyterlab.webapp.url-cmd = "jupyter notebook list | head -n 2 | tail -n 1 | cut -f1 -d' '"
com.nvidia.workbench.application.<name>.process.wait-until-finished
  Example: com.nvidia.workbench.application.jupyterlab.process.wait-until-finished = true
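Putting several of these labels together, a base image Dockerfile might look like the following sketch. All names and values below are hypothetical; adjust them to describe your actual image:

```dockerfile
# Hypothetical base image metadata for an Ubuntu image that serves JupyterLab.
FROM ubuntu:22.04

LABEL com.nvidia.workbench.schema-version="v2" \
      com.nvidia.workbench.name="Minimal Python Base" \
      com.nvidia.workbench.image-version="1.0.0" \
      com.nvidia.workbench.os="linux" \
      com.nvidia.workbench.os-distro="ubuntu" \
      com.nvidia.workbench.os-distro-release="22.04" \
      com.nvidia.workbench.programming-languages="python3" \
      com.nvidia.workbench.application.jupyterlab.type="jupyter" \
      com.nvidia.workbench.application.jupyterlab.class="webapp" \
      com.nvidia.workbench.application.jupyterlab.webapp.port="8888"
```

After building, you can confirm that the labels were applied with docker inspect --format '{{json .Config.Labels}}' <image>.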
© Copyright 2024, NVIDIA Corporation. Last updated on Sep 17, 2024.