Advanced Walkthrough: Use Your Own Container#
Before starting this walkthrough, we recommend completing the Tutorial Project, available in your AI Workbench interface. The Tutorial introduces key AI Workbench concepts, such as working with projects, environments, and containerized applications.
By default, NVIDIA AI Workbench provides a selection of Default Base Containers for your projects. However, in some cases, you might want to create your own custom base container for reuse.
To do this, you need to:

1. Build a custom container image that meets AI Workbench compatibility requirements.

2. Apply image labels to ensure proper integration with AI Workbench.

3. Publish the container to a registry for use across projects.
After publishing your custom container, you can create a new AI Workbench project using it as the environment.
In this walkthrough, you set up a simple, non-default container to use as a base environment for AI Workbench projects. You use the Morpheus container (nvcr.io/nvidia/morpheus/morpheus:23.11-runtime) from NVIDIA's NGC Catalog as the base for your custom container.
What You’ll Do in This Walkthrough#
In this walkthrough, you will:
Start Your Dockerfile – Create a Dockerfile for your custom environment.
Add Image Labels – Add AI Workbench-compatible labels to your container.
Completed Dockerfile – Review the full Dockerfile for your custom environment.
Build and Publish Your Container – Build the container image and publish it to a registry.
Access and Use Your Container – Use the custom container in a new AI Workbench project.
Start Your Dockerfile#
Open a new, empty Dockerfile and specify the container image and tag as follows. You see the full Dockerfile in a later step.
ARG BASE_IMAGE=nvcr.io/nvidia/morpheus/morpheus:23.11-runtime

FROM ${BASE_IMAGE} as base
ARG TARGETARCH

...
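Because the base image is declared as a build argument, you can point the same Dockerfile at a different base without editing it. A minimal sketch of overriding BASE_IMAGE at build time; the <registry>/<image>:<tag> placeholder and the morpheus-custom:dev tag are illustrative, not part of this walkthrough:

# Sketch only: override BASE_IMAGE to build from a different base image.
# <registry>/<image>:<tag> and morpheus-custom:dev are placeholders.
docker build \
    --build-arg BASE_IMAGE=<registry>/<image>:<tag> \
    -t morpheus-custom:dev \
    .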
Install any system-level dependencies you want in your base container, such as Git, Git LFS, and Vim.
...

ENV SHELL=/bin/bash

# Install system level dependencies
RUN apt update \
    && apt install -yqq --no-install-recommends \
    curl \
    git \
    git-lfs \
    vim \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

...
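If you want to confirm that these tools are present before going further, one option is to build the partial image and check them directly. A sketch, assuming you build from the directory containing the Dockerfile; the morpheus-custom:dev tag is only an example:

# Build the work-in-progress image under an example tag and check the tools.
docker build -t morpheus-custom:dev .
docker run --rm morpheus-custom:dev bash -c "git --version && git lfs version && vim --version | head -n 1"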
Add any applications that you want in your custom container environment. In this example, you add a JupyterLab IDE.
...

ENV SHELL=/bin/bash

# Add the Jupyter Port
ENV JUPYTER_PORT 8888

...

# System dependencies from the previous step

# Install jupyterlab
RUN pip install jupyterlab==4.1.2

# Disable the announcement banner
RUN /opt/conda/envs/morpheus/bin/jupyter labextension disable "@jupyterlab/apputils-extension:announcements"

...
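Before wiring JupyterLab into AI Workbench through image labels, you can check that it launches at all. A sketch, reusing the example morpheus-custom:dev tag and the conda environment path used elsewhere in this container:

# Start JupyterLab manually and browse to http://localhost:8888
docker run --rm -ti -p 8888:8888 morpheus-custom:dev \
    /opt/conda/envs/morpheus/bin/jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser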
Add Image Labels#
You must provide image labels for your container in the Dockerfile. Labels are metadata that give information about your container to AI Workbench. For details, see Use Your Own Container.
Container image labels typically fall into one of a few categories:
Fairly standardized and can essentially be copy-pasted. For example, schema-version and jupyterlab.

Fairly open-ended and up to user preference. For example, name and description.

Require some simple container introspection. For example:
CUDA Version Labels
# Run the container
nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

# Inside the container, check for the CUDA version. This container supports version 11.8.
(morpheus) root@6ba686c455a2:/workspace# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
(morpheus) root@6ba686c455a2:/workspace#
Package Manager Labels (apt)
# Run the container
nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

# Check for the apt binary. It is located at /usr/bin/apt in this container.
(morpheus) root@6ba686c455a2:/workspace# which apt
/usr/bin/apt
(morpheus) root@6ba686c455a2:/workspace#
Package Manager Labels (pip)
# Run the container
nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

# Check for the pip binary. It is located at /opt/conda/envs/morpheus/bin/pip.
(morpheus) root@6ba686c455a2:/workspace# which pip
/opt/conda/envs/morpheus/bin/pip
(morpheus) root@6ba686c455a2:/workspace#
Add the image labels to your Dockerfile.
...

ARG BUILD_TIMESTAMP

# Base environment labels
LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \\$(echo url=\\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port $JUPYTER_PORT --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\\\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="$JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop $JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
LABEL com.nvidia.workbench.build-timestamp="${BUILD_TIMESTAMP}"
LABEL com.nvidia.workbench.cuda-version="11.8"
LABEL com.nvidia.workbench.description="A Morpheus Base with CUDA 11.8"
LABEL com.nvidia.workbench.entrypoint-script=""
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.labels="cuda11.8"
LABEL com.nvidia.workbench.name="Morpheus with CUDA 11.8"
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="ubuntu"
LABEL com.nvidia.workbench.os-distro-release="22.04"
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs vim"
LABEL com.nvidia.workbench.package-manager.pip.binary="/opt/conda/envs/morpheus/bin/pip"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.1.2"
LABEL com.nvidia.workbench.programming-languages="python3"
LABEL com.nvidia.workbench.schema-version="v2"
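After a build, you can verify that the labels made it into the image metadata with docker inspect. A sketch, again using the example morpheus-custom:dev tag:

# Print the labels baked into the image as a JSON object.
docker inspect --format '{{json .Config.Labels}}' morpheus-custom:dev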
Completed Dockerfile#
The complete Dockerfile for making the NVIDIA Morpheus container compatible with AI Workbench now looks like the following.
ARG BASE_IMAGE=nvcr.io/nvidia/morpheus/morpheus:23.11-runtime

FROM ${BASE_IMAGE} as base
ARG TARGETARCH

ENV SHELL=/bin/bash
ENV JUPYTER_PORT 8888

# Install system level dependencies
RUN apt update \
    && apt install -yqq --no-install-recommends \
    curl \
    git \
    git-lfs \
    vim \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install jupyterlab
RUN pip install jupyterlab==4.1.2

# Disable the announcement banner
RUN /opt/conda/envs/morpheus/bin/jupyter labextension disable "@jupyterlab/apputils-extension:announcements"

ARG BUILD_TIMESTAMP

# Base environment labels
LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \\$(echo url=\\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port $JUPYTER_PORT --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\\\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="$JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop $JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
LABEL com.nvidia.workbench.build-timestamp="${BUILD_TIMESTAMP}"
LABEL com.nvidia.workbench.cuda-version="11.8"
LABEL com.nvidia.workbench.description="A Morpheus Base with CUDA 11.8"
LABEL com.nvidia.workbench.entrypoint-script=""
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.labels="cuda11.8"
LABEL com.nvidia.workbench.name="Morpheus with CUDA 11.8"
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="ubuntu"
LABEL com.nvidia.workbench.os-distro-release="22.04"
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs vim"
LABEL com.nvidia.workbench.package-manager.pip.binary="/opt/conda/envs/morpheus/bin/pip"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.1.2"
LABEL com.nvidia.workbench.programming-languages="python3"
LABEL com.nvidia.workbench.schema-version="v2"
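The Dockerfile declares a BUILD_TIMESTAMP build argument that feeds the com.nvidia.workbench.build-timestamp label. When you build the image in the next section, you can pass a value so the label is not empty. A sketch only; the timestamp format shown is an illustration, not a documented requirement:

# Pass a UTC timestamp so the build-timestamp label is populated (example tag).
cd /path/to/Dockerfile
docker build \
    --build-arg BUILD_TIMESTAMP="$(date -u +%Y%m%d%H%M%S)" \
    -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 \
    .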
Build and Publish Your Container#
Use the following procedure to build your container image and publish it to a container registry.
Note
The build command builds an image for the architecture of the computer you build on. For example, if you build the image on an Arm-based macOS computer, you cannot use it on an x86_64 (AMD64) Ubuntu computer.
Run the following commands. Specify your registry and namespace. You can edit the image name and tag.
cd /path/to/Dockerfile && docker build --network=host -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 .

docker push sample-registry.io/sample-namespace/morpheus-test-base:1.0
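If you need the image on more than one architecture, one option is a multi-platform build with Docker Buildx. This is a sketch only; it assumes Buildx is available and that the base image is published for every platform you target, and it reuses the example registry, namespace, and tag from above:

# Create a builder that supports multi-platform builds (one-time setup).
docker buildx create --use

# Build for both architectures and push directly to the registry.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 \
    --push \
    .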
Copy the image tag. Here, we assume you have published the container to the private registry location nvcr.io/nvidian/morpheus-test-base:1.0.
Access and Use Your Container#
Inside AI Workbench, open a location and then click New Project.
Specify a Name and Description, and then click Next.
Click Custom Container.
Enter the location of your desired container registry, image, and tag, and then click Create Project.
AI Workbench automatically starts the container build process, which can take a few minutes depending on the size of your custom container.
If you encounter a build error, there may be something wrong in your image labeling. Edit the autogenerated .project/spec.yaml to correct any errors in the build, and then transfer those changes to the container image labels of the Dockerfile to persist those fixes across all future projects using the container.
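If the project fails before the base image is even pulled, it is also worth confirming that the host running AI Workbench can authenticate to your registry and pull the image. A quick manual check with Docker, using the example private registry location from the previous section:

# Confirm that the image can be pulled from the registry (example location).
docker login nvcr.io
docker pull nvcr.io/nvidian/morpheus-test-base:1.0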
Next Steps#
Do another walkthrough
See how to work with example projects on GitHub
Explore the NVIDIA Developer Forum