User Guide (Latest)

Advanced Walkthrough: Use Your Own Container

NVIDIA AI Workbench provides you with a selection of Default Base Containers for your projects. In some cases, you might want to create your own custom base container environment for reuse.

To create a custom base environment to use in your AI Workbench projects, create a container image that meets the minimum requirements to be compatible with AI Workbench. You apply image labels to your container image and publish it to a container registry for access by you and other users. After your container is published, you can create a new project that uses your custom image as the environment for your project.

In this walkthrough, you set up a simple, non-default container to use as a base environment for AI Workbench projects. You use the Morpheus container (nvcr.io/nvidia/morpheus/morpheus:23.11-runtime) from NVIDIA’s NGC Catalog as the base for your custom container.

In this walkthrough, you perform the following tasks:

  1. Start Your Dockerfile

  2. Add Image Labels

  3. Completed Dockerfile

  4. Build and Publish Your Container

  5. Access and Use Your Container

Start Your Dockerfile

  1. Open a new, empty Dockerfile and specify the container image and tag as follows. You see the full Dockerfile in a later step.

    ARG BASE_IMAGE=nvcr.io/nvidia/morpheus/morpheus:23.11-runtime
    FROM ${BASE_IMAGE} as base

    ARG TARGETARCH
    ...

  2. Install any system level dependencies you want in your base container, such as Git, Git LFS, and Vim.

    ...
    ENV SHELL=/bin/bash

    # Install system level dependencies
    RUN apt update \
        && apt install -yqq --no-install-recommends \
        curl \
        git \
        git-lfs \
        vim \
        python3-pip \
        && rm -rf /var/lib/apt/lists/*
    ...

  3. Add any applications that you want in your custom container environment. In this example, you add a JupyterLab IDE.

    ...
    ENV SHELL=/bin/bash

    # Add the Jupyter port
    ENV JUPYTER_PORT 8888
    ...

    # System dependencies from step 2

    # Install jupyterlab
    RUN pip install jupyterlab==4.1.2

    # Disable the announcement banner
    RUN /opt/conda/envs/morpheus/bin/jupyter labextension disable "@jupyterlab/apputils-extension:announcements"
    ...

Add Image Labels

You must provide image labels for your container in the Dockerfile. Labels are metadata that give information about your container to AI Workbench. For details, see Use Your Own Base Container.
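All of these labels use the standard Dockerfile LABEL instruction with keys under the com.nvidia.workbench prefix. For example, the following two lines, taken from the full label block later in this walkthrough, set the schema version and a display name:

    LABEL com.nvidia.workbench.schema-version="v2"
    LABEL com.nvidia.workbench.name="Morpheus with CUDA 11.8"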

Container image labels typically fall into one of a few categories:

  • Labels that are fairly standardized and can essentially be copied as-is, for example schema-version and the jupyterlab application labels.

  • Labels that are open-ended and up to user preference, for example name and description.

  • Labels that require some simple container introspection, for example the following. (A similar check for the operating system labels appears after these examples.)

    • CUDA Version Labels

      # Run the container
      nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

      # Inside the container, check for the CUDA version. This container supports version 11.8.
      (morpheus) root@6ba686c455a2:/workspace# nvcc --version
      nvcc: NVIDIA (R) Cuda compiler driver
      Copyright (c) 2005-2022 NVIDIA Corporation
      Built on Wed_Sep_21_10:33:58_PDT_2022
      Cuda compilation tools, release 11.8, V11.8.89
      Build cuda_11.8.r11.8/compiler.31833905_0
      (morpheus) root@6ba686c455a2:/workspace#

    • Package Manager Labels (apt)

      # Run the container
      nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

      # Check for the apt binary. It is located at /usr/bin/apt in this container.
      (morpheus) root@6ba686c455a2:/workspace# which apt
      /usr/bin/apt
      (morpheus) root@6ba686c455a2:/workspace#

    • Package Manager Labels (pip)

      # Run the container
      nvidia@ai-workbench:~$ docker run --rm -ti --runtime=nvidia --gpus=all nvcr.io/nvidia/morpheus/morpheus:23.11-runtime bash

      # Check for the pip binary. It is located at /opt/conda/envs/morpheus/bin/pip in this container.
      (morpheus) root@6ba686c455a2:/workspace# which pip
      /opt/conda/envs/morpheus/bin/pip
      (morpheus) root@6ba686c455a2:/workspace#
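The operating system labels (os, os-distro, and os-distro-release) can be determined the same way. The following check is a sketch rather than part of the original steps; it assumes the image provides the standard /etc/os-release file, which for this container reports Ubuntu 22.04:

    # Read the OS release fields used for the os-distro and os-distro-release labels
    docker run --rm -ti nvcr.io/nvidia/morpheus/morpheus:23.11-runtime \
        bash -c "grep -E '^(ID|VERSION_ID)=' /etc/os-release"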

  1. Add the image labels to your Dockerfile.

    ...
    ARG BUILD_TIMESTAMP

    # Base environment labels
    LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
    LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \\$(echo url=\\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
    LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
    LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port $JUPYTER_PORT --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\\\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="$JUPYTER_PORT"
    LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop $JUPYTER_PORT"
    LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
    LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
    LABEL com.nvidia.workbench.build-timestamp="${BUILD_TIMESTAMP}"
    LABEL com.nvidia.workbench.cuda-version="11.8"
    LABEL com.nvidia.workbench.description="A Morpheus Base with CUDA 11.8"
    LABEL com.nvidia.workbench.entrypoint-script=""
    LABEL com.nvidia.workbench.image-version="1.0.0"
    LABEL com.nvidia.workbench.labels="cuda11.8"
    LABEL com.nvidia.workbench.name="Morpheus with CUDA 11.8"
    LABEL com.nvidia.workbench.os="linux"
    LABEL com.nvidia.workbench.os-distro="ubuntu"
    LABEL com.nvidia.workbench.os-distro-release="22.04"
    LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
    LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs vim"
    LABEL com.nvidia.workbench.package-manager.pip.binary="/opt/conda/envs/morpheus/bin/pip"
    LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.1.2"
    LABEL com.nvidia.workbench.programming-languages="python3"
    LABEL com.nvidia.workbench.schema-version="v2"

Completed Dockerfile

The complete Dockerfile for making the NVIDIA Morpheus container compatible with AI Workbench now looks like the following.

ARG BASE_IMAGE=nvcr.io/nvidia/morpheus/morpheus:23.11-runtime
FROM ${BASE_IMAGE} as base

ARG TARGETARCH

ENV SHELL=/bin/bash
ENV JUPYTER_PORT 8888

# Install system level dependencies
RUN apt update \
    && apt install -yqq --no-install-recommends \
    curl \
    git \
    git-lfs \
    vim \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install jupyterlab
RUN pip install jupyterlab==4.1.2

# Disable the announcement banner
RUN /opt/conda/envs/morpheus/bin/jupyter labextension disable "@jupyterlab/apputils-extension:announcements"

ARG BUILD_TIMESTAMP

# Base environment labels
LABEL com.nvidia.workbench.application.jupyterlab.class="webapp"
LABEL com.nvidia.workbench.application.jupyterlab.health-check-cmd="[ \\$(echo url=\\$(jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently' | sed \"s@/?@/lab?@g\") | curl -o /dev/null -s -w '%{http_code}' --config -) == '200' ]"
LABEL com.nvidia.workbench.application.jupyterlab.timeout-seconds="90"
LABEL com.nvidia.workbench.application.jupyterlab.start-cmd="jupyter lab --allow-root --port $JUPYTER_PORT --ip 0.0.0.0 --no-browser --NotebookApp.base_url=\\\$PROXY_PREFIX --NotebookApp.default_url=/lab --NotebookApp.allow_origin='*'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.port="$JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.stop-cmd="jupyter lab stop $JUPYTER_PORT"
LABEL com.nvidia.workbench.application.jupyterlab.type="jupyterlab"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.url-cmd="jupyter lab list | head -n 2 | tail -n 1 | cut -f1 -d' ' | grep -v 'Currently'"
LABEL com.nvidia.workbench.application.jupyterlab.webapp.autolaunch="true"
LABEL com.nvidia.workbench.build-timestamp="${BUILD_TIMESTAMP}"
LABEL com.nvidia.workbench.cuda-version="11.8"
LABEL com.nvidia.workbench.description="A Morpheus Base with CUDA 11.8"
LABEL com.nvidia.workbench.entrypoint-script=""
LABEL com.nvidia.workbench.image-version="1.0.0"
LABEL com.nvidia.workbench.labels="cuda11.8"
LABEL com.nvidia.workbench.name="Morpheus with CUDA 11.8"
LABEL com.nvidia.workbench.os="linux"
LABEL com.nvidia.workbench.os-distro="ubuntu"
LABEL com.nvidia.workbench.os-distro-release="22.04"
LABEL com.nvidia.workbench.package-manager.apt.binary="/usr/bin/apt"
LABEL com.nvidia.workbench.package-manager.apt.installed-packages="curl git git-lfs vim"
LABEL com.nvidia.workbench.package-manager.pip.binary="/opt/conda/envs/morpheus/bin/pip"
LABEL com.nvidia.workbench.package-manager.pip.installed-packages="jupyterlab==4.1.2"
LABEL com.nvidia.workbench.programming-languages="python3"
LABEL com.nvidia.workbench.schema-version="v2"
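The Dockerfile declares a BUILD_TIMESTAMP build argument that feeds the com.nvidia.workbench.build-timestamp label. If you want that label populated, you can pass a value when you build the image in the next section. This is a sketch, and the timestamp format is only an example:

    # Pass a build timestamp so the build-timestamp label is populated
    docker build --network=host \
        --build-arg BUILD_TIMESTAMP=$(date -u +%Y%m%d%H%M%S) \
        -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 .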

Build and Publish Your Container

Use the following procedure to build your container image and publish it to a container registry.

Note

The build command builds an image that is compatible with your computer architecture. For example, if you build your image on an ARM macOS computer, you can’t use the image on an x86-64 (AMD64) Ubuntu computer.
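If you need the image to run on a different architecture than your build machine, one option is a cross-platform build with Docker Buildx. The following is a sketch only; it assumes Buildx and QEMU emulation are already set up, and it reuses the sample registry path from the next step:

    # Build and push a multi-architecture image (requires Buildx and QEMU emulation)
    docker buildx build --platform linux/amd64,linux/arm64 \
        -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 \
        --push .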

  1. Run the following commands. Specify your registry and namespace. You can edit the image name and tag.

    cd /path/to/Dockerfile && docker build --network=host -t sample-registry.io/sample-namespace/morpheus-test-base:1.0 .
    docker push sample-registry.io/sample-namespace/morpheus-test-base:1.0

  2. Copy the image tag. Here, we assume you have published the container to the private registry location nvcr.io/nvidian/morpheus-test-base:1.0.

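Before you use the image in AI Workbench, you can optionally confirm that the labels were applied to the published image. This is a sketch that uses the registry location assumed in the previous step; docker inspect prints the image labels as JSON:

    # Print the image labels applied by the Dockerfile
    docker inspect --format '{{ json .Config.Labels }}' nvcr.io/nvidian/morpheus-test-base:1.0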


Access and Use Your Container

  1. Inside AI Workbench, open a location and then click New Project.

  2. Specify a Name and Description, and then click Next.

  3. Click Custom Container.

  4. Enter the location of your desired container registry, image, and tag, and then click Create Project.



  5. AI Workbench automatically starts the container build process.

The container build process can take a few minutes depending on the size of your custom container.

If you encounter a build error, there may be something wrong in your image labels. Edit the autogenerated .project/spec.yaml to correct any errors in the build, and then transfer those changes back to the container image labels in the Dockerfile so the fixes persist across all future projects that use the container.
