User Guide (Latest)

Multiple Container Support With Docker Compose

The default NVIDIA AI Workbench containers are optimized to support your development workflows, but in some cases you might need additional isolated environments for your project. This can happen when different parts of a typical AI application, such as the inference server or embedding model, have different dependencies. AI Workbench supports multiple containers by using Docker Compose.

Warning

Docker Compose is only supported if you are using Docker, not Podman, as your container runtime.

Use this documentation to learn how to add a Docker Compose file to your project and how to start and stop the compose environment.

To use Docker Compose, you add a compose file to your project, and then start and stop the compose file environment while you are working. The compose file must be located at the root of your project, or in a folder named deploy. For more information, see Sample Docker Compose Files and Compose file reference.

Tip

You can name your compose file one of the following, and AI Workbench uses the first file it finds in this order: compose.yaml, compose.yml, docker-compose.yml, docker-compose.yaml, /deploy/compose.yaml, /deploy/compose.yml, /deploy/docker-compose.yml, /deploy/docker-compose.yaml.

In your compose file, you can define multiple profiles that represent different scenarios. For more information, see Using profiles with Compose.
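For example, a service can be tied to a profile with the `profiles` key, so it only starts when that profile is selected. The following is a minimal sketch; the service name, image, and profile name are illustrative:

```yaml
services:
  debug-tools:
    image: busybox
    # This service only starts when the "debug" profile is selected.
    profiles: [debug]
    command: ["sleep", "infinity"]
```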

AI Workbench does not set bind mount values for compose containers. Instead, AI Workbench creates a shared volume that all containers can use, including the project container. The mount is available at /nvwb-shared-volume.

Tip

If you run different containers as different users, you might need to modify the permissions of files your project creates, so that all containers can read and write as necessary.
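As a sketch of how the shared volume can be used, a service can read and write files under the shared mount path directly. The service name, image, and command here are illustrative:

```yaml
services:
  writer:
    image: busybox
    # AI Workbench makes the shared volume available at /nvwb-shared-volume
    # in every container, including the project container.
    command: ["sh", "-c", "echo hello > /nvwb-shared-volume/hello.txt"]
```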

To create compose files and manage your compose environment in the AI Workbench desktop application, use the following procedure.

  1. Open a project in the AI Workbench desktop application.

  2. Click Environment to open the environment page.

  3. Click Compose or scroll to the compose section.

  4. Click Create compose file. The Create compose file window appears.

  5. In the Create compose file window, edit your compose file. When you are done, click Save. For more information, see Sample Docker Compose Files and Compose file reference.

  6. (Optional) For Profile, select one or more profiles that you want to use when you start the compose environment.

  7. Click Start to start the compose environment for your project.

  8. Click Stop to stop the compose environment for your project.

The AI Workbench CLI also supports starting and stopping the Docker Compose environment for your project.

The following examples demonstrate Docker Compose file support in AI Workbench:

Example of a Simple Compose File

The following is a sample Docker Compose file that includes a web app service. There is no profile on this service, so it always runs.

```yaml
services:
  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hellofromservice1"]
```

Example of a Web App That Requires a GPU

The following is a sample Docker Compose file that includes two web app services. Service 1 always runs. Service 2 requires a GPU, and only runs when you select the gpu-service profile.

```yaml
services:
  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hellofromservice1"]
  web2:
    image: hashicorp/http-echo
    profiles: [gpu-service]
    environment:
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5679:5679'
    # Specify GPU requests in this format.
    # AI Workbench manages reservations and explicitly passes GPUs into each container,
    # so you don't have to worry about collisions
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    command: ["-text=hellofromservice2", "-listen=:5679"]
```

Example of a Compose With Environment Variables and Secrets

The following is a sample Docker Compose file that includes a web app service. This compose file includes an environment variable and a secret. Create the variable TEST_VAR and the secret TEST_SECRET in your AI Workbench project before you use this example. For more information, see Environment Variables and Secrets (Sensitive Environment Variables).

There is no profile on this service, so it always runs.

```yaml
services:
  web3:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
      # Environment variables set in the project in AI Workbench are available by interpolation like this
      - TEST_ENV_VAR=${TEST_VAR}
      # Secrets are also available by interpolation if you prefer that over the file
      - TEST_SECRET_FROM_ENV_VAR=${TEST_SECRET}
    ports:
      - '5678:5678'
    command: ["-text=${TEST_VAR}"]
```

Example of Compose Secrets

The following is a sample Docker Compose file that includes two web app services. Service 1 always runs. Service 4 uses compose secrets, and only runs when you select the compose-secret profile. You must set the secret TEST_SECRET in your AI Workbench project for this service to run.

```yaml
services:
  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hellofromservice1"]
  web4:
    image: hashicorp/http-echo
    profiles: [compose-secret]
    environment:
      - NVWB_TRIM_PREFIX=true
      # This is an example of how you can use the secret as a file.
      # Compose mounts the secret there for you
      - TEST_SECRET_FILE=/run/secrets/TEST_SECRET
    ports:
      - '5680:5680'
    # To use a compose secret, you must set the secret if you want it active on the service.
    # It should match the secret name in the AI Workbench project
    secrets:
      - TEST_SECRET
    command: ["-text=hellofromservice4", "-listen=:5680"]

# If you want to use compose secrets, set this global value so compose file validation works,
# but AI Workbench automatically replaces the value.
# The name should match the name in AI Workbench (e.g. TEST_SECRET).
secrets:
  TEST_SECRET:
    environment: "HOME"
```

© Copyright 2024, NVIDIA Corporation. Last updated on Nov 4, 2024.