Multiple Container Support With Docker Compose#

The default NVIDIA AI Workbench containers are optimized to support your development workflows, but in some cases your project might need additional isolated environments. This can happen when different parts of a typical AI application, such as an inference server and an embedding model, have different dependencies. AI Workbench supports multiple containers through Docker Compose.

Warning

Docker Compose is only supported if you are using Docker, not Podman, as your container runtime.

Use this documentation to learn how to add a Docker Compose file to your project, and how to start and stop the compose environment from the desktop application or the CLI.

Overview of Docker Compose in AI Workbench#

To use Docker Compose, you add a compose file to your project, and then start and stop the compose environment while you work. The compose file must be located at the root of your project, or in a folder named deploy. For more information, see Sample Docker Compose Files and Compose file reference.

Tip

You can name your compose file one of the following, and AI Workbench uses the first file it finds in this order: compose.yaml, compose.yml, docker-compose.yml, docker-compose.yaml, /deploy/compose.yaml, /deploy/compose.yml, /deploy/docker-compose.yml, /deploy/docker-compose.yaml.
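For example, a project that keeps its compose file in the deploy folder might be laid out as follows (the project name is illustrative):

```
my-project/
├── compose.yaml            # if present, this file is found first
└── deploy/
    └── docker-compose.yaml # used when no compose file exists at the project root
```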

In your compose file, you can define multiple profiles that represent different scenarios. For more information, see Using profiles with Compose.
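As a minimal sketch, a service without a profile always runs, while a service with a profile runs only when you select that profile. The service and profile names below are illustrative:

```yaml
services:
  # No profile: this service runs every time you start the compose environment.
  always-on:
    image: hashicorp/http-echo
    command: ["-text=always runs"]

  # This service runs only when you select the gpu-inference profile
  # before starting the compose environment.
  inference:
    image: hashicorp/http-echo
    profiles: [gpu-inference]
    command: ["-text=runs only with the gpu-inference profile", "-listen=:5679"]
```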

AI Workbench does not set bind mounts for compose containers. Instead, AI Workbench creates a shared volume that all containers can use, including the project container. The mount is available at /nvwb-shared-volume in each container.

Tip

If you run different containers as different users, you might need to modify the permissions of files that your project creates, so that all containers can read and write them as necessary.
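For example, a service can write its output to the shared volume and loosen file permissions so containers running as other users can read and write it. This is a minimal sketch; the service name and file name are hypothetical, and the volume itself is mounted by AI Workbench, so you do not declare it in the compose file:

```yaml
services:
  writer:
    image: busybox
    command:
      - sh
      - -c
      # Write to the shared volume, then make the file readable and
      # writable by any user so other containers can access it.
      - |
        echo "ready" > /nvwb-shared-volume/status.txt
        chmod a+rw /nvwb-shared-volume/status.txt
```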

Desktop Application Support for Docker Compose#

To create compose files and manage your compose environment in the AI Workbench desktop application, use the following procedure.

  1. Open a project in the AI Workbench desktop application.

  2. Click Environment to open the environment page.

  3. Click Compose or scroll to the compose section.

  4. Click Create compose file. The Create compose file window appears.

  5. In the Create compose file window, edit your compose file. When you are done, click Save. For more information, see Sample Docker Compose Files and Compose file reference.

  6. (Optional) For Profile, select one or more profiles to use when you start the compose environment.

  7. Click Start to start the compose environment for your project.

  8. Click Stop to stop the compose environment for your project.

CLI Support for Docker Compose#

The AI Workbench CLI also supports Docker Compose, with commands to start and stop the compose environment for your project.

Sample Docker Compose Files#

The following examples demonstrate Docker Compose file support in AI Workbench:

Example of a Simple Compose File#

The following is a sample Docker Compose file that includes a single web app service. There is no profile on this service, so it always runs.

services:

  web1:
    # This sample pulls a prebuilt image. Use build: instead of image:
    # to build the image from a local Dockerfile in the project.
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]

Example of a Web App That Requires a GPU#

The following is a sample Docker Compose file that includes two web app services. Service 1 always runs. Service 2 requires a GPU, and runs only when you select the gpu-service profile.

services:

  web1:
    # This sample pulls a prebuilt image. Use build: instead of image:
    # to build the image from a local Dockerfile in the project.
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]

  web2:
    image: hashicorp/http-echo
    profiles: [gpu-service]
    environment:
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5679:5679'
    # Specify GPU requests in this format.
    # AI Workbench manages reservations and explicitly passes GPUs into each container,
    # so you don't have to worry about collisions.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    command: ["-text=hello from service 2", "-listen=:5679"]

Example of a Compose File With Environment Variables and Secrets#

The following is a sample Docker Compose file that includes a web app service, an environment variable, and a secret. Create the variable TEST_VAR and the secret TEST_SECRET in your AI Workbench project before you use this example. For more information, see Environment Variables and Secrets (Sensitive Environment Variables).

There is no profile on this service, so it always runs.

services:

  web3:
    # This sample pulls a prebuilt image. Use build: instead of image:
    # to build the image from a local Dockerfile in the project.
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
      # Environment variables set in the project in AI Workbench are available by interpolation, like this:
      - TEST_ENV_VAR=${TEST_VAR}
      # Secrets are also available by interpolation if you prefer that over the file.
      - TEST_SECRET_FROM_ENV_VAR=${TEST_SECRET}
    ports:
      - '5678:5678'
    command: ["-text=${TEST_VAR}"]

Example of Compose Secrets#

The following is a sample Docker Compose file that includes two web app services. Service 1 always runs. Service 4 uses compose secrets, and runs only when you select the compose-secret profile. You must create the secret TEST_SECRET in your AI Workbench project for this service to run.

services:

  web1:
    # This sample pulls a prebuilt image. Use build: instead of image:
    # to build the image from a local Dockerfile in the project.
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]

  web4:
    image: hashicorp/http-echo
    profiles: [compose-secret]
    environment:
      - NVWB_TRIM_PREFIX=true
      # This is an example of how you can use the secret as a file.
      # Compose mounts the secret at this path for you.
      - TEST_SECRET_FILE=/run/secrets/TEST_SECRET
    ports:
      - '5680:5680'
    # To make a compose secret active on a service, list it here.
    # The name should match the secret name in the AI Workbench project.
    secrets:
      - TEST_SECRET
    command: ["-text=hello from service 4", "-listen=:5680"]


# If you want to use compose secrets, set this global value so compose file validation works.
# AI Workbench automatically replaces the value.
# The name should match the name in AI Workbench (e.g. TEST_SECRET).
secrets:
  TEST_SECRET:
    environment: "HOME"