Environments and AI Workbench#

Overview#

AI Workbench uses containers to provide isolated, reproducible development environments.

Each project has its own container, called the project container. Install packages with pip, conda, and apt without affecting other projects or the host machine.

Environment configuration is versioned with your project code.

Configuration files like requirements.txt, apt.txt, and build scripts travel with your project. Clone a project anywhere and get the same environment.

You can customize environments without knowing much about containers.

Add packages through the Desktop App or by editing simple text files. AI Workbench handles the container build and runtime configuration automatically.

Key Concepts#

Project Container

The container that serves as your development environment. Built from a base image and customized with your packages and configurations.

Base Image

The starting point for your project container. NVIDIA provides pre-configured images with Python, CUDA, and JupyterLab, or you can bring your own.

Build vs Runtime

Build-time configurations install software and customize the container image. Runtime configurations set environment variables and mounts when the container starts.

Package Management#

You can add packages three ways: through the Desktop App, by editing files, or with the CLI.

The Desktop App package manager is fastest for adding packages to a running container. Editing requirements.txt and apt.txt directly gives you full control and version history.

The package manager can add packages without rebuilding the container.

It installs packages in the running container and updates the configuration files. This makes iteration faster when you’re actively developing.

Directly editing package files requires a rebuild but gives you more control.

Pin specific versions in requirements.txt. Manage complex dependency chains. See exactly what gets installed and when.
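For example, a project's requirements.txt and apt.txt might look like the following (package names and versions are illustrative, not a recommended set):

```text
# requirements.txt — pin exact versions for reproducible builds
torch==2.2.1
transformers==4.38.2
pandas>=2.0,<3.0

# apt.txt — one system package per line
ffmpeg
libgl1
```

After editing these files, rebuild the container so the changes take effect.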

Build-Time Customization#

Build scripts let you run custom commands during the container build.

AI Workbench provides two optional bash scripts: preBuild.bash and postBuild.bash. These run automatically when the container builds.

Use preBuild.bash for environment setup before package installation.

Add package repositories or signing keys. Install base tools needed by later steps. Create directories or unpack assets.
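A minimal preBuild.bash might look like this. The directory path is illustrative, and the commented-out repository step shows the typical pattern without assuming a specific package source:

```shell
#!/bin/bash
# preBuild.bash — runs before package installation during the container build.
set -euo pipefail

# Create a directory that later build steps can rely on (illustrative path).
mkdir -p "$HOME/.cache/project-assets"

# Typical pattern for adding an apt repository signing key (requires network):
# curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg

echo "preBuild complete"
```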

Use postBuild.bash for final configuration after packages install.

Configure installed software like JupyterLab extensions. Install IDE servers so VS Code or Cursor persist across rebuilds. Set up development environments and tools.
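A postBuild.bash sketch, assuming JupyterLab is already installed by the base image; the extension name and config setting are illustrative:

```shell
#!/bin/bash
# postBuild.bash — runs after package installation during the container build.
set -euo pipefail

# Typical pattern for installing a JupyterLab extension (requires network):
# pip install --no-cache-dir jupyterlab-git

# Persist a server setting into the image (illustrative setting).
mkdir -p "$HOME/.jupyter"
echo "c.ServerApp.root_dir = '/project'" >> "$HOME/.jupyter/jupyter_server_config.py"

echo "postBuild complete"
```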

Build scripts have passwordless sudo access and run as the container user.

You can install system packages, modify permissions, and configure services. Changes persist across container restarts because they’re baked into the image.

Runtime Configuration#

Environment variables control software behavior without rebuilding the container.

Set API keys, cache directories, or application settings. Variables travel with the project but sensitive values stay on each host.
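As a sketch, runtime variables behave like ordinary environment variables inside the container. The variable names and paths below are illustrative:

```shell
# Non-sensitive settings can be committed with the project (illustrative values).
export HF_HOME="/project/models/.cache/huggingface"  # cache directory
export LOG_LEVEL="debug"                             # application setting

# Sensitive values are set per host and never committed:
# export NGC_API_KEY=<set on each machine>

echo "$LOG_LEVEL"
```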

Mounts provide access to external file systems.

Host mounts connect to directories on your local machine. Volume mounts create persistent storage managed by Docker. Temp mounts provide temporary scratch space.

Runtime configurations take effect when the container starts.

Stop and restart the container to apply changes. No rebuild is required, so iteration stays fast.

GPU Configuration#

Request 0-8 GPUs for your project container through the Desktop App or CLI.

AI Workbench reserves the requested GPUs when the container starts. If not enough GPUs are available, the container won’t start.

Multi-GPU configurations require shared memory settings.

Increase the container's shared memory size so frameworks can exchange data between GPUs. Configure this in the Hardware section of the Desktop App.

GPU configuration for multi-container environments works differently.

Configure GPU access in the Docker Compose file for each service. This supports advanced GPU sharing scenarios, such as giving each service its own GPUs.
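Docker Compose expresses per-service GPU access with a device reservation. The service name and image below are illustrative:

```yaml
services:
  inference:
    image: nvcr.io/nvidia/pytorch:24.01-py3  # illustrative image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # number of GPUs for this service
              capabilities: [gpu]
```

With this stanza, Compose reserves one GPU for the `inference` service; other services can declare their own reservations independently.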

Custom and Multi-Container Environments#

You can use your own container instead of NVIDIA base images.

Bring your existing container as long as it’s Debian-based. Add required Docker labels so AI Workbench can read the configuration. Useful when you need specialized software stacks or company-specific images.
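Labels are declared in the container's Dockerfile. The label keys below are illustrative only; check the AI Workbench documentation for the exact required label set:

```text
# Hypothetical Dockerfile fragment — label names are illustrative.
FROM ubuntu:22.04

LABEL com.nvidia.workbench.name="my-custom-base"
LABEL com.nvidia.workbench.os-distro="ubuntu"
```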

Multi-container environments let you run multiple services together.

Use Docker Compose to define services like databases, web servers, or processing pipelines. Each service runs in its own container with shared networking. Useful for full-stack applications or complex development environments.
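A minimal Compose file for supporting services might look like this. The service names, images, and password are illustrative placeholders:

```yaml
# docker-compose.yaml — illustrative supporting services
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example  # dev-only placeholder
  cache:
    image: redis:7
```

By default, Compose puts these services on a shared network, so they can reach each other by service name (`db`, `cache`).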

The project container itself is always a single container, but you can add Compose services alongside it.

The project container holds your code and primary development environment. Compose services provide supporting infrastructure.