Custom Docker Containers


Run Docker and Docker Compose workloads with GPU access on your Brev instance for reproducible, isolated development.

Creating an Instance with Containers

You can configure containers when creating a new instance through the Brev console.

1. Start Instance Creation

Click + New in the Brev console to begin creating an instance.

2. Select Container Mode

Choose your runtime mode:

  • Container Mode: Run a single Docker container
  • Docker Compose Mode: Run multi-container applications
  • VM Mode: If you don’t need containers, use this for a plain GPU VM

3. Configure Container Options

Refer to the sections below for Container Mode and Docker Compose Mode configuration.

4. Configure JupyterLab

Choose whether to run a JupyterLab server on the instance. If your container provides its own Jupyter server, select No to avoid conflicts.

5. Select GPU and Deploy

Choose your GPU type and click Deploy. After the instance provisions, Brev builds and starts your container on it.

Container Mode Options

When you select Container Mode, choose from:

| Option | Description |
| --- | --- |
| Featured Containers | Curated images for common AI/ML workloads. Use our default container for auto-configured Python and CUDA. |
| Custom Container | Specify any container image. For private registries, provide credentials and an entrypoint command. |

Custom Container Caution: Some custom containers may produce unexpected results if they modify host system configurations. Test thoroughly before production use.
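For reference, running a private custom container by hand looks roughly like the sketch below; the registry, image name, and entrypoint are placeholders, not Brev defaults:

```bash
# Authenticate against the private registry (hypothetical registry name)
docker login registry.example.com

# Run the image with GPU access, overriding the entrypoint with a shell
docker run --gpus all -it --entrypoint /bin/bash \
  registry.example.com/team/my-image:latest
```

In Container Mode, the console collects the same pieces (image, credentials, entrypoint) and handles this for you.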

Docker Compose Mode Options

When you select Docker Compose Mode, provide your configuration through:

  • Upload: Upload a local docker-compose.yml file
  • URL: Provide a GitHub or GitLab URL pointing to a docker-compose.yml file

Running Containers with GPU (CLI)

Brev instances have Docker and NVIDIA Container Toolkit preinstalled. Use the --gpus all flag to enable GPU access:

```bash
# Run a PyTorch container with GPU access
docker run --gpus all -it nvcr.io/nvidia/pytorch:24.01-py3 python -c "import torch; print(torch.cuda.is_available())"

# Run with port mapping
docker run --gpus all -p 8888:8888 -it nvcr.io/nvidia/pytorch:24.01-py3
```
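A quick way to confirm GPU passthrough is working is to run `nvidia-smi` inside a container; the CUDA base image tag below is just an example:

```bash
# Should print the same GPU table that nvidia-smi shows on the host
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```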

Docker Compose with GPU

For multi-container applications, use Docker Compose with GPU support:

```yaml
# docker-compose.yml
services:
  app:
    image: nvcr.io/nvidia/pytorch:24.01-py3
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - "8888:8888"
    volumes:
      - ./:/workspace
```
```bash
# Start the stack
docker compose up -d

# View logs
docker compose logs -f
```
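If you want a service pinned to specific GPUs rather than all of them, Compose also accepts `device_ids` in place of `count` in the device reservation (the ID below is an example):

```yaml
# Reserve only GPU 0 for this service; device_ids and count are mutually exclusive
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ["0"]
          capabilities: [gpu]
```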

NVIDIA NGC Images

NVIDIA provides optimized containers for common AI/ML frameworks:

| Framework | Image |
| --- | --- |
| PyTorch | nvcr.io/nvidia/pytorch:24.01-py3 |
| TensorFlow | nvcr.io/nvidia/tensorflow:24.01-tf2-py3 |
| JAX | nvcr.io/nvidia/jax:24.01-py3 |
| RAPIDS | nvcr.io/nvidia/rapidsai/base:24.02-cuda12.0-py3.10 |
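Some NGC images require authentication. The standard NGC login uses the literal string `$oauthtoken` as the username and your NGC API key as the password:

```bash
# Username is the literal string $oauthtoken; paste your NGC API key when prompted
docker login nvcr.io --username '$oauthtoken'

# Then pull as usual
docker pull nvcr.io/nvidia/pytorch:24.01-py3
```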

What’s Next