---
title: Custom Docker Containers
description: Run Docker and Docker Compose workloads with GPU access on NVIDIA Brev.
---
Run Docker and Docker Compose workloads with GPU access on your Brev instance for reproducible, isolated development.
## Creating an Instance with Containers
You can configure containers when creating a new instance through the Brev console.
1. Click **+ New** in the Brev console to begin creating an instance.
2. Choose your runtime mode:
   * **Container Mode**: Run a single Docker container
   * **Docker Compose Mode**: Run multi-container applications
   * **VM Mode**: Use this for a plain GPU VM if you don't need containers
3. Configure your chosen mode; see the Container Mode and Docker Compose Mode sections below.
4. Choose whether to run a JupyterLab server on the instance. If your container provides its own Jupyter server, select **No** to avoid conflicts.
5. Choose your GPU type and click **Deploy**. The container builds on the instance after provisioning.
### Container Mode Options
When you select Container Mode, choose from:
| Option | Description |
| ----------------------- | --------------------------------------------------------------------------------------------------------- |
| **Featured Containers** | Curated images for common AI/ML workloads. Use our default container for auto-configured Python and CUDA. |
| **Custom Container** | Specify any container image. For private registries, provide credentials and an entrypoint command. |
**Custom Container Caution**: Some custom containers may produce unexpected results if they modify host system configurations. Test thoroughly before production use.
### Docker Compose Mode Options
When you select Docker Compose Mode, provide your configuration through:
* **Upload**: Upload a local `docker-compose.yml` file
* **URL**: Provide a GitHub or GitLab URL pointing to a `docker-compose.yml` file
## Running Containers with GPU (CLI)
Brev instances have Docker and NVIDIA Container Toolkit preinstalled. Use the `--gpus all` flag to enable GPU access:
```bash
# Run a PyTorch container with GPU access and check that CUDA is visible
docker run --gpus all -it nvcr.io/nvidia/pytorch:24.01-py3 python -c "import torch; print(torch.cuda.is_available())"

# Run with port mapping (e.g. to expose Jupyter on 8888)
docker run --gpus all -p 8888:8888 -it nvcr.io/nvidia/pytorch:24.01-py3
```
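Before launching longer jobs, it can help to confirm that the container runtime actually sees the GPU. One way is to run `nvidia-smi` in a throwaway container; the CUDA base image tag below is illustrative, and any CUDA-enabled image with `nvidia-smi` on its path works:

```shell
# Quick GPU sanity check: prints the driver and GPU table if passthrough works,
# then removes the container (--rm)
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi
```

If this fails with a "could not select device driver" error, the NVIDIA Container Toolkit is not active for the Docker daemon.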
## Docker Compose with GPU
For multi-container applications, use Docker Compose with GPU support:
```yaml
# docker-compose.yml
services:
  app:
    image: nvcr.io/nvidia/pytorch:24.01-py3
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - "8888:8888"
    volumes:
      - ./:/workspace
```
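On a multi-GPU instance, a service does not have to reserve every device. As an illustrative fragment (not a complete file), `count: all` can be replaced with a fixed count to give a service exactly one GPU:

```yaml
# Fragment: reserve exactly one GPU for this service
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```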
```bash
# Start the stack in the background
docker compose up -d

# Follow the logs
docker compose logs -f
```
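To check GPU visibility from inside a running service, you can exec into it; the service name `app` below matches the Compose example above:

```shell
# Run nvidia-smi inside the running "app" service
docker compose exec app nvidia-smi

# Stop and remove the stack when you are finished
docker compose down
```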
## NVIDIA NGC Images
NVIDIA provides optimized containers for common AI/ML frameworks:
| Framework | Image |
| ---------- | ---------------------------------------------------- |
| PyTorch | `nvcr.io/nvidia/pytorch:24.01-py3` |
| TensorFlow | `nvcr.io/nvidia/tensorflow:24.01-tf2-py3` |
| JAX | `nvcr.io/nvidia/jax:24.01-py3` |
| RAPIDS | `nvcr.io/nvidia/rapidsai/base:24.02-cuda12.0-py3.10` |
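Many NGC images are public, but pulling gated NGC content requires authenticating against `nvcr.io` with an NGC API key; the registry username is the literal string `$oauthtoken` and the password is your API key:

```shell
# Log in to the NGC registry (enter your NGC API key when prompted)
docker login nvcr.io --username '$oauthtoken'

# Pull the PyTorch image from the table above
docker pull nvcr.io/nvidia/pytorch:24.01-py3
```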
## What's Next
* Access NVIDIA's GPU-optimized containers and models on NGC.
* Deploy NVIDIA Inference Microservices (NIM).