Build Container Image in Dev Pod

Docker images are widely used to package and control runtime environments. In this guide, we will show you how to build a container image in a Dev Pod.

Prerequisites

  • A Node Group with privilege mode enabled

Steps

Make sure privilege mode is enabled

Go to the node group page and make sure privilege mode is enabled for the node group you want to use. You can enable it by clicking the "Config" button and selecting the "Privilege Mode" checkbox.

Create a Dev Pod

Follow the create a dev pod guide and create a Dev Pod with the node group you want to use. In the Resource selection step, make sure the Dev Pod is created with privilege mode enabled. It is recommended to dedicate the whole node to a Dev Pod running with privilege mode; for example, selecting the shape with 8 GPUs uses the whole node.

Install Docker and Required Plugins

Once the Dev Pod is created, you can connect to it via the web terminal. Then copy the following script into the terminal and run it.

#!/bin/bash

# Exit script on any error
set -e

################
# Install Docker
################

echo "Updating package lists..."
sudo apt-get update

echo "Installing dependencies..."
sudo apt-get install -y ca-certificates curl

echo "Adding Docker's official GPG key..."
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo "Adding Docker repository to Apt sources..."
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

echo "Updating package lists after adding Docker repository..."
sudo apt-get update

echo "Installing Docker packages..."
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

##################################
# Install nvidia-container-toolkit
##################################

echo "Adding NVIDIA container toolkit GPG key and repository..."
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo tee /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg > /dev/null
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list > /dev/null

echo "Installing NVIDIA container toolkit..."
sudo apt-get install -y nvidia-container-toolkit

#####################
# Start Docker daemon
#####################

# Start the Docker daemon so images can be built and containers can run
sudo service docker start
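After the script finishes, you can optionally confirm that the Docker daemon is running and that the Buildx and Compose plugins were installed. This is a quick sanity check; the exact version numbers in the output will differ on your node.

# Check the Docker client and daemon versions
sudo docker version

# Confirm the Buildx and Compose plugins are available
sudo docker buildx version
sudo docker compose version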

Next, verify the GPU setup by running the following command.

# Run a container with GPU support to verify the GPU setup
sudo docker run --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 /usr/bin/nvidia-smi

You should see GPU information similar to the following.

Fri May 30 07:08:31 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          On  |   00000000:61:00.0 Off |                    0 |
| N/A   27C    P0             74W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 80GB HBM3          On  |   00000000:62:00.0 Off |                    0 |
| N/A   28C    P0             68W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H100 80GB HBM3          On  |   00000000:63:00.0 Off |                    0 |
| N/A   31C    P0             72W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA H100 80GB HBM3          On  |   00000000:64:00.0 Off |                    0 |
| N/A   26C    P0             66W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA H100 80GB HBM3          On  |   00000000:6A:00.0 Off |                    0 |
| N/A   26C    P0             70W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA H100 80GB HBM3          On  |   00000000:6B:00.0 Off |                    0 |
| N/A   28C    P0             70W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA H100 80GB HBM3          On  |   00000000:6C:00.0 Off |                    0 |
| N/A   29C    P0             70W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA H100 80GB HBM3          On  |   00000000:6D:00.0 Off |                    0 |
| N/A   25C    P0             71W /  700W |       1MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Build Your Container Image

Now you can build your container image in the dev pod. First, create a Dockerfile with your desired configuration. Here's a simple sample Dockerfile:

# Create a Dockerfile
cat <<EOF > Dockerfile
FROM python:3.9-alpine

# Set working directory
WORKDIR /app

# Expose the port
EXPOSE 8000

# Set the default command to run the server
CMD ["python3", "-m", "http.server", "8000"]
EOF

Now you can build your container image using the following command:

# Build the image
sudo docker build -t my-http-server .

# Run the container
sudo docker run -tid -p 8000:8000 my-http-server

# Check the container is running
curl http://localhost:8000
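When you are done testing, you can stop and remove the test container. CONTAINER_ID below is a placeholder; replace it with the ID shown by docker ps.

# Find the running container's ID
sudo docker ps

# Stop and remove the test container
sudo docker stop CONTAINER_ID
sudo docker rm CONTAINER_ID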

You can adapt the Dockerfile based on your needs to build your own container image. For more information about the Dockerfile syntax, please refer to the Dockerfile reference.
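For example, if your image needs GPU access at runtime, you can start from an NVIDIA CUDA base image instead of python:3.9-alpine. The sketch below is illustrative and reuses the nvidia/cuda:11.7.1-base-ubuntu22.04 image from the verification step; the file name Dockerfile.gpu and the image name my-gpu-image are placeholders.

# Create a GPU-enabled Dockerfile (illustrative example)
cat <<EOF > Dockerfile.gpu
FROM nvidia/cuda:11.7.1-base-ubuntu22.04

# Print GPU information when the container starts
CMD ["nvidia-smi"]
EOF

# Build the image and run it with GPU access
sudo docker build -t my-gpu-image -f Dockerfile.gpu .
sudo docker run --gpus all my-gpu-image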

Push to Remote Repository

Once the container image is built, you can push it to a remote repository for later use with the docker image push command.
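A minimal sketch is shown below; registry.example.com and my-project are placeholders, so replace them with your own registry address and repository, and log in with credentials that have push access.

# Log in to your container registry (placeholder address)
sudo docker login registry.example.com

# Tag the local image with the registry path
sudo docker tag my-http-server registry.example.com/my-project/my-http-server:latest

# Push the image
sudo docker image push registry.example.com/my-project/my-http-server:latest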

Copyright @ 2025, NVIDIA Corporation.