Method 6: Container Installation#

Recommended for: Isolated development environments, reproducible builds, CI/CD pipelines, cloud deployments

Advantages:

✓ All dependencies pre-packaged

✓ Consistent environment across systems

✓ Easy to deploy and share

✓ No local installation conflicts

✓ Includes development tools and samples

Limitations:

✗ Requires Docker or compatible container runtime

✗ Requires NVIDIA Container Toolkit for GPU access

✗ Larger download size (multi-GB)

✗ Additional container overhead

Platform Support#

Supported Platforms:

  • Linux x86-64

  • Linux ARM64 (NVIDIA Jetson)

Prerequisites:

  • Docker (version 19.03+) or Podman installed

  • NVIDIA Container Toolkit installed and configured

  • NVIDIA GPU with appropriate drivers
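The prerequisites above can be sanity-checked before proceeding. Below is a minimal sketch that verifies the Docker version floor; the sed pattern assumes the usual "Docker version X.Y.Z, build …" output format, and the sort -V comparison is one common way to compare version strings:

```shell
#!/bin/sh
# Sketch: check that the installed Docker meets the 19.03 minimum.
# Assumes `docker --version` prints e.g. "Docker version 24.0.7, build afdd53b".
required="19.03"
version="$(docker --version 2>/dev/null | sed 's/Docker version \([0-9.]*\).*/\1/')"
if [ -z "$version" ]; then
    echo "Docker not found; install Docker 19.03+ or Podman" >&2
elif [ "$(printf '%s\n' "$required" "$version" | sort -V | head -n 1)" = "$required" ]; then
    echo "Docker $version satisfies the $required minimum"
else
    echo "Docker $version is older than $required; please upgrade" >&2
fi
```

Podman users can adapt the same check to podman --version.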

Installation Steps#

Step 1: Install NVIDIA Container Toolkit (if not already installed)

Follow the instructions in the NVIDIA Container Toolkit Installation Guide.

Quick installation for Ubuntu/Debian (note that apt-key is deprecated on newer releases; prefer the keyring-based steps in the guide above):

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
   sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
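After the restart, one way to confirm the NVIDIA runtime was registered is to inspect docker info. This is a sketch; the Go-template field name assumes a reasonably recent Docker:

```shell
#!/bin/sh
# Sketch: check that Docker's runtime list now includes "nvidia".
runtimes="$(docker info --format '{{.Runtimes}}' 2>/dev/null)"
case "$runtimes" in
    *nvidia*) echo "NVIDIA runtime registered with Docker" ;;
    *)        echo "NVIDIA runtime not found; re-run the toolkit setup" >&2 ;;
esac
```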

Step 2: Pull the TensorRT NGC container

Find the latest TensorRT container in the NVIDIA NGC Catalog.

docker pull nvcr.io/nvidia/tensorrt:<container-tag>

Step 3: Run the container

docker run --gpus all -it --rm \
   nvcr.io/nvidia/tensorrt:<container-tag>

In this command:

  • --gpus all: Grants the container access to all host GPUs

  • -it: Allocates an interactive terminal

  • --rm: Removes the container on exit

Optional: Run with specific GPU(s):

docker run --gpus '"device=0,1"' -it --rm nvcr.io/nvidia/tensorrt:<container-tag>
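Work done inside the container is lost when it exits (because of --rm), so it is common to mount a host directory as well. A sketch, where /workspace/host is an arbitrary mount point chosen for this example and the tag is the same placeholder as above:

```shell
docker run --gpus all -it --rm \
   -v "$(pwd)":/workspace/host \
   nvcr.io/nvidia/tensorrt:<container-tag>
```

Models and engines written under /workspace/host then persist on the host after exit.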

Verification#

Inside the container, verify TensorRT installation:

Check TensorRT version:

trtexec --version

Python verification:

import tensorrt as trt
print(f"TensorRT version: {trt.__version__}")

Run a sample:

cd /workspace/tensorrt/oss
mkdir -p build && cd build
cmake .. -DBUILD_PARSERS=OFF -DBUILD_PLUGINS=OFF -DBUILD_SAMPLES=ON
make -j$(nproc)

./sample_onnx_mnist

A successful run ends with a &&&& PASSED line from the sample logger.

Troubleshooting#

For detailed information about the container, refer to the NVIDIA TensorRT Container Release Notes.

Issue: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]

  • Solution: The NVIDIA Container Toolkit is not installed or not configured correctly. Install it (Step 1 above), then restart the Docker daemon:

    sudo systemctl restart docker
    

Issue: Container cannot access GPU

  • Solution: Verify NVIDIA drivers are installed and working:

    nvidia-smi
    

    Ensure the NVIDIA runtime is registered with Docker:

    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
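    The configure command edits /etc/docker/daemon.json. Afterwards the file should contain a runtime entry along these lines (a typical result, not guaranteed verbatim; the path may be absolute on some installs):

```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```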
    

Issue: Permission denied when mounting volumes

  • Solution: Add your user to the docker group:

    sudo usermod -aG docker $USER
    newgrp docker
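After logging back in (or running newgrp docker), the group change can be confirmed with a quick check using the standard groups command:

```shell
#!/bin/sh
# Sketch: confirm the current shell session is in the docker group.
if groups | grep -qw docker; then
    echo "docker group active; docker commands should work without sudo"
else
    echo "not in docker group yet; log out and back in, or run: newgrp docker" >&2
fi
```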