NVIDIA Optimized Frameworks

Running CUDA DL

Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs.

On a system with GPU support for NGC containers, the following occurs when you run a container:

  • The container runtime loads the image into a container which runs the software.
  • You define the container's runtime resources by including the additional flags and settings that are used with the command.

    These flags and settings are described in Running A Container.

  • The GPUs are explicitly defined for the Docker® container. By default, all GPUs are exposed; a subset can be selected with the NVIDIA_VISIBLE_DEVICES environment variable.
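For example, GPU selection can look like the following sketch. The image tag is borrowed from the example later in this section; adjust the release, CUDA version, and devel/runtime variant to match your environment:

```shell
# Expose all GPUs (the default behavior of --gpus all):
docker run --gpus all -it --rm \
    nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.9-devel-ubuntu24.04

# Expose only GPUs 0 and 1 via NVIDIA_VISIBLE_DEVICES
# (requires the NVIDIA Container Toolkit runtime):
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 -it --rm \
    nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.9-devel-ubuntu24.04
```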

To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers For Deep Learning Frameworks User’s Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.

A typical command to launch the runtime/devel container is:


docker run --gpus all -it --rm nvcr.io/nvidia/cuda-dl-base:YY.MM-cuda<xx.y>-<devel|runtime>-ubuntu<YY.MM>

Where:

YY.MM-cuda<xx.y>-<devel|runtime>-ubuntu<YY.MM> is the container tag, where "YY.MM" is the release number, "cuda<xx.y>" is the CUDA version included in the container, and "ubuntu<YY.MM>" is the Ubuntu version the container is built on.

For example:


docker run --gpus all -it --rm nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.9-<devel|runtime>-ubuntu24.04
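Once the container starts, GPU visibility can be verified from inside it. A minimal sketch, assuming the runtime variant of the tag above and that the NVIDIA Container Toolkit mounts nvidia-smi into the container (its standard behavior):

```shell
# Run nvidia-smi inside the container to confirm the GPUs are visible;
# it should list every GPU exposed by --gpus (or NVIDIA_VISIBLE_DEVICES).
docker run --gpus all --rm \
    nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.9-runtime-ubuntu24.04 nvidia-smi
```

If nvidia-smi reports no devices, the Docker environment does not yet support NVIDIA GPUs; revisit the prerequisites at the start of this section.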

© Copyright 2025, NVIDIA. Last updated on Apr 9, 2025.