Pulling And Running NVIDIA AI Enterprise Containers

NVIDIA AI Enterprise 2.0 or later

To get started with NVIDIA AI Enterprise in the Cloud, you will need to pull the necessary NVIDIA AI Enterprise containers, based on your use case, from the Enterprise Catalog on NGC. Instructions for pulling NVIDIA AI Enterprise containers can be found in the Installing AI and Data Science Applications and Frameworks section of the Virtualization Deployment Guide. The container pull commands for cloud VMIs are listed below:

docker pull nvcr.io/nvaie/nvidia-rapids-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/pytorch-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tao-toolkit-lm-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tao-toolkit-pyt-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tao-toolkit-tf-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tensorflow-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tensorrt-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
docker pull nvcr.io/nvaie/tritonserver-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
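
Note that pulling containers from the Enterprise Catalog requires the Docker client on the instance to be logged in to the NGC registry (nvcr.io) first. A minimal sketch, assuming an NGC API key stored in the placeholder environment variable NGC_API_KEY:

# Authenticate to the NGC registry; the username is literally $oauthtoken,
# and NGC_API_KEY is a placeholder for your own NGC API key
docker login nvcr.io -u '$oauthtoken' -p "${NGC_API_KEY}"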

Note

OCI VMIs and instances are supported with NVIDIA AI Enterprise 3.0 or later.

To ensure your cloud instance is GPU accelerated, use the docker run --gpus option to run NVIDIA AI Enterprise containers in your cloud VMI instance; a complete verification command is shown after the examples below.

  • Example using all GPUs

    $ docker run --gpus all


  • Example using two GPUs

    $ docker run --gpus 2


  • Example using specific GPUs

    $ docker run --gpus "device=1,2" ... $ docker run --gpus "device=UUID-ABCDEF,1"


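As a quick check that a container can access the GPUs, run nvidia-smi inside one of the NVIDIA AI Enterprise containers. A minimal sketch, assuming the TensorFlow container shown earlier has already been pulled (replace the placeholders with your version and tag):

    $ docker run --gpus all --rm nvcr.io/nvaie/tensorflow-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG> nvidia-smi

The command should print the list of GPUs visible to the container and then exit.
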
To run NVIDIA AI Enterprise containers in your cloud VMI, refer to the Enterprise Catalog on NGC for the correct pull tag to use for a specific container, then run the corresponding docker commands as shown in the following example.

Log in to your cloud VMI instance via a terminal to run the MNIST example.
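
For example, a minimal SSH sketch (the key path, user name, and instance address below are placeholders for your own values):

    $ ssh -i ~/.ssh/<YOUR-KEY>.pem <YOUR-USER>@<CLOUD-INSTANCE-ADDRESS>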

Note that the PyTorch example will download the MNIST dataset from the web.

  1. Pull and run the PyTorch container:

    docker pull nvcr.io/nvaie/pytorch-2-0:22.02-nvaie-2.0-py3
    docker run --gpus all --rm -it nvcr.io/nvaie/pytorch-2-0:22.02-nvaie-2.0-py3


  2. Run the MNIST example:

    cd /workspace/examples/upstream/mnist
    python main.py


© Copyright 2022-2023, NVIDIA. Last updated on Jan 5, 2023.