Examples of Running Containers

Logging Into the NGC Container Registry

You need to log in to the NGC container registry only if you want to access locked containers from the registry. Most of the NGC containers are freely available (unlocked) and do not require an NGC account or NGC API key.  
Note: You do not need to log in to the NGC container registry if you are using either the GPU Accelerated Image for TensorFlow or the GPU Accelerated Image for PyTorch and intend to use the containers already built into the image.

If necessary, log in to the NGC container registry manually by running the following script from the VMI.

ngc-login.sh <your-NGC-API-key>

From this point you can run Docker commands and access locked NGC containers from the VM instance.
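
The ngc-login.sh script handles Docker authentication against nvcr.io for you. If you ever need to log in without the script, a minimal equivalent is sketched below, assuming the usual NGC convention that the username is the literal string $oauthtoken (single-quoted so the shell does not expand it) and the password is your NGC API key:

echo "<your-NGC-API-key>" | docker login nvcr.io --username '$oauthtoken' --password-stdin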

Running a Container

This section explains the basic process for running a container on the GPU Accelerated Image for TensorFlow, the GPU Accelerated Image for PyTorch, and the basic NGC Image.

Running the Built-in TensorFlow Container

To run the TensorFlow container in the VM created from the GPU Accelerated Image for TensorFlow, refer to the release notes for the correct tag to use, then enter the following command.

docker run --runtime=nvidia --rm -it nvcr.io/nvidia/tensorflow:<tag>  
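
To keep datasets or results outside the container, you can add the standard Docker -v option to mount a host directory; the host path below is only an illustration, so substitute your own:

docker run --runtime=nvidia --rm -it -v /home/<username>/data:/data nvcr.io/nvidia/tensorflow:<tag>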

Running the Built-in PyTorch Container

To run the PyTorch container in the VM created from the GPU Accelerated Image for PyTorch, refer to the release notes for the correct tag to use, then enter the following command.

docker run --runtime=nvidia --rm -it nvcr.io/nvidia/pytorch:<tag>
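
On a multi-GPU instance you can limit which GPUs the container sees by setting NVIDIA_VISIBLE_DEVICES, which the NVIDIA container runtime reads at startup. For example, to expose only GPU 0 (a sketch; adjust the index list to your instance):

docker run --runtime=nvidia --rm -it -e NVIDIA_VISIBLE_DEVICES=0 nvcr.io/nvidia/pytorch:<tag>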

Running a Container from the NGC Container Registry

To run containers from the NGC container registry:

  1. If necessary, log in to the NGC container registry as explained in the previous section.

  2. Enter the following commands.

docker pull nvcr.io/nvidia/<container-image>:<tag>
docker run --runtime=nvidia --rm -it nvcr.io/nvidia/<container-image>:<tag>
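
Once the container starts, you can confirm that the GPUs are visible before launching a workload; nvidia-smi is typically available inside NGC containers started with the NVIDIA runtime:

nvidia-smi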

Example: MNIST Training Run Using PyTorch Container

Once logged in to the NVIDIA GPU Cloud Image instance, you can run the MNIST example under PyTorch.

Note that the PyTorch example will download the MNIST dataset from the web.

  1. Pull and run the PyTorch container:
    docker pull nvcr.io/nvidia/pytorch:18.02-py3
    docker run --runtime=nvidia --rm -it nvcr.io/nvidia/pytorch:18.02-py3
  2. Run the MNIST example:
    cd /opt/pytorch/examples/mnist
    python main.py
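
The bundled main.py is the standard PyTorch MNIST example, which typically accepts optional training arguments such as the number of epochs and the batch size; run it with --help to confirm the exact options in your container version. For instance:

python main.py --help
python main.py --epochs 5 --batch-size 128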

Example: MNIST Training Run Using TensorFlow Container

Once logged in to the NVIDIA GPU Cloud Image instance, you can run the MNIST example under TensorFlow.

Note that the TensorFlow built-in example will pull the MNIST dataset from the web.

  1. Pull and run the TensorFlow container:
    docker pull nvcr.io/nvidia/tensorflow:18.02-py3
    docker run --runtime=nvidia --rm -it nvcr.io/nvidia/tensorflow:18.02-py3
  2. Follow this tutorial: https://www.tensorflow.org/get_started/mnist/beginners
  3. Run the MNIST_with_summaries example:
    cd /opt/tensorflow/tensorflow/examples/tutorials/mnist
    python mnist_with_summaries.py
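
The mnist_with_summaries.py script writes TensorBoard summaries as it trains. You can inspect them with TensorBoard afterwards; the log directory below is the script's usual default, so check its --log_dir flag if your container version differs:

tensorboard --logdir /tmp/tensorflow/mnist/logs/mnist_with_summaries

TensorBoard serves on port 6006 by default, so you may need to forward or open that port on the VM instance to view the dashboard from your browser.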