Step #4: Running a Docker Container

With our PyTorch image downloaded from NGC, we can now launch a container and investigate the contents. To view a full list of images installed, run docker images.
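
For reference, a typical listing might look something like the following; the image ID, creation date, and size shown here are illustrative and will differ on your system:

$ docker images
REPOSITORY               TAG         IMAGE ID       CREATED       SIZE
nvcr.io/nvidia/pytorch   22.03-py3   0e2ed616baa6   6 weeks ago   14.8GB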

On your workstation, launch the container while specifying that you want all available GPUs to be included. If you do not have an NVIDIA GPU or did not install the NVIDIA Container Toolkit, remove the --gpus all flag from the docker run command to launch without GPUs.

$ docker run --rm -it --gpus all nvcr.io/nvidia/pytorch:22.03-py3

=============
== PyTorch ==
=============

NVIDIA Release 22.03 (build 33569136)
PyTorch Version 1.12.0a0+2c916ef

Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2014-2022 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

root@ca44795386ae:/workspace#

If you see a message similar to the above, you are now inside the container.
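
If you launched with the --gpus all flag, you can optionally confirm that your GPUs are visible inside the container by running nvidia-smi at the container prompt. Assuming the NVIDIA Container Toolkit is set up correctly, it prints the driver and CUDA versions along with a table listing each GPU:

$ nvidia-smi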

To familiarize yourself with the image, type ls at the prompt to list the current directory's contents. The output should look something like the following:

$ ls
NVIDIA_Deep_Learning_Container_License.pdf  README.md  docker-examples  examples  tutorials

These contents will very likely differ from those of the working directory you were in before launching the container, which confirms that we are inside the container and viewing its own filesystem, as anticipated.

To further demonstrate the flexibility containers provide for installing packages, let’s install a new package using Ubuntu’s package manager. In general, images available on NGC are based on the Ubuntu operating system, which may differ from the OS installed on your workstation.

To install a package, run the following inside the container (note that you are running as root by default, so sudo is not required; in fact, sudo is not a valid command inside the container):

$ apt update
$ apt install -y htop

This installs the htop application, a commonly used process viewer for Linux. Running htop will verify that it was installed and is now usable inside the container. Press “q” on your keyboard to quit htop.

If htop was not installed on your workstation prior to running the container, it will remain uninstalled on the physical host (i.e., outside of the running container). This is intended behavior: it allows us to configure a tailored environment for a specific application inside the container without introducing dependency conflicts on the workstation. This is especially useful when working across different machines, which will very likely have different versions or combinations of packages installed, causing potential conflicts. Installing specific packages inside the container ensures all systems run the same setup regardless of what is installed on the physical host.
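
To see this isolation for yourself, open a second terminal on the host (outside the container) and try running htop there. Assuming it was not already installed on your workstation, the shell will report that the command cannot be found (the exact message varies by shell and distribution):

$ htop
htop: command not found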

On the flip side, since we are running the base container from NGC, we will need to install applications manually every time we launch the container. To prevent this, we can create a custom image that includes all of our packages, code, settings, and other updates to match exactly what we need. This allows us to launch a container without making manual changes inside it.
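
As a quick preview of the Dockerfile approach mentioned below, a minimal sketch that extends the NGC PyTorch image with htop pre-installed might look like the following (the tag custom_pytorch:dev used when building it is just a placeholder):

# Dockerfile (sketch): start from the NGC PyTorch image and pre-install htop
FROM nvcr.io/nvidia/pytorch:22.03-py3
RUN apt update && apt install -y htop

Building this file with docker build -t custom_pytorch:dev . would produce an image in which htop is already available in every container launched from it.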

If desired, a container that has been modified at runtime can be saved as a new image using the docker commit command, though it is recommended to use Dockerfiles (more on this next) so that there is a documented, reproducible way for others to rebuild the image. If you would like to commit a container as a new image, first find the container name with docker ps in a separate terminal.

$ docker ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                NAMES
bfee26ccb86e   nvcr.io/nvidia/pytorch:22.03-py3   "/opt/nvidia/nvidia_…"   4 seconds ago   Up 3 seconds   6006/tcp, 8888/tcp   elegant_goldstine

The container was randomly named “elegant_goldstine” by Docker. We will use this name in the next step, though yours will very likely be different.

To save the running container as an image, run the following while changing the names as needed:

$ docker commit elegant_goldstine custom_image:1.0.0
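
To confirm that the new image was created, run docker images again; you can then launch containers from custom_image:1.0.0 using the same docker run flags as before, with htop already installed:

$ docker images custom_image
$ docker run --rm -it --gpus all custom_image:1.0.0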

Once you are finished exploring the container, type exit to leave it.
