Containers For Deep Learning Frameworks User Guide
Abstract
This guide provides a detailed overview of containers and step-by-step instructions for pulling and running a container, as well as customizing and extending containers.
Containers encapsulate an application along with its libraries and other dependencies to provide reproducible and reliable execution of applications and services without the overhead of a full virtual machine.
As of Docker release 19.03, NVIDIA GPUs are natively supported as devices in the Docker runtime. This means that the special runtime provided by nvidia-docker2 is no longer necessary.
- Docker image
- A Docker image is simply the software (including the filesystem and parameters) that you run within a Docker container.
- Docker container
- A Docker container is an instance of a Docker image. A Docker container deploys a single application or service per container.
1.1. What Is A Docker Container?
A Docker container is a mechanism for bundling a Linux application with all of its libraries, data files, and environment variables so that the execution environment is always the same, on whatever Linux system it runs and between instances on the same host.
Unlike a VM, which has its own isolated kernel, containers use the host system kernel. Therefore, all kernel calls from the container are handled by the host system kernel. DGX™ systems use Docker containers as the mechanism for deploying deep learning frameworks.
A Docker container is composed of layers. The layers are combined to create the container. You can think of layers as intermediate images that add some capability to the overall container. If you make a change to a layer through a Dockerfile (see Building Containers), then Docker rebuilds that layer and all subsequent layers, but not the layers that are unaffected by the change. This reduces the time to create containers and also allows you to keep them modular.
Docker also keeps only one copy of each layer on a system. This saves space and greatly reduces the possibility of “version skew”, because layers that should be identical are not duplicated.
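For example, the following minimal Dockerfile (a sketch; the file name train.py is hypothetical) creates one layer per instruction. If you later change only the last instruction, Docker rebuilds just that layer and reuses the cached layers above it:
# Base layer, shared with any other image built on ubuntu:22.04
FROM ubuntu:22.04
# Layer containing the package database update and Python
RUN apt-get update && apt-get install -y python3
# Layer containing only the copied script; editing train.py rebuilds only this layer
COPY train.py /workspace/train.py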
A Docker container is the running instance of a Docker image.
1.2. Why Use A Container?
The key benefits of using containers include:
- You install your application, dependencies, and environment variables into the container image one time, rather than on each system you run on.
- There is no risk of conflict with libraries that are installed by others.
- Containers allow use of multiple different deep learning frameworks, which may have conflicting software dependencies, on the same server.
- After you build your application into a container, you can run it on lots of other places, especially servers, without having to install any software.
- Legacy accelerated compute applications can be containerized and deployed on newer systems, on premise, or in the cloud.
- Specific GPU resources can be allocated to a container for isolation and better performance.
- You can easily share, collaborate, and test applications across different environments.
- Multiple instances of a given deep learning framework can be run concurrently with each having one or more specific GPUs assigned.
- Containers can be used to resolve network-port conflicts between applications by mapping container-ports to specific externally-visible ports when launching the container.
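For example, the last item is handled at launch time with Docker's port-publishing option. The following sketch, using the document's placeholder image name and assuming a service listening on port 5000 inside the container, maps that port to port 8080 on the host so that several such containers can use different external ports:
$ docker run --gpus all -d -p 8080:5000 nvcr.io/nvidia/<repository>:<xx.xx>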
1.3. Hello World For Containers
To make sure you have access to the NVIDIA containers, start with the proverbial “hello world” of Docker commands.
For DGX systems, simply log into the system. See the Frameworks Support Matrix for the current list of DGX systems supported. For NGC, consult the NGC documentation for details about your specific cloud provider. In general, you will start a cloud instance with your cloud provider using the NVIDIA Deep Learning Image. After the instance has booted, log into the instance.
Next, you can issue the docker --version
command to check the version of Docker installed on the system. The output of this command tells you the version of Docker on the system (for example, 18.06.3-ce, build 89658be
).
At any time, if you are not sure about a Docker command, issue the docker --help
command.
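To confirm that your Docker installation can also see the GPUs, a common smoke test is to run nvidia-smi inside a GPU-enabled container. The following sketch assumes Docker 19.03 or later and uses a CUDA image that appears in the listing later in this guide; if the familiar nvidia-smi table prints, GPU support is working:
$ docker run --gpus all --rm nvcr.io/nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04 nvidia-smi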
1.4. Logging Into Docker
If you have a DGX system, the first time you log in, you are required to set up access to the NVIDIA NGC container registry (https://ngc.nvidia.com). For more information, see the NGC User Guide.
1.5. Listing Docker Images
Typically, one of the first things you will want to do is get a list of all the Docker images that are currently available on the local computer. Issuing a docker pull
command will download Docker images from the repository onto your local system.
Issue the docker images
command to list the images on the server. Your screen will look similar to the following:
REPOSITORY TAG IMAGE ID
mxnet-dec latest 65a48e11da96
<none> <none> bfc4512ca5f2
nvcr.io/nvidian_general/adlr_pytorch resumes a134a09668a8
<none> <none> 0f4ab6d62241
<none> <none> 97274da5c898
nvcr.io/nvidian_sas/games-libcchem cuda10 3dc13f8347f9
nvidia/cuda latest 614dcdafa05c
ubuntu latest d355ed3537e9
deeper_photo latest 932634514d5a
nvidia/cuda 10.0-devel-centos7 6e3e5b71176e
nvcr.io/nvidia/tensorflow 19.03 56f2980b1e37
nvidia/cuda 10.0-cudnn7-devel-ubuntu16.04 22afb0578249
nvidia/cuda 10.0-devel a760a0cfca82
mxnet/python gpu 7e7c9176319c
deep_photo latest ef4510510506
<none> <none> 9124236672fe
nvcr.io/nvidia/cuda 10.0-cudnn7-devel-ubuntu18.04 02910409eb5d
nvcr.io/nvidia/tensorflow 19.05 9dda0d5c344f
In this example, a few Docker images have been pulled down to this system. Each image is listed along with its tag and the corresponding image ID. There are two other columns that list when the image was created (approximately) and its approximate size in GB; those columns have been cropped here to improve readability.
The output from the command will vary.
At any time, if you need help, issue the docker images --help
command.
About this task
NVIDIA has also developed a set of container images that ensure the best performance for your applications.
NGC containers take full advantage of NVIDIA GPUs. For more information, see the NGC Catalog User Guide.
2.1. Docker Best Practices
You can run a Docker container on any platform that is Docker compatible, allowing you to move your application wherever you need it. The containers are platform agnostic, and therefore hardware agnostic as well. However, NVIDIA GPUs add some complexity: to take full advantage of their performance, specific kernel modules and user-level libraries are required.
One approach to solving this complexity is to install the NVIDIA drivers inside the container and map in the character devices corresponding to the NVIDIA GPUs, such as /dev/nvidia0
. For this to work, the driver on the host (the system that is running the container) must match the version of the driver installed in the container. This requirement drastically reduces the portability of the container.
2.2. docker exec
There are times when you will need to connect to a running container. You can use the docker exec
command to connect to a running container to run commands. You can use the bash
command to start an interactive command line terminal or bash shell.
$ docker exec -it <CONTAINER_ID_OR_NAME> bash
As an example, start a PyTorch container with the following command:
docker run --gpus all -d --name test-pyt \
-u $(id -u):$(id -g) -e HOME=$HOME -e USER=$USER -v $HOME:$HOME \
nvcr.io/nvidia/pytorch:24.05-py3
After the container is running, you can now connect to the container instance.
$ docker exec -it test-pyt bash
test-pyt
is the name of the container. If you don’t specifically name the container, you will have to use the container ID.
By using docker exec
, you can execute a snippet of code, a script, or attach interactively to the container making the docker exec
command very useful.
For detailed usage of the docker exec
command, see docker exec.
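Beyond opening an interactive shell, docker exec can run a single command and return. For example, the following sketches (using the test-pyt container started above) check GPU visibility and run a short Python snippet inside the running container:
$ docker exec test-pyt nvidia-smi
$ docker exec test-pyt python -c "import torch; print(torch.cuda.is_available())"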
2.3. nvcr.io
Building deep learning frameworks can be quite a bit of work and can be very time consuming. Moreover, these frameworks are being updated weekly, if not daily. On top of this, is the need to optimize and tune the frameworks for GPUs. NVIDIA has created a Docker repository, named nvcr.io
, where deep learning frameworks are tuned, optimized, tested, and containerized for your use.
NVIDIA creates an updated set of Docker containers for the frameworks monthly. Included in each container are the framework source (these are open-source frameworks), scripts for building the framework, Dockerfiles for creating containers based on these containers, markdown files that describe the specific container, and tools and scripts for pulling down datasets that can be used for testing or learning. Customers who purchase a DGX system have access to this repository for pushing containers (storing containers).
To get started with DGX systems, you need to create a system admin account for accessing nvcr.io
. This account should be treated as an admin account so that users cannot access it. Once this account is created, the system admin can create projects that belong to the account. They can then give users access to these projects so that the users can store or share any containers that they create.
2.4. Building Containers
About this task
You can build and store containers in the nvcr.io
registry as a project within your account if you have a DGX system (that is, no one else can access the container unless you give them access).
This section of the document applies to Docker containers in general. You can use this general approach for your own Docker repository as well, but be cautious of the details. Using a DGX system you can either:
- Create your container from scratch
- Base your container on an existing Docker container
- Base your container on containers in
nvcr.io
.
Any one of the three approaches is valid and will work. However, since the goal is to run the containers on a system that has GPUs, it's logical to assume that the applications will be using GPUs. Moreover, the containers in nvcr.io are already tuned for GPUs, and all of them include the needed GPU libraries, configuration files, and tools to rebuild the container.
Based on these assumptions, it's recommended that you start with a container from nvcr.io
.
An existing container in nvcr.io
should be used as a starting point. As an example, the TensorFlow 24.05 container will be used and Octave will be added to the container so that some post-processing of the results can be accomplished.
Procedure
- Pull the container from the NGC container registry to the server. See Pulling A Container.
- On the server, create a subdirectory called
mydocker
.Note:This is an arbitrary directory name.
- Inside this directory, create a file called
Dockerfile
(capitalization is important). This is the default name that Docker looks for when creating a container. The Dockerfile
should look similar to the following:
[username ~]$ mkdir mydocker
[username ~]$ cd mydocker
[username mydocker]$ vi Dockerfile
[username mydocker]$ more Dockerfile
FROM nvcr.io/nvidia/tensorflow:24.05-tf2-py3
RUN apt-get update
RUN apt-get install -y octave
[username mydocker]$
- The first line in the
Dockerfile
tells Docker to start with the containernvcr.io/nvidia/tensorflow:24.05-tf2-py3
. This is the base container for the new container. - The second line in the
Dockerfile
performs a package update for the container. It doesn’t update any of the applications in the container, but updates theapt-get
database. This is needed before we install new applications in the container. - The third and last line in the
Dockerfile
tells Docker to install the packageoctave
into the container usingapt-get
.
The Docker command to create the container is:
$ docker build -t nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave .
Note:This command uses the default file
Dockerfile
for creating the container.The command starts with
docker build
. The-t
option creates a tag for this new container. Notice that the tag specifies the project in thenvcr.io
repository where the container is to be stored. As an example, the projectnvidian_sas
was used along with the repositorynvcr.io
.Projects can be created by your local administrator who controls access to
nvcr.io
, or they can give you permission to create them. This is where you can store your specific containers and even share them with your colleagues.
[username mydocker]$ docker build -t nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM nvcr.io/nvidia/tensorflow:24.05-tf2-py3
---> 56f2980b1e37
Step 2/3 : RUN apt-get update
---> Running in 69cffa7bbadd
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:2 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu xenial InRelease [17.5 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:4 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu xenial/main amd64 Packages [7096 B]
Get:5 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [42.0 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [380 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:8 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.8 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [178 kB]
Get:10 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [2931 B]
Get:11 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
In the brief output from the
docker build …
command shown above, each line in the Dockerfile is a Step. In the screen capture, you can see the first and second steps (commands). Docker echoes these commands to standard output (stdout
) so you can watch what it is doing, or you can capture the output for documentation.
After the image is built, remember that we haven't stored the image in a repository yet; at this point it is only a local
docker image
. Docker prints out the image ID to stdout
at the very end. It also tells you whether you have successfully created and tagged the image.
If you don't see
Successfully ...
at the end of the output, examine your Dockerfile
for errors (perhaps try to simplify it) or try a very simple Dockerfile
to ensure that Docker is working properly.
- Verify that Docker successfully created the image.
$ docker images
[username mydocker]$ docker images
REPOSITORY TAG IMAGE ID CREATED
nvcr.io/nvidian_sas/tensorflow_octave 24.05_with_octave 67c448c6fe37 About a minute ago
nvcr.io/nvidian_general/adlr_pytorch resumes 17f2398a629e 47 hours ago
<none> <none> 0c0f174e3bbc 9 days ago
nvcr.io/nvidian_sas/pushed-hshin latest c026c5260844 9 days ago
<none> <none> a134a09668a8 2 weeks ago
<none> <none> 0f4ab6d62241 2 weeks ago
nvidia/cuda 10.0-cudnn7-devel-ubuntu16.04 a995cebf5782 2 weeks ago
keras_ae latest 92ab2bed8348 3 weeks ago
nvidia/cuda latest 614dcdafa05c 3 weeks ago
ubuntu latest d355ed3537e9 3 weeks ago
deeper_photo latest f4e395972368 4 weeks ago
<none> <none> 0e8208a5e440 4 weeks ago
nvcr.io/nvidia/tensorflow 19.03 56f2980b1e37 5 weeks ago
mxnet/python gpu 7e7c9176319c 6 weeks ago
deep_photo latest ef4510510506 7 weeks ago
<none> <none> 9124236672fe 8 weeks ago
nvcr.io/nvidia/cuda 10.0-cudnn7-devel-ubuntu18.04 02910409eb5d 8 weeks ago
nvcr.io/nvidia/tensorflow 19.03 9dda0d5c344f 2 months ago
nvcr.io/nvidia/tensorflow 19.03 121558cb5849 3 months ago
The very first entry is the new image (about 1 minute old).
- Push the image into the repository, creating a container.
docker push <name of image>
[username mydocker]$ docker push nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave
The push refers to a repository [nvcr.io/nvidian_sas/tensorflow_octave]
1b81f494d27d: Image successfully pushed
023cdba2c5b6: Image successfully pushed
8dd41145979c: Image successfully pushed
7cb16b9b8d56: Image already pushed, skipping
bd5775db0720: Image already pushed, skipping
bc0c86a33aa4: Image already pushed, skipping
cc73913099f7: Image already pushed, skipping
d49f214775fb: Image already pushed, skipping
5d6703088aa0: Image already pushed, skipping
7822424b3bee: Image already pushed, skipping
e999e9a30273: Image already pushed, skipping
e33eae9b4a84: Image already pushed, skipping
4a2ad165539f: Image already pushed, skipping
7efc092a9b04: Image already pushed, skipping
914009c26729: Image already pushed, skipping
4a7ea614f0c0: Image already pushed, skipping
550043e76f4a: Image already pushed, skipping
9327bc01581d: Image already pushed, skipping
6ceab726bc9c: Image already pushed, skipping
362a53cd605a: Image already pushed, skipping
4b74ed8a0e09: Image already pushed, skipping
1f926986fb96: Image already pushed, skipping
832ac06c43e0: Image already pushed, skipping
4c3abd56389f: Image already pushed, skipping
d8b353eb3025: Image already pushed, skipping
f2e85bc0b7b1: Image already pushed, skipping
fc9e1e5e38f7: Image already pushed, skipping
f39a3f9c4559: Image already pushed, skipping
6a8bf8c8edbd: Image already pushed, skipping
Pushing tag for rev [67c448c6fe37] on {https://nvcr.io/v1/repositories/nvidian_sas/tensorflow_octave}
The sample output above is from the
docker push …
command, which pushes the image to the repository, creating a container. At this point, you should log into the NGC container registry at https://ngc.nvidia.com and look under your project to see if the container is there.
If you don't see the container in your project, make sure that the tag on the image matches the location in the repository. If, for some reason, the push fails, try it again in case there was a communication issue between your system and the container registry (
nvcr.io
).
To make sure that the container is in the repository, we can pull it to the server and run it. As a test, first remove the image from your DGX system using the command
docker rmi …
. Then, pull the container down to the server using
docker pull …
and run it. If you launch
octave
inside the running container and the Octave prompt comes up, the package is installed and functioning correctly within the limits of this test.
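A minimal sketch of that verification loop, using the image name from this example, might look like the following; the last command should leave you at the Octave prompt:
$ docker rmi nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave
$ docker pull nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave
$ docker run --gpus all -it --rm nvcr.io/nvidian_sas/tensorflow_octave:24.05_with_octave octave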
2.5. Using And Mounting File Systems
One of the fundamental aspects of using Docker is mounting file systems inside the Docker container. These file systems can contain input data for the frameworks or even code to run in the container. Docker containers have their own internal file system that is separate from file systems on the rest of the host.
You can copy data into the container file system from outside if you want. However, it’s far easier to mount an outside file system into the container.
Mounting outside file systems is done using the docker run --gpus
command and the -v
option. For example, the following command mounts two file systems:
$ docker run --gpus all --rm -ti ... -v $HOME:$HOME \
-v /datasets:/digits_data:ro \
...
Most of the command has been omitted except for the volume options. This command mounts the user’s home directory from the external file system to the home directory in the container (-v $HOME:$HOME
). It also takes the /datasets
directory from the host and mounts it on /digits_data
inside the container (-v /datasets:/digits_data:ro
).
Because Docker runs with root privileges, you can mount almost anything from the host system to anywhere in the container.
For this particular command, the volume command takes the form of:
-v <External FS Path>:<Container FS Path>[:options] \
The first part of the option is the path for the external file system. To be sure this works correctly, it’s best to use the fully qualified path (FQP). This is also true for the mount point inside the container <Container FS Path>
.
After the container path, one or more options can be appended, each separated by a colon
. In the above example, the second file system is mounted read-only (ro
) inside the container. The various options for the -v
option are discussed here.
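A complete example, using the TensorFlow container from earlier in this guide and the same two mounts, might look like the following sketch:
$ docker run --gpus all -it --rm \
   -v $HOME:$HOME \
   -v /datasets:/digits_data:ro \
   nvcr.io/nvidia/tensorflow:24.05-tf2-py3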
The DGX™ systems and the Docker containers use the Overlay2 storage driver to mount external file systems onto the container file system. Overlay2 is a union-mount file system driver that allows you to combine multiple file systems so that all the content appears to be combined into a single file system. It creates a union of the file systems rather than an intersection.
About this task
Before you can pull a container from the NGC container registry, you must have Docker installed. For DGX users, this is explained in Preparing to use NVIDIA Containers.
For users other than DGX, follow the NGC User Guide. There are four repositories where you can find the NGC docker containers.
-
nvcr.io/nvidia
-
The deep learning framework containers are stored in the
nvcr.io/nvidia
repository. -
nvcr.io/hpc
-
The HPC containers are stored in the
nvcr.io/hpc
repository. -
nvcr.io/nvidia-hpcvis
-
The HPC visualization containers are stored in the
nvcr.io/nvidia-hpcvis
repository. -
nvcr.io/partner
-
The partner containers are stored in the
nvcr.io/partner
repository. Currently the partner containers are focused on Deep Learning or Machine Learning, but that doesn’t mean they are limited to those types of containers.
3.1. Key Concepts
To issue the pull and run commands, ensure that you are familiar with the following concepts. A pull command looks similar to:
docker pull nvcr.io/nvidia/tensorflow:24.05-tf2-py3
A run command looks similar to:
docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorflow:<xx.xx>-tf2-py3
The base command docker run --gpus all
assumes that your system has Docker 19.03-CE and the NVIDIA runtime packages installed. See the section Enabling GPU Support For NGC Containers for the command to use for earlier versions of Docker.
The following concepts describe the separate attributes that make up both commands.
-
nvcr.io
-
The name of the container registry, which for the NGC container registry is
nvcr.io
. -
nvidia
-
The name of the space within the registry that contains the deep learning container. For containers provided by NVIDIA, the registry space is
nvidia
. -
-it
- You want to run the container in interactive mode.
-
--rm
- You want to delete the container when finished.
-
-v
- You want to mount the directory.
-
local_dir
-
The directory or file from your host system (absolute path) that you want to access from inside your container. For example, the
local_dir
in the following path is/home/jsmith/data/mnist
.-v /home/jsmith/data/mnist:/data/mnist
If you are inside the container, for example, using the command
ls /data/mnist
, you will see the same files as if you issued the ls /home/jsmith/data/mnist
command from outside the container. -
container_dir
-
The target directory when you are inside your container. For example,
/data/mnist
is the target directory in the example:-v /home/jsmith/data/mnist:/data/mnist
-
<xx.xx>
-
The container version. For example,
24.05
. -
py<x>
-
The Python version. For example,
py3
.
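As a concrete illustration of these attributes, the following sketch uses the example MNIST paths above with the 24.05 TensorFlow container; it runs interactively, removes itself on exit, and mounts the MNIST data directory:
$ docker run --gpus all -it --rm -v /home/jsmith/data/mnist:/data/mnist nvcr.io/nvidia/tensorflow:24.05-tf2-py3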
3.2. Accessing And Pulling From The NGC container registry
Before you begin
Follow User Guides on the NGC site to be able to access NGC software including obtaining the NGC API key. Ensure you are logged in to your client computer with the privileges required to run Docker containers. After you get access to NGC, you can access the NGC container registry from one of two ways:
- Pulling A Container From The NGC container registry Using The Docker CLI
- Pulling A Container Using The NGC Web Interface
About this task
A Docker registry is the service that stores Docker images. The service can be on the internet, on the company intranet, or on a local machine. For example, nvcr.io
is the location of the NGC container registry for Docker images.
All nvcr.io
Docker images use explicit container version tags to avoid tagging issues that can result from using the latest tag. For example, a locally tagged “latest” version of an image may actually override a different “latest” version in the registry.
Procedure
- Log in to the NGC container registry.
$ docker login nvcr.io
- When prompted for your username, enter the following text:
$oauthtoken
$oauthtoken
is a special user name that indicates that you will authenticate with an API key and not a username and password.
Username: $oauthtoken
Password: k7cqFTUvKKdiwGsPnWnyQFYGnlAlsCIRmlP67Qxa
Tip:When you get your API key, copy it to the clipboard so that you can paste the API key into the command shell when you are prompted for your password. Also, be sure to store it somewhere safe because it’s possible you may need it later.
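If you script your logins, you can also supply the API key on stdin instead of typing it at the prompt. The following sketch assumes the key has been exported in an environment variable named NGC_API_KEY (a name chosen here for illustration); the single quotes prevent the shell from expanding $oauthtoken:
$ echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin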
3.2.1. Pulling A Container From The NGC container registry Using The Docker CLI
Before you begin
Before pulling a container, ensure that the following prerequisites are met:
- You have read access to the registry space that contains the container.
- You are logged into the NGC container registry as explained in Accessing And Pulling From The NGC container registry and you have your API key stored somewhere safe that is also accessible.
- Your account is a member of the
docker
group, which enables you to use Docker commands.
To browse the available containers in the NGC container registry use a web browser to log into your NGC container registry account on the NGC website.
Procedure
- Pull the container that you want from the registry. For example, to pull the PyTorch™ 21.02 container:
$ docker pull nvcr.io/nvidia/pytorch:21.02-py3
- List the Docker images on your system to confirm that the container was pulled.
$ docker images
What to do next
After pulling a container, you can run jobs in the container to run scientific workloads, train neural networks, deploy deep learning models, or perform AI analytics.
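For example, a quick interactive test of the container you just pulled (substitute whatever tag you pulled) is:
$ docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:21.02-py3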
3.2.2. Pulling A Container Using The NGC Web Interface
Before you begin
Before you can pull a container from the NGC container registry, you must have Docker and nvidia-docker2 installed as explained in Preparing To Use NVIDIA Containers. You must also have access to the NGC container registry as explained in NGC User Guides.
About this task
- You have a cloud instance system and it is connected to the Internet.
- Your instance has Docker and nvidia-docker2 installed.
- You have access to a browser to the NGC container registry at https://ngc.nvidia.com and your NGC account is activated.
- You want to pull a container onto your cloud instance.
Procedure
- Log into the NGC container registry at https://ngc.nvidia.com.
- Click Registry in the left navigation. Browse the NGC container registry page to determine which Docker repositories and tags are available to you.
- Click one of the repositories to view information about that container image as well as the available tags that you will use when running the container.
- In the Pull column, click the icon to copy the Docker pull command.
- Open a command prompt and paste the Docker pull command. The pulling of the container image begins. Ensure the pull completes successfully.
- After you have the Docker container file on your local system, load the container into your local Docker registry.
- Verify that the image is loaded into your local Docker registry.
$ docker images
For more information pertaining to your specific container, refer to the
/workspace/README.md
file inside the container.
3.3. Verifying
After a Docker container is running, you can verify it by using the Docker equivalent of the classic *nix
ps
command
. For example, issue the $ docker ps -a
command.
[username ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
12a4854ba738 nvcr.io/nvidia/tensorflow:21.02 "/usr/local/bin/nv..." 35 seconds ago
Without the -a
option, only running instances are listed.
It is best to include the -a
option in case there are hung jobs running or other performance problems.
You can also stop a running container if you want. For example:
[username ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND PORTS NAMES
12a4854ba738 nvcr.io/nvidia/tensorflow:21.02 "/usr/local/bin/nv..." 6006/tcp brave_neumann
[username ~]$
[username ~]$ docker stop 12a4854ba738
12a4854ba738
[username ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED NAMES
Notice that you need the Container ID of the image you want to stop. This can be found using the $ docker ps -a
command.
Another useful command or Docker option is to remove the image from the server. Removing or deleting the image saves space on the server. For example, issue the following command:
$ docker rmi nvcr.io/nvidia/tensorflow:21.02
If you list the images, $ docker images
, on the server, then you will see that the image is no longer there.
NGC containers are hosted in a repository called nvcr.io
. As you read in the previous section, these containers can be “pulled” from the repository and used for GPU accelerated applications such as scientific workloads, visualization, and deep learning.
A Docker image is simply a file-system that a developer builds. Each layer depends on the layer below it in the stack.
From a Docker image, a container is created when the image is “run” or instantiated. When creating a container, you add a writable layer on top of the stack. A Docker image with a writable container layer added to it is a container. A container is simply a running instance of that image. All changes and modifications made to the container are made to the writable layer. You can delete the container; however, the Docker image remains untouched.
Figure 1 depicts the stack for the DGX family of systems. Notice that the NVIDIA Container Toolkit sits above the host OS and the NVIDIA Drivers. The tools are used to create, manage, and use NVIDIA containers - these are the layers above the nvidia-docker
layer. These containers have applications, deep learning SDKs, and the CUDA Toolkit. The NVIDIA containerization tools take care of mounting the appropriate NVIDIA Drivers.
Figure 1. Docker containers encapsulate application dependencies to provide reproducible and reliable execution. The nvidia-docker utility mounts the user mode components of the NVIDIA driver and the GPUs into the Docker container at launch.
4.1. NGC Images Versions
Each release of a Docker image is identified by a version “tag”. For simpler images this version tag usually contains the version of the major software package in the image. More complex images which contain multiple software packages or versions may use a separate version solely representing the containerized software configuration. One common scheme is using tags defined by the year and month of the image release. For example, the 21.02 release of an image was released in February, 2021. A complete image name consists of two parts separated by a colon. The first part is the name of the container in the repository and the second part is the “tag” associated with the container. These two pieces of information are shown in Figure 2, which is the output from issuing the docker images command.
Figure 2. Output from docker images
command
Figure 2 shows simple examples of image names, such as:
-
nvidia/cuda:8.0-devel
-
ubuntu:latest
-
nvcr.io/nvidia/tensorflow:21.01
If you choose not to add a tag to an image, by default the word “latest” is added as the tag; however, all NGC containers have an explicit version tag.
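For example, both of the following pull commands are valid; the first implicitly pulls ubuntu:latest, while the second pins a specific NGC release tag used elsewhere in this guide:
$ docker pull ubuntu
$ docker pull nvcr.io/nvidia/tensorflow:24.05-tf2-py3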
In the next sections, you will use these image names for running containers. Later in the document there is also a section on creating your own containers or customizing and extending existing containers.
Before you begin
Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in this chapter, specifying the registry, repository, and tags.
5.1. Enabling GPU Support For NGC Containers
On a system with GPU support for NGC containers, the following occurs when running a container:
- The Docker engine loads the image into a container which runs the software.
- You define the runtime resources of the container by including additional flags and settings that are used with the command. These flags and settings are described in the following sections.
- The GPUs are explicitly defined for the Docker container (the default is all GPUs; specific GPUs can be selected using the
--gpus
option or, with nvidia-docker, the
NV_GPU
environment variable).
The method implemented in your system depends on the DGX OS version installed (for DGX systems), the specific NGC Cloud Image provided by a Cloud Service Provider, or the software that you have installed in preparation for running NGC containers on TITAN PCs, Quadro PCs, or vGPUs. Refer to the following table to assist in determining which method is implemented in your system.
Each method is invoked by using specific Docker commands, described as follows.
Using Native GPU support
If Docker is updated to 19.03 on a system which already has nvidia-docker
or nvidia-docker2
installed, then the corresponding methods can still be used.
- To use the native support on a new installation of Docker, first enable the new GPU support in Docker.
$ sudo apt-get install -y docker nvidia-container-toolkit
Note: This step is not needed if you already have
nvidia-docker2
installed. The native support will be enabled automatically.
docker run --gpus
to run GPU-enabled containers.- Example using all GPUs:
$ docker run --gpus all ...
- Example using two GPUs:
$ docker run --gpus 2 ...
- Examples using specific GPUs:
$ docker run --gpus "device=1,2" ...
$ docker run --gpus "device=UUID-ABCDEF,1" ...
- Example using all GPUs:
Example: Running A Container
Procedure
- As a user, run the container interactively.
$ docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/<repository>:<xx.xx>
Note:The base command
docker run --gpus all
assumes that your system has Docker 19.03-CE installed. See the section Enabling GPU Support For NGC Containers for the command to use for earlier versions of Docker.
$ docker run --gpus all --rm -ti nvcr.io/nvidia/pytorch:21.02-py3

==================
== NVIDIA PyTorch ==
==================

NVIDIA Release 21.02 (build 11032)

Container image Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
Copyright (c) 2014 - 2019, The Regents of the University of California (Regents)
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.

root@df57eb8e0100:/workspace#
- From within the container, start the job that you want to run. The precise command to run depends on the deep learning framework in the container that you are running and the job that you want to run. For details see the
/workspace/README.md
file for the container. For example, the following command, run from inside the PyTorch container, is a minimal check that the framework can see a GPU; it is shown here in place of a full training job because the precise command depends on the framework and the job that you want to run:
# python -c "import torch; print(torch.cuda.is_available())"
- Optional: Run the February 2021 release (21.02) of the same NVIDIA PyTorch container but in non-interactive mode.
% docker run --gpus all --rm -v local_dir:container_dir nvcr.io/nvidia/pytorch:21.02-py3 <command>
5.3. Specifying A User
Unless otherwise specified, the user inside the container is the root user. When running within the container, files created on the host operating system or network volumes can be accessed by the root user. This is unacceptable for some users and they will want to set the ID of the user in the container. For example, to set the user in the container to be the currently running user, issue the following:
% docker run --gpus all -ti --rm -u $(id -u):$(id -g) nvcr.io/nvidia/<repository>:<container version>
Typically, this results in warnings due to the fact that the specified user and group do not exist in the container. You might see a message similar to the following:
groups: cannot find name for group ID 1000
I have no name!@c177b61e5a93:/workspace$
The warning can usually be ignored.
5.4. Setting The Remove Flag
By default, Docker containers remain on the system after being run. Repeated pull or run operations use up more and more space on the local disk, even after exiting the container. Therefore, it is important to clean up the containers after exiting.
Do not use the --rm
flag if you have made changes to the container that you want to save, or if you want to access job logs after the run finishes.
To automatically remove a container when exiting, add the --rm
flag to the run command.
% docker run --gpus all --rm nvcr.io/nvidia/<repository>:<container version>
5.5. Setting The Interactive Flag
By default, containers run in batch mode; that is, the container is run once and then exited without any user interaction. Containers can also be run in interactive mode as a service.
To run in interactive mode, add the -ti
flag to the run command.
% docker run --gpus all -ti --rm nvcr.io/nvidia/<repository>:<container version>
5.6. Setting The Volumes Flag
There are no datasets included with the containers, therefore, if you want to use data sets, you need to mount volumes into the container from the host operating system. For more information, see Manage data in containers.
Typically, you would use either Docker volumes or host data volumes. The primary difference between host data volumes and Docker volumes is that Docker volumes are private to Docker and can only be shared amongst Docker containers. Docker volumes are not visible from the host operating system, and Docker manages the data storage. Host data volumes are any directory that is available from the host operating system. This can be your local disk or network volumes.
- Example 1
-
Mount a directory
/raid/imagedata
on the host operating system as/images
in the container.% docker run --gpus all -ti --rm -v /raid/imagedata:/images nvcr.io/nvidia/<repository>:<container version>
- Example 2
-
Mount a local docker volume named
data
(must be created if not already present) in the container as/imagedata
.
% docker run --gpus all -ti --rm -v data:/imagedata nvcr.io/nvidia/<repository>:<container version>
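If the named volume used in Example 2 does not exist yet, create it first and confirm it is present; a minimal sketch:
$ docker volume create data
$ docker volume ls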
5.7. Setting The Mapping Ports Flag
Applications such as Deep Learning GPU Training System™ (DIGITS) open a port for communications. You can control whether that port is open only on the local system or is available to other computers on the network outside of the local system. Using DIGITS as an example: in DIGITS 5.0, starting in container image 16.12, the DIGITS server listens on port 5000 by default. However, after the container is started, you may not easily know the IP address of that container. To make the port reachable, you can choose one of the following approaches:
- Expose the port using the local system network stack (
--net=host
) where port 5000 of the container is made available as port 5000 of the local system.
or
- Map the port (
-p 8080:5000
) where port 5000 of the container is made available as port 8080 of the local system.
In either case, users outside the local system see no indication that DIGITS is running in a container. Without publishing the port, the port is still accessible from the host, but not from outside.
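As a sketch, the two approaches look like the following (the image name is the usual placeholder):
$ docker run --gpus all -d --net=host nvcr.io/nvidia/<repository>:<container version>
$ docker run --gpus all -d -p 8080:5000 nvcr.io/nvidia/<repository>:<container version>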
5.8. Setting The Shared Memory Flag
Certain applications, such as PyTorch™ , use shared memory buffers to communicate between processes. Shared memory can also be required by single process applications, such as NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet™ and TensorFlow™ , which use the NVIDIA® Collective Communications Library ™ (NCCL).
By default, Docker containers are allotted 64MB of shared memory. This can be insufficient, particularly when using all 8 GPUs. To increase the shared memory limit to a specified size, for example 1GB
, include the --shm-size=1g
flag in your docker run
command.
Alternatively, you can specify the --ipc=host
flag to re-use the host’s shared memory space inside the container. Note, however, that this approach has security implications, as any data in shared memory buffers could be visible to other containers.
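For example, either of the following sketches (using the placeholder image name) relaxes the default 64MB limit:
$ docker run --gpus all -it --rm --shm-size=1g nvcr.io/nvidia/<repository>:<container version>
$ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/<repository>:<container version>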
5.9. Setting The Restricting Exposure Of GPUs Flag
From inside the container, the scripts and software are written to take advantage of all available GPUs. To coordinate the usage of GPUs at a higher level, you can use this flag to restrict the exposure of GPUs from the host to the container. For example, if you only want GPU 0 and GPU 1 to be seen in the container, you would issue the following:
Using native GPU support
$ docker run --gpus "device=0,1" ...
Using nvidia-docker2
$ NV_GPU=0,1 docker run --runtime=nvidia ...
Using nvidia-docker
$ NV_GPU=0,1 nvidia-docker run ...
When used with nvidia-docker, this creates a temporary environment variable that restricts which GPUs are used.
Specified GPUs are defined per container using the Docker device-mapping feature, which is currently based on Linux cgroups
.
5.10. Container Lifetime
The state of an exited container is preserved indefinitely if you do not pass the --rm
flag to the docker run --gpus
command. You can list all of the saved exited containers and their size on the disk with the following command:
$ docker ps --all --size --filter Status=exited
The container size on the disk depends on the files created during the container execution, therefore the exited containers take only a small amount of disk space. You can permanently remove an exited container by issuing:
docker rm [CONTAINER ID]
By saving the state of containers after they have exited, you can still interact with them using the standard Docker commands. For example:
- You can examine logs from a past execution by issuing the
docker logs
command.$ docker logs 9489d47a054e
- You can extract the files using the
docker cp
command.$ docker cp 9489d47a054e:/log.txt .
- You can restart a stopped container using the
docker restart
command.$ docker restart <container name>
$ docker restart pytorch
- You can save your changes by creating a new image using the
docker commit
command. For more information, see Example 3: Customizing a Container using docker commit.
Note: Use care when committing Docker container changes, as data files created during use of the container will be added to the resulting image. In particular, core dump files and logs can dramatically increase the size of the resulting image.
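To reclaim disk space in bulk, you can remove all exited containers at once; the following sketch lists them first so you can confirm nothing you still need will be deleted:
$ docker ps --all --filter Status=exited
$ docker container prune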
The NVIDIA Deep Learning Software Developer Kit (SDK) contains everything that is on the NVIDIA registry area for DGX systems; including CUDA Toolkit, DIGITS and all of the deep learning frameworks. The NVIDIA Deep Learning SDK accelerates widely-used deep learning frameworks such as NVIDIA Optimized Deep Learning Framework, powered by Apache MXNet, PyTorch, and TensorFlow.
Starting in the 18.09 container release, the Caffe2, Microsoft Cognitive Toolkit, Theano™ , and Torch™ frameworks are no longer provided within a container image.
The software stack provides containerized versions of these frameworks optimized for the system. These frameworks, including all necessary dependencies, are pre-built, tested, tuned, and ready to run. For users who need more flexibility to build custom deep learning solutions, each framework container image also includes the framework source code to enable custom modifications and enhancements, along with the complete software development stack.
The design of the platform software is centered around a minimal OS and driver installed on the server, and provisioning of all application and SDK software in the containers through the NGC container registry for DGX systems.
All NGC container images are based on the platform layer (nvcr.io/nvidia/cuda
). This image provides a containerized version of the software development stack underpinning all other NGC containers, and is available for users who need more flexibility to build containers with custom applications.
6.1. OS Layer
Within the software stack, the lowest layer (or base layer) is the user space of the OS. The software in this layer includes all of the security patches that are available within the month of the release.
6.2. CUDA Layer
CUDA® is a parallel computing platform and programming model created by NVIDIA to give application developers access to the massive parallel processing capability of GPUs. CUDA is the foundation for GPU acceleration of deep learning as well as a wide range of other computation and memory-intensive applications ranging from astronomy to molecular dynamics simulation, to computational finance. For more information about CUDA, see the CUDA documentation.
6.2.1. CUDA Runtime
The CUDA runtime layer provides the components needed to execute CUDA applications in the deployment environment. The CUDA runtime is packaged with the CUDA Toolkit and includes all of the shared libraries, but none of the CUDA compiler components.
6.2.2. CUDA Toolkit
The CUDA Toolkit provides a development environment for developing optimized GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize, and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, and the cloud. The CUDA Toolkit includes libraries, tools for debugging and optimization, a compiler, and a runtime library to deploy your application. The following library provides GPU-accelerated linear algebra building blocks that are central to deep neural network computations:
- CUDA® Basic Linear Algebra Subroutines library™ (cuBLAS)
- cuBLAS is a GPU-accelerated version of the complete standard BLAS library that delivers significant speedup running on GPUs. The cuBLAS generalized matrix-matrix multiplication (GEMM) routine is a key computation used in deep neural networks, for example in computing fully connected layers. For more information about cuBLAS, see the cuBLAS documentation.
6.3. Deep Learning Libraries Layer
The following libraries are critical to deep learning on NVIDIA GPUs. These libraries are a part of the NVIDIA Deep Learning Software Development Kit (SDK).
6.3.1. NCCL
The NVIDIA® Collective Communications Library™ (NCCL, pronounced “Nickel”) is a library of multi-GPU collective communication primitives that are topology-aware and can be easily integrated into applications. Collective communication algorithms employ many processors working in concert to aggregate data. NCCL is not a full-blown parallel programming framework; rather, it is a library focused on accelerating collective communication primitives. The following collective operations are currently supported:
-
AllReduce
-
Broadcast
-
Reduce
-
AllGather
-
ReduceScatter
Tight synchronization between communicating processors is a key aspect of collective communication. CUDA based collectives would traditionally be realized through a combination of CUDA memory copy operations and CUDA kernels for local reductions. NCCL, on the other hand, implements each collective in a single kernel handling both communication and computation operations. This allows for fast synchronization and minimizes the resources needed to reach peak bandwidth.
NCCL conveniently removes the need for developers to optimize their applications for specific machines. NCCL provides fast collectives over multiple GPUs both within and across nodes. It supports a variety of interconnect technologies including PCIe, NVLink™ , InfiniBand Verbs, and IP sockets. NCCL also automatically patterns its communication strategy to match the system’s underlying GPU interconnect topology. Next to performance, ease of programming was the primary consideration in the design of NCCL. NCCL uses a simple C API, which can be easily accessed from a variety of programming languages. NCCL closely follows the popular collectives API defined by MPI (Message Passing Interface). Anyone familiar with MPI will thus find NCCL’s API very natural to use. In a minor departure from MPI, NCCL collectives take a “stream” argument which provides direct integration with the CUDA programming model. Finally, NCCL is compatible with virtually any multi-GPU parallelization model, for example:
- single-threaded
- multi-threaded, for example, using one thread per GPU
- multi-process, for example, MPI combined with multi-threaded operation on GPUs
NCCL has found great application in deep learning frameworks, where the AllReduce
collective is heavily used for neural network training. Efficient scaling of neural network training is possible with the multi-GPU and multi node communication provided by NCCL.
For more information about NCCL, see the NCCL documentation.
6.3.2. cuDNN Layer
The CUDA® Deep Neural Network library™ (cuDNN) provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
Frameworks do not all progress at the same rate, and the lack of backward compatibility within the cuDNN library ties each framework to a particular cuDNN version. This means that there are multiple CUDA and cuDNN containers available, but each has its own tag, which the framework needs to specify in its Dockerfile.
For more information about cuDNN, see the cuDNN documentation.
6.4. Framework Containers
The framework layer includes all of the requirements for the specific deep learning framework. The primary goal of this layer is to provide a basic working framework. The frameworks can be further customized by a Platform Container layer specification. Within the frameworks layer, you can choose to:
- Run a framework exactly as delivered by NVIDIA; in which case, the framework is built and ready to run inside that container image.
- Start with the framework as delivered by NVIDIA and modify it a bit; in which case, you can start from NVIDIA’s container image, apply your modifications and recompile it inside the container.
- Start from scratch and build whatever application you want on top of the CUDA and cuDNN and NCCL layer that NVIDIA provides.
In the next section, the NVIDIA deep learning framework containers are presented.
For more information about frameworks, see the frameworks documentation.
A deep learning framework is part of a software stack that consists of several layers. Each layer depends on the layer below it in the stack. This software architecture has many advantages:
- Because each deep learning framework is in a separate container, each framework can use different versions of libraries such as the C standard library known as
libc
, cuDNN, and others, and not interfere with each other. - A key reason for having layered containers is that one can target the experience for what the user requires.
- As deep learning frameworks are improved for performance or bug fixes, new versions of the containers are made available in the registry.
- The system is easy to maintain, and the OS image stays clean since applications are not installed directly on the OS.
- Security updates, driver updates and OS patches can be delivered seamlessly.
The following sections present the framework containers that are in nvcr.io
.
7.1. Why Use a Deep Learning Software Framework?
Frameworks have been created to make researching and applying deep learning more accessible and efficient. The key benefits of using frameworks include:
- Frameworks provide highly optimized GPU enabled code specific to the computations required for training Deep Neural Networks (DNN).
- NVIDIA frameworks are tuned and tested for the best possible GPU performance.
- Frameworks provide access to code through simple command line or scripting language interfaces such as Python.
- Many powerful DNNs can be trained and deployed using these frameworks without ever having to write any GPU or complex compiled code but while still benefiting from the training speed-up afforded by GPU acceleration.
7.2. Kaldi
The Kaldi Speech Recognition Toolkit project began in 2009 at Johns Hopkins University with the intent of developing techniques to reduce both the cost and time required to build speech recognition systems. While originally focused on ASR support for new languages and domains, the Kaldi project has steadily grown in size and capabilities, enabling hundreds of researchers to participate in advancing the field. Now the de-facto speech recognition toolkit in the community, Kaldi helps to enable speech services used by millions of people every day.
For information about the optimizations and changes that have been made to Kaldi, see the Deep Learning Frameworks Kaldi Release Notes.
7.3. TensorFlow
TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.
TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.
For visualizing TensorFlow results, this particular Docker image also contains TensorBoard. TensorBoard is a suite of visualization tools. For example, you can view the training histories as well as what the model looks like.
For information about the optimizations and changes that have been made to TensorFlow, see the Deep Learning Frameworks Release Notes.
Running The TensorFlow Container
An efficient way to run TensorFlow on the GPU system involves setting up a launcher script to run the code using a TensorFlow Docker container. For an example of how to run CIFAR-10 on multiple GPUs on a system using cifar10_multi_gpu_train.py
, see TensorFlow models.
If you prefer to use a script for running TensorFlow, see the run_tf_cifar10.sh
script in the run_tf_cifar10.sh section. It is a bash script that you can run on a system. It assumes you have pulled the Docker container from the nvcr.io
repository to the system. It also assumes you have the CIFAR-10 data stored in /datasets/cifar
on the system and are mapping it to /datasets/cifar
in the container. You can also pass arguments to the script such as the following:
$ ./run_tf_cifar10.sh --data_dir=/datasets/cifar --num_gpus=8
The details of the run_tf_cifar10.sh
script parameterization are explained in the Keras section of this document. You can modify the /datasets/cifar
path in the script to point to the site-specific location of the CIFAR data. If the CIFAR-10 dataset for TensorFlow is not available, then run the example with a writable volume -v /datasets/cifar:/datasets/cifar
(without ro
) and the data will be downloaded on the first run.
If you want to parallelize the CIFAR-10 training, basic data-parallelization for TensorFlow via Keras can be done as well. Refer to the example cifar10_cnn_mgpu.py
on GitHub.
A description of orchestrating a Python script with Docker containers is described in the run_tf_cifar10.sh script.
7.4. PyTorch
PyTorch is a GPU-accelerated tensor computation framework with a Python front end. PyTorch is designed to be deeply integrated with Python. It is used naturally as you would use NumPy, SciPy and scikit-learn, or any other Python extension. You can even write the neural network layers in Python using libraries such as Cython and Numba. Acceleration libraries such as NVIDIA’s cuDNN and NCCL, along with Intel’s MKL, are included to maximize performance.
PyTorch also includes standard defined neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead of being enqueued in a static graph, improving ease of use and enabling a sophisticated debugging experience.
For information about the optimizations and changes that have been made to PyTorch, see the Deep Learning Frameworks PyTorch Release Notes.
As part of DGX systems, NVIDIA makes available tuned, optimized, tested, and ready to run Docker containers for the major deep learning frameworks. These containers are made available via the NGC container registry, nvcr.io
, so that you can use them directly or use them as a basis for creating your own containers.
This section presents tips for efficiently using these frameworks. For best practices regarding how to use Docker, see Docker And Container Best Practices. To get started with NVIDIA containers, see Preparing To Use NVIDIA Containers.
8.1. Extending Containers
There are a few general best practices around the containers (the frameworks) in nvcr.io
. As mentioned earlier, it’s possible to use one of the containers and build upon it (extend it). By doing this, you are in a sense fixing the new container to a specific framework and container version. This approach works well if you are creating a derivative of a framework or adding some capability that doesn’t exist in the framework or container.
However, if you extend a framework, understand that in a few months' time the framework will likely have changed. This is due to the speed of development of deep learning and deep learning frameworks. By extending a specific framework, you have locked the extensions into that particular version of the framework. As the framework evolves, you will have to add your extensions to these new versions, increasing your workload. If possible, it is highly recommended to not tie the extensions to a specific container but to keep them outside. If the extensions are invasive, then it is recommended to discuss the patches with the framework team for inclusion.
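For instance, if the extensions are Python code, one possible pattern (a sketch, with placeholder paths and image name) is to keep them on the host and mount them into the container at runtime:
$ docker run --gpus all --rm -ti \
    -v $HOME/my_extensions:/opt/my_extensions \
    -e PYTHONPATH=/opt/my_extensions \
    nvcr.io/nvidia/<framework>:<xx.xx>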
8.2. Datasets And Containers
You might be tempted to extend a container by putting a dataset into it. But once again, you are now fixing that container to a specific version. If you move to a new version of a framework, or to a new framework entirely, you will have to copy the data into it. This makes keeping up with the fast-paced development of frameworks very difficult.
A best practice is to not put datasets in a container. If possible, also avoid storing business-logic code in a container. Storing datasets or business-logic code within a container makes it difficult to generalize the usage of the container.
Instead, you can mount file systems into the container that contain only the desired datasets and the directories of business-logic code to run. Decoupling the container from specific datasets and business logic enables you to easily change containers, such as the framework or the version of a container, without having to rebuild the container to hold the data or code.
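For example, a generic invocation along these lines keeps the dataset and the business-logic code on the host; the paths and image name below are placeholders rather than a prescribed layout:
$ docker run --gpus all --rm -ti \
    -v /raid/datasets:/datasets:ro \
    -v $HOME/projects/my_model:/code -w /code \
    nvcr.io/nvidia/tensorflow:<xx.xx> \
    python train.py --data_dir=/datasets/my_dataset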
The subsequent sections briefly present some best practices around the major frameworks that are in containers on the container registry (nvcr.io
). There is also a section that discusses how to use Keras, a very popular high-level abstraction of deep learning frameworks, with some of the containers.
8.3. Working With Containerized VNC Desktop Environment
The need for a containerized desktop varies depending on the data center setup. If the systems are set up behind a login node or a head node for an on-premise system, data centers typically provide a VNC login node or run X Windows on the login node to facilitate running visual tools such as text editors or an IDE (integrated development environment).
For a cloud based system (NGC), there may already be firewalls and security rules available. In this case, you may want to ensure that the proper ports are open for VNC or something similar.
If the system serves as the primary resource for both development and computing, then it is possible to setup a desktop-like environment on it via containerized desktop. The instructions and Dockerfile
for this can be found here.
You can download the latest release of the container to the system. The next step is to modify the Dockerfile
by changing the FROM
field to be:
FROM nvcr.io/nvidia/cuda:11.0-cudnn6-devel-ubuntu20.04
This is not a container officially supported by the NVIDIA DGX product team; in other words, it is not available on nvcr.io and is provided only as an example of how to set up a desktop-like environment on a system for convenient development with eclipse or sublime-text (as a suggestion, try Visual Studio Code, which is similar to Sublime Text but free) or any other GUI-driven tool.
The build_run_dgxdesk.sh
example script is available on the GitHub site to build and run a containerized desktop as shown in the Scripts section. Other systems such as the DGX Station and NGC would follow a similar process.
To connect to the system, you can download a VNC client for your system from RealVnc, or use a web-browser.
=> connect via VNC viewer hostip:5901, default password: vncpassword
=> connect via noVNC HTML5 client: http://hostip:6901/?password=vncpassword
HPC Visualization Containers
In addition to accessing the NVIDIA optimized frameworks and HPC containers, the NGC container registry also hosts scientific visualization containers for HPC. These containers rely on the popular scientific visualization tool called ParaView. Visualization in an HPC environment typically requires remote visualization, that is, data resides and is processed on a remote HPC system or in the cloud, and the user graphically interacts with this application from their workstation. As some visualization containers require specialized client applications, the HPC visualization containers consist of two components:
- Server container
- The server container needs access to the files on your server system. Details on how to grant this access are provided below. The server container can run in either serial or parallel mode. For this alpha release, we are focusing on the serial node configuration. If you are interested in the parallel configuration, contact hpcviscontainer@nvidia.com.
- Client container
- To ensure matching versions of the client application and the server container, NVIDIA provides the client application in a container. Similarly to the server container, the client container needs access to certain ports to establish a connection with the server container.
In addition, the client container needs access to the users’ X server for displaying the graphical user interface.
NVIDIA recommends mapping a host file system into the client container in order to enable saving the visualization products or other data. In addition, the connection between the client and server containers needs to be opened.
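The exact image names and port numbers depend on the specific HPC visualization container, but granting these kinds of access from the host generally combines an X server socket mount, a port mapping, and a data volume, roughly as sketched below (ParaView's default server port, 11111, is used purely as an illustration):
$ xhost +local:root
$ docker run --rm -ti \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -p 11111:11111 \
    -v $HOME/viz_output:/output \
    <client_container_image>
The NGC Container User Guide referenced below provides the exact commands for each container.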
For a list of available HPC visualization containers and steps on how to use them, see the NGC Container User Guide.
NVIDIA Docker images come prepackaged, tuned, and ready to run; however, you may want to build a new image from scratch or augment an existing image with custom code, libraries, data, or settings for your corporate infrastructure. This section will guide you through exercises that will highlight how to create a container from scratch, customize a container, extend a deep learning framework to add features, develop some code using that extended framework from the developer environment, then package that code as a versioned release.
By default, you do not need to build a container. The NGC container registry, nvcr.io
, has a number of containers that can be used immediately. These include containers for deep learning, scientific computing and visualization, as well as containers with just the CUDA Toolkit.
One of the great things about containers is that they can be used as starting points for creating new containers. This can be referred to as “customizing” or “extending” a container. You can create a container completely from scratch; however, since these containers are likely to run on a GPU system, it is recommended that you at least start with an nvcr.io
container that contains the OS and CUDA. However, you are not limited to this; you can create a container that runs only on the CPUs in the system and does not use the GPUs. In this case, you can start with a bare OS container from Docker. To make development easier, though, you can still start with a container that includes CUDA - it is simply not used when the container is run.
In the case of DGX systems, you can push or save your modified/extended containers to the NGC container registry, nvcr.io
. They can also be shared with other users of the DGX system but this requires some administrator help.
It is important to note that all deep learning framework images include the source to build the framework itself as well as all of the prerequisites.
Do not install an NVIDIA driver into the Docker image at Docker build time.
10.1. Customizing A Container
NVIDIA provides a large set of images in the NGC container registry that are already tested, tuned, and are ready to run. You can pull any one of these images to create a container and add software or data of your choosing.
A best-practice is to avoid docker commit
usage for developing new docker images, and to use Dockerfiles instead. The Dockerfile method provides visibility and capability to efficiently version-control changes made during development of a docker image. The docker commit method is appropriate for short-lived, disposable images only (see Example 3: Customizing A Container Using docker commit for an example).
For more information on writing a Docker file, see the best practices documentation.
10.1.1. Benefits And Limitations To Customizing A Container
There are numerous reasons to customize a container to fit your specific needs; for example, you may depend on software that is not included in the container that NVIDIA provides. No matter the reason, you can customize a container.
The container images do not contain sample datasets or sample model definitions unless they are included with the framework source. Be sure to check the container for sample datasets or models.
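A quick way to check is to start the container and list the framework's workspace; the exact directory names vary by framework and release, so treat this as a sketch:
$ docker run --rm nvcr.io/nvidia/<framework>:<xx.xx> ls /workspace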
10.1.2. Example 1: Building A Container From Scratch
About this task
Docker uses Dockerfiles to create or build a Docker image. Dockerfiles are scripts that contain commands that Docker uses successively to create a new Docker image. Simply put, a Dockerfile is the source code for the container image. Dockerfiles always start with a base image to inherit from even if you are just using a base OS.
For best practices on writing Dockerfiles, see Best practices for writing Dockerfiles.
As an example, let’s create a container from a Dockerfile that uses Ubuntu 20.04 as a base OS. Let’s also update the OS when we create our container.
Procedure
- Create a working directory on your local hard-drive.
- In that directory, open a text editor and create a file called
Dockerfile
. Save the file to your working directory. - Open your
Dockerfile
and include the following:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
CMD echo "hello from inside a container"
The CMD instruction executes the indicated command when the container is run. This is a way to check that the container was built correctly. In this example, we are also pulling the base image from the Docker repository and not the NGC repository. There will be subsequent examples using the NVIDIA® repository.
- Save and close your
Dockerfile
. - Build the image. Issue the following command to build the image and create a tag.
$ docker build -t <new_image_name>:<new_tag> .
Note:This command was issued in the same directory where the
Dockerfile
is located.The output from the docker build process lists "Steps"; one for each line in the
Dockerfile
. For example, let's name the containertest1
and tag it withlatest
. Also, for illustrative purposes, let's assume our private DGX system repository is callednvidian_sas
(the exact name depends upon how you registered the DGX; this is typically the company name in some fashion). The command below builds the container. Some of the output is shown below so you know what to expect.
$ docker build -t test1:latest .
Sending build context to Docker daemon 8.012 kB
Step 1/3 : FROM ubuntu:20.04
20.04: Pulling from library/ubuntu
...
Step 2/3 : RUN apt-get update && apt-get install -y curl
...
Step 3/3 : CMD echo "hello from inside a container"
 ---> Running in 1f391b9285d8
 ---> 934785072daf
Removing intermediate container 1f391b9285d8
Successfully built 934785072daf
For information about building your image, see docker build. For information about tagging your image, see docker tag.
- Verify that the build was successful. You should see a message similar to the following:
Successfully built 934785072daf
Note:The number,
934785072daf
, is assigned when the image is built and is random. - Confirm you can view your image. Issue the following command to view your container.
$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
test1               latest    934785072daf   19 minutes ago   222 MB
Note:The container is local to this DGX system. If you want to store the container in your private repository, follow the next step.
Note:You need to have a DGX system to do this.
- Store the container in your private Docker repository by pushing it.
- The first step in pushing it, is to tag it.
$ docker tag test1 nvcr.io/nvidian_sas/test1:latest
- Now that the image has been tagged, you can push it, for example, to a private project on
nvcr.io
namednvidian_sas
.
$ docker push nvcr.io/nvidian_sas/test1:latest
The push refers to a repository [nvcr.io/nvidian_sas/test1]
…
- Verify that the container appears in the
nvidian_sas
repository.
10.1.3. Example 2: Customizing A Container Using Dockerfile
About this task
This example uses a Dockerfile to customize the PyTorch container in nvcr.io
. Before customizing the container, ensure that the PyTorch 21.02 container has been pulled to the system using the docker pull command:
$ docker pull nvcr.io/nvidia/pytorch:21.02-py3
As mentioned earlier in this document, the Docker containers on nvcr.io
also provide a sample Dockerfile that explains how to patch a framework and rebuild the Docker image. In the directory /workspace/docker-examples
, there are two sample Dockerfiles. For this example, we will use the Dockerfile.customcaffe
file as a template for customizing a container.
Procedure
- Create a working directory called
my_docker_images
on your local hard drive. - Open a text editor and create a file called
Dockerfile
. Save the file to your working directory. - Open your
Dockerfile
again and include the following lines in the file:
FROM nvcr.io/nvidia/pytorch:21.02-py3

# APPLY CUSTOMER PATCHES TO PYTORCH
# Bring in changes from outside container to /tmp
# (assumes my-pytorch-modifications.patch is in same directory as Dockerfile)
#COPY my-pytorch-modifications.patch /tmp

# Change working directory to PyTorch source path
WORKDIR /opt/pytorch

# Apply modifications
#RUN patch -p1 < /tmp/my-pytorch-modifications.patch

# Note that the default workspace is /workspace
RUN mkdir build && cd build && \
 cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local -DUSE_NCCL=ON -DUSE_CUDNN=ON -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN="35 52 60 61" -DCUDA_ARCH_PTX="61" .. && \
 make -j"$(nproc)" install && \
 make clean && \
 cd .. && rm -rf build

# Reset default working directory
WORKDIR /workspace
- Build the image using the
docker build
command and specify the repository name and tag. In the following example, the repository name is corp/pytorch and the tag is 21.02.1PlusChanges. For this case, the command would be the following:
$ docker build -t corp/pytorch:21.02.1PlusChanges .
- Run the Docker image.
docker run --gpus all -ti --rm corp/pytorch:21.02.1PlusChanges
10.1.4. Example 3: Customizing A Container Using docker commit
About this task
This example uses the docker commit command to flush the current state of the container to a Docker image. This is not a recommended best practice; however, it is useful when you have a running container to which you have made changes and want to save them. In this example, we use apt-get to install packages, which requires that the user run as root.
- The NVCaffe image release 17.04 is used in the example instructions for illustrative purposes.
- Do not use the
--rm
flag when running the container. If you use the--rm
flag when running the container, your changes will be lost when exiting the container.
Procedure
- Pull the Docker container from the
nvcr.io
repository to the DGX system. For example, the following command will pull the NVCaffe container:
$ docker pull nvcr.io/nvidia/caffe:17.04
- Run the container on the DGX system.
docker run --gpus all -ti nvcr.io/nvidia/caffe:17.04
==================
== NVIDIA Caffe ==
==================

NVIDIA Release 17.04 (build 26740)

Container image Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be insufficient for NVIDIA Caffe.
NVIDIA recommends the use of the following flags:
   docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...

root@1fe228556a97:/workspace#
- You should now be the root user in the container (notice the prompt). You can use the command
apt
to pull down a package and put it in the container.Note:The NVIDIA containers are built using Ubuntu which uses the
apt-get
package manager. Check the container release notes in the Deep Learning Documentation for details on the specific container you are using.
# apt-get update
# apt install octave
Note:You have to first issue
apt-get update
before you install Octave usingapt
. - Exit the workspace.
# exit
- Display the list of containers using
docker ps -a
. As an example, here is a snippet of output from thedocker ps -a
command:
$ docker ps -a
CONTAINER ID   IMAGE                        CREATED          ...
1fe228556a97   nvcr.io/nvidia/caffe:17.04   3 minutes ago    ...
- Now you can create a new image from the container that is running where you have installed Octave. You can commit the container with the following command.
$ docker commit 1fe228556a97 nvcr.io/nvidian_sas/caffe_octave:17.04
sha256:0248470f46e22af7e6cd90b65fdee6b4c6362d08779a0bc84f45de53a6ce9294
- Display the list of images.
$ docker images
REPOSITORY                  TAG     IMAGE ID       ...
nvidian_sas/caffe_octave    17.04   75211f8ec225   ...
- To verify, let's run the container again and see if Octave is actually there.
Note:
This only works for the DGX-1 and the DGX Station.
docker run --gpus all -ti nvidian_sas/caffe_octave:17.04
==================
== NVIDIA Caffe ==
==================

NVIDIA Release 17.04 (build 26740)

Container image Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be insufficient for NVIDIA Caffe.
NVIDIA recommends the use of the following flags:
   docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...

root@2fc3608ad9d8:/workspace# octave
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
GNU Octave, version 4.0.0
Copyright (C) 2015 John W. Eaton and others.
This is free software; see the source code for copying conditions.
There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  For details, type 'warranty'.

Octave was configured for "x86_64-pc-linux-gnu".

Additional information about Octave is available at http://www.octave.org.

Please contribute if you find this software useful.
For more information, visit http://www.octave.org/get-involved.html

Read http://www.octave.org/bugs.html to learn how to submit bug reports.
For information about changes from previous versions, type 'news'.

octave:1>
Since the Octave prompt displayed, Octave is installed.
- If you want to save the container into your private repository (Docker uses the phrase “push”), then you can use the command
docker push ...
.
$ docker push nvcr.io/nvidian_sas/caffe_octave:17.04
Results
The new Docker image is now available for use. You can check your local Docker repository for it.
10.1.5. Example 4: Developing A Container Using Docker
About this task
There are two primary use cases for a developer to extend a container:
- Create a development image that contains all of the immutable dependencies for the project, but not the source code itself.
- Create a production or testing image that contains a fixed version of the source and all of the software dependencies.
The datasets are not packaged in the container image. Ideally, the container image is designed to expect volume mounts for datasets and results.
In these examples, we mount our local dataset from /raid/datasets
on our host to /dataset
as a read-only volume inside the container. We also mount a job specific directory to capture the output from a current run.
In these examples, we will create a timestamped output directory on each container launch and map that into the container at /output
. Using this method, the output for each successive container launch is captured and isolated.
Including the source in a container for developing and iterating on a model has many challenges that can overcomplicate the entire workflow. For instance, if your source code is in the container, then your editor, version control software, dotfiles, etc. also need to be in the container.
However, if you create a development image that contains everything you need to run your source code, you can map your source code into the container to make use of your host workstation’s developer environment. For sharing a fixed version of a model, it is best to package a versioned copy of the source code and trained weights with the development environment.
As an example, we will work through a development and delivery example for the open source implementation of the work found in Image-to-Image Translation with Conditional Adversarial Networks by Isola et al., which is available at pix2pix. Pix2Pix is a Torch implementation for learning a mapping from input images to output images using a Conditional Adversarial Network. Since online projects can change over time, we will focus our attention on the snapshot version d7e7b8b557229e75140cbe42b7f5dbf85a67d097
change-set.
In this section, we are using the container as a virtual environment, in that the container has all the programs and libraries needed for our project.
We have kept the network definition and training script separate from the container image. This is a useful model for iterative development because the files that are actively being worked on are persistent on the host and only mapped into the container at runtime.
The differences to the original project can be found here Comparing changes.
If the machine you are developing on is not the same machine on which you will be running long training sessions, then you may want to package your current development state in the container.
Procedure
- Create a working directory on your local hard-drive.
$ mkdir -p ~/Projects
$ cd ~/Projects
- Git clone the Pix2Pix Git repository.
$ git clone https://github.com/phillipi/pix2pix.git
$ cd pix2pix
- Run the git checkout command.
$ git checkout -b devel d7e7b8b557229e75140cbe42b7f5dbf85a67d097
- Download the dataset.
$ bash ./datasets/download_dataset.sh facades
In this example, we put the dataset on the fast /raid storage.
$ mkdir -p /raid/datasets
$ mv ./datasets/facades /raid/datasets
- Create a file called
Dockerfile
and add the following lines:
FROM nvcr.io/nvidia/torch:17.03
RUN luarocks install nngraph
RUN luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
WORKDIR /source
- Build the development Docker container image (
build-devel.sh
).
docker build -t nv/pix2pix-torch:devel .
- Create the following
train.sh
script:
#!/bin/bash -x
ROOT="${ROOT:-/source}"
DATASET="${DATASET:-facades}"
DATA_ROOT="${DATA_ROOT:-/datasets/$DATASET}"
DATA_ROOT=$DATA_ROOT name="${DATASET}_generation" which_direction=BtoA th train.lua
If you were actually developing this model, you would be iterating by making changes to the files on the host and running the training script which executes inside the container.
- Optional: Edit the files and execute the next step after each change.
- Run the training script (
run-devel.sh
).
docker run --gpus all --rm -ti -v $PWD:/source -v /raid/datasets:/datasets nv/pix2pix-torch:devel ./train.sh
Example 4.1: Package The Source Into The Container
About this task
Packaging the model definition and script into the container is very simple. We simply add a COPY
step to the Dockerfile.
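A sketch of such a packaging Dockerfile, assuming it is built from the development image created earlier in this example, could look like the following:
FROM nv/pix2pix-torch:devel

# Package a fixed version of the model definition and training script into the image
COPY . /source
WORKDIR /source
Building this with a versioned tag and pushing it to a registry then gives a fixed, shareable artifact.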
We’ve updated the run script to simply drop the volume mounting and use the source packaged in the container. The packaged container is now much more portable than our devel
container image because the internal code is fixed. It would be good practice to version control this container image with a specific tag and store it in a container registry.
The updates to run the container are equally subtle. We simply drop the volume mounting of our local source into the container.
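For example, assuming the packaged image was tagged as suggested above, the run command reduces to something like this (only the dataset mount remains; the tag is a placeholder):
docker run --gpus all --rm -ti -v /raid/datasets:/datasets nv/pix2pix-torch:<version> ./train.sh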
10.2. Customizing a Framework
Each Docker image contains the code required to build the framework so that you can make changes to the framework itself. The location of the framework source in each image is in the /workspace
directory.
For specific directory locations, see the Deep Learning Framework Release Notes for your specific framework.
10.2.1. Benefits and Limitations to Customizing a Framework
Customizing a framework is useful if you have patches or modifications you want to make to the framework outside of the NVIDIA repository or if you have a special patch that you want to add to the framework.
10.2.2. Example 1: Customizing A Framework Using The Command Line
About this task
This Dockerfile example illustrates a method to apply patches to the source code in the NVCaffe container image and to rebuild NVCaffe. The RUN
command included below will rebuild NVCaffe in the same way as it was built in the original image.
By applying customizations through a Dockerfile and docker build
in this manner rather than modifying the container interactively, it will be straightforward to apply the same changes to later versions of the NVCaffe container image.
For more information, see Dockerfile reference.
Procedure
- Create a working directory for the Dockerfile.
$ mkdir docker
$ cd docker
- Open a text editor and create a file called
Dockerfile
and add the following lines:
FROM nvcr.io/nvidia/caffe:17.04
RUN apt-get update && apt-get install -y bc
- Bring in changes from outside the container to
/tmp
.Note:This assumes
my-caffe-modifications.patch
is in the same directory as the Dockerfile.
COPY my-caffe-modifications.patch /tmp
- Change your working directory to the NVCaffe source path.
WORKDIR /opt/caffe
- Apply your modifications.
RUN patch -p1 < /tmp/my-caffe-modifications.patch
- Rebuild NVCaffe.
RUN mkdir build && cd build && \
 cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local -DUSE_NCCL=ON -DUSE_CUDNN=ON \
 -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN="35 52 60 61" -DCUDA_ARCH_PTX="61" .. && \
 make -j"$(nproc)" install && \
 make clean && \
 cd .. && rm -rf build
- Reset the default working directory.
WORKDIR /workspace
10.2.3. Example 2: Customizing A Framework And Rebuilding The Container
About this task
This example illustrates how you can customize a framework and rebuild the container. For this example, we will use the NVCaffe 17.03 framework.
Currently, the NVCaffe framework returns the following output message to stdout
when a network layer is created:
“Created Layer”
For example, you can see this output by running the following command from a bash shell in a NVCaffe 17.03 container.
# which caffe
/usr/local/bin/caffe
# caffe time --model /workspace/models/bvlc_alexnet/deploy.prototxt
--gpu=0
…
I0523 17:57:25.603410 41 net.cpp:161] Created Layer data (0)
I0523 17:57:25.603426 41 net.cpp:501] data -> data
I0523 17:57:25.604748 41 net.cpp:216] Setting up data
…
The following steps show you how to change the message “Created Layer”
in NVCaffe to “Just Created Layer”
. This example illustrates how you might modify an existing framework.
Before you begin
Ensure you run the framework container in interactive mode.
Procedure
- Locate the NVCaffe 17.03 container from the
nvcr.io
repository.
$ docker pull nvcr.io/nvidia/caffe:17.03
- Run the container on the DGX system.
docker run --gpus all --rm -ti nvcr.io/nvidia/caffe:17.03
Note:This will make you the root user in the container. Notice the change in the prompt.
- Edit a file in the NVCaffe source file,
/opt/caffe/src/caffe/net.cpp
. The line you want to change is around line162
.
# vi /opt/caffe/src/caffe/net.cpp
:162
s/Created Layer/Just Created Layer
Note:This uses vi. Change
“Created Layer”
to“Just Created Layer”
. - Rebuild NVCaffe.
# cd /opt/caffe
# cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local -DUSE_NCCL=ON -DUSE_CUDNN=ON -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN="35 52 60 61" -DCUDA_ARCH_PTX="61" ..
# make -j"$(nproc)" install
# make install
# ldconfig
- Before running the updated NVCaffe framework, ensure the updated NVCaffe binary is in the correct location, for example,
/usr/local/
.
# which caffe
/usr/local/bin/caffe
- Run NVCaffe and look for a change in the output to
stdout
:
# caffe time --model /workspace/models/bvlc_alexnet/deploy.prototxt --gpu=0
…
I0523 18:29:06.942697 7795 net.cpp:161] Just Created Layer data (0)
I0523 18:29:06.942711 7795 net.cpp:501] data -> data
I0523 18:29:06.944180 7795 net.cpp:216] Setting up data
...
- Save your container to your private DGX repository on
nvcr.io
or your private Docker repository (see Example 2: Customizing A Container Using Dockerfile for an example).
10.3. Optimizing Docker Containers For Size
The Docker container format using layers was specifically designed to limit the amount of data that needs to be transferred when a container image is instantiated. When a Docker container image is instantiated or “pulled” from a repository, Docker may need to copy the layers from the repository to the local host. It checks which layers it already has on the host using the hash for each layer. If a layer is already on the local host, Docker won’t re-download it, saving time and, to a smaller degree, network usage.
This is particularly useful for NVIDIA’s NGC because all the containers are built with the same base OS and libraries. If you run one container image from NGC, then run another, it is likely that many of the layers from the first container are used in the second container, reducing the time to pull down the second container image so the container can be started quickly.
You can put almost anything you want into a container, allowing users or container developers to create very large (GB+) containers. Even though it is not recommended to put data in your Docker container image, users and developers do this (there are some good reasons). Doing so can further inflate the size of the container image and increase the amount of time needed to download a container image or its various layers. Users and developers are now asking for ways to reduce the size of the container image or the individual layers.
The following subsections present some options that you can use if the container image or the layer sizes are too large or you want them smaller. There is no single option that works best, so be sure to try them on your container images.
10.3.1. One Line Per RUN Command
In a Dockerfile, using one line for each RUN
command is very convenient. The code is easy to read since you can see each command. However, Docker will create a layer for each command. Each layer keeps some information (metadata) about its origins, when the layer was created, what is contained in the layer, and a hash for each layer. If you have a large number of commands, you are going to have a large amount of metadata.
A simple way to reduce the size of the container image is to put all of the RUN
commands that you can into a single RUN
statement. This may result in a very large RUN
command, however, it greatly reduces the amount of metadata. It is recommended that you group as many RUN
commands together as possible. Depending upon your Dockerfile, you may not be able to put all RUN
commands into a single RUN
statement. Do your best to reduce the number of RUN
commands but make it logical.
Below is a simple Dockerfile example used to build a container image.
$ cat Dockerfile
FROM ubuntu:20.04
RUN date > /build-info.txt
RUN uname -r >> /build-info.txt
Notice there are two RUN commands in this simple Dockerfile. The container image can be built using the following command and associated output.
$ docker build -t first-image -f Dockerfile .
…
Step 2/3 : RUN date > /build-info.txt
---> Using cache
---> af12c4b34f91
Step 3/3 : RUN uname -r >> /build-info.txt
---> Running in 0f883f37e3c8
…
Notice that the RUN
commands each created a layer in the container image.
Let’s examine the container image for details on the layers.
$ docker run --rm -it first-image cat /build-info.txt
Mon Jan 18 10:14:02 UTC 2021
5.5.115-1.el7.elrepo.x86_64
$ docker history first-image
IMAGE CREATED CREATED BY SIZE
d2c03aa61290 11 seconds ago /bin/sh -c uname -r >> /build-info.txt 57B
af12c4b34f91 16 minutes ago /bin/sh -c date > /build-info.txt 29B
5e8b97a2a082 6 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 6 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 6 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$… 2.76kB
<missing> 6 weeks ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
<missing> 6 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
<missing> 6 weeks ago /bin/sh -c #(nop) ADD file:d37ff24540ea7700d… 114MB
The output of this command gives you information about each of the layers. Notice that there is a layer for each RUN
command.
Now, let’s take the Dockerfile and combine the two RUN
commands.
$ cat Dockerfile
FROM ubuntu:20.04
RUN date > /build-info.txt && uname -r >> /build-info.txt
$ docker build -t one-layer -f Dockerfile .
$ docker history one-layer
IMAGE CREATED CREATED BY SIZE
3b1ef5bc19b2 6 seconds ago /bin/sh -c date > /build-info.txt && uname -… 57B
5e8b97a2a082 6 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 6 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 6 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$… 2.76kB
<missing> 6 weeks ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
<missing> 6 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
<missing> 6 weeks ago /bin/sh -c #(nop) ADD file:d37ff24540ea7700d… 114MB
Notice that there is now only one layer that has both RUN
commands included.
On the other hand, a reason to keep RUN
commands in separate layers is that with multiple layers it is easy to modify one layer in the container image without having to rebuild the entire image.
10.3.2. Export, Import, And Flatten
If space is at a premium, there is a way to take the existing container image, and get rid of all the history. It can only be done using a running container. Once the container is running, run the following two commands:
# export the container to a tarball
docker export <CONTAINER ID> > /home/export.tar
# import it back
cat /home/export.tar | docker import - some-name:<tag>
This will get rid of the history of each layer but it will preserve the layers (if that is important).
Another option is to “flatten” your image to a single layer. This gets rid of all the redundancies in the layers and creates a single container. Like the previous technique, this one requires a running container as well. With the container running, issue the following command:
docker export <CONTAINER ID> | docker import - some-image-name:<tag>
This pipeline exports the container through the import
command, creating a new image that has only one layer. For more information, see this blog post.
10.3.3. docker-squash
A few years ago, before Docker added the ability to “squash” images, a tool called docker-squash
was created. It hasn’t been updated for a couple of years; however, it is still a popular tool for reducing the size of Docker container images. The tool takes a Docker container image and “squashes” it to a single layer, reducing commonalities between layers and the history of the layers, producing the smallest possible container image.
The tool retains Docker commands such as PORT
, ENV
, etc., so the squashed images work exactly the same as before they were squashed. Moreover, the files that are deleted during the squashing process are actually removed from the image.
A simple example for running docker-squash
is below.
docker save <ID> | docker-squash -t <TAG> [-from <ID>] | docker load
This pipeline takes the current image, saves it, squashes it with a new tag, and reloads the container. The resulting image has all the layers beneath the initial FROM
layer squashed into a single layer. The default options in docker-squash
retains the base image layer so that it does not need to be repeatedly transferred when pushing and pulling updates to the image.
The tool is really designed for containers that are finalized and not likely to be updated, so there is little need for details about the layers and their history. Such a container can be squashed and put into production. Having the smallest possible image allows users to quickly download it and get it running.
10.3.4. Squash While Building
Not long after Docker came out, people started creating giant images that took a long time to transfer. At that point, users and developers started working on ideas to reduce the container size. Not too long ago, some patches were proposed for Docker to allow it to squash images as they were being built. The squash
option was added in Docker 1.13 (API 1.25), when Docker still followed a different versioning scheme. As of Docker 17.06‑ce the option is still classified as experimental. You can tell Docker to allow the use of experimental options if you want (refer to Docker documentation). However, NVIDIA does not support this option.
The --squash
option is used when the container is built. An example of the command is the following:
docker build --squash -t chamilad/testdocker:0.1 .
This command uses “Dockerfile” as the dockerfile for building the container.
The --squash
option creates an image that has two layers. The first layer results from the FROM
that usually starts off a Dockerfile. The subsequent layers are all “squashed” together into a single layer. This gets rid of the history in all the layers but the first one. It also eliminates redundant files.
Since it is still an experimental feature, how much the image can be reduced varies. There have been reports of a 50% reduction in image size.
10.3.5. Additional Options
There are some other options that can be used to reduce the size of images; only a couple of them are Docker based, and the rest are classic Linux commands.
There is a Docker build option that deals with building applications in Docker containers. If you want to build an application when the container is created, you may not want to leave the build tools in the image because of their size. This is true when the container is meant to be executed and not modified when it is run. Recall that Docker containers are built in layers. We can use that fact when building containers to copy binaries from one layer to another. For example, the Dockerfile below:
$ cat Dockerfile
FROM ubuntu:20.04
RUN apt-get update -y && \
apt-get install -y --no-install-recommends \
build-essential \
gcc && \
rm -rf /var/lib/apt/lists/*
COPY hello.c /tmp/hello.c
RUN gcc -o /tmp/hello /tmp/hello.c
Builds a container, installs gcc
, and builds a simple “hello world” application. Checking the history of the container will give us the size of the layers:
$ docker history hello
IMAGE CREATED CREATED BY SIZE
49fef0e11806 8 minutes ago /bin/sh -c gcc -o /tmp/hello /tmp/hello.c 8.6kB
44a449445055 8 minutes ago /bin/sh -c #(nop) COPY file:8f0c1776b2571c38… 63B
c2e5b659a549 8 minutes ago /bin/sh -c apt-get update -y && apt-get … 181MB
5e8b97a2a082 6 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 6 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 6 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$… 2.76kB
<missing> 6 weeks ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
<missing> 6 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
<missing> 6 weeks ago /bin/sh -c #(nop) ADD file:d37ff24540ea7700d… 114MB
Notice that the layer with the build tools is 181MB in size, yet the application layer is only 8.6kB in size. If the build tools aren’t needed in the final container, then we can get rid of them from the image. However, if you simply do an apt-get remove …
command, the build tools are not actually erased.
A solution is to copy the binary from the previous layer to a new layer as in this Dockerfile:
$ cat Dockerfile
FROM ubuntu:20.04 AS build
RUN apt-get update -y && \
apt-get install -y --no-install-recommends \
build-essential \
gcc && \
rm -rf /var/lib/apt/lists/*
COPY hello.c /tmp/hello.c
RUN gcc -o /tmp/hello /tmp/hello.c
FROM ubuntu:20.04
COPY --from=build /tmp/hello /tmp/hello
This can be termed a “multi-stage” build. In this Dockerfile, the first stage starts with the OS and names it “build”. Then the build tools are installed, the source is copied into the container, and the binary is built.
The second stage starts with a fresh FROM
command. Docker will only save the layers starting with this second FROM and any subsequent layers; in other words, the first-stage layers that installed the build tools won’t be saved in the final image. The second stage can copy the binary from the first stage. No build tools are included in this stage. Building the container image is the same as before.
If we compare the size of the container with the first Dockerfile to the size using the second Dockerfile, we can see the following:
$ docker images hello
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 49fef0e11806 21 minutes ago 295MB
$ docker images hello-rt
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-rt latest f0cef59a05dd 2 minutes ago 114MB
The first output is the original Dockerfile. The second output is for the multistage Dockerfile. Notice the difference in size between the two.
An option to reduce the size of the Docker container is to start with a small base image. Usually, the base images for a distribution are fairly lean, but it might be a good idea to see what is installed in the image. If there are things that aren’t needed, you can then try creating your own base image that removes the unneeded tools.
Another option is to run the command apt-get clean
to clean up any package caching that might be in the image.
11.1. TensorFlow
11.1.1. run_tf_cifar10.sh
#!/bin/bash
# file: run_tf_cifar10.sh
# run example:
# ./run_tf_cifar10.sh --epochs=3 --datadir=/datasets/cifar
# Get usage help via:
# ./run_tf_cifar10.sh --help 2>/dev/null
_basedir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# specify the work directory for the container to run scripts or work from.
workdir=$_basedir
cifarcode=${_basedir}/examples/tensorflow/cifar/cifar10_multi_gpu_train.py
# cifarcode=${_basedir}/examples/tensorflow/cifar/cifar10_train.py
function join { local IFS="$1"; shift; echo "$*"; }
script_args=$(join : "$@")
dname=${USER}_tf
docker run --gpus all --name=$dname -d -t \
--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
-u $(id -u):$(id -g) -e HOME=$HOME -e USER=$USER -v $HOME:$HOME \
-v /datasets/cifar:/datasets/cifar:ro -w $workdir \
-e cifarcode=$cifarcode -e script_args="$script_args" \
nvcr.io/nvidia/tensorflow:17.05
sleep 1 # wait for container to come up
docker exec -it $dname bash -c 'python $cifarcode ${script_args//:/ }'
docker stop $dname && docker rm $dname
11.2. Keras
11.2.1. cifar10_cnn_filesystem.py
#!/usr/bin/env python
# file: cifar10_cnn_filesystem.py
'''
Train a simple deep CNN on the CIFAR10 small images dataset.
'''
from __future__ import print_function
import sys
import os
from argparse import (ArgumentParser, SUPPRESS)
from textwrap import dedent
import numpy as np
# from keras.utils.data_utils import get_file
from keras.utils import to_categorical
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
import keras.layers as KL
from keras import backend as KB
from keras.optimizers import RMSprop
def parser_(desc):
parser = ArgumentParser(description=dedent(desc))
parser.add_argument('--epochs', type=int, default=200,
help='Number of epochs to run training for.')
parser.add_argument('--aug', action='store_true', default=False,
help='Perform data augmentation on cifar10 set.\n')
# parser.add_argument('--datadir', default='/mnt/datasets')
parser.add_argument('--datadir', default=SUPPRESS,
help='Data directory with Cifar10 dataset.')
args = parser.parse_args()
return args
def make_model(inshape, num_classes):
model = Sequential()
model.add(KL.InputLayer(input_shape=inshape[1:]))
model.add(KL.Conv2D(32, (3, 3), padding='same'))
model.add(KL.Activation('relu'))
model.add(KL.Conv2D(32, (3, 3)))
model.add(KL.Activation('relu'))
model.add(KL.MaxPooling2D(pool_size=(2, 2)))
model.add(KL.Dropout(0.25))
model.add(KL.Conv2D(64, (3, 3), padding='same'))
model.add(KL.Activation('relu'))
model.add(KL.Conv2D(64, (3, 3)))
model.add(KL.Activation('relu'))
model.add(KL.MaxPooling2D(pool_size=(2, 2)))
model.add(KL.Dropout(0.25))
model.add(KL.Flatten())
model.add(KL.Dense(512))
model.add(KL.Activation('relu'))
model.add(KL.Dropout(0.5))
model.add(KL.Dense(num_classes))
model.add(KL.Activation('softmax'))
return model
def cifar10_load_data(path):
"""Loads CIFAR10 dataset.
# Returns
Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
"""
dirname = 'cifar-10-batches-py'
# origin = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
# path = get_file(dirname, origin=origin, untar=True)
path_ = os.path.join(path, dirname)
num_train_samples = 50000
x_train = np.zeros((num_train_samples, 3, 32, 32), dtype='uint8')
y_train = np.zeros((num_train_samples,), dtype='uint8')
for i in range(1, 6):
fpath = os.path.join(path_, 'data_batch_' + str(i))
data, labels = cifar10.load_batch(fpath)
x_train[(i - 1) * 10000: i * 10000, :, :, :] = data
y_train[(i - 1) * 10000: i * 10000] = labels
fpath = os.path.join(path_, 'test_batch')
x_test, y_test = cifar10.load_batch(fpath)
    y_train = np.reshape(y_train, (len(y_train), 1))
    y_test = np.reshape(y_test, (len(y_test), 1))
if KB.image_data_format() == 'channels_last':
x_train = x_train.transpose(0, 2, 3, 1)
x_test = x_test.transpose(0, 2, 3, 1)
return (x_train, y_train), (x_test, y_test)
def main(argv=None):
'''
'''
main.__doc__ = __doc__
argv = sys.argv if argv is None else sys.argv.extend(argv)
desc = main.__doc__
# CLI parser
args = parser_(desc)
batch_size = 32
num_classes = 10
epochs = args.epochs
data_augmentation = args.aug
datadir = getattr(args, 'datadir', None)
# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10_load_data(datadir) \
if datadir is not None else cifar10.load_data()
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
callbacks = None
print(x_train.shape, 'train shape')
model = make_model(x_train.shape, num_classes)
print(model.summary())
# initiate RMSprop optimizer
opt = RMSprop(lr=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
nsamples = x_train.shape[0]
steps_per_epoch = nsamples // batch_size
if not data_augmentation:
print('Not using data augmentation.')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True,
callbacks=callbacks)
else:
print('Using real-time data augmentation.')
# This will do preprocessing and realtime data augmentation:
datagen = ImageDataGenerator(
# set input mean to 0 over the dataset
featurewise_center=False,
samplewise_center=False, # set each sample mean to 0
# divide inputs by std of the dataset
featurewise_std_normalization=False,
# divide each input by its std
samplewise_std_normalization=False,
zca_whitening=False, # apply ZCA whitening
# randomly rotate images in the range (degrees, 0 to 180)
rotation_range=0,
# randomly shift images horizontally (fraction of total width)
width_shift_range=0.1,
# randomly shift images vertically (fraction of total height)
height_shift_range=0.1,
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
# Compute quantities required for feature-wise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(x_train)
# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train,
batch_size=batch_size),
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_data=(x_test, y_test),
callbacks=callbacks)
if __name__ == '__main__':
main()
For more information about Docker containers, see the Docker documentation.
For deep learning frameworks release notes and additional product documentation, see the Deep Learning Documentation website: Deep Learning Frameworks Documentation.
Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.
HDMI
HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.
OpenCL
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
Trademarks
NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, DALI, DGX, DGX-1, DGX-2, DGX Station, DLProf, Jetson, Kepler, Maxwell, NCCL, Nsight Compute, Nsight Systems, NvCaffe, PerfWorks, Pascal, SDK Manager, Tegra, TensorRT, Triton Inference Server, Tesla, TF-TRT, and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.