Abstract

This NGC Container User Guide provides a detailed overview of how to use the NGC registry, along with step-by-step instructions for pulling, running, and customizing containers.

1. Docker Containers

Over the last few years there has been a dramatic rise in the use of software containers for simplifying deployment of data center applications at scale. Containers encapsulate an application along with its libraries and other dependencies to provide reproducible and reliable execution of applications and services without the overhead of a full virtual machine.

Docker® Engine Utility for NVIDIA® GPUs, also known as nvidia-docker, enables GPU-based applications that are portable across multiple machines, in a similar way to how Docker® enables CPU-based applications to be deployed across multiple machines. It accomplishes this through the use of Docker containers.

We will refer to the Docker® Engine Utility for NVIDIA® GPUs simply as nvidia-docker for the remainder of this guide.

Docker container
A Docker container is an instance of a Docker image. A Docker container deploys a single application or service per container.
Docker image
A Docker image is simply the software (including the filesystem and parameters) that you run within an nvidia-docker container.

1.1. What Is A Docker Container?

A Docker container is a mechanism for bundling a Linux application with all of its libraries, data files, and environment variables so that the execution environment is always the same, on whatever Linux system it runs and between instances on the same host.

Unlike a VM, which has its own isolated kernel, containers use the host system kernel. Therefore, all kernel calls from the container are handled by the host system kernel. DGX™ systems use Docker containers as the mechanism for deploying deep learning frameworks.

A Docker container is the running instance of a Docker image.

1.2. Why Use A Container?

One of the many benefits of using containers is that you can install your application, dependencies, and environment variables one time into the container image, rather than on each system you run on. Additional key benefits of using containers include:

  • There is no risk of conflict with libraries that are installed by others.
  • Containers allow use of multiple different deep learning frameworks, which may have conflicting software dependencies, on the same server.
  • After you build your application into a container, you can run it on many other systems, especially servers, without having to install any software.
  • Legacy accelerated compute applications can be containerized and deployed on newer systems, on premise, or in the cloud.
  • Specific GPU resources can be allocated to a container for isolation and better performance.
  • You can easily share, collaborate, and test applications across different environments.
  • Multiple instances of a given deep learning framework can be run concurrently with each having one or more specific GPUs assigned.
  • Containers can be used to resolve network-port conflicts between applications by mapping container-ports to specific externally-visible ports when launching the container.

2. Prerequisites

HPC containers are available to run on Linux systems equipped with NVIDIA GPUs of the Pascal architecture or later. Supported Linux distributions include Ubuntu 16.04 and CentOS 7.

To enable portability in Docker images that leverage GPUs, NVIDIA developed nvidia-docker, an open-source project that provides a command line tool to mount the user mode components of the NVIDIA driver and the GPUs into the Docker container at launch.

NGC containers take full advantage of NVIDIA GPUs and require nvidia-docker.
Note: HPC Visualization Containers have different prerequisites. For more information, see HPC Visualization Containers.
  • DGX™ users should follow the step-by-step instructions on how to install Docker and nvidia-docker in the Preparing to use NVIDIA Containers Getting Started Guide.
  • Amazon Web Services (AWS) P3 users should follow the step-by-step instructions in the Using NGC with AWS Setup Guide.
  • All other users should follow the nvidia-docker installation documentation at nvidia-docker installation and install the latest NVIDIA display driver for their GPU product type, series, and operating system. If NVIDIA drivers are not already configured on your system, install them from here: Download Drivers.
  • Ensure you have an NVIDIA GPU supporting Compute Unified Device Architecture® (CUDA), with compute capability 6.0 or higher; for example, a GPU of the Pascal architecture generation or later.
  • Log into the NVIDIA® GPU Cloud (NGC) Container Registry located at nvcr.io using your NGC API key. For step-by-step instructions on how to gain access and get your API key, see NGC Getting Started Guide.
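
After completing these prerequisites, you can verify that Docker and nvidia-docker can see your GPUs with a quick check. This is a minimal sketch; the CUDA image tag shown is illustrative, and any CUDA base image from nvcr.io will work:

$ docker pull nvcr.io/nvidia/cuda:9.0-devel
$ nvidia-docker run --rm nvcr.io/nvidia/cuda:9.0-devel nvidia-smi

If the driver and nvidia-docker are set up correctly, nvidia-smi lists the GPUs visible inside the container.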

3. Pulling A Container

Before you can pull a container from the NGC container registry, you must have Docker and nvidia-docker installed. For DGX users, this is explained in the Preparing to use NVIDIA Containers Getting Started Guide.

You must also have access to, and be logged in to, the NGC container registry, as explained in the NGC Getting Started Guide.

3.1. Key Concepts

In order to issue the pull and run commands, ensure that you are familiar with the following concepts.

A pull command looks similar to:
docker pull nvcr.io/nvidia/caffe2:17.10
A run command looks similar to:
nvidia-docker run -it --rm -v local_dir:container_dir nvcr.io/nvidia/caffe2:<xx.xx>
The following concepts describe the separate attributes that make up both commands.
nvcr.io
The name of the container registry, which for the NGC container registry and the NVIDIA DGX container registry is nvcr.io.
nvidia
The name of the space within the registry that contains the container. For containers provided by NVIDIA, the registry space is nvidia. For more information, see NGC Container Registry Spaces.
-it
You want to run the container in interactive mode.
--rm
You want to delete the container when finished.
-v
You want to mount the directory.
local_dir
The directory or file from your host system (absolute path) that you want to access from inside your container. For example, the local_dir in the following path is /home/jsmith/data/mnist.
-v /home/jsmith/data/mnist:/data/mnist

If you are inside the container, for example, using the command ls /data/mnist, you will see the same files as if you issued the ls /home/jsmith/data/mnist command from outside the container.

container_dir
The target directory when you are inside your container. For example, /data/mnist is the target directory in the example:
 -v /home/jsmith/data/mnist:/data/mnist
<xx.xx>
The tag. For example, 17.10.
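
Putting these concepts together, a complete pull-and-run sequence might look like the following; the repository, tag, and paths are illustrative:

$ docker pull nvcr.io/nvidia/caffe2:17.10
$ nvidia-docker run -it --rm -v /home/jsmith/data/mnist:/data/mnist nvcr.io/nvidia/caffe2:17.10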

3.2. Accessing And Pulling From The NGC container registry

This section is appropriate if you are using NVIDIA NGC Cloud Services via a cloud provider.

If you are accessing the NVIDIA containers from a Cloud Server Provider such as Amazon Web Services (AWS), then you should first create an account at the NGC container registry located at https://ngc.nvidia.com. After you create an account, the commands to pull containers are the same as if you had a DGX-1 in your own data center. However, currently, you cannot save any containers to the NGC container registry. Instead, you must save the containers to your own Docker repository that is either on-premises or in the cloud.

You can access the NGC container registry by running Docker commands from any Linux computer with Internet access on which Docker is installed. After your NGC account is activated, you can access the NGC container registry at nvcr.io through the Docker CLI.

Before accessing the NGC container registry, ensure that the following prerequisites are met. For more information about meeting these requirements, see NGC Getting Started Guide.
  • Your NVIDIA NGC Cloud Services account is activated.
  • You have an NVIDIA NGC Cloud Services API key for authenticating your access to the NGC container registry.
  • You are logged in to your client computer with the privileges required to run nvidia-docker containers.
After your NVIDIA NGC Cloud Services account is activated, you can access the NGC container registry in one of two ways: through the Docker CLI or through the NGC website, as described in the following sections.
A Docker registry is the service that stores Docker images. The service can be on the internet, on the company intranet, or on a local machine. For example, nvcr.io is the location of the NGC container registry for nvidia-docker images.

All nvcr.io Docker images use explicit version tags to avoid the ambiguous versioning which can result from using the latest tag. For example, a locally tagged “latest” version of an image may actually override a different “latest” version in the registry.

  1. Log in to the NGC container registry.
    $ docker login nvcr.io
  2. When prompted for your user name, enter the following text:
    $oauthtoken
    The $oauthtoken user name is a special user name that indicates that you will authenticate with an API key and not a user name and password.
  3. When prompted for your password, enter your NVIDIA NGC Cloud Services API key as shown in the following example.
    Username: $oauthtoken
    Password: k7cqFTUvKKdiwGsPnWnyQFYGnlAlsCIRmlP67Qxa
    Tip: When you get your API key, copy it to the clipboard so that you can paste the API key into the command shell when you are prompted for your password.

3.2.1. Pulling A Container From The NGC container registry Using The Docker CLI

This section is appropriate if you are using NVIDIA NGC Cloud Services via a cloud provider.
Before pulling an nvidia-docker container, ensure that the prerequisites described in the previous sections are met.

To browse the available containers in the NGC container registry, use a web browser to log in to your NGC container registry account on the NVIDIA NGC Cloud Services website, https://ngc.nvidia.com.

  1. Pull the container that you want from the registry. For example, to pull the NAMD container:
    $ docker pull nvcr.io/hpc/namd:2.13
    
  2. List the Docker images on your system to confirm that the container was pulled.
    $ docker images
    For more information pertaining to your specific container, refer to the /workspace/README.md file inside the container.
After pulling a container, you can run jobs in the container to process scientific workloads, train neural networks, deploy deep learning models, or perform AI analytics.

3.2.2. Pulling A Container Using The NGC container registry Web Interface

This section is appropriate if you are using NVIDIA NGC Cloud Services via a cloud provider.
Before you can pull a container from the NGC container registry, you must have Docker and nvidia-docker installed as explained in Preparing To Use NVIDIA Containers Getting Started Guide. You must also have access and logged into the NGC container registry as explained in NGC Getting Started Guide.
This task assumes:
  • You have a cloud instance system and it is connected to the Internet.
  • Your instance has Docker and nvidia-docker installed.
  • You have browser access to the NGC container registry at https://ngc.nvidia.com and your NVIDIA NGC Cloud Services account is activated.
  • You want to pull a container onto your cloud instance.
  1. Log into the NGC container registry at https://ngc.nvidia.com.
  2. Click Registry in the left navigation. Browse the NGC container registry page to determine which Docker repositories and tags are available to you.
  3. Click one of the repositories to view information about that container image as well as the available tags that you will use when running the container.
  4. In the Pull column, click the icon to copy the Docker pull command.
  5. Open a command prompt and paste the Docker pull command. The pulling of the container image begins. Ensure the pull completes successfully.
  6. After you have the Docker container file on your local system, load the container into your local Docker registry.
  7. Verify that the image is loaded into your local Docker registry.
    $ docker images

    For more information pertaining to your specific container, refer to the /workspace/README.md file inside the container.

4. nvidia-docker Images

DGX-1 and NGC containers are hosted in a Docker registry called nvcr.io. As you read in the previous section, these containers can be “pulled” from the registry and used for GPU accelerated applications such as scientific workloads, visualization, and deep learning.

A Docker image is simply a file-system that a developer builds. An nvidia-docker image serves as the template for the container, and is a software stack that consists of several layers. Each layer depends on the layer below it in the stack.

From a Docker image, a container is formed. When creating a container, you add a writable layer on top of the stack. A Docker image with a writable container layer added to it is a container. A container is simply a running instance of that image. All changes and modifications made to the container are made to the writable layer. You can delete the container; however, the Docker image remains untouched.

Figure 1 depicts the nvidia-docker stack for the DGX-1. Notice that the nvidia-docker tools sit above the host OS and the NVIDIA drivers. The tools are used to create and use NVIDIA containers, which are the layers above the nvidia-docker layer. These containers include applications, deep learning SDKs, and the CUDA® Toolkit. The nvidia-docker tools take care of mounting the appropriate NVIDIA drivers.
Figure 1. nvidia-docker mounts the user mode components of the NVIDIA driver and the GPUs into the Docker container at launch.

4.1. nvidia-docker Images Versions

Each release of an nvidia-docker image is identified by a version “tag”. For simpler images, this version tag usually contains the version of the major software package in the image. More complex images, which contain multiple software packages or versions, may use a separate version solely representing the containerized software configuration. One common scheme is versioning by the year and month of the image release. For example, the 17.01 release of an image was released in January 2017.

An image name consists of two parts separated by a colon. The first part is the name of the container in the repository and the second part is the “tag” associated with the container. These two pieces of information are shown in Figure 2, which is the output from issuing the docker images command.

Figure 2. Output from the docker images command
Figure 2 shows simple examples of image names, such as:
  • nvidia-cuda:8.0-devel
  • ubuntu:latest
  • nvcr.io/nvidia/tensorflow:17.01
If you choose not to add a tag to an image, the tag “latest” is added by default; however, all NGC containers have an explicit version tag.

In the next sections, you will use these image names for running containers. Later in the document, there is also a section on creating your own containers or customizing and extending existing containers.

5. Running A Container

To run a container, you must issue the nvidia-docker run command, specifying the registry, repository, and tags.

Before you can run an nvidia-docker deep learning framework container, you must have nvidia-docker installed. For more information, see Preparing To Use NVIDIA Containers Getting Started Guide.
  1. As a user, run the container interactively.
    $ nvidia-docker run -it --rm -v local_dir:container_dir \
              nvcr.io/nvidia/<repository>:<xx.xx>

    The following example runs the December 2016 release (16.12) of the NVCaffe container in interactive mode. The container is automatically removed when the user exits the container.

    $ nvidia-docker run --rm -ti nvcr.io/nvidia/caffe:16.12
    
    ===========
    == Caffe ==
    ===========
    
    NVIDIA Release 16.12 (build 6217)
    
    Container image Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
    Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
    All rights reserved.
    
    Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved.
    NVIDIA modifications are covered by the license terms that apply to the underlying project or file.
    root@df57eb8e0100:/workspace#
  2. From within the container, start the job that you want to run. The precise command to run depends on the deep learning framework in the container that you are running and the job that you want to run. For details see the /workspace/README.md file for the container.

    The following example runs the caffe time command on one GPU to measure the execution time of the deploy.prototxt model.

    # caffe time -model models/bvlc_alexnet/ -solver deploy.prototxt -gpu=0
  3. Optional: Run the December 2016 release (16.12) of the same NVCaffe container but in non-interactive mode.
    % nvidia-docker run --rm nvcr.io/nvidia/caffe:16.12 caffe time -model
          /workspace/models/bvlc_alexnet -solver /workspace/deploy.prototxt -gpu=0

5.1. nvidia-docker run

When you run the nvidia-docker run command:

  • The Docker Engine loads the image into a container which runs the software.
  • You define the runtime resources of the container by including additional flags and settings that are used with the command. These flags and settings are described in the following sections.
  • The GPUs are explicitly defined for the Docker container (defaults to all GPUs, can be specified using NV_GPU environment variable).

5.2. Specifying A User

Unless otherwise specified, the user inside the container is the root user.

When running within the container, files created on the host operating system or network volumes can be accessed by the root user. This is unacceptable for some users and they will want to set the ID of the user in the container. For example, to set the user in the container to be the currently running user, issue the following:
% nvidia-docker run -ti --rm -u $(id -u):$(id -g) nvcr.io/nvidia/<repository>:<tag>
Typically, this results in warnings due to the fact that the specified user and group do not exist in the container. You might see a message similar to the following:
groups: cannot find name for group ID 1000
I have no name!@c177b61e5a93:/workspace$
The warning can usually be ignored.

5.3. Setting The Remove Flag

By default, Docker containers remain on the system after being run. Repeated pull or run operations use up more and more space on the local disk, even after exiting the container. Therefore, it is important to clean up the nvidia-docker containers after exiting.
Note: Do not use the --rm flag if you have made changes to the container that you want to save, or if you want to access job logs after the run finishes.
To automatically remove a container when exiting, add the --rm flag to the run command.
% nvidia-docker run --rm nvcr.io/nvidia/<repository>:<tag>

5.4. Setting The Interactive Flag

By default, containers run in batch mode; that is, the container is run once and then exited without any user interaction. Containers can also be run in interactive mode as a service.

To run in interactive mode, add the -ti flag to the run command.
% nvidia-docker run -ti --rm nvcr.io/nvidia/<repository>:<tag>

5.5. Setting The Volumes Flag

There are no data sets included with the containers; therefore, if you want to use data sets, you need to mount volumes into the container from the host operating system. For more information, see Manage data in containers.

Typically, you would use either Docker volumes or host data volumes. The primary difference between host data volumes and Docker volumes is that Docker volumes are private to Docker and can only be shared amongst Docker containers. Docker volumes are not visible from the host operating system, and Docker manages the data storage. Host data volumes are any directory that is available from the host operating system. This can be your local disk or network volumes.

Example 1
Mount a directory /raid/imagedata on the host operating system as /images in the container.
% nvidia-docker run -ti --rm -v /raid/imagedata:/images
        nvcr.io/nvidia/<repository>:<tag>
Example 2
Mount a local docker volume named data (must be created if not already present) in the container as /imagedata.
% nvidia-docker run -ti --rm -v data:/imagedata nvcr.io/nvidia/<repository>:<tag>
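
If the named volume does not already exist, you can create it first. This is a minimal sketch using the standard Docker volume command:
% docker volume create data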

5.6. Setting The Mapping Ports Flag

Applications such as Deep Learning GPU Training System™ (DIGITS) open a port for communications. You can control whether that port is open only on the local system or is available to other computers on the network outside of the local system.

Using DIGITS as an example: in DIGITS 5.0, starting in container image 16.12, the DIGITS server listens on port 5000 by default. However, this port is internal to the container and is not automatically reachable from the network. To make the DIGITS server reachable, you can choose one of the following ways:
  • Expose the port using the local system network stack (--net=host) where port 5000 of the container is made available as port 5000 of the local system.
or
  • Map the port (-p 8080:5000) where port 5000 of the container is made available as port 8080 of the local system.

In either case, users outside the local system have no visibility that DIGITS is running in a container. Without publishing the port, the port is still accessible from the host, but not from outside it.
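
As a sketch of the port-mapping approach (the DIGITS image tag shown is illustrative), the following command makes container port 5000 available as port 8080 of the local system:
% nvidia-docker run -ti --rm -p 8080:5000 nvcr.io/nvidia/digits:16.12
The DIGITS server is then reachable at port 8080 on the local system.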

5.7. Setting The Shared Memory Flag

Certain applications, such as PyTorch™ and the Microsoft® Cognitive Toolkit™, use shared memory buffers to communicate between processes. Shared memory can also be required by single-process applications, such as MXNet™ and TensorFlow™, which use the NVIDIA® Collective Communications Library™ (NCCL).

By default, Docker containers are allotted 64MB of shared memory. This can be insufficient, particularly when using all 8 GPUs. To increase the shared memory limit to a specified size, for example 1GB, include the --shm-size=1g flag in your docker run command.

Alternatively, you can specify the --ipc=host flag to reuse the host’s shared memory space inside the container. Note that this approach has security implications: any data in shared memory buffers could be visible to other containers.
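
For example, either of the following commands addresses the shared memory limit; the repository and tag are placeholders:
% nvidia-docker run -ti --rm --shm-size=1g nvcr.io/nvidia/<repository>:<tag>
% nvidia-docker run -ti --rm --ipc=host nvcr.io/nvidia/<repository>:<tag>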

5.8. Setting The Restricting Exposure Of GPUs Flag

From inside the container, the scripts and software are written to take advantage of all available GPUs. To coordinate the usage of GPUs at a higher level, you can use this flag to restrict the exposure of GPUs from the host to the container. For example, if you only want GPU 0 and GPU 1 to be seen in the container, you would issue the following:
$ NV_GPU=0,1 nvidia-docker run ...

This flag creates a temporary environment variable that restricts which GPUs are used.

Specified GPUs are defined per container using the Docker device-mapping feature, which is currently based on Linux cgroups.

5.9. Container Lifetime

The state of an exited container is preserved indefinitely if you do not pass the --rm flag to the nvidia-docker run command. You can list all of the saved exited containers and their size on the disk with the following command:
$ docker ps --all --size --filter Status=exited

The container size on disk depends on the files created during the container’s execution; exited containers that created few files take up only a small amount of disk space.

You can permanently remove an exited container by issuing:
$ docker rm [CONTAINER ID]
By saving the state of containers after they have exited, you can still interact with them using the standard Docker commands. For example:
  • You can examine logs from a past execution by issuing the docker logs command.
    $ docker logs 9489d47a054e
  • You can extract files using the docker cp command.
    $ docker cp 9489d47a054e:/log.txt .
  • You can restart a stopped container using the docker restart command.
    $ docker restart <container name>
    For the NVCaffe™ container, issue this command:
    $ docker restart caffe
  • You can save your changes by creating a new image using the docker commit command. For more information, see Example 3: Customizing a Container using docker commit.
    Note: Use care when committing Docker container changes, as data files created during use of the container will be added to the resulting image. In particular, core dump files and logs can dramatically increase the size of the resulting image.

6. NGC Container Registry Spaces

The NGC container registry uses spaces to group nvidia-docker image repositories for related applications. These spaces appear in the image URL as nvcr.io/<space>/image-name:tag when pulling or running images, or when layering additional software on top of NGC container images.
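
For example, the following pull commands reference images in two different registry spaces; the tags shown are illustrative:
$ docker pull nvcr.io/nvidia/tensorflow:17.01
$ docker pull nvcr.io/hpc/namd:2.13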

nvcr.io/nvidia

This space contains a catalog of fully integrated and optimized deep learning framework containers that take full advantage of NVIDIA GPUs in both single GPU and multi-GPU configurations. They include CUDA Toolkit, DIGITS workflow, and the following deep learning frameworks: NVCaffe, Caffe2, Microsoft Cognitive Toolkit (CNTK), MXNet, PyTorch, TensorFlow, Theano, and Torch. These framework containers are delivered ready-to-run, including all necessary dependencies such as CUDA runtime, NVIDIA libraries, and an operating system.

Each framework container image also includes the framework source code to enable custom modifications and enhancements, along with the complete software development stack.

NVIDIA updates these deep learning containers monthly to ensure they continue to provide peak performance.

nvcr.io/nvidia-hpcvis

This space contains a catalog of HPC visualization containers, currently available in beta, featuring the industry’s leading visualization tools, including ParaView with the NVIDIA IndeX volume renderer, the NVIDIA OptiX ray-tracing library, and NVIDIA Holodeck for interactive real-time visualization and high-quality visuals.

nvcr.io/hpc

This space contains a catalog of popular third-party, GPU-ready HPC application containers provided by partners, including GAMESS, GROMACS, LAMMPS, NAMD, and RELION. All third-party containers conform to NGC container standards and best practices, making it easy to get the latest GPU-optimized HPC software up and running quickly.

7. CUDA Toolkit Container

The CUDA Toolkit provides a development environment for creating high performance GPU-accelerated applications. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler and a runtime library to deploy your application.

All NGC Container images are based on the CUDA platform layer (nvcr.io/nvidia/cuda). This image provides a containerized version of the software development stack underpinning all other NGC containers, and is available for users who need more flexibility to build containers with custom CUDA applications.
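
As a minimal sketch of this use case, a custom CUDA application container could be built from the CUDA base layer with a Dockerfile such as the following; the tag, source file, and application name are hypothetical:

FROM nvcr.io/nvidia/cuda:9.0-devel
# Copy in and compile a custom CUDA application (hypothetical source file)
COPY my_app.cu /tmp/my_app.cu
RUN nvcc -o /usr/local/bin/my_app /tmp/my_app.cu
# Run the application by default when the container starts
CMD /usr/local/bin/my_app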

7.1. OS Layer

Within the software stack, the lowest layer (or base layer) is the user space of the OS. The software in this layer includes all of the security patches that are available within the month of the release.

7.2. CUDA Layer

Compute Unified Device Architecture® (CUDA) is a parallel computing platform and programming model created by NVIDIA to give application developers access to the massive parallel processing capability of GPUs. CUDA is the foundation for GPU acceleration of deep learning as well as a wide range of other computation and memory-intensive applications ranging from astronomy, to molecular dynamics simulation, to computational finance.

7.2.1. CUDA Runtime

The CUDA runtime layer provides the components needed to execute CUDA applications in the deployment environment. The CUDA runtime is packaged with the CUDA Toolkit and includes all of the shared libraries, but none of the CUDA compiler components.

7.2.2. CUDA Toolkit

The CUDA Toolkit provides a development environment for developing optimized GPU-accelerated applications. It includes GPU-accelerated CUDA libraries which enable drop-in acceleration across multiple domains such as linear algebra, image and video processing, deep learning, and graph analytics. For developing custom algorithms, you can use available integrations with commonly used languages and numerical packages, as well as well-documented development APIs.

8. HPC Visualization Containers

In addition to accessing the NVIDIA optimized frameworks and HPC containers, the NVIDIA GPU Cloud (NGC) container registry also hosts the following scientific visualization containers for HPC. These containers rely on the popular scientific visualization tool called ParaView.

Visualization in an HPC environment typically requires remote visualization, that is, data resides and is processed on a remote HPC system or in the cloud, and the user graphically interacts with this application from their workstation. As some visualization containers require specialized client applications, the HPC visualization containers consist of two components:
server container
The server container needs access to the files on your server system. Details on how to grant this access are provided below. The server container can run in either serial or parallel mode. For this alpha release, we are focusing on the serial node configuration. If you are interested in the parallel configuration, contact hpcviscontainer@nvidia.com.
client container
To ensure matching versions of the client application and the server container, NVIDIA provides the client application in a container. Similar to the server container, the client container needs access to certain ports to establish a connection with the server container.
In addition, the client container needs access to the users’ X server for displaying the graphical user interface.
ParaView with NVIDIA Holodeck
Enables graphically rich scientific visualizations; bridging between ParaView and high-end rendering engines such as NVIDIA Holodeck.

ParaView with NVIDIA IndeX
Offers the NVIDIA IndeX scalable volume rendering technology within the popular scientific visualization tool called ParaView.

ParaView with NVIDIA OptiX
Provides GPU accelerated ray-tracing technology within ParaView; offering enhanced visual cues and high performance rendering for large scale scenes.

8.1. Prerequisites For HPC Visualization Containers

  • Install docker-ce and nvidia-docker2. First install docker-ce, then install nvidia-docker2 for your operating system and Docker version. For a script to install nvidia-docker2, see Installing NVIDIA Docker 2.0.
    Note: If you already have nvidia-docker1 installed and intend to keep it, you can install nvidia-container-runtime.
  • Install NVIDIA Display driver version 384.57 or later for your GPU product type, series, and operating system. For more information, see Download Drivers.
  • Ensure you have an NVIDIA GPU supporting Compute Unified Device Architecture® (CUDA), with compute capability 6.0 or higher; for example, a GPU of the Pascal architecture generation or later.
  • Log into the NVIDIA® GPU Cloud (NGC) Container Registry located at nvcr.io using your NGC API key. For step-by-step instructions on how to gain access and get your API key, see NGC Getting Started Guide.

8.1.1. Installing NVIDIA Docker 2.0

The following script installs NVIDIA Docker 2.0 which is a prerequisite to pulling the ParaView with NVIDIA IndeX HPC visualization container.

NVIDIA Docker 2.0 provides full support for concurrent graphics and compute capabilities in containers. Current installations of NGC run on NVIDIA Docker 1.0; prior to using a container on any of these instances, NVIDIA Docker 2.0 must be installed.

Use the following script to install NVIDIA Docker 2.0 on your instance.
# Install NVIDIA Docker 2.0
docker volume ls -q -f driver=nvidia-docker | \
  xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker
curl -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
sudo tee /etc/apt/sources.list.d/nvidia-docker.list <<< \
"deb https://nvidia.github.io/libnvidia-container/ubuntu16.04/amd64 /
deb https://nvidia.github.io/nvidia-container-runtime/ubuntu16.04/amd64 /
deb https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64 /"

sudo apt-get -y update
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Tests
#docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

8.2. ParaView With NVIDIA Holodeck

Currently, the ParaView with NVIDIA Holodeck container requires a running X server both on the server host and the client host. Therefore, only a single container image is required.

Pull the docker image on the server host and on the client host as follows:
docker pull nvcr.io/nvidia-hpcvis/paraview-holodeck:glx-17.11.13-beta

8.2.1. Running The ParaView With NVIDIA Holodeck Container

  1. Create X-forwarding variables for your container.
    XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth;
    touch /tmp/.docker.xauth;
    xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f /tmp/.docker.xauth nmerge -
    
  2. On the server host, start the ParaView Holodeck server:
    docker run --rm -it --runtime=nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
    -e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
    -p 11111:11111 \
    --shm-size=4g \
    nvcr.io/nvidia-hpcvis/paraview-holodeck:glx-17.11.13-beta \
    ./service.sh externalvis pvserver

    The Holodeck render window displays, showing a space scene.

    The server container is ready after you receive a message similar to the following:
    “Accepting connection(s): [...]:11111”
  3. Set up X access and start the client container on the client host. Ensure you replace your_server_hostname.
    XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth
    touch /tmp/.docker.xauth
    xauth nlist :0 | sed -e 's/^..../ffff/' \
    | xauth -f /tmp/.docker.xauth nmerge -

    docker run --rm -it --runtime=nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
    -e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
    nvcr.io/nvidia-hpcvis/paraview-holodeck:glx-17.11.13-beta \
    sh -c paraview\ --server-url=cs://your_server_hostname:11111

    The ParaView user interface displays.

  4. To enable rendering in Holodeck, replace ParaView’s default view. Remove the default view by closing the layout.
  5. Insert a new External Visualization view.
  6. The ParaView Holodeck container is now ready to display a visualization pipeline. For a simple test scene, add a Wavelet Source.
  7. Adjust the Wavelet Source’s extents from -60 to 60 in all three dimensions, then click Apply.
  8. Add a Contour filter, then click Apply.
  9. Hide the Wavelet Source from the view to prevent the bounding box from blocking the iso surface.
  10. Enable rendering through Holodeck using the Enable External Visualization button.

8.3. ParaView With NVIDIA IndeX

To support both X-enabled and headless hosts, the ParaView IndeX container image is available with GLX and EGL support. The following sections show how to launch the IndeX container for different use cases.

For more information about ParaView, see the ParaView User’s Guide and the NVIDIA IndeX SDK.

8.3.1. Single-Machine With GLX

  1. Log in to the Docker registry and pull the X display-enabled container on your workstation:
    docker pull nvcr.io/nvidia-hpcvis/paraview-index:glx-17.11.13-beta
  2. Specify X-forwarding variables:
    XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth
    touch /tmp/.docker.xauth
    xauth nlist :0 | sed -e 's/^..../ffff/' \
    | xauth -f /tmp/.docker.xauth nmerge -
  3. Run the image. In this example, host system data in the current directory $(pwd) is mounted to /work in the container. This should be modified as desired by the user.
    docker run --rm -it --runtime=nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
    -v $(pwd):/work -e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
    nvcr.io/nvidia-hpcvis/paraview-index:glx-17.11.13-beta \
    sh -c paraview

8.3.2. Server Container With EGL

In a typical client-server setup, one container acting as the server will run remotely on a display-less machine, connected to a second container that runs locally on a workstation and provides the graphical front end.
Use the following command to pull the EGL-enabled, no-display container from the NGC registry on the server host:
docker pull nvcr.io/nvidia-hpcvis/paraview-index:egl-17.11.13-beta
Run the server component on the server host. We listen on the default port 11111:
docker run --runtime=nvidia -p 11111:11111 --rm -it \
nvcr.io/nvidia-hpcvis/paraview-index:egl-17.11.13-beta sh -c pvserver

8.3.3. GLX Client Connecting To A Server

Pull the X display-enabled container on your workstation:
docker pull nvcr.io/nvidia-hpcvis/paraview-index:glx-17.11.13-beta
Set up X access and launch the client application container (make sure to replace your_server_hostname with the address of your ParaView server host):
XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth
touch /tmp/.docker.xauth
xauth nlist :0 | sed -e 's/^..../ffff/' \
| xauth -f /tmp/.docker.xauth nmerge -
docker run --rm -it --runtime=nvidia \
-v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
-e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
nvcr.io/nvidia-hpcvis/paraview-index:glx-17.11.13-beta \
sh -c paraview\ --server-url=cs://your_server_hostname:11111

8.3.4. Example ParaView Pipeline With NVIDIA IndeX

  1. Exit the splash screen.
  2. To set up a test scene, add a Wavelet Source, then click on Apply.
  3. Change the display mode from Outline to NVIDIA IndeX.
  4. Change the coloring from Solid Color to RTData.
    The result is ParaView’s Wavelet source, rendered on the server GPU by the NVIDIA IndeX library.

8.4. ParaView With NVIDIA OptiX

The ParaView with NVIDIA OptiX container is designed to run ParaView as a user normally would outside a container. The following sections show how to launch the OptiX container with different use cases.

For more information about ParaView see the ParaView User’s Guide and the NVIDIA OptiX SDK.

8.4.1. Single-Machine Container With GLX

On systems with a physical display, or when running a ParaView client, users will wish to launch a container with GLX support. This can be done as follows.
  1. Pull the docker image:
    docker pull nvcr.io/nvidia-hpcvis/paraview-optix:glx-17.11.13-beta
  2. Set up X11 forwarding variables:
    XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth;
    touch /tmp/.docker.xauth;
    xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f /tmp/.docker.xauth nmerge -
    
  3. Run the image. In this example, host system data in the current directory $(pwd) is mounted to /work in the container. This should be modified as desired.
    docker run --rm -it --runtime=nvidia \
    -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
    -e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 -v $(pwd):/work:rw \
    nvcr.io/nvidia-hpcvis/paraview-optix:glx-17.11.13-beta \
    sh -c paraview


8.4.2. Server Container With EGL

Launching a ParaView server on GPU HPC resources often requires EGL support, requiring a separate build of ParaView for which we have a separate container.
  1. Pull the container:
    docker pull nvcr.io/nvidia-hpcvis/paraview-optix:egl-17.11.13-beta
  2. Specify the connection port and launch the container as follows (in this example, we listen on the default port 11111):
    docker run --runtime=nvidia -p 11111:11111 --rm -it \
    nvcr.io/nvidia-hpcvis/paraview-optix:egl-17.11.13-beta sh -c pvserver
  3. For users who wish to run the server on a GLX-capable workstation, it is equally possible to use the GLX image with the pvserver argument.

8.4.3. Running The GLX Client And Attaching To The Server

With the server launched, it is then straightforward to use the GLX image to run a client, and connect to the server as follows. Here we assume the server is listening on port 11111, addressable at your.server.address.
docker pull nvcr.io/nvidia-hpcvis/paraview-optix:glx-17.11.13-beta

XSOCK=/tmp/.X11-unix; XAUTH=/tmp/.docker.xauth
touch /tmp/.docker.xauth
xauth nlist :0 | sed -e 's/^..../ffff/' \
| xauth -f /tmp/.docker.xauth nmerge -

docker run --rm -it --runtime=nvidia \
-v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
-e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
nvcr.io/nvidia-hpcvis/paraview-optix:glx-17.11.13-beta \
sh -c paraview\ --server-url=cs://your.server.address:11111

8.4.4. Optional: Using The ParaView .config File

It is helpful to reuse ParaView configuration files to maintain settings across ParaView sessions. To do this, first create a new directory for ParaView to store its settings.

mkdir pvsettings

When issuing the docker run command, add the following option:

-v $(pwd)/pvsettings:/home/paraview/.config/ParaView

Insert the option before the image URL. For example:

docker run --rm -it --runtime=nvidia \
 -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth \
 -e XAUTHORITY=/tmp/.docker.xauth -e DISPLAY=:0 \
 -v $(pwd)/pvsettings:/home/paraview/.config/ParaView \
 nvcr.io/nvidia-hpcvis/paraview-optix:glx-17.11.13-beta \
 sh -c paraview\ --server-url=cs://your.server.address:11111

8.4.5. Example ParaView Pipeline With NVIDIA OptiX

  1. Exit the splash screen.
  2. Click Sources > Wavelet on the top pull-down menu. Click Apply on the left pane.
  3. Select Filter > Common > Contour from the top pull-down menu. Click Apply again.
  4. Select Filter > Common > Clip from the top pull down menu. Move the clip plane to the desired position and click Apply.
  5. Under the Plane Parameters sub-pane on the left pane, uncheck Show Plane to hide the clip plane.
  6. Scroll down on the left pane and select Enable OptiX.
  7. Optional: Enable Shadows OptiX.
  8. Optional: Enable 4 samples per pixel and 4 ambient samples in OptiX.
  9. Optional: Click Add Light on the left pane and modify as desired.

9. Customizing Containers

The nvidia-docker images come prepackaged, tuned, and ready to run; however, you may want to build a new image from scratch or augment an existing image with custom code, libraries, data, or settings for your corporate infrastructure. This section will guide you through exercises that will highlight how to create a container from scratch, customize a container, extend a deep learning framework to add features, develop some code using that extended framework from the developer environment, then package that code as a versioned release.

By default, you do not need to build a container. The NGC container registry, nvcr.io, has a number of containers that can be used immediately. These include containers for deep learning, scientific computing and visualization, as well as containers with just the CUDA Toolkit.

One of the great things about containers is that they can be used as starting points for creating new containers. This can be referred to as “customizing” or “extending” a container. You can create a container completely from scratch; however, since these containers are likely to run on a GPU system, it is recommended that you at least start with an nvcr.io container that contains the OS and CUDA. You are not limited to this, though, and can create a container that runs on the CPUs in the system and does not use the GPUs. In this case, you can start with a bare OS container from Docker; to make development easier, you can still start with a container that includes CUDA, which simply goes unused when the container is run.

In the case of the DGX-1 and the DGX Station, you can push or save your modified/extended containers to the NVIDIA DGX container registry, nvcr.io. They can also be shared with other users of the DGX system but this requires some administrator help.

Currently, you cannot save customized containers from the NGC container registry (cloud based) solution to nvcr.io. The customized or extended containers can be saved to a user’s private container repository.

It is important to note that all nvidia-docker deep learning framework images include the source to build the framework itself as well as all of the prerequisites.
Attention: Do not install an NVIDIA driver into the Docker® image at Docker build time. nvidia-docker is essentially a wrapper around docker that transparently provisions a container with the necessary components to execute code on the GPU.

NVIDIA provides a large set of images in the NGC container registry that are already tested, tuned, and are ready to run. You can pull any one of these images to create a container and add software or data of your choosing.

A best-practice is to avoid docker commit usage for developing new docker images, and to use Dockerfiles instead. The Dockerfile method provides visibility and capability to efficiently version-control changes made during development of a docker image. The docker commit method is appropriate for short-lived, disposable images only (see Example 3: Customizing A Container Using docker commit for an example).

For more information on writing a Docker file, see the best practices documentation.

9.1. Benefits And Limitations To Customizing A Container

There are numerous reasons to customize a container to fit your specific needs; for example, you may depend on specific software that is not included in the container that NVIDIA provides. No matter your reasons, you can customize a container.

The container images do not contain sample data-sets or sample model definitions unless they are included with the framework source. Be sure to check the container for sample data-sets or models.
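
One quick way to check for bundled samples is to list the container’s default workspace without entering it interactively; the repository and tag are placeholders:
$ nvidia-docker run --rm nvcr.io/nvidia/<repository>:<tag> ls /workspace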

9.2. Example 1: Building A Container From Scratch

Docker uses Dockerfiles to create or build a Docker image. Dockerfiles are scripts that contain commands that Docker uses successively to create a new Docker image. Simply put, a Dockerfile is the source code for the container image. Dockerfiles always start with a base image to inherit from.

For more information, see Best practices for writing Dockerfiles.

  1. Create a working directory on your local hard-drive.
  2. In that directory, open a text editor and create a file called Dockerfile. Save the file to your working directory.
  3. Open your Dockerfile and include the following:
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y curl
    CMD echo "hello from inside a container"
    The last line, CMD, executes the indicated command when the container is created. This is a way to check that the container was built correctly.

    For this example, we are also pulling the base image from the public Docker repository rather than the DGX™ system repository. There will be subsequent examples using the NVIDIA® repository.

  4. Save and close your Dockerfile.
  5. Build the image. Issue the following command to build the image and create a tag.
    $ docker build -t <new_image_name>:<new_tag> .
    Note: This command was issued in the same directory where the Dockerfile is located.

    The output from the docker build process lists “Steps”; one for each line in the Dockerfile.

    For example, let us name the container test1 and tag it with latest. Also, for illustrative purposes, let us assume our private DGX system repository is called nvidian_sas. The command below builds the container. Some of the output is shown below so you know what to expect.
    $ docker build -t test1:latest .
    Sending build context to Docker daemon 3.072 kB
    Step 1/3 : FROM ubuntu:14.04
    14.04: Pulling from library/ubuntu
    ...
    Step 2/3 : RUN apt-get update && apt-get install -y curl
    ...
    Step 3/3 : CMD echo "hello from inside a container"
     ---> Running in 1f491b9235d8
     ---> 934785072daf
    Removing intermediate container 1f491b9235d8
    Successfully built 934785072daf

    For information about building your image, see docker build. For information about tagging your image, see docker tag.

  6. Verify that the build was successful. You should see a message similar to the following:
    Successfully built 934785072daf
    This message indicates that the build was successful. If you see any other message, the build was not successful.
    Note: The number, 934785072daf, is assigned when the image is built and is random.
  7. Confirm you can view your image. Issue the following command and view your container.
    $ docker images
    REPOSITORY      TAG            IMAGE ID        CREATED                SIZE
    test1           latest         934785072daf    19 minutes ago         222 MB
    The new container is now available to be used.
    Note: The container is local to this DGX system. If you want to store the container in your private repository, follow the next step.
  8. Store the container in your private Docker repository by pushing it.
    Note: This only works for the DGX-1™ and the DGX Station.
    1. The first step in pushing it is to tag it.
      $ docker tag test1 nvcr.io/nvidian_sas/test1:latest
    2. Now that the image has been tagged, you can push it to, for example, a private project on nvcr.io named nvidian_sas.
      $ docker push nvcr.io/nvidian_sas/test1:latest
      The push refers to a repository [nvcr.io/nvidian_sas/test1]
      …
    3. Verify that the container appears in the nvidian_sas repository.

9.3. Example 2: Customizing A Container Using Dockerfile

This example uses a Dockerfile to customize the NVCaffe container in nvcr.io. Before customizing the container, ensure that the NVCaffe 17.03 container has been pulled to your local system using the docker pull command.
$ docker pull nvcr.io/nvidia/caffe:17.03

As mentioned earlier in this document, the Docker containers on nvcr.io also provide a sample Dockerfile that explains how to patch a framework and rebuild the Docker image. In the directory /workspace/docker-examples, there are two sample Dockerfiles. For this example, we will use the Dockerfile.customcaffe file as a template for customizing a container.

  1. Create a working directory called my_docker_images on your local hard drive.
  2. Open a text editor and create a file called Dockerfile. Save the file to your working directory.
  3. Open your Dockerfile again and include the following lines in the file:
    FROM nvcr.io/nvidia/caffe:17.03
    # APPLY CUSTOMER PATCHES TO CAFFE
    # Bring in changes from outside container to /tmp
    # (assumes my-caffe-modifications.patch is in same directory as Dockerfile)
    #COPY my-caffe-modifications.patch /tmp
    
    # Change working directory to NVCaffe source path
    WORKDIR /opt/caffe
    
    # Apply modifications
    #RUN patch -p1 < /tmp/my-caffe-modifications.patch
    
    # Note that the default workspace for caffe is /workspace
    RUN mkdir build && cd build && \
      cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local -DUSE_NCCL=ON \
            -DUSE_CUDNN=ON -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN="35 52 60 61" \
            -DCUDA_ARCH_PTX="61" .. && \
      make -j"$(nproc)" install && \
      make clean && \
      cd .. && rm -rf build
    
    # Reset default working directory
    WORKDIR /workspace
    Save the file.
    Save the file.
  4. Build the image using the docker build command and specify the repository name and tag. In the following example, the repository name is corp/caffe and the tag is 17.03.1PlusChanges. In this case, the command would be the following:
    $ docker build -t corp/caffe:17.03.1PlusChanges .
  5. Run the Docker image using the nvidia-docker run command. For example:
    $ nvidia-docker run -ti --rm corp/caffe:17.03.1PlusChanges

9.4. Example 3: Customizing A Container Using docker commit

This example uses the docker commit command to flush the current state of the container to a Docker image. This is not a recommended best practice; however, it is useful when you have a container running in which you have made changes and want to save them. In this example, we use the apt-get package manager to install packages, which requires that the user run as root.
Note:
  • The NVCaffe image release 17.04 is used in the example instructions for illustrative purposes.
  • Do not use the --rm flag when running the container. If you use the --rm flag when running the container, your changes will be lost when exiting the container.
  1. Pull the Docker container from the nvcr.io repository to the DGX system. For example, the following command will pull the NVCaffe container:
    $ docker pull nvcr.io/nvidia/caffe:17.04
  2. Run the container on the DGX system using nvidia-docker.
    $ nvidia-docker run -ti nvcr.io/nvidia/caffe:17.04
    ==================
    == NVIDIA Caffe ==
    ==================
    
    NVIDIA Release 17.04 (build 26740)
    
    Container image Copyright (c) 2017, NVIDIA CORPORATION.  All rights reserved.
    Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
    All rights reserved.
    
    Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved.
    NVIDIA modifications are covered by the license terms that apply to the underlying project or file.
    
    NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be insufficient for NVIDIA Caffe.  NVIDIA recommends the use of the following flags:
       nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...
    
    root@1fe228556a97:/workspace#
  3. You should now be the root user in the container (notice the prompt). You can use the command apt to pull down a package and put it in the container.
    Note: The NVIDIA containers are built on Ubuntu, which uses the apt-get package manager. Check the container release notes in the Deep Learning Documentation for details on the specific container you are using.
    In this example, we will install Octave, the GNU clone of MATLAB, into the container.
    # apt-get update
    # apt install octave
    Note: You have to first issue apt-get update before you install Octave using apt.
  4. Exit the workspace.
    # exit
  5. Display the list of containers using docker ps -a. As an example, here is some of the output from the docker ps -a command:
    $ docker ps -a
    CONTAINER ID    IMAGE                        CREATED       ...
    1fe228556a97    nvcr.io/nvidia/caffe:17.04   3 minutes ago ...
  6. Now you can create a new image from the container that is running where you have installed Octave. You can commit the container with the following command.
    $ docker commit 1fe228556a97 nvcr.io/nvidian_sas/caffe_octave:17.04
    sha256:0248470f46e22af7e6cd90b65fdee6b4c6362d08779a0bc84f45de53a6ce9294
    
  7. Display the list of images.
    $ docker images
    REPOSITORY                 	TAG             	IMAGE ID     ...
    nvidian_sas/caffe_octave   	17.04           	75211f8ec225 ...
  8. To verify, let's run the container again and see if Octave is actually there.
    Note: This only works for the DGX-1 and the DGX Station.
    $ nvidia-docker run -ti nvidian_sas/caffe_octave:17.04
    ==================
    == NVIDIA Caffe ==
    ==================
    
    NVIDIA Release 17.04 (build 26740)
    
    Container image Copyright (c) 2017, NVIDIA CORPORATION.  All rights reserved. Copyright (c) 2014, 2015, The Regents of the University of California (Regents) All rights reserved.
    
    Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved. NVIDIA modifications are covered by the license terms that apply to the underlying project or file.
    
    NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be insufficient for NVIDIA Caffe.  NVIDIA recommends the use of the following flags:
       nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...
    
    root@2fc3608ad9d8:/workspace# octave
    octave: X11 DISPLAY environment variable not set
    octave: disabling GUI features
    GNU Octave, version 4.0.0
    Copyright (C) 2015 John W. Eaton and others.
    This is free software; see the source code for copying conditions.
    There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  For details, type 'warranty'.
    
    Octave was configured for "x86_64-pc-linux-gnu".
    
    Additional information about Octave is available at http://www.octave.org.
    
    Please contribute if you find this software useful.
    For more information, visit http://www.octave.org/get-involved.html
    
    Read http://www.octave.org/bugs.html to learn how to submit bug reports.
    For information about changes from previous versions, type 'news'.
    
    octave:1>

    Since the Octave prompt displayed, Octave is installed.

  9. If you want to save the new image into your private repository (Docker uses the term “push”), then you can use the docker push command.
    $ docker push nvcr.io/nvidian_sas/caffe_octave:17.04
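    Note: Pushing to nvcr.io requires that you are logged in to the registry. If you have not yet authenticated, a typical login looks like the following; for the NGC registry, the username is the literal string $oauthtoken and the password is your NGC API key:
    $ docker login nvcr.io
    Username: $oauthtoken
    Password: <your NGC API key>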

The new Docker image is now available for use. You can check your local Docker repository for it.

9.5. Example 4: Developing A Container Using Docker

There are two primary use cases for a developer to extend a container:
  1. Create a development image that contains all of the immutable dependencies for the project, but not the source code itself.
  2. Create a production or testing image that contains a fixed version of the source and all of the software dependencies.

The datasets are not packaged in the container image. Ideally, the container image is designed to expect volume mounts for datasets and results.

In these examples, we mount our local dataset from /raid/datasets on the host to /dataset as a read-only volume inside the container. We also mount a job-specific directory to capture the output from the current run.

On each container launch, we create a timestamped output directory and map it into the container at /output. Using this method, the output of each successive container launch is captured and isolated.
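For example, a launch that follows this pattern might look like the following minimal sketch; the output root under /raid/results and the use of the Torch image from the steps below are illustrative choices:
    $ OUTPUT=/raid/results/$(date +%Y%m%d_%H%M%S)   # timestamped directory for this run
    $ mkdir -p $OUTPUT
    $ nvidia-docker run --rm -ti -v /raid/datasets:/dataset:ro -v $OUTPUT:/output nvcr.io/nvidia/torch:17.03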

Including the source in a container for developing and iterating on a model introduces many awkward challenges that can overcomplicate the entire workflow. For instance, if your source code is in the container, then your editor, version control software, dotfiles, and so on also need to be in the container.

However, if you create a development image that contains everything you need to run your source code, you can map your source code into the container to make use of your host workstation’s developer environment. For sharing a fixed version of a model, it is best to package a versioned copy of the source code and trained weights with the development environment.

As an example, we will work through a development and delivery workflow for the open-source implementation of the work described in Image-to-Image Translation with Conditional Adversarial Networks by Isola et al., which is available at pix2pix. Pix2Pix is a Torch implementation for learning a mapping from input images to output images using a Conditional Adversarial Network. Since online projects can change over time, we will focus our attention on the snapshot at change-set d7e7b8b557229e75140cbe42b7f5dbf85a67d097.

In this section, we are using the container as a virtual environment, in that the container has all the programs and libraries needed for our project.
Note: We have kept the network definition and training script separate from the container image. This is a useful model for iterative development because the files that are actively being worked on are persistent on the host and only mapped into the container at runtime.

The differences from the original project can be found here: Comparing changes.

If the machine you are developing on is not the same machine on which you will be running long training sessions, then you may want to package your current development state in the container.

  1. Create a working directory on your local hard drive.
    $ mkdir -p ~/Projects
    $ cd ~/Projects
  2. Clone the Pix2Pix Git repository.
    $ git clone https://github.com/phillipi/pix2pix.git
    $ cd pix2pix
  3. Run the git checkout command.
    $ git checkout -b devel d7e7b8b557229e75140cbe42b7f5dbf85a67d097
  4. Download the dataset:
    $ bash ./datasets/download_dataset.sh facades
    
    In this example, we put the dataset on our fast /raid storage.
    $ mkdir -p /raid/datasets
    $ mv ./datasets/facades /raid/datasets
  5. Create a file called Dockerfile, and add the following lines:
    FROM nvcr.io/nvidia/torch:17.03
    RUN luarocks install nngraph
    RUN luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
    WORKDIR /source
  6. Build the development Docker container image (build-devel.sh).
    docker build -t nv/pix2pix-torch:devel .
  7. Create the following train.sh script:
    #!/bin/bash -x
    ROOT="${ROOT:-/source}"
    DATASET="${DATASET:-facades}"
    DATA_ROOT="${DATA_ROOT:-/datasets/$DATASET}"
    DATA_ROOT=$DATA_ROOT name="${DATASET}_generation" which_direction=BtoA th train.lua

    If you were actually developing this model, you would iterate by making changes to the files on the host and running the training script, which executes inside the container.

  8. Optional: Edit the files and execute the next step after each change.
  9. Run the training script (run-devel.sh).
    nvidia-docker run --rm -ti -v $PWD:/source -v /raid/datasets:/datasets nv/pix2pix-torch:devel ./train.sh

Example 4.1: Package The Source Into The Container

Packaging the model definition and script into the container is straightforward: we simply add a COPY step to the Dockerfile.
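For instance, starting from the Dockerfile created in step 5 above, the packaged variant might look like the following minimal sketch; only the COPY line is new:

    FROM nvcr.io/nvidia/torch:17.03
    RUN luarocks install nngraph
    RUN luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
    COPY . /source
    WORKDIR /source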

We’ve updated the run script to simply drop the volume mounting and use the source packaged in the container. The packaged container is now much more portable than our devel container image because the internal code is fixed. It would be good practice to version control this container image with a specific tag and store it in a container registry.

The updates to run the container are equally subtle. We simply drop the volume mounting of our local source into the container.
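Assuming the packaged image was built and tagged as nv/pix2pix-torch:fixed (an illustrative tag, not one used elsewhere in this guide), the launch from run-devel.sh above might reduce to:

    nvidia-docker run --rm -ti -v /raid/datasets:/datasets nv/pix2pix-torch:fixed ./train.sh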

10. Troubleshooting

For more information about nvidia-docker containers, visit the GitHub site: NVIDIA-Docker GitHub.

For deep learning frameworks release notes and additional product documentation, see the Deep Learning Documentation website: Release Notes for Deep Learning Frameworks.

Notices

Notice

THE INFORMATION IN THIS GUIDE AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS GUIDE IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the product described in this guide shall be limited in accordance with the NVIDIA terms and conditions of sale for the product.

THE NVIDIA PRODUCT DESCRIBED IN THIS GUIDE IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.

NVIDIA makes no representation or warranty that the product described in this guide will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this guide. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this guide, or (ii) customer product designs.

Other than the right for customer to use the information in this guide with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this guide. Reproduction of information in this guide is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.