Running NGC Containers Using Singularity

Overview

This chapter provides step-by-step instructions for pulling HPC containers from the NGC registry and running them using Singularity.

Singularity v2.6+ provides native Docker registry support. This support allows most Docker images, such as those hosted on NGC, to be pulled and converted to Singularity-compatible images in a single step.

NVIDIA tests HPC container compatibility with Singularity by converting the NGC images to Singularity format and running them through a rigorous QA process. The instructions below walk you through the process of pulling and running NGC containers with the Singularity runtime. Application-specific information may vary, so it is recommended that you follow the container-specific documentation before running with Singularity. If the container documentation does not include Singularity information, then the respective container was not tested using Singularity.

Prerequisites

These instructions assume the following.

  • You have Singularity v2.6+ installed on your system

  • You have performed the following steps from the NGC website (see the NGC Getting Started Guide).

    • Signed up for an NGC account at https://ngc.nvidia.com/signup.

    • Created an NGC API key for access to the NGC container registry

    • Browsed the NGC website and identified an available NGC container and tag to run

  • You have correctly set the udev rules, which are detailed here

Note:

It is recommended that you install nvidia-container-cli because, if it is installed, Singularity will use it. More information can be found here.
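If you are unsure whether nvidia-container-cli is already present, a quick, optional check (assuming the tool, if installed, is on your PATH; the exact output depends on your installation) is:

$ which nvidia-container-cli
$ nvidia-container-cli --version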

Converting to a Singularity Image

Before running with Singularity you must set NGC container registry authentication credentials.

This is most easily accomplished by setting the following environment variables.

bash

$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=<NVIDIA NGC API key>

tcsh

> setenv SINGULARITY_DOCKER_USERNAME '$oauthtoken'
> setenv SINGULARITY_DOCKER_PASSWORD <NVIDIA NGC API key>

More information describing how to obtain and use your NVIDIA NGC API key can be found here.
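Before pulling, you can optionally confirm that both variables are set in your current shell:

$ env | grep SINGULARITY_DOCKER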

Once credentials are set in the environment, the NGC container can be pulled to a local Singularity image.

$ singularity build <app_tag>.simg docker://nvcr.io/<repository>/<app:tag>

This will save the container to the current directory as <app_tag>.simg.

For example, to convert the HPC application NAMD hosted on NGC to a Singularity image, run

$ singularity build namd_2.12-171025.simg docker://nvcr.io/hpc/namd:2.12-171025

After the build has finished the Singularity image file, namd_2.12-171025.simg, will be available for use in the current working directory.
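If you want to verify the result, you can check that the image file exists and, optionally, inspect its metadata (singularity inspect is available in Singularity v2.6+; its output varies by image):

$ ls -lh namd_2.12-171025.simg
$ singularity inspect namd_2.12-171025.simg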

Running the Singularity Container

Once the local Singularity image has been pulled, the following modes of running are supported.

To leverage NVIDIA GPUs, you must use the Singularity flag `--nv` when running the containers. More Singularity flags are explained here.
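A quick way to confirm that the GPUs are visible from inside a container is to run nvidia-smi through the image (this assumes the NVIDIA driver, and therefore nvidia-smi, is installed on the host):

$ singularity exec --nv <app_tag>.simg nvidia-smi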

Important:

For Amazon Machine Image Users

Amazon Machine Images on Amazon Web Services have a default root umask of 077. Singularity must be installed with a umask of 022 to run properly. To (re)install Singularity with correct permissions:
  • Uninstall Singularity (if it is installed)

  • Change the umask with: `$ umask 0022`

  • Install Singularity

  • Restore the umask: `$ umask 0077`

This causes installed Singularity files to have permission 0755 instead of the default 0700. Note that the umask command only applies changes to the current shell. Use umask and install Singularity from the same shell session.
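A minimal sketch of that shell session is shown below; the uninstall and install steps are placeholders for however you normally remove and install Singularity (package manager or from source):

$ <uninstall Singularity, if already installed>
$ umask 0022     # relax the umask so installed files are created with permission 0755
$ <install Singularity>
$ umask 0077     # restore the AMI default umask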

Directory Access

Singularity containers are themselves essentially read-only. To provide application input and persist application output, we'll bind a host directory into the container; this is accomplished through the Singularity `-B` flag. The format of this flag is `-B <host_src_dir>:<container_dst_dir>`. Once a host directory is bound into the container, we can interact with it from within the container exactly as we can outside the container.

It is also often convenient to use the `--pwd <container_dir>` flag, which will set the present working directory of the command to be run within the container.

The Singularity command below will mount the present working directory on the host to `/host_pwd` in the container process and set the present working directory of the container process to `/host_pwd`. With this set of flags, the `<cmd>` to be run will be launched from the host directory Singularity was called from.

$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd <image.simg> <cmd>

Note:

Binding to a directory which doesn't exist within the container image requires kernel and configuration support that may not be available on all systems, particularly those running older kernels such as CentOS/RHEL 6. When in doubt, contact your system administrator.
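For example, combining these flags with the NAMD image built earlier, a run that reads its input from and writes its output to the host's current directory could look like the following (the <input_file> placeholder stands in for your own NAMD configuration file, and additional NAMD options may be needed for your workload):

$ singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd namd_2.12-171025.simg /opt/namd/namd-multicore <input_file>.namd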

Command Line Execution with Singularity

Running the container with Singularity from the command line looks similar to the command below.

$ singularity exec --nv <app_tag>.simg <command_to_run>

For example, to run the NAMD executable in the container

$ singularity exec --nv namd_2.12-171025.simg /opt/namd/namd-multicore

Interactive Shell with Singularity

To start a shell within the container, run the command below

$ singularity exec --nv <app_tag>.simg /bin/bash

For example, to start an interactive shell in the NAMD container

$ singularity exec --nv namd_2.12-171025.simg /bin/bash
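From the interactive shell you can explore the container filesystem and launch the application manually, for example by listing the NAMD installation directory and then exiting the container:

$ ls /opt/namd
$ exit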