AlphaFold2 (Latest)

Deployment Guide

This section provides additional details for the deployment of the AlphaFold2 NIM.

Container image tags can be seen with the command below, similar to other container images on NGC.

ngc registry image info nvcr.io/nim/deepmind/alphafold2:1.0.0

Pull the container image using one of the following commands:

Docker

docker pull nvcr.io/nim/deepmind/alphafold2:1.0.0

NGC SDK

ngc registry image pull nvcr.io/nim/deepmind/alphafold2:1.0.0

As in the Quickstart Guide, you can run the following command to start the AlphaFold2 NIM:

export LOCAL_NIM_CACHE=~/.cache/nim
export NGC_CLI_API_KEY=<Your NGC CLI API Key>
docker run --rm --name alphafold2 --runtime=nvidia \
    -p 8000:8000 \
    -e NGC_CLI_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    nvcr.io/nim/deepmind/alphafold2:1.0.0

Below is an overview of the command and its options as well as others available at NIM startup:

  • docker run: This is the command to run a new container from a Docker image.

  • --rm: This flag tells Docker to automatically remove the container when it exits. This is useful for one-off runs or testing, as it prevents stopped containers from accumulating on the host.

  • --name alphafold2: This flag gives the container the name “alphafold2”.

  • --runtime=nvidia: This flag specifies the runtime to use for the container. In this case, it is set to “nvidia”, which is used for GPU acceleration.

  • -e NGC_CLI_API_KEY: This passes the NGC_CLI_API_KEY environment variable (and the value set in the parent terminal) to the container. This is used for authentication with NVIDIA’s NGC (NVIDIA GPU Cloud) service, including downloading the model data if it is not present in the NIM Cache.

  • -p 8000:8000: This flag maps port 8000 on the host machine to port 8000 in the container. This allows you to access the container’s services from the host machine.

  • -v <source>:<dest>: Mounts a host directory into the container. Here, $LOCAL_NIM_CACHE (set to /home/$USER/.cache/nim above) is mounted at /opt/nim/.cache so that downloaded models can be stored and reused across runs.
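Once the container is up, you can probe the service through the mapped port. The snippet below is a sketch; it assumes the NIM exposes the standard /v1/health/ready readiness endpoint on port 8000, matching the -p 8000:8000 mapping above.

```shell
NIM_HOST=localhost
NIM_PORT=8000   # must match the host side of the -p 8000:8000 mapping
# /v1/health/ready is the standard NIM readiness endpoint (assumption here);
# it returns successfully once the model is loaded and ready to serve.
curl -s "http://${NIM_HOST}:${NIM_PORT}/v1/health/ready"
```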

Optional runtime variables and performance tuning

Below, we detail some advanced usage for starting the AlphaFold2 NIM:

docker run --rm --name alphafold2 --runtime=nvidia \
    -e CUDA_VISIBLE_DEVICES=0 \
    -e NGC_CLI_API_KEY \
    -e NIM_CACHE_PATH=/home/$USER/alphafold2-data \
    -e NIM_PARALLEL_MSA_RUNNERS=3 \
    -e NIM_PARALLEL_THREADS_PER_MSA=12 \
    -p 8000:8000 \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    nvcr.io/nim/deepmind/alphafold2:1.0.0

  • -e CUDA_VISIBLE_DEVICES=0: This flag sets an environment variable CUDA_VISIBLE_DEVICES to the value “0”. This variable controls which GPU devices are visible to the container. In this case, it is set to 0, which means the container will only use the first GPU (if available).

  • nvcr.io/nim/deepmind/alphafold2:1.0.0: This is the Docker image name and tag. The image is hosted on NVIDIA’s container registry (nvcr.io) and is named alphafold2. The tag 1.0.0 specifies a specific version of the image.
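CUDA_VISIBLE_DEVICES also accepts a comma-separated list of device indices. A sketch of pinning the NIM to the first two GPUs, assuming the host has at least two and that the other flags are unchanged from the command above:

```shell
# Expose GPUs 0 and 1 to the container; CUDA_VISIBLE_DEVICES takes a
# comma-separated list of device indices.
docker run --rm --name alphafold2 --runtime=nvidia \
    -e CUDA_VISIBLE_DEVICES=0,1 \
    -e NGC_CLI_API_KEY \
    -p 8000:8000 \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    nvcr.io/nim/deepmind/alphafold2:1.0.0
```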

Checking the status of the AlphaFold2 NIM

You can view the Docker containers running on your system with the following command:

docker ps

This will return an output that looks like the following if the NIM is running:

CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS                                                                                                         NAMES
d114948j4f55   alphafold2   "/opt/nvidia/nvidia_…"   46 minutes ago   Up 46 minutes   6006/tcp, 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 8888/tcp, 0.0.0.0:50051->50051/tcp, :::50051->50051/tcp   test1

The first column in the output is the Docker container ID, which is useful for interacting with the container. The remaining fields describe the image the container is running, the command (in this case, the NIM server software), when the container was created, its status (including how long it has been running), its published ports, and finally its name (given by the startup command).

If the NIM is not running, only the header will be returned.
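For scripting, docker ps can also be filtered rather than read by eye. A small sketch that checks whether a container named alphafold2 is currently up:

```shell
# Print only the names of running containers and look for an exact match.
# grep -q suppresses output; -x requires the whole line to match.
if docker ps --format '{{.Names}}' | grep -qx alphafold2; then
    echo "alphafold2 NIM is running"
else
    echo "alphafold2 NIM is not running"
fi
```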

Killing a running container

If your container has entered a state in which you cannot interact with it, or it must be killed for any other reason, you can use the following command with the ID obtained from the docker ps command. Here is an example using the ID from the previous output:

docker kill d114948j4f55

This will immediately terminate the running container. Note: any in-flight requests will be canceled and data may be lost.
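docker kill sends SIGKILL immediately. If the container is still responsive, docker stop is the gentler option: it sends SIGTERM first and escalates to SIGKILL only after a grace period (10 seconds by default, adjustable with -t). A sketch:

```shell
# Grace period (seconds) before docker stop escalates from SIGTERM to SIGKILL.
STOP_TIMEOUT=30
# Containers can be addressed by name as well as by ID.
docker stop -t "$STOP_TIMEOUT" alphafold2
```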

Model Checkpoint Caching

On initial startup, the container will download the AlphaFold2 parameters and supporting data from NGC. You can skip this download step on future runs by caching the model weights locally using a cache directory as in the example below.

# Create the cache directory on the host machine
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p $LOCAL_NIM_CACHE

# Run the container with the cache directory mounted in the appropriate location
docker run --rm --name alphafold2 --runtime=nvidia \
    -e CUDA_VISIBLE_DEVICES=0 \
    -e NGC_CLI_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/deepmind/alphafold2:1.0.0

Note

Caching the model checkpoint can save a considerable amount of time on subsequent container runs.
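A quick way to confirm the cache is being populated is to check the size of the host directory between runs. The commands below are illustrative; the NIM itself controls what is written under the mount.

```shell
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
# After the first successful run, the downloaded model data appears here;
# a non-trivial size indicates the download step can be skipped next time.
du -sh "$LOCAL_NIM_CACHE"
```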

© Copyright 2024, NVIDIA Corporation. Last updated on Aug 28, 2024.