AlphaFold2-Multimer (Latest)

Prerequisites


Before installing the NIM, ensure Docker and the NVIDIA Container Toolkit are installed, then verify that containers can access your GPU by running nvidia-smi inside a container:

docker run -it --rm --runtime=nvidia --gpus all ubuntu nvidia-smi


Example output:


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01    Driver Version: 525.78.01    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 41%   30C    P8     1W / 260W |   2244MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+


Note

For more information on enumerating multi-GPU systems, please see the NVIDIA Container Toolkit’s GPU Enumeration Docs.

The AlphaFold2-Multimer NIM is configured to run on a single GPU. The minimum GPU memory requirement for the AlphaFold2-Multimer NIM is 32GB. The AlphaFold2-Multimer NIM should run on any NVIDIA GPU that meets this minimum hardware requirement and has compute capability ≥ 8.0. The AlphaFold2-Multimer NIM also requires at least 512GB of free hard drive space to store the various MSA databases required by endpoints exposed from the NIM.

In summary, users looking to successfully run the AlphaFold2-Multimer NIM for short sequences / multimers should have a system with:

  • One NVIDIA GPU with ≥ 32GB of VRAM and Compute Capability ≥ 8.0.

  • At least 64 GB of RAM.

  • A CPU with at least 24 available cores.

  • At least 512GB of free SSD drive space.

For optimal performance on long sequences / multimers and multiple MSA databases, we recommend a system with:

  • At least one NVIDIA GPU with 80GB of VRAM (e.g. A100 80GB).

  • At least 128GB of RAM.

  • A CPU with at least 36 available cores.

  • At least 512GB of free fast NVMe SSD drive space.
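
As a quick check against the requirements above, the commands below query the GPU, system memory, CPU core count, and free disk space. This is only a convenience sketch; the compute_cap query field requires a reasonably recent NVIDIA driver, and output formatting may vary:

## GPU model, total VRAM, and compute capability (compute_cap requires a recent driver).
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv

## Total system RAM (in GB) and available CPU cores.
free -g
nproc

## Free disk space on the drive that will hold the NIM cache and MSA databases.
df -h ~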

  1. Create an account on NGC.

  2. Generate an API Key.

  3. Log in to Docker with your NGC API key: docker login nvcr.io --username='$oauthtoken' --password=${NGC_CLI_API_KEY}

  4. Download the NGC CLI Tool (https://org.ngc.nvidia.com/setup/installers/cli) for your OS.

Important

Use NGC CLI version 3.41.1 or newer. The following command installs it on AMD64 Linux in your home directory:



wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.41.3/files/ngccli_linux.zip -O ~/ngccli_linux.zip && \
unzip ~/ngccli_linux.zip -d ~/ngc && \
chmod u+x ~/ngc/ngc-cli/ngc && \
echo "export PATH=\"\$PATH:~/ngc/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
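
Once installed, a quick sanity check (this assumes ~/ngc/ngc-cli is on your PATH from the step above):

## Print the installed NGC CLI version.
ngc --version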


  5. Set up your NGC CLI Tool locally. (You’ll need your API key for this!)


ngc config set


Note

After you enter your API key, you may see multiple options for the org and team. Select as desired or hit enter to accept the default.
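
To confirm the configuration was saved, the NGC CLI can print its active settings back; this is a convenience check using the config subcommand:

## Print the current NGC CLI configuration (org, team, output format, etc.).
ngc config current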


  6. Log in to NGC.

You’ll need to log in to NGC via Docker and set the NGC_API_KEY environment variable to pull images:


docker login nvcr.io
Username: $oauthtoken
Password: <Enter your NGC key here>

Then, set the relevant environment variables in your shell. You will need to set the NGC_CLI_API_KEY variable:


export NGC_CLI_API_KEY=<Enter your NGC key here>
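
If you prefer a non-interactive login (for example, in a script), Docker can read the password from stdin; this is simply the login above combined with the exported key. Note that the username is the literal string $oauthtoken, hence the single quotes:

echo "$NGC_CLI_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin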

  7. Set up your NIM (model) cache.

The NIM cache is where the NIM downloads and stores models, so that they don’t need to be downloaded again the next time you run the NIM. The NIM cache must be readable and writable by the NIM, so in addition to creating the directory, its permissions need to be set to globally readable and writable. The NIM cache directory can be set up as follows:


## Create the NIM cache directory.
mkdir -p /home/$USER/.cache/nim

## Set the NIM cache directory permissions to 777.
chmod -R 777 /home/$USER/.cache/nim

## If you hit permissions issues after running the NIM and downloading the model for AlphaFold2,
## set model & database permissions to 777 as well. Required for running the NIM!
(sudo) chmod -R 777 /home/$USER/.cache/nim/alphafold2-data_v1.1.0

Note

Throughout this documentation, we refer to the above path as $LOCAL_NIM_CACHE (that is, LOCAL_NIM_CACHE=~/.cache/nim). You can set this cache path to any location you’d like, preferably on a high-speed SSD for fast read/write access to the downloaded AlphaFold2 model and databases. Regardless of where $LOCAL_NIM_CACHE points, remember to run chmod -R 777 $LOCAL_NIM_CACHE on that directory; this is required to download the AlphaFold2 model when running the NIM.
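
For example, to keep the cache on a separate NVMe volume instead of the default location (the path below is only an illustration; substitute your own mount point):

## Example only: any fast SSD-backed path works.
export LOCAL_NIM_CACHE=/mnt/nvme/nim-cache
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R 777 "$LOCAL_NIM_CACHE"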


Important

If you experience issues downloading the model via running the NIM, you can manually download the AlphaFold2 model with:

ngc registry model download-version nim/deepmind/alphafold2-data:1.1.0

followed by:

(sudo) chmod -R 777 /home/$USER/.cache/nim/alphafold2-data_v1.1.0
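
If you do download manually, you will likely want the files to land inside your NIM cache. Assuming the NGC CLI's --dest option (which writes the download into a chosen directory), a sketch:

## Download the AlphaFold2 model/database bundle into the NIM cache (--dest assumed from NGC CLI 3.x).
ngc registry model download-version nim/deepmind/alphafold2-data:1.1.0 --dest ~/.cache/nim

## Make the downloaded data readable and writable by the NIM.
chmod -R 777 ~/.cache/nim/alphafold2-data_v1.1.0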


Now, you should be able to pull the container and download the model using the environment variables. To get started, see the Quickstart Guide.
