Prerequisites
Begin with a Docker-supported operating system
Install Docker - minimum version: 23.0.1
Install NVIDIA Drivers - minimum version: 535
Install the NVIDIA Container Toolkit - minimum version: 1.13.5
Verify that your container runtime supports NVIDIA GPUs by running:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Example output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01    Driver Version: 525.78.01    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 41%   30C    P8     1W / 260W |   2244MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
For more information on enumerating multi-GPU systems, please see the NVIDIA Container Toolkit's GPU Enumeration Docs.
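If your system has multiple GPUs, you can expose a single device to the container using Docker's device-selection syntax. A minimal sketch (device index 0 below is illustrative):
## Expose only the first GPU (index 0) to the container
docker run --rm --runtime=nvidia --gpus '"device=0"' ubuntu nvidia-smi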
The AlphaFold2 NIM is configured to run on a single GPU. It requires a minimum of 32GB of GPU memory and should run on any NVIDIA GPU that meets this requirement and has compute capability ≥8.0. The NIM also requires at least 512GB of free hard drive space.
In summary, users looking to successfully run the AlphaFold2 NIM for small sequences should have a system with:
One NVIDIA GPU with ≥32GB of VRAM and Compute Capability ≥8.0
At least 64GB of RAM
A CPU with at least 24 available cores
At least 512GB of free SSD drive space
For optimum performance, we recommend a system with the following (see the verification sketch after this list):
At least one NVIDIA GPU with 80GB of VRAM (e.g., A100 80GB)
At least 128GB of RAM
A CPU with at least 36 available cores
At least 512GB of free space on a fast NVMe SSD
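The following sketch checks a system against these requirements using standard tools (the compute_cap query field assumes a reasonably recent nvidia-smi; older drivers may not support it):
## GPU model, total VRAM, and compute capability
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv
## Number of available CPU cores
nproc
## Total system RAM
free -h
## Free disk space on the drive holding your home directory
df -h ~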
Log in to Docker with your NGC API key using:
docker login nvcr.io --username='$oauthtoken' --password=${NGC_CLI_API_KEY}
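If you prefer to keep the key off the command line, Docker's --password-stdin flag reads it from standard input instead (this assumes NGC_CLI_API_KEY is exported, as described below):
echo "${NGC_CLI_API_KEY}" | docker login nvcr.io --username '$oauthtoken' --password-stdin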
Download the NGC CLI tool for your OS from https://org.ngc.nvidia.com/setup/installers/cli. Use NGC CLI version 3.41.1 or newer. Here is the command to install version 3.41.3 on AMD64 Linux in your home directory:
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.41.3/files/ngccli_linux.zip -O ~/ngccli_linux.zip && \
unzip ~/ngccli_linux.zip -d ~/ngc && \
chmod u+x ~/ngc/ngc-cli/ngc && \
echo "export PATH=\"\$PATH:~/ngc/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
Set up your NGC CLI Tool locally (You’ll need your API key for this!):
ngc config set
After you enter your API key, you may see multiple options for the org and team. Select as desired or hit enter to accept the default.
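You can review the saved configuration at any time with:
ngc config current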
Log in to NGC
You’ll need to log in to NGC via Docker and set the NGC_API_KEY environment variable to pull images:
docker login nvcr.io
Username: $oauthtoken
Password: <Enter your NGC key here>
Then, set the relevant environment variables in your shell. You will need to set the NGC_CLI_API_KEY variable:
export NGC_CLI_API_KEY=<Enter your NGC key here>
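Note that this export only lasts for the current shell session. To persist it across sessions, you can append the same line to your profile, following the pattern used for the PATH above (replace the placeholder with your actual key):
echo "export NGC_CLI_API_KEY=<Enter your NGC key here>" >> ~/.bash_profile && source ~/.bash_profile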
Set up your NIM cache
The NIM cache stores downloaded models so that they do not need to be downloaded again the next time you run the NIM. The cache must be readable and writable by the NIM, so in addition to creating the directory, you need to set its permissions to be globally readable and writable. The NIM cache directory can be set up as follows:
## Create the NIM cache directory
mkdir -p /home/$USER/.cache/nim
## Set the NIM cache directory permissions to the correct values
chmod -R 777 /home/$USER/.cache/nim
Now, you should be able to pull the container and download the model using the environment variables. To get started, see the quickstart guide.
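As a rough sketch of how these pieces fit together when launching the container (the image name and container-side cache path below are placeholders; the quickstart guide gives the exact values):
## Illustrative only -- substitute the image and cache path from the quickstart guide
docker run --rm --runtime=nvidia --gpus all \
    -e NGC_CLI_API_KEY \
    -v /home/$USER/.cache/nim:/opt/nim/.cache \
    <image from the quickstart guide>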