Getting Started
Begin with a Docker-supported operating system
Install Docker - minimum version: 23.0.1
Install Docker Compose V2 plugin
Verify Docker Compose support by running
docker compose version
Example output:
Docker Compose version 2.24.6+ds1-0ubuntu1~22.04.1
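If you also want to confirm the Docker Engine version against the 23.0.1 minimum above, one quick check (assuming a standard Docker installation) is:
docker version --format '{{.Server.Version}}'
This prints only the Docker Engine (server) version string.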
Install NVIDIA Drivers - minimum version: 535
Install the NVIDIA Container Toolkit - minimum version: 1.13.5
Verify your container runtime supports NVIDIA GPUs by running
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Example output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 41% 30C P8 1W / 260W | 2244MiB / 11264MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
For more information on enumerating multi-GPU systems, please see the NVIDIA Container Toolkit’s GPU Enumeration Docs
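On a multi-GPU system you can also confirm that a specific device is reachable from a container; for example, the following sketch (assuming GPU index 0 is the device you intend to use) restricts the container to one device:
# Expose only GPU index 0 to the container
docker run --rm --runtime=nvidia --gpus '"device=0"' ubuntu nvidia-smi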
Supported Hardware
This NIM Agent Blueprint is configured to run on a single GPU and should run on any NVIDIA GPU with compute capability >8.0 that meets the minimum hardware requirements. The minimum GPU memory requirement for this NIM Agent Blueprint is 32GB, and at least 600GB of free hard drive space is also required.
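As a quick sanity check against these requirements, you can query the GPU model, memory, and compute capability with nvidia-smi and check free disk space with df (a sketch; the compute_cap query field assumes a reasonably recent driver, which the 535 minimum above satisfies):
# GPU name, total memory, and compute capability
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv
# Free space on the disk that will hold the model cache
df -h .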
NGC (NVIDIA GPU Cloud) Account
Copy your NGC Personal Key and export it to an environment variable:
export NGC_CLI_API_KEY=nvapi-XXX
Log in to nvcr.io with Docker using your NGC API key:
docker login nvcr.io --username='$oauthtoken' --password=${NGC_CLI_API_KEY}
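If you prefer not to pass the key on the command line, an equivalent login reads it from stdin via Docker's --password-stdin option:
echo "${NGC_CLI_API_KEY}" | docker login nvcr.io --username '$oauthtoken' --password-stdin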
NGC CLI Tool
Download the NGC CLI tool for your OS. Use NGC CLI version 3.41.1 or newer. Here is the command to install it on AMD64 Linux in your home directory:
wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.41.3/files/ngccli_linux.zip -O ~/ngccli_linux.zip && \
unzip ~/ngccli_linux.zip -d ~/ngc && \
chmod u+x ~/ngc/ngc-cli/ngc && \
echo "export PATH=\"\$PATH:~/ngc/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
Set up your NGC CLI Tool locally (You’ll need your API key for this!):
ngc config set
After you enter your API key, you may see multiple options for the org and team. Select as desired or hit enter to accept the default.
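You can review the stored configuration afterwards; with recent NGC CLI releases the following prints the currently active org, team, and API key status:
ngc config current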
Model Specific Requirements
This blueprint is composed of three BioNeMo NIMs, so all hardware and software requirements for each of these NIMs must be met before launching the blueprint. These requirements can be found in the documentation for the AlphaFold2, DiffDock, and MolMIM NIMs.
For a more complete example, please also refer to GitHub for these files.
Prepare a folder to contain all data and model checkpoints. (Important: please ensure the disk hosting this folder has at least 512GB of free space to store all data and model checkpoints.)
mkdir -p nim_cache/models
chmod -R 777 nim_cache
export NIM_CACHE=${PWD}/nim_cache/models
export ALPHAFOLD2_CACHE=${NIM_CACHE}
export DIFFDOCK_CACHE=${NIM_CACHE}
export MOLMIM_CACHE=${NIM_CACHE}
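Note that these exports apply only to the current shell. If you plan to relaunch the blueprint from a new terminal later, one option (a sketch, following the same ~/.bash_profile convention used for the NGC CLI above, and assuming nim_cache lives in your home directory) is to verify and persist them:
# Confirm all four cache variables are set
echo "${NIM_CACHE} ${ALPHAFOLD2_CACHE} ${DIFFDOCK_CACHE} ${MOLMIM_CACHE}"
# Optionally persist the exports for future shells
cat >> ~/.bash_profile << 'EOF'
export NIM_CACHE=$HOME/nim_cache/models   # adjust if nim_cache was created elsewhere
export ALPHAFOLD2_CACHE=${NIM_CACHE}
export DIFFDOCK_CACHE=${NIM_CACHE}
export MOLMIM_CACHE=${NIM_CACHE}
EOF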
In the same path where the nim_cache folder was created above, create a new file named docker-compose.yaml and copy the content below into it.
version: '3'
services:
  alphafold:
    image: nvcr.io/nim/deepmind/alphafold2:1.0.0
    container_name: cadd-alphafold2
    runtime: nvidia
    ports:
      - "8081:8000"
    volumes:
      - ${ALPHAFOLD2_CACHE:-~/.cache/nim/models}:/opt/nim/.cache/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
  diffdock:
    image: nvcr.io/nim/mit/diffdock:1.2.0
    container_name: cadd-diffdock
    runtime: nvidia
    ports:
      - "8082:8000"
    volumes:
      - ${DIFFDOCK_CACHE:-~/.cache/nim}:/home/nvs/.cache/nim/models/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
      - NVIDIA_VISIBLE_DEVICES=${DIFFDOCK_VISIBLE_DEVICES:-0}
  molmim:
    image: nvcr.io/nim/nvidia/molmim:1.0.0
    container_name: cadd-molmim
    runtime: nvidia
    ports:
      - "8083:8000"
    volumes:
      - ${MOLMIM_CACHE:-~/.cache/nim}:/home/nvs/.cache/nim/models/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
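Before launching, you can ask Docker Compose to validate the file and print the fully resolved configuration, which also confirms that the cache and API key variables above are picked up:
docker compose config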
In the path that contains the nim_cache folder and the docker-compose.yaml file, launch the blueprint with docker compose:
export NGC_CLI_API_KEY=nvapi-XXX
docker compose up
Please always make sure a valid NGC Personal Key is set in the environment variable; otherwise, launching the NIM services will fail.
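If you prefer to keep this terminal free, the services can also be started in the background and their logs followed separately using standard Docker Compose options:
# Start the three NIM containers in detached mode
docker compose up -d
# Follow the logs of all services (Ctrl+C stops following; the containers keep running)
docker compose logs -f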
If this is the first time launching this blueprint, Docker will do the following tasks:
Pull the container images for the three NIMs. (about 5~10 minutes)
Download the data and model content for the three NIMs. (about 5 hours)
Launch three containers to host the services. (about 1~3 minutes)
Please note that MSA processing in AlphaFold2 requires a large sequence database, which will be cloned from the NGC registry along with the model folder. This step may take about 5 hours to complete if this is the first time launching AlphaFold2. If the AlphaFold2 model folder has already been downloaded from running the AlphaFold2 NIM alone, it can be moved to the folder nim_cache/models/alphafold2-data_v1.0.0 to skip re-downloading. Downloading the other two models takes only a few minutes.
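While the first-time download is running, you can monitor progress from another terminal; a simple sketch (using the service name defined in the docker-compose.yaml above, and assuming the watch utility is installed):
# Watch how much data has been downloaded into the cache so far
watch -n 60 du -sh nim_cache/models
# Follow the AlphaFold2 service logs to see database download progress
docker compose logs -f alphafold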
Open a new terminal and use the following commands to check the status of the three APIs until they all report ready.
AlphaFold2 NIM
curl localhost:8081/v1/health/ready
Example output:
{"status":"ready"}
DiffDock NIM
curl localhost:8082/v1/health/ready
Example output:
true
MolMIM NIM
curl localhost:8083/v1/health/ready
Example output:
{"status":"ready"}