Agent Blueprint: Generative Virtual Screening

Getting Started

Verify that Docker Compose is installed:

docker compose version


Example output:


Docker Compose version 2.24.6+ds1-0ubuntu1~22.04.1


Verify that Docker can access the GPU through the NVIDIA Container Toolkit:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi


Example output:


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01    Driver Version: 525.78.01    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 41%   30C    P8     1W / 260W |   2244MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+


Note

For more information on enumerating multi-GPU systems, please see the NVIDIA Container Toolkit’s GPU Enumeration Docs

Supported Hardware

This NIM Agent Blueprint is configured to run on a single GPU and should run on any NVIDIA GPU with compute capability >8.0 that meets the minimum hardware requirements: at least 32GB of GPU memory and at least 600GB of free hard drive space.
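As a quick sanity check (a minimal sketch; the compute_cap query field assumes a reasonably recent NVIDIA driver), the following commands report each GPU's name, total memory, and compute capability, plus the free disk space in the current directory:

nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv
df -h .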

NGC (NVIDIA GPU Cloud) Account

  1. Create an account on NGC

  2. Generate a Personal-Key

  3. Copy the Personal-Key value and export it as an environment variable with export NGC_CLI_API_KEY=nvapi-XXX

  4. Log in to Docker with your NGC API key using docker login nvcr.io --username='$oauthtoken' --password=${NGC_CLI_API_KEY} (see the combined block below)
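For convenience, steps 3 and 4 can be run together as follows (replace nvapi-XXX with your own Personal-Key value):

export NGC_CLI_API_KEY=nvapi-XXX
docker login nvcr.io --username='$oauthtoken' --password=${NGC_CLI_API_KEY}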

NGC CLI Tool

  1. Download the NGC CLI tool for your OS.

Important

Use NGC CLI version 3.41.1 or newer. Here is the command to install this on AMD64 Linux in your home directory:



wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.41.3/files/ngccli_linux.zip -O ~/ngccli_linux.zip && \
unzip ~/ngccli_linux.zip -d ~/ngc && \
chmod u+x ~/ngc/ngc-cli/ngc && \
echo "export PATH=\"\$PATH:~/ngc/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile


  2. Set up your NGC CLI Tool locally (you’ll need your API key for this!):


ngc config set


Note

After you enter your API key, you may see multiple options for the org and team. Select as desired or hit enter to accept the default.
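To confirm the CLI is installed and configured, a quick check such as the following can be used (ngc config current is assumed to be available in this CLI version; it simply prints the active configuration):

ngc --version
ngc config current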


Model Specific Requirements

This blueprint is composed of three BioNeMo NIMs (AlphaFold2, DiffDock, and MolMIM), so all hardware and software requirements must be met for each NIM before launching the blueprint. These requirements can be found in the documentation for each NIM.

For a more complete example, please also refer to GitHub for these files.

  1. Prepare a folder to hold all data and model checkpoints. (Important: ensure the disk hosting this folder has at least 512GB of free space for all data and model checkpoints.)


mkdir -p nim_cache/models
chmod -R 777 nim_cache
export NIM_CACHE=${PWD}/nim_cache/models
export ALPHAFOLD2_CACHE=${NIM_CACHE}
export DIFFDOCK_CACHE=${NIM_CACHE}
export MOLMIM_CACHE=${NIM_CACHE}
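A quick way to confirm the folder exists and the variables point where expected:

ls -ld ${NIM_CACHE}
echo ${ALPHAFOLD2_CACHE} ${DIFFDOCK_CACHE} ${MOLMIM_CACHE}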


  2. In the same path where the nim_cache folder was created above, create a new file named docker-compose.yaml and copy the content below into it.


version: '3'
services:
  alphafold:
    image: nvcr.io/nim/deepmind/alphafold2:1.0.0
    container_name: cadd-alphafold2
    runtime: nvidia
    ports:
      - "8081:8000"
    volumes:
      - ${ALPHAFOLD2_CACHE:-~/.cache/nim/models}:/opt/nim/.cache/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
  diffdock:
    image: nvcr.io/nim/mit/diffdock:1.2.0
    container_name: cadd-diffdock
    runtime: nvidia
    ports:
      - "8082:8000"
    volumes:
      - ${DIFFDOCK_CACHE:-~/.cache/nim}:/home/nvs/.cache/nim/models/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
      - NVIDIA_VISIBLE_DEVICES=${DIFFDOCK_VISIBLE_DEVICES:-0}
  molmim:
    image: nvcr.io/nim/nvidia/molmim:1.0.0
    container_name: cadd-molmim
    runtime: nvidia
    ports:
      - "8083:8000"
    volumes:
      - ${MOLMIM_CACHE:-~/.cache/nim}:/home/nvs/.cache/nim/models/
    environment:
      - NGC_CLI_API_KEY=${NGC_CLI_API_KEY:?Error NGC_CLI_API_KEY not set}
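Before launching, Docker Compose can parse and resolve this file to catch syntax or missing-variable errors early (this assumes the file is in the current directory and NGC_CLI_API_KEY is already exported):

docker compose config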


  3. In the path that contains the nim_cache folder and the docker-compose.yaml file, launch the blueprint with docker compose:


export NGC_CLI_API_KEY=nvapi-XXX
docker compose up
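Alternatively, the services can be started in the background and the logs followed separately; this is standard Docker Compose usage rather than anything specific to this blueprint:

docker compose up -d
docker compose logs -f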


Note

Please always make sure a valid NGC Personal-Key is set in the environment variable; otherwise, launching the NIM services will fail.

If this is the first time the blueprint is launched, Docker will perform the following tasks:

  • Pull the container images for the three NIMs. (about 5~10 minutes)

  • Download the data and model content for the three NIMs. (about 5 hours)

  • Launch three containers to host the services. (about 1~3 minutes)

Note

Please note that MSA processing in AlphaFold2 requires a large sequence database, which is downloaded from the NGC registry along with the model folder. This step may take about 5 hours to complete the first time AlphaFold2 is launched. If the AlphaFold2 model folder has already been downloaded while running the AlphaFold2 NIM on its own, it can be moved to nim_cache/models/alphafold2-data_v1.0.0 to skip re-downloading. Downloading the other two models only takes a few minutes.
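While the first launch is downloading, the containers and the growing cache can be inspected from another terminal (the container names come from the docker-compose.yaml above, and ${NIM_CACHE} is the variable exported earlier):

docker ps --filter "name=cadd-"
du -sh ${NIM_CACHE}/*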


  4. Open a new terminal and use the following commands to check the status of the three APIs until they all report ready.

AlphaFold2 NIM


curl localhost:8081/v1/health/ready


Example output:


{"status":"ready"}


DiffDock NIM


curl localhost:8082/v1/health/ready


Example output:


true


MolMIM NIM


curl localhost:8083/v1/health/ready


Example output:


{"status":"ready"}

