Prerequisites#

Supported Hardware#

The RFdiffusion NIM is configured to run on a single GPU. It requires a minimum of 12GB of GPU memory, an NVIDIA GPU with compute capability greater than 7.0, and at least 15GB of free hard drive space.

Starting with the 2.0 release of the RFdiffusion NIM, the model is optimized using the NVIDIA Warp and NVIDIA TensorRT frameworks, allowing it to run up to twice as fast as the non-optimized version. The RFdiffusion NIM provides pre-compiled TensorRT engines for A100, A10G, L40, and H100 GPUs; when running on other GPUs, it builds TensorRT engines at runtime.
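To check whether a GPU meets these requirements, you can query its name, total memory, and compute capability with `nvidia-smi`. This is a minimal sketch; the `compute_cap` query field is only available in reasonably recent NVIDIA drivers.

```shell
# Query GPU name, total memory, and compute capability.
# The compute_cap field requires a recent NVIDIA driver.
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv
```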

Software Prerequisites#

Verify that the NVIDIA Container Toolkit is installed and can expose your GPU inside a container:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Example output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.144.03   Driver Version: 550.144.03   CUDA Version: 12.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 41%   30C    P8     1W / 260W |   2244MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Note

For more information on enumerating multi-GPU systems, see the NVIDIA Container Toolkit’s GPU Enumeration documentation.
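On a multi-GPU system you can restrict the container to a specific device instead of `--gpus all`. As a sketch, the `--gpus` flag accepts a `device=` selector; note the nested quoting, which ensures the shell passes the inner quotes through to Docker:

```shell
# Run nvidia-smi on GPU 0 only; the inner quotes are needed so
# Docker receives device=0 as a single selector.
docker run --rm --runtime=nvidia --gpus '"device=0"' ubuntu nvidia-smi
```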

NGC (NVIDIA GPU Cloud) Account#

  1. Create an account on NGC.

  2. Generate an NGC API key.

  3. Log in to Docker with your NGC API key (enter the key as the password when prompted):

docker login nvcr.io --username='$oauthtoken'
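To avoid the interactive password prompt (for example in scripts or CI), you can pipe the key to `docker login` via `--password-stdin`. This sketch assumes the key is stored in an environment variable named `NGC_API_KEY`; the variable name is a convention, not a requirement.

```shell
# NGC_API_KEY is an assumed variable name holding your NGC API key.
export NGC_API_KEY="<your-ngc-api-key>"
# $oauthtoken is a literal username, so it is single-quoted deliberately.
echo "$NGC_API_KEY" | docker login nvcr.io --username='$oauthtoken' --password-stdin
```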