Prerequisites#
Review the following system requirements and setup instructions before deploying NIM for BMD.
System Requirements#
NVIDIA AI Enterprise License: NVIDIA NIM for BMD is available for self-hosting under the NVIDIA AI Enterprise (NVAIE) License.
NVIDIA GPUs: NIM for BMD runs on single or multiple GPUs. Minimum GPU memory is 8 GB. The system requires CUDA compute capability 8.0 or higher.
CPU: x86 processor (modern processor recommended).
Storage: 15 GB of disk space for the Docker container.
Operating System: A Linux distribution that meets the following criteria:
Has glibc 2.35 or higher (refer to the output of ld -v). NVIDIA recommends Ubuntu 20.04 or later.
CUDA Drivers: Follow the installation guide. NVIDIA recommends:
Using a network repository as part of a package manager installation and skipping the CUDA toolkit installation, because libraries are available within the NIM container.
Installing the open kernel modules for your driver version.
Refer to the Frameworks Support Matrix for NVIDIA driver version compatibility. Ensure that the latest compatible NVIDIA driver is installed before you launch NIM containers.
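The glibc requirement above can be checked from the shell before anything else is installed. This is a minimal sketch; it uses getconf GNU_LIBC_VERSION as an alternative to ld -v and assumes GNU coreutils (for sort -V):

```shell
# Print the installed glibc version and compare it with the 2.35 minimum.
required="2.35"
current="$(getconf GNU_LIBC_VERSION | awk '{print $2}')"

# sort -V orders version strings; if the required minimum sorts first,
# the installed version is at least as new and the check passes.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n 1)" = "$required" ]; then
    echo "glibc $current meets the $required minimum"
else
    echo "glibc $current is below the $required minimum"
fi
```

If the check fails, upgrade to a distribution release that ships glibc 2.35 or higher before continuing.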
Docker Setup#
Do the following to set up Docker:
Install Docker.
Install the NVIDIA Container Toolkit.
After installing the toolkit, follow the Configure Docker instructions.
To verify your setup, run the following command:
docker run --rm --gpus=all ubuntu nvidia-smi
The output confirms the NVIDIA driver version, the CUDA version, and the GPUs visible to containers.
Example output
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          On  |   00000000:1B:00.0 Off |                    0 |
| N/A   36C    P0            112W /  700W |   78489MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
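On multi-GPU hosts, the same verification can be scoped to particular devices with Docker's --gpus flag; the device indices below are illustrative:

```shell
# Expose only GPU 0 to the container (the quoting matters for the device= form).
docker run --rm --gpus '"device=0"' ubuntu nvidia-smi

# Expose two specific GPUs by index.
docker run --rm --gpus '"device=0,1"' ubuntu nvidia-smi
```

Only the listed GPUs should appear in the resulting nvidia-smi table.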
NGC Account#
To create an NVIDIA GPU Cloud (NGC) account and authenticate requests, do the following:
Download the NGC CLI tool for your operating system.
Set up the NGC CLI Tool locally:
ngc config set
Log in to NGC using Docker and set the NGC_API_KEY environment variable to pull images:

docker login nvcr.io
Username: $oauthtoken
Password: <Enter your NGC key here>
Set the relevant environment variables in your shell, including NGC_API_KEY:

export NGC_API_KEY=<Enter your NGC key here>
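With NGC_API_KEY exported, the docker login step can also be scripted non-interactively using Docker's --password-stdin option; a minimal sketch, assuming the variable is already set in the current shell:

```shell
# Fail early if the key is not set in this shell.
if [ -z "${NGC_API_KEY:-}" ]; then
    echo "NGC_API_KEY is not set" >&2
    exit 1
fi

# $oauthtoken is the literal username NGC expects; the key is read from stdin
# so it does not land in the shell history or the process list.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```

Reading the password from stdin avoids passing the key on the command line, which other users on the host could otherwise observe.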
Model Setup#
For supported models, sourcing requirements, and mounting instructions, refer to Supported Models. For download hints and container launch arguments by model type (MACE, AIMNet2, TensorNet), refer to Custom Models.