Configuring the Boltz-2 NIM#
The Boltz-2 NIM is distributed as a Docker container. Each NIM ships in its own container, and there are several ways to configure it. The sections below describe how to configure a NIM container.
GPU Selection#
By default, the container has access to all available GPUs on the system when it is started with the NVIDIA Container Runtime:
docker run --runtime=nvidia ...
In environments with a mix of GPUs, you can expose only specific GPUs inside the container using either:

- The `--gpus` flag. For example, `docker run --gpus='"device=1"' ...`
- The environment variable `NVIDIA_VISIBLE_DEVICES`. For example, to expose only device 1, pass `-e NVIDIA_VISIBLE_DEVICES=1`. To expose GPU IDs 1 and 4, pass `-e NVIDIA_VISIBLE_DEVICES=1,4`.
The device IDs to use as inputs are listed in the output of `nvidia-smi -L`:
GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
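Putting this together, a minimal launch command that pins the NIM to GPU 1 might look like the sketch below. The image name and tag are illustrative placeholders; substitute the Boltz-2 NIM image you pulled from NGC.

```bash
# Sketch only: the image name/tag below are placeholders for the
# Boltz-2 NIM image pulled from NGC.
export NGC_API_KEY=<your personal NGC API key>
docker run --rm --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=1 \
    -e NGC_API_KEY \
    -p 8000:8000 \
    nvcr.io/nim/mit/boltz2:latest
```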
Refer to the NVIDIA Container Toolkit documentation for more instructions.
Environment Variables#
The following table describes the environment variables that can be passed into a NIM as a `-e` argument added to a `docker run` command:
| ENV | Required? | Default | Notes |
|---|---|---|---|
| `NGC_API_KEY` | Yes | None | You must set this variable to the value of your personal NGC API key. |
| `NIM_CACHE_PATH` | No | `/opt/nim/.cache` | Location (in container) where the container caches model artifacts. |
| `NIM_HTTP_API_PORT` | No | `8000` | Publish the NIM service to the prescribed port inside the container. Make sure to adjust the port passed to the `-p/--publish` flag of `docker run` to match. |
| `NIM_LOG_LEVEL` | No | `INFO` | This variable allows you to specify the level of logging detail you want to see in the container's logs. Available options are `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `CRITICAL`. |
| `NIM_MODEL_NAME` | No | Unset | This variable enables a hard override of the NIM's model path. Users should generally not need to use this variable, but it can be useful when deploying to some cloud services that use alternative methods for model caching. |
|  | No |  | Controls whether to use TF32 for diffusion inference, for improved performance on NVIDIA GPUs equipped with Tensor Cores. |
|  | No |  | Controls the backend used for the pairformer model. Can be either `trt` or `torch`. |
|  | No |  | Controls the random seed used for torch / TRT inference. |
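As an illustration, several of these variables can be combined in a single `docker run` invocation. This is a sketch rather than a complete launch command; the image name and tag are placeholders:

```bash
# Sketch: raise log verbosity and move the service to container port 9000.
# Note the -p mapping is adjusted to match NIM_HTTP_API_PORT.
docker run --rm --runtime=nvidia \
    -e NGC_API_KEY \
    -e NIM_LOG_LEVEL=DEBUG \
    -e NIM_HTTP_API_PORT=9000 \
    -p 9000:9000 \
    nvcr.io/nim/mit/boltz2:latest
```

The host-side port (left of the colon in `-p`) can be any free port on the host; the container-side port must match `NIM_HTTP_API_PORT`.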
Volumes#
The following table describes the paths inside the container into which local paths can be mounted.
| Container path | Required | Notes | Docker argument example |
|---|---|---|---|
| `/opt/nim/.cache` | Not required, but if this volume is not mounted, the container will do a fresh download of the model each time it is brought up. | This is the directory within which models are downloaded inside the container. It is very important that this directory can be accessed from inside the container. This can be achieved by setting the permissions of the local directory to `777`. | `-v ~/.cache/nim:/opt/nim/.cache` |
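A typical pattern for persisting the model cache across container restarts is sketched below. The local path is arbitrary, and the image name and tag are placeholders:

```bash
# Sketch: mount a writable local directory over the container's model cache
# so model artifacts are downloaded only once.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod -R 777 "$LOCAL_NIM_CACHE"   # ensure the container user can read/write it
docker run --rm --runtime=nvidia \
    -e NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    nvcr.io/nim/mit/boltz2:latest
```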