Configuring the Boltz-2 NIM#

The Boltz-2 NIM is distributed as a Docker container. Each NIM has its own Docker container, and there are several ways to configure it. The sections below describe how to configure the NIM container.

GPU Selection#

By default, the container can access all available GPUs on the system when it is started with the NVIDIA Container Runtime:

docker run --runtime=nvidia ...

In environments with a mix of GPUs, you can expose only specific GPUs inside the container using either:

  • The --gpus flag. For example, docker run --gpus='"device=1"' ...

  • The environment variable NVIDIA_VISIBLE_DEVICES. For example, to expose only Device 1, pass -e NVIDIA_VISIBLE_DEVICES=1. To expose GPU IDs 1 and 4, pass -e NVIDIA_VISIBLE_DEVICES=1,4.

The device IDs to use as inputs are listed in the output of nvidia-smi -L:

GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-2f7b4c9d-8a31-4e6f-a1c2-5d0e9b87f321)
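
For example, a command like the following (a sketch based on the run examples later on this page; adjust the cache path and published port for your setup) starts the Boltz-2 NIM on GPU 1 only:

export NGC_API_KEY=<Your NGC API Key>

# Expose only GPU 1 to the container (equivalently, replace --gpus with -e NVIDIA_VISIBLE_DEVICES=1)
docker run --rm --name boltz2 --runtime=nvidia \
    --gpus='"device=1"' \
    --shm-size=16G \
    -e NGC_API_KEY \
    -v ~/.cache/nim:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/mit/boltz2:1.4.0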

Refer to the NVIDIA Container Toolkit documentation for more instructions.

Environment Variables#

The following table describes the environment variables that you can pass to a NIM with -e arguments on the docker run command:

| ENV | Required? | Default | Notes |
| --- | --- | --- | --- |
| NGC_API_KEY | Yes | None | You must set this variable to the value of your personal NGC API key. |
| NIM_CACHE_PATH | No | /opt/nim/.cache | Location (in container) where the container caches model artifacts. |
| NIM_HTTP_API_PORT | No | 8000 | Publishes the NIM service on the specified port inside the container. Make sure to adjust the port passed to the -p/--publish flag of docker run accordingly (for example, -p $NIM_HTTP_API_PORT:$NIM_HTTP_API_PORT). The left-hand side of the : is the host port and does not have to match $NIM_HTTP_API_PORT. The right-hand side of the : is the port inside the container, which must match NIM_HTTP_API_PORT (or 8000 if not set). Supported endpoints are /v1/license (returns the license information), /v1/metadata (returns metadata including asset information, license information, model information, and version), and /v1/metrics (exposes Prometheus metrics via an ASGI app endpoint). |
| NIM_LOG | No | INFO | Controls NIM service logging. Available options are DEBUG, INFO, WARNING, ERROR, and CRITICAL. |
| NIM_LOG_LEVEL | No | INFO | Alternative NIM logging level control. Available options are DEBUG, INFO, WARNING, ERROR, and CRITICAL. |
| TLLM_LOG_LEVEL | No | INFO | Controls TensorRT-LLM backend logging. Available options are VERBOSE, INFO, WARNING, and ERROR. |
| MODEL_PATH | No | Unset | Enables a hard override of the NIM’s model path. Users should generally not need to use this variable, but it can be useful when deploying to some cloud services that use alternative methods for model caching. |
| NIM_BOLTZ_ENABLE_DIFFUSION_TF32 | No | true | Controls whether to use TF32 for diffusion inference for improved performance on NVIDIA GPUs equipped with tensor cores. |
| NIM_BOLTZ_PAIRFORMER_BACKEND | No | trt | Controls the backend used for the pairformer model. Can be either trt for TensorRT or torch for PyTorch. |
| NIM_DEFAULT_RANDOM_SEED | No | 42 | Controls the random seed used for torch/trt inference. |
| NIM_TELEMETRY_MODE | No | 0 | Controls telemetry collection. Set to 0 to disable telemetry (default) or 1 to enable it. Telemetry helps NVIDIA improve performance, compatibility, and reliability while maintaining strict privacy protections. For more information, refer to NVIDIA’s Privacy Policy and NIM Telemetry Settings. |
| NIM_TELEMETRY_ENABLE_LOGGING | No | true | Enables logging for telemetry operations when set to true. Only applicable when NIM_TELEMETRY_MODE=1. |
| NIM_EXPOSE_CONFIDENCE_SCORES | No | false | Controls whether to expose confidence scores in output directories. Set to true to persist confidence score output in the NIM cache folder; set to false to disable (default). |
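
For example, the run below (a sketch; host port 8080 is arbitrary and only illustrates the host:container mapping described for NIM_HTTP_API_PORT) serves the NIM on container port 9000 and publishes it on host port 8080:

export NGC_API_KEY=<Your NGC API Key>

docker run --rm --name boltz2 --runtime=nvidia \
    --shm-size=16G \
    -e NGC_API_KEY \
    -e NIM_HTTP_API_PORT=9000 \
    -v ~/.cache/nim:/opt/nim/.cache \
    -p 8080:9000 \
    nvcr.io/nim/mit/boltz2:1.4.0

Once the container is up, the endpoints listed above are reachable through the host-side port, for example curl http://localhost:8080/v1/metadata.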

Volumes#

The following table describes the paths inside the container into which local paths can be mounted.

| Container path | Required | Notes | Docker argument example |
| --- | --- | --- | --- |
| /opt/nim/.cache (or NIM_CACHE_PATH if set) | Not required, but if this volume is not mounted, the container does a fresh download of the model each time it is brought up. | This is the directory in which models are downloaded inside the container. This directory must be accessible from inside the container, which you can ensure by setting the permissions of the local directory to read-write-execute (777). For example, to use ~/.cache/nim as the host machine directory for caching models, first run mkdir -p ~/.cache/nim, then chmod 777 ~/.cache/nim before running the docker run command. | -v ~/.cache/nim:/opt/nim/.cache |
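
For example, to prepare ~/.cache/nim as the host cache directory before starting the container, run the commands from the note above and then pass the -v argument shown in the table:

# Create the host cache directory and make it writable from inside the container (777)
mkdir -p ~/.cache/nim
chmod 777 ~/.cache/nim

With the volume mounted, the container reuses previously downloaded model artifacts across restarts instead of downloading them again.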

Logging Configuration#

The Boltz-2 NIM provides logging environment variables to control verbosity for different components. You can configure these when starting the container to adjust the level of detail in logs for debugging or production use.

Example: Running with Logging Configuration#

The following example shows how to run the NIM with standard logging configuration:

export LOCAL_NIM_CACHE=~/.cache/nim
export NGC_API_KEY=<Your NGC API Key>

docker run --rm --name boltz2 --runtime=nvidia \
    --shm-size=16G \
    -e NGC_API_KEY \
    -e NIM_LOG=INFO \
    -e NIM_LOG_LEVEL=INFO \
    -e TLLM_LOG_LEVEL=INFO \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/mit/boltz2:1.4.0

Example: Debug Logging for Troubleshooting#

To enable verbose logging for troubleshooting issues, set the log levels to their most detailed settings:

export LOCAL_NIM_CACHE=~/.cache/nim
export NGC_API_KEY=<Your NGC API Key>

docker run --rm --name boltz2 --runtime=nvidia \
    --shm-size=16G \
    -e NGC_API_KEY \
    -e NIM_LOG=DEBUG \
    -e NIM_LOG_LEVEL=DEBUG \
    -e TLLM_LOG_LEVEL=VERBOSE \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/mit/boltz2:1.4.0