Configuration Alternatives at Runtime#

The OpenFold2 NIM allows alternative runtime configurations.

Start the OpenFold2 NIM#

To start the NIM:

export LOCAL_NIM_CACHE=~/.cache/nim

docker run --rm --name openfold2 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/openfold/openfold2:latest
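Once the container is running, you can confirm the service is ready before sending inference requests. The endpoint path below follows the standard NIM health-check convention and is assumed to apply here:

```shell
# Query the standard NIM readiness endpoint (path assumed from NIM conventions).
# An HTTP 200 response indicates the service is ready to accept requests.
curl -s http://localhost:8000/v1/health/ready
```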

Notes:

  • The -p option publishes the NIM’s HTTP port (container port 8000) to the host.

  • The -e options pass environment variables into the NIM’s container at runtime.

  • --rm removes the container when it exits.

  • Add -it if you want to interact with the container directly at the CLI.

Using an alternative port for OpenFold2 NIM requests#

If other HTTP servers are already running (for example, other NIMs), port 8000 may be in use and you will need to run your NIM on a different port. To use an alternative port:

  1. Change the exposed port by setting the -p option.

  2. Set the NIM_HTTP_API_PORT environment variable to the new port.

The following is an example of setting the NIM to run on port 7979:

export LOCAL_NIM_CACHE=/mount/largedisk/nim/.cache

docker run --rm --name openfold2 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_HTTP_API_PORT=7979 \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 7979:7979 \
    nvcr.io/nim/openfold/openfold2:latest
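Both steps are required: NIM_HTTP_API_PORT makes the service listen on 7979 inside the container, and -p 7979:7979 publishes that port to the host. Clients must then target the new port; for example (endpoint path assumed from standard NIM conventions):

```shell
# Health check against the alternative port. Requests to port 8000
# will no longer reach this NIM.
curl -s http://localhost:7979/v1/health/ready
```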

Running in Torch Mode#

By default, the NIM runs in TensorRT mode on supported GPUs. If you need to run the model with the Torch backend instead, override the default backend optimization by setting the NIM_OPTIMIZED_BACKEND environment variable to torch.

The following is an example of setting the NIM to run in Torch mode:

export LOCAL_NIM_CACHE=~/.cache/nim

docker run --rm --name openfold2 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_OPTIMIZED_BACKEND=torch \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/openfold/openfold2:latest
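To confirm which backend was selected, you can inspect the container's startup logs. The exact log wording is not documented here, so the pattern below is only a heuristic:

```shell
# Search the container's startup output for backend selection messages.
# The log format may vary between releases; adjust the pattern as needed.
docker logs openfold2 2>&1 | grep -i "backend"
```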

Configuring Logging Levels#

OpenFold2 NIM provides several environment variables to control logging verbosity for different components. You can set these when starting the container to get more detailed logs for debugging or reduce verbosity for production.

Available Logging Flags#

| Environment Variable | Description | Valid Values | Default |
|----------------------|-------------|--------------|---------|
| NIM_LOG | Controls NIM service logging | DEBUG, INFO, WARNING, ERROR, CRITICAL | INFO |
| NIM_LOG_LEVEL | Alternative NIM logging level control | DEBUG, INFO, WARNING, ERROR, CRITICAL | INFO |
| APP_LOG_LEVEL | Controls application-level logging | DEBUG, INFO, WARNING, ERROR, CRITICAL | INFO |
| TLLM_LOG_LEVEL | Controls TensorRT-LLM backend logging | VERBOSE, INFO, WARNING, ERROR | INFO |

Example: Running with Configured Logging#

The following example shows how to run the NIM with logging configuration:

export LOCAL_NIM_CACHE=~/.cache/nim

docker run --rm --name openfold2 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_LOG=INFO \
    -e NIM_LOG_LEVEL=INFO \
    -e APP_LOG_LEVEL=INFO \
    -e TLLM_LOG_LEVEL=INFO \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/openfold/openfold2:latest
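For a debugging session, you would typically set the relevant variables to DEBUG (or VERBOSE for TLLM_LOG_LEVEL) instead of INFO, then follow the container output to observe the effect:

```shell
# Stream the container's logs to watch output at the configured levels.
docker logs -f openfold2
```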

Configuring NIM Telemetry#

NIM Telemetry helps NVIDIA deliver a faster, more reliable experience with greater compatibility across a wide range of environments. It collects only minimal, anonymous metadata (such as hardware type and NIM version). No user data, input sequences, or prediction results are collected.

Telemetry Configuration Flags#

| Environment Variable | Required | Default | Description |
|----------------------|----------|---------|-------------|
| NIM_TELEMETRY_MODE | No | 0 | Controls telemetry collection. Set to 0 to disable telemetry (default) or 1 to enable it. |
| NIM_TELEMETRY_ENABLE_LOGGING | No | true | Enables logging for telemetry operations when set to true. Only applicable when NIM_TELEMETRY_MODE=1. |

Benefits#

  • Enhances performance and reliability: Provides anonymous system and NIM-level insights that help NVIDIA identify bottlenecks, tune performance across hardware configurations, and improve runtime stability.

  • Improves compatibility across deployments: Helps detect and resolve version, driver, and environment compatibility issues early, reducing friction across diverse infrastructure setups.

  • Accelerates troubleshooting and bug resolution: Allows NVIDIA to diagnose errors and regressions faster, leading to quicker support response times and higher overall availability.

  • Informs smarter optimizations and future releases: Real-world, aggregated telemetry data helps guide the optimization of NIM runtimes, model packaging, and deployment workflows, ensuring updates target the scenarios that matter most to users.

  • Protects user privacy and data security: Collects only minimal, anonymous metadata, such as hardware type and NIM version. No user data, input sequences, or prediction results are collected.

  • Fully optional and configurable: Telemetry collection is disabled by default. You can toggle telemetry at any time using environment variables.

Example: Running with Telemetry Enabled#

export LOCAL_NIM_CACHE=~/.cache/nim

docker run --rm --name openfold2 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_TELEMETRY_MODE=1 \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    nvcr.io/nim/openfold/openfold2:latest

Privacy and Data Collection#

For more information about data privacy, what is collected, and how to configure telemetry, refer to: