Configuration Alternatives at Runtime#
This section describes how to configure alternative runtime options for the OpenFold3 NIM.
Start the OpenFold3 NIM#
To start the NIM:
export LOCAL_NIM_CACHE=~/.cache/nim
docker run --rm --name openfold3 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    --shm-size=16g \
    nvcr.io/nim/openfold/openfold3:latest
Note
The -p option sets the port for the NIM.
The -e options define environment variables, which are passed into the NIM's container at runtime.
The --rm option removes the container when it exits.
The -it options, if added, allow you to interact with the container directly at the CLI.
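Once the container is running, you can confirm that the service is ready before sending requests. The check below is a minimal sketch that assumes the NIM exposes the standard NIM readiness endpoint at /v1/health/ready on the published port:

## Poll the readiness endpoint until the server reports ready
## (the endpoint path assumes the standard NIM health API).
until curl -sf http://localhost:8000/v1/health/ready > /dev/null; do
    echo "Waiting for the OpenFold3 NIM to become ready..."
    sleep 10
done
echo "OpenFold3 NIM is ready."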
Using an Alternative Port for OpenFold3 NIM Requests#
If you have other HTTP servers running (for example, other NIMs), port 8000 may already be taken, so you may need to run your NIM on another port. To use an alternative port:
- Change the exposed port by setting the -p option.
- Set the NIM_HTTP_API_PORT environment variable to the new port.
The following is an example of setting the NIM to run on port 6626:
export LOCAL_NIM_CACHE=/mount/largedisk/nim/.cache
## Set NIM_HTTP_API_PORT inside the container and publish the matching port to the host.
docker run --rm --name openfold3 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_HTTP_API_PORT=6626 \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 6626:6626 \
    --shm-size=16g \
    nvcr.io/nim/openfold/openfold3:latest
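After the container starts, clients must use the new port. A quick readiness check, again assuming the standard NIM health endpoint:

curl -sf http://localhost:6626/v1/health/ready && echo "NIM is ready on port 6626."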
Backend Optimization Options#
By default, the NIM will run in TensorRT mode for supported GPUs. You can override the default backend optimization by setting the NIM_OPTIMIZED_BACKEND environment variable.
The following backend options are available:
- trt (default): TensorRT support for optimized performance on supported GPUs
- torch: PyTorch with cuEquivariance for GPU acceleration
- torch_baseline: PyTorch with DeepSpeed for distributed training and inference
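The backend variable is passed into the container the same way as NGC_API_KEY. As a minimal sketch, you can export the chosen backend on the host and forward it with a bare -e flag (docker passes through the host's value when -e is given only a variable name); trt is shown here to match the default:

export LOCAL_NIM_CACHE=~/.cache/nim
## Choose one of: trt, torch, torch_baseline.
export NIM_OPTIMIZED_BACKEND=trt
docker run --rm --name openfold3 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_OPTIMIZED_BACKEND \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    --shm-size=16g \
    nvcr.io/nim/openfold/openfold3:latest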
Running with PyTorch+cuEquivariance#
The following is an example of setting the NIM to run with PyTorch+cuEquivariance:
export LOCAL_NIM_CACHE=~/.cache/nim
## Set the backend to PyTorch+cuEquivariance.
docker run --rm --name openfold3 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_OPTIMIZED_BACKEND=torch \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    --shm-size=16g \
    nvcr.io/nim/openfold/openfold3:latest
Running with PyTorch+DeepSpeed#
The following is an example of setting the NIM to run with PyTorch+DeepSpeed:
export LOCAL_NIM_CACHE=~/.cache/nim
## Set the backend to PyTorch+DeepSpeed.
docker run --rm --name openfold3 \
    --runtime=nvidia \
    --gpus 'device=0' \
    -e NGC_API_KEY \
    -e NIM_OPTIMIZED_BACKEND=torch_baseline \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache \
    -p 8000:8000 \
    --shm-size=16g \
    nvcr.io/nim/openfold/openfold3:latest
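Whichever backend you choose, you can observe startup with standard docker commands against the named container:

## Follow startup output while the model weights load and the backend initializes.
docker logs -f openfold3

## Stop the server when finished; the --rm flag removes the container automatically.
docker stop openfold3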