Configure NIM#

This section provides additional details for launching and operating the GenMol NIM container.

View NIM Container Information#

ngc registry image info nvcr.io/nim/nvidia/genmol:2.0.0

Pull the Container Image#

docker pull nvcr.io/nim/nvidia/genmol:2.0.0

Runtime Parameters for the Container#

export LOCAL_NIM_CACHE=~/.cache/nim/genmol
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

docker run --rm -it --name genmol-nim \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/genmol:2.0.0

Important parameters:

  • --runtime=nvidia enables GPU access for the container.

  • --gpus=all exposes GPUs to the NVIDIA runtime.

  • NVIDIA_VISIBLE_DEVICES=0 pins the container to GPU index 0; change the value to select a different GPU.

  • NGC_API_KEY authenticates model asset downloads from NGC.

  • NIM_HTTP_API_PORT=8000 sets the service port inside the container.

  • -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" persists model artifacts across restarts.

  • -p 8000:8000 publishes the service port to the host.
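Once the container is up, you can confirm the published port is serving traffic by polling the readiness endpoint. A minimal sketch, assuming the conventional NIM `/v1/health/ready` path (confirm against your NIM version's API reference) and the 8000:8000 mapping shown above:

```shell
# Host-side port from the -p mapping above (host:container).
HOST_PORT="${HOST_PORT:-8000}"
READY_URL="http://localhost:${HOST_PORT}/v1/health/ready"
echo "Polling ${READY_URL}"
# -s silences progress output; -f makes curl fail on HTTP errors,
# so the exit code alone tells us whether the service is ready.
if curl -sf "$READY_URL" > /dev/null 2>&1; then
  echo "service ready"
else
  echo "service not ready yet"
fi
```

Wrap the check in a retry loop if you want to block until startup (including the initial model download) completes.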

Run Multiple Instances on the Same Host#

Each container instance must be pinned to a different GPU (via NVIDIA_VISIBLE_DEVICES) and published on a different host port; the model cache volume can be shared between instances.

# Launch instance #1
docker run --rm -it --name genmol-nim-1 \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 60001:8000 \
  nvcr.io/nim/nvidia/genmol:2.0.0
# Launch instance #2
docker run --rm -it --name genmol-nim-2 \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=1 \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 60002:8000 \
  nvcr.io/nim/nvidia/genmol:2.0.0
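The two launches above follow a fixed pattern (GPU i, host port 60001 + i), so they can be generated in a loop. A sketch that echoes the commands for review rather than running them (remove the leading echo to launch; NUM_GPUS is an illustrative variable, and -d replaces -it because a loop cannot attach interactively):

```shell
NUM_GPUS="${NUM_GPUS:-2}"
NGC_API_KEY="${NGC_API_KEY:-}"          # set in your shell before launching
LOCAL_NIM_CACHE="${LOCAL_NIM_CACHE:-$HOME/.cache/nim/genmol}"

LAUNCHED=""
i=0
while [ "$i" -lt "$NUM_GPUS" ]; do
  name="genmol-nim-$((i + 1))"
  port=$((60001 + i))
  # Dry run: prints the command instead of executing it.
  echo docker run --rm -d --name "$name" \
    --runtime=nvidia --gpus=all \
    -e NVIDIA_VISIBLE_DEVICES="$i" \
    -e NGC_API_KEY="$NGC_API_KEY" \
    -e NIM_HTTP_API_PORT=8000 \
    -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
    -p "${port}:8000" \
    nvcr.io/nim/nvidia/genmol:2.0.0
  LAUNCHED="$LAUNCHED $name:$port"
  i=$((i + 1))
done
```

With NUM_GPUS=2 this reproduces the two commands above; raising it extends the same GPU/port scheme to more devices.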

Model Checkpoint Caching#

On initial startup, the container downloads model assets from NGC. To avoid repeated downloads, create and mount a persistent cache directory:

export LOCAL_NIM_CACHE=~/.cache/nim/genmol
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"
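After the first successful start, the mounted directory should contain the downloaded artifacts. A quick sanity check on the host:

```shell
CACHE="${LOCAL_NIM_CACHE:-$HOME/.cache/nim/genmol}"
mkdir -p "$CACHE"
# Total size of cached artifacts: a near-zero result after a
# completed first start means the download was not persisted.
du -sh "$CACHE"
```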

Environment Variables#

The following environment variables can be passed as -e arguments to docker run:

  • NGC_API_KEY (required; no default). Personal NGC API key used to authenticate model asset downloads.

  • NIM_CACHE_PATH (optional; default: /opt/nim/.cache). Location inside the container where model artifacts are cached.

  • NIM_HTTP_API_PORT (optional; default: 8000). Port the NIM HTTP server listens on inside the container. Adjust the -p flag to match (for example, -p 8000:8000).

  • NIM_LOG_LEVEL (optional; default: INFO). Logging verbosity. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL.

  • NIM_TELEMETRY_MODE (optional; default: 0). Telemetry collection. Set to 0 to disable (default) or 1 to enable. Refer to the NVIDIA Privacy Policy and NIM Telemetry Settings.
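As the NIM_HTTP_API_PORT note says, the container side of the -p mapping must track the variable. A sketch that echoes a launch command using container port 9000 (an arbitrary example value; remove the echo to run it):

```shell
APP_PORT=9000                 # container-side port; must equal NIM_HTTP_API_PORT
HOST_PORT=8000                # host-side port clients will connect to
PORT_MAPPING="${HOST_PORT}:${APP_PORT}"
NGC_API_KEY="${NGC_API_KEY:-}"
LOCAL_NIM_CACHE="${LOCAL_NIM_CACHE:-$HOME/.cache/nim/genmol}"

# Dry run: prints the command instead of executing it.
echo docker run --rm -it --name genmol-nim \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -e NIM_HTTP_API_PORT="$APP_PORT" \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p "$PORT_MAPPING" \
  nvcr.io/nim/nvidia/genmol:2.0.0
```

Deriving both flags from the same variable keeps the server port and the mapping from drifting apart.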

Volumes#

  • Container path: /opt/nim/.cache (or the value of NIM_CACHE_PATH, if set). Required: no, but without it the container re-downloads model assets on every start. The directory must be readable and writable by the container; run chmod 777 "$LOCAL_NIM_CACHE" on the host path before mounting. Docker argument example: -v "$LOCAL_NIM_CACHE:/opt/nim/.cache"

Logging#

The following examples show how to start the container with different logging levels.

Standard logging#

docker run --rm -it --name genmol-nim \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -e NIM_LOG_LEVEL=INFO \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/genmol:2.0.0

Debug logging#

docker run --rm -it --name genmol-nim \
  --runtime=nvidia --gpus=all \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -e NIM_LOG_LEVEL=DEBUG \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/genmol:2.0.0

Stop the Container#

docker stop genmol-nim
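If you started the two instances from the multi-GPU example, stop each by name. A sketch that echoes the commands (remove the leading echo to execute):

```shell
STOPPED=""
for name in genmol-nim-1 genmol-nim-2; do
  # Dry run: prints the command instead of executing it.
  echo docker stop "$name"
  STOPPED="$STOPPED $name"
done
```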