Configure NIM
This section provides additional details for launching and operating the GenMol NIM container.
View NIM Container Information
ngc registry image info nvcr.io/nim/nvidia/genmol:2.0.0
Pull the Container Image
docker pull nvcr.io/nim/nvidia/genmol:2.0.0
Runtime Parameters for the Container
export LOCAL_NIM_CACHE=~/.cache/nim/genmol
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"
docker run --rm -it --name genmol-nim \
--runtime=nvidia --gpus=all \
-e NVIDIA_VISIBLE_DEVICES=0 \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-p 8000:8000 \
nvcr.io/nim/nvidia/genmol:2.0.0
Important parameters:
- --runtime=nvidia enables GPU access for the container.
- --gpus=all exposes GPUs to the NVIDIA runtime.
- NVIDIA_VISIBLE_DEVICES=0 restricts the container instance to one GPU.
- NGC_API_KEY authenticates model asset downloads from NGC.
- NIM_HTTP_API_PORT=8000 sets the service port inside the container.
- -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" persists model artifacts across restarts.
- -p 8000:8000 publishes the service port to the host.
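Once the container is up, it helps to wait for the service to report ready before sending requests. A minimal polling sketch, assuming the conventional NIM readiness route /v1/health/ready on the published port (adjust the URL if you change the -p mapping):

```shell
#!/bin/sh
# Poll a URL until it responds successfully or a timeout (in seconds) elapses.
wait_ready() {
  url="$1"; timeout="${2:-120}"; waited=0
  until curl -sf "$url" > /dev/null; do
    waited=$((waited + 2))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 2
  done
  return 0
}

# Example, matching the launch command above:
# wait_ready "http://localhost:8000/v1/health/ready" 120 && echo "NIM is ready"
```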
Run Multiple Instances on the Same Host
Each container instance should be pinned to a different GPU (via NVIDIA_VISIBLE_DEVICES) and published on a different host port; the instances can share the same model cache mount.
# Launch instance #1
docker run --rm -it --name genmol-nim-1 \
--runtime=nvidia --gpus=all \
-e NVIDIA_VISIBLE_DEVICES=0 \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-p 60001:8000 \
nvcr.io/nim/nvidia/genmol:2.0.0
# Launch instance #2
docker run --rm -it --name genmol-nim-2 \
--runtime=nvidia --gpus=all \
-e NVIDIA_VISIBLE_DEVICES=1 \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-p 60002:8000 \
nvcr.io/nim/nvidia/genmol:2.0.0
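The two launch commands differ only in the GPU index, container name, and host port, so they can be generated in a loop. A dry-run sketch that echoes one command per GPU (pipe the output to sh to execute); the port base 60000 and naming scheme follow the examples above, and -d replaces -it since scripted launches should not attach a terminal:

```shell
#!/bin/sh
# Generate one docker run command per GPU, pinning instance i+1 to
# GPU i and host port 60001+i. Echoes the commands (dry run).
gen_launch_cmds() {
  num_gpus="$1"
  i=0
  while [ "$i" -lt "$num_gpus" ]; do
    port=$((60001 + i))
    echo "docker run --rm -d --name genmol-nim-$((i + 1))" \
      "--runtime=nvidia --gpus=all" \
      "-e NVIDIA_VISIBLE_DEVICES=$i" \
      "-e NGC_API_KEY=\$NGC_API_KEY" \
      "-e NIM_HTTP_API_PORT=8000" \
      "-v \"\$LOCAL_NIM_CACHE:/opt/nim/.cache\"" \
      "-p $port:8000" \
      "nvcr.io/nim/nvidia/genmol:2.0.0"
    i=$((i + 1))
  done
}

gen_launch_cmds 2
```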
Model Checkpoint Caching
On initial startup, the container downloads model assets from NGC. To avoid repeated downloads, create and mount a persistent cache directory:
export LOCAL_NIM_CACHE=~/.cache/nim/genmol
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"
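After one successful start, a quick check on the host confirms that the cache was populated and will be reused on later starts (the directory layout under the cache is model-specific):

```shell
#!/bin/sh
# Inspect the mounted host cache directory. These paths mirror the
# setup commands above.
export LOCAL_NIM_CACHE=~/.cache/nim/genmol
mkdir -p "$LOCAL_NIM_CACHE"
du -sh "$LOCAL_NIM_CACHE"                  # total size of cached model assets
find "$LOCAL_NIM_CACHE" -type d | head -n 5  # top of the cache layout
```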
Environment Variables
The following environment variables can be passed as -e arguments to docker run:
| ENV | Required? | Default | Notes |
|---|---|---|---|
| NGC_API_KEY | Yes | None | Personal NGC API key used to authenticate model asset downloads. |
|  | No | /opt/nim/.cache | Location inside the container where model artifacts are cached. |
| NIM_HTTP_API_PORT | No | 8000 | Port the NIM HTTP server listens on inside the container. Adjust the -p host mapping to match. |
| NIM_LOG_LEVEL | No | INFO | Logging verbosity. Options include INFO and DEBUG (see Logging below). |
|  | No |  | Telemetry collection. Set to the documented value to opt out. |
Volumes
| Container path | Required | Notes | Docker argument example |
|---|---|---|---|
| /opt/nim/.cache | No, but without it the container re-downloads model assets on every start. | The directory must be readable and writable by the container; run chmod 777 on the host directory if needed. | -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" |
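Because the container process may run as a non-root user, it can help to verify that the host directory is actually writable before mounting it. A small sketch (the probe filename is arbitrary):

```shell
#!/bin/sh
# Create the cache directory if needed, then probe it with a throwaway
# file to confirm write access before mounting it into the container.
check_cache_writable() {
  dir="$1"
  mkdir -p "$dir"
  if touch "$dir/.nim-write-test" 2>/dev/null; then
    rm -f "$dir/.nim-write-test"
    echo "writable"
  else
    echo "not writable: run chmod 777 \"$dir\""
  fi
}

check_cache_writable "${LOCAL_NIM_CACHE:-$HOME/.cache/nim/genmol}"
```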
Logging
The following examples show how to start the container with different logging levels.
Standard logging
docker run --rm -it --name genmol-nim \
--runtime=nvidia --gpus=all \
-e NVIDIA_VISIBLE_DEVICES=0 \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-e NIM_LOG_LEVEL=INFO \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-p 8000:8000 \
nvcr.io/nim/nvidia/genmol:2.0.0
Debug logging
docker run --rm -it --name genmol-nim \
--runtime=nvidia --gpus=all \
-e NVIDIA_VISIBLE_DEVICES=0 \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_HTTP_API_PORT=8000 \
-e NIM_LOG_LEVEL=DEBUG \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-p 8000:8000 \
nvcr.io/nim/nvidia/genmol:2.0.0
Stop the Container
docker stop genmol-nim
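docker stop sends SIGTERM and, after a grace period, SIGKILL; because the examples above use --rm, a stopped container is also removed. When running the multi-instance setup, several instances can be stopped together. A dry-run sketch that echoes the stop commands (the 30-second grace period is illustrative):

```shell
#!/bin/sh
# Echo a docker stop command for each named instance (dry run);
# pipe the output to sh to execute.
stop_instances() {
  for name in "$@"; do
    echo "docker stop -t 30 $name"
  done
}

stop_instances genmol-nim-1 genmol-nim-2
```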