Utilities#

NIM includes a set of utility scripts that assist with its operation.

Utilities are launched by overriding the container entrypoint (--entrypoint) in the docker run command with the name of the desired utility.

See the Supported Models section for valid values of CONTAINER_ID in the examples below.

List Model Profiles#

nim_list_model_profiles()#

Prints the system information detected by NIM and the list of all profiles for the chosen NIM to the console. Profiles are categorized as compatible or incompatible with the current system, based on the detected system information.

This function can also be called using its alias list-model-profiles.

Example#

export CONTAINER_ID=riva-translate-1_6b
docker run -it --rm --gpus all --entrypoint nim_list_model_profiles \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest
...
SYSTEM INFO
- Free GPUs:
  -  [2331:10de] (0) NVIDIA H100 PCIe [current utilization: 0%]
MODEL PROFILES
- Compatible with system:
- Incompatible with system:
    - 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6 - {'model_type': 'prebuilt', 'name': 'riva-translate-1_6b'}
...

Download Model Profiles to NIM Cache#

nim_download_to_cache()#

Downloads the selected model profile(s), or the default profile if none are specified, to the NIM cache. Can be used to pre-cache profiles prior to deployment. Requires NGC_API_KEY to be set in the environment.

This function can also be called using its alias download-to-cache.

--profiles [PROFILES ...], -p [PROFILES ...]#

Profile hashes to download. If none are provided, the optimal profile for the current system is downloaded. Multiple profiles can be specified, separated by spaces.

--all#

Set to download all profiles to the cache.

--lora#

Set to download the default LoRA profile. Cannot be combined with --profiles or --all.

--manifest-file <manifest_file>, -m <manifest_file>#

Optional path to the manifest file from which model profiles are downloaded.

--model-cache-path <model-cache-path>#

Optional path that overrides the default model_cache_path.

Example#

export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
docker run -it --rm --gpus all -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache --entrypoint nim_download_to_cache \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest \
    -p 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6
...
INFO 2025-05-13 10:24:23.985 download.py:81] Fetching contents for profile 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6
INFO 2025-05-13 10:24:23.985 download.py:86] {
  "model_type": "prebuilt",
  "name": "riva-translate-1_6b"
}
INFO 2025-05-13 10:24:23.985 nim_sdk.py:285] Using the profile specified by the user: 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6
INFO 2025-05-13 10:24:23.985 nim_sdk.py:299] Downloading manifest profile: 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6
...
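
To pre-cache every available profile instead of a single hash, the same invocation can be run with --all (a sketch using the same container and cache mount as above; it downloads considerably more data than a single profile):

```shell
# Download every model profile to the mounted NIM cache.
export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
docker run -it --rm --gpus all -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache --entrypoint nim_download_to_cache \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest \
    --all
```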

Create Model Store#

nim_create_model_store()#

Extracts files from a cached model profile and creates a properly formatted directory. If the profile is not already cached, it will be downloaded to the model cache. Downloading the profile requires NGC_API_KEY to be set in the environment.

This function can also be called using its alias create-model-store.

--profile <PROFILE>, -p <PROFILE>#

The profile hash to use when creating the model directory. If not present locally, it will be downloaded.

--model-store <MODEL_STORE>, -m <MODEL_STORE>#

Directory path into which the files of the selected --profile are extracted and copied.

--model-cache-path <model-cache-path>#

Optional path that overrides the default model_cache_path.

Example#

export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
docker run -it --rm --gpus all -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache --entrypoint nim_create_model_store \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest \
    -p 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6 \
    -m /tmp
...
INFO 2025-05-13 10:25:13.833 create_model_store.py:57] Fetching contents for profile 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6
INFO 2025-05-13 10:25:13.833 nim_sdk.py:368] Using the default model_cache_path: /opt/nim/workspace
INFO 2025-05-13 10:25:13.833 nim_sdk.py:378] Creating model store at /tmp
...
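
If the cache lives somewhere other than the default location inside the container, --model-cache-path can point create-model-store at it. A sketch, extending the example above (the cache path shown is illustrative):

```shell
export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
docker run -it --rm --gpus all -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache --entrypoint nim_create_model_store \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest \
    -p 9f54ea088ebb1e656ee51b6311b4b8ad25e919faab129aa3ccbdd2b984570cd6 \
    -m /tmp \
    --model-cache-path /opt/nim/.cache
```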

Check NIM Cache#

nim_check_cache_env()#

Checks if the NIM cache directory is present and can be written to.

This function can also be called using its alias nim-llm-check-cache-env.

Example#

export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
docker run -it --rm --gpus all -e NGC_API_KEY \
    -v /bad_path:/opt/nim/.cache --entrypoint nim_check_cache_env \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest
...
The NIM cache directory `/opt/nim/.cache` is read-only. The application may fail if the model is not already present in the cache.
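
With a writable host directory mounted at the cache path, the check passes. A sketch (exact console output may vary by NIM version):

```shell
export CONTAINER_ID=riva-translate-1_6b
export LOCAL_NIM_CACHE=$HOME/cache
# Ensure the host directory exists and is writable before mounting it.
mkdir -p "$LOCAL_NIM_CACHE" && chmod a+w "$LOCAL_NIM_CACHE"
docker run -it --rm -e NGC_API_KEY \
    -v $LOCAL_NIM_CACHE:/opt/nim/.cache --entrypoint nim_check_cache_env \
    nvcr.io/nim/nvidia/$CONTAINER_ID:latest
```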