Getting Started#
Prerequisites#
Check the support matrix to make sure that you have the supported hardware and software stack.
NGC Authentication#
Generate an API key#
An NGC API key is required to access NGC resources and a key can be generated here: https://org.ngc.nvidia.com/setup/personal-keys.
When creating an NGC API Personal key, ensure that at least “NGC Catalog” is selected from the “Services Included” dropdown. More Services can be included if this key is to be reused for other purposes.
Note
Personal keys allow you to configure an expiration date, revoke or delete the key using an action button, and rotate the key as needed. For more information about key types, please refer the NGC User Guide.
Export the API key#
Pass the value of the API key to the `docker run` command in the next section as the `NGC_API_KEY` environment variable to download the appropriate models and resources when starting the NIM.

If you're not familiar with how to create the `NGC_API_KEY` environment variable, the simplest way is to export it in your terminal:

```shell
export NGC_API_KEY=<value>
```
Run one of the following commands to make the key available at startup:

```shell
# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc

# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc
```
Note
Other, more secure options include saving the value in a file, so that you can retrieve it with `cat $NGC_API_KEY_FILE`, or using a password manager.
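Client-side scripts can apply the same fallback logic described in the note above. The following is a minimal sketch; `NGC_API_KEY_FILE` is the hypothetical file-path convention from the note, not an official NIM variable, so adjust the names to match your setup:

```python
import os

def get_ngc_api_key():
    """Return the NGC API key from the environment, falling back to a key file.

    NGC_API_KEY_FILE is a hypothetical convention (a path to a file holding the
    key), mirroring the more secure file-based option mentioned above.
    """
    key = os.environ.get("NGC_API_KEY")
    if key:
        return key
    key_file = os.environ.get("NGC_API_KEY_FILE")
    if key_file and os.path.exists(key_file):
        with open(key_file) as f:
            return f.read().strip()
    raise RuntimeError("No NGC API key found; export NGC_API_KEY first.")
```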
Docker Login to NGC#
To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry with the following command:

```shell
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```
Use `$oauthtoken` as the username and `NGC_API_KEY` as the password. The `$oauthtoken` username is a special name that indicates that you will authenticate with an API key rather than a username and password.
Launching the NIM#
The following command launches a Docker container for the `llama-3.2-nv-rerankqa-1b-v2` model.
```shell
# Choose a container name for bookkeeping
export NIM_MODEL_NAME=nvidia/llama-3.2-nv-rerankqa-1b-v2
export CONTAINER_NAME=$(basename "$NIM_MODEL_NAME")

# Choose a NIM Image from NGC
export IMG_NAME="nvcr.io/nim/$NIM_MODEL_NAME:1.3.0"

# Choose a path on your system to cache the downloaded models
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"

# Start the NIM
docker run -it --rm --name=$CONTAINER_NAME \
  --runtime=nvidia \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  $IMG_NAME
```
| Flags | Description |
|---|---|
| `--rm` | Delete the container after it stops (see Docker docs). |
| `--name=$CONTAINER_NAME` | Give a name to the NIM container for bookkeeping (here `llama-3.2-nv-rerankqa-1b-v2`). |
| `--runtime=nvidia` | Ensure NVIDIA drivers are accessible in the container. |
| `--gpus all` | Expose all NVIDIA GPUs inside the container. See the configuration page for mounting specific GPUs. |
| `--shm-size=16GB` | Allocate host memory for multi-GPU communication. Not required for single-GPU models or GPUs with NVLink enabled. |
| `-e NGC_API_KEY` | Provide the container with the token necessary to download adequate models and resources from NGC. See above. |
| `-v "$LOCAL_NIM_CACHE:/opt/nim/.cache"` | Mount a cache directory from your system (here `~/.cache/nim`) to `/opt/nim/.cache` inside the NIM container. |
| `-u $(id -u)` | Use the same user as your system user inside the NIM container to avoid permission mismatches when downloading models in your local cache directory. |
| `-p 8000:8000` | Forward the port where the NIM server is published inside the container to access it from the host system. The left-hand side of `:` is the host port. |
| `$IMG_NAME` | Name and version of the NIM container from NGC. The NIM server automatically starts if no argument is provided after this. |
If you have an issue with permission mismatches when downloading models to your local cache directory, add the `-u $(id -u)` option to the `docker run` call to run under your current identity.

If you are running on a host with different types of GPUs, specify GPUs of the same type using the `--gpus` argument to `docker run`. For example, `--gpus '"device=0,2"'`. The device IDs of 0 and 2 are examples only; replace them with the appropriate values for your system. Device IDs can be found by running `nvidia-smi`. More information can be found in GPU Enumeration.
GPU clusters with GPUs in Multi-Instance GPU (MIG) mode are currently not supported.
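Selecting same-type device IDs for the `--gpus` argument can be scripted from `nvidia-smi` output. The sketch below parses the CSV produced by `nvidia-smi --query-gpu=index,name --format=csv,noheader` (a standard query); the helper names are illustrative, not part of any NIM tooling:

```python
import subprocess

def parse_gpu_csv(csv_text):
    """Parse 'index, name' CSV lines from nvidia-smi into (index, name) pairs."""
    gpus = []
    for line in csv_text.strip().splitlines():
        idx, name = line.split(",", 1)
        gpus.append((int(idx.strip()), name.strip()))
    return gpus

def device_arg_for(gpus, model_name):
    """Build a --gpus value like '"device=0,2"' for GPUs matching model_name."""
    ids = [str(i) for i, name in gpus if name == model_name]
    return '"device=' + ",".join(ids) + '"'

def list_gpus():
    """Query the GPUs on this host (requires nvidia-smi to be installed)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)
```

For example, on a mixed host with H100s at indices 0 and 2, `device_arg_for(list_gpus(), "NVIDIA H100")` would produce `"device=0,2"` for use with `--gpus`.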
Running Inference#
NOTE: It may take a few seconds for the container to be ready and start accepting requests after the Docker container starts.
Confirm the service is ready to handle inference requests:
```shell
curl -X 'GET' 'http://localhost:8000/v1/health/ready'
```
If the service is ready, you will get a response like this:

```json
{"ready":true}
```
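Client scripts can wrap this health check in a small polling loop so they wait for the NIM to come up before sending requests. This is a minimal sketch using only the standard library; the base URL and timeout values are assumptions to adjust for your deployment:

```python
import json
import time
import urllib.request

def wait_until_ready(base_url="http://localhost:8000", timeout_s=120.0, interval_s=2.0):
    """Poll /v1/health/ready until the service reports ready or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/v1/health/ready") as resp:
                body = json.loads(resp.read().decode())
                if body.get("ready"):
                    return True
        except OSError:
            pass  # server not accepting connections yet; retry
        time.sleep(interval_s)
    return False
```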
Once the service is ready, send a ranking request. Note that the `model` field must match the deployed model, here `nvidia/llama-3.2-nv-rerankqa-1b-v2`:

```shell
curl -X "POST" \
  "http://localhost:8000/v1/ranking" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "nvidia/llama-3.2-nv-rerankqa-1b-v2",
    "query": {"text": "which way did the traveler go?"},
    "passages": [
      {"text": "two roads diverged in a yellow wood, and sorry i could not travel both and be one traveler, long i stood and looked down one as far as i could to where it bent in the undergrowth;"},
      {"text": "then took the other, as just as fair, and having perhaps the better claim because it was grassy and wanted wear, though as for that the passing there had worn them really about the same,"},
      {"text": "and both that morning equally lay in leaves no step had trodden black. oh, i marked the first for another day! yet knowing how way leads on to way i doubted if i should ever come back."},
      {"text": "i shall be telling this with a sigh somewhere ages and ages hence: two roads diverged in a wood, and i, i took the one less traveled by, and that has made all the difference."}
    ],
    "truncate": "END"
  }'
```
For further information, see the API examples.
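The same request can be sent from Python. The sketch below mirrors the curl example using only the standard library; the response schema is not shown here, so consult the API examples for the exact shape before relying on specific fields:

```python
import json
import urllib.request

MODEL = "nvidia/llama-3.2-nv-rerankqa-1b-v2"  # must match the deployed NIM

def build_payload(query, passages, model=MODEL, truncate="END"):
    """Build a /v1/ranking request body mirroring the curl example."""
    return {
        "model": model,
        "query": {"text": query},
        "passages": [{"text": p} for p in passages],
        "truncate": truncate,
    }

def rank(query, passages, base_url="http://localhost:8000"):
    """POST a ranking request to the NIM and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/ranking",
        data=json.dumps(build_payload(query, passages)).encode(),
        headers={"Content-Type": "application/json", "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```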
Deploying on Multiple GPUs#
The NIM deploys a single model across the GPUs that you specify and that are visible inside the Docker container. If you do not specify the number of GPUs, the NIM defaults to one GPU. When using multiple GPUs, Triton distributes inference requests across the GPUs to keep them equally utilized.

Use the `--gpus` command-line argument to `docker run` to specify the GPUs that are available for deployment.
```shell
# Example using all GPUs
docker run --gpus all ...

# Example using two GPUs
docker run --gpus 2 ...

# Example using specific GPUs
docker run --gpus '"device=1,2"' ...
```
Downloading NIM Models to Cache#
If model assets must be pre-fetched, such as for an air-gapped system, you can download the assets to the NIM cache without starting the server. To download assets, first run `list-model-profiles` to determine the desired profile, and then run `download-to-cache` with that profile, as shown in the following example. For details, see Optimization.
```shell
# Choose a container name for bookkeeping
export NIM_MODEL_NAME=nvidia/llama-3.2-nv-rerankqa-1b-v2
export CONTAINER_NAME=$(basename "$NIM_MODEL_NAME")

# Choose a NIM Image from NGC
export IMG_NAME="nvcr.io/nim/$NIM_MODEL_NAME:1.3.0"

# Choose a path on your system to cache the downloaded models
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"

# List NIM model profiles and select the most appropriate one for your use case
docker run -it --rm --name=$CONTAINER_NAME \
  -e NIM_CPU_ONLY=1 \
  -u $(id -u) \
  $IMG_NAME list-model-profiles

export NIM_MODEL_PROFILE=<selected profile>

# Start the NIM container with a command to download the model to the cache
docker run -it --rm --name=$CONTAINER_NAME \
  --gpus all \
  --shm-size=16GB \
  -e NGC_API_KEY \
  -e NIM_CPU_ONLY=1 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  $IMG_NAME download-to-cache --profiles $NIM_MODEL_PROFILE

# Start the NIM container in an air-gapped environment and serve the model
docker run -it --rm --name=$CONTAINER_NAME \
  --runtime=nvidia \
  --gpus=all \
  --shm-size=16GB \
  --network=none \
  -v "$LOCAL_NIM_CACHE:/mnt/nim-cache:ro" \
  -u $(id -u) \
  -e NIM_CACHE_PATH=/mnt/nim-cache \
  -e NGC_API_KEY \
  -p 8000:8000 \
  $IMG_NAME
```
By default, the `download-to-cache` command downloads the most appropriate model assets for the detected GPU. To override this behavior and download a specific model, set the `NIM_MODEL_PROFILE` environment variable when launching the container. Use the `list-model-profiles` command available within the NIM container to list all profiles. See Optimization for more details.
Stopping the Container#
The following commands stop and then remove the running Docker container:

```shell
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
```