Get Started With NVIDIA NIM for Object Detection#
This documentation helps you get started with NVIDIA NIM for Object Detection.
Prerequisites#
Check the support matrix to make sure that you have the supported hardware and software stack.
Installing WSL2 for Windows#
If you are running on an RTX AI PC or Workstation, refer to the NIM on WSL2 documentation for setup instructions.
NGC Authentication#
Generate an API key#
An NGC API key is required to access NGC resources and a key can be generated here: https://org.ngc.nvidia.com/setup/api-keys.
When creating an NGC API Personal key, ensure that at least “NGC Catalog” is selected from the “Services Included” dropdown. More Services can be included if this key is to be reused for other purposes.
Note
Personal keys allow you to configure an expiration date, revoke or delete the key using an action button, and rotate the key as needed. For more information about key types, please refer the NGC User Guide.
Export the API key#
Pass the value of the API key to the docker run command in the next section as the NGC_API_KEY environment variable to download the appropriate models and resources when starting the NIM.
If you're not familiar with how to create the NGC_API_KEY environment variable, the simplest way is to export it in your terminal:
export NGC_API_KEY=<value>
Run one of the following commands to make the key available at startup:
# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc
# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc
Note
Other, more secure options include saving the value in a file, so that you can retrieve it with cat $NGC_API_KEY_FILE, or using a password manager.
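For example, the following sketch keeps the key in a permission-restricted file and exports it from there. The file path is arbitrary and only illustrates the approach.
# Store the key in a file readable only by your user (path is an example)
export NGC_API_KEY_FILE=~/.ngc/api_key
mkdir -p ~/.ngc && chmod 700 ~/.ngc
# Write the key into the file once (for example, with your editor), then:
chmod 600 "$NGC_API_KEY_FILE"
export NGC_API_KEY="$(cat "$NGC_API_KEY_FILE")"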
Docker Login to NGC#
To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry by using the following command:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
Use $oauthtoken as the username and NGC_API_KEY as the password. The $oauthtoken username is a special name that indicates that you will authenticate with an API key and not a username and password.
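If authentication succeeds, Docker prints a confirmation similar to the following.
Login Succeeded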
Launching the NIM#
The Object Detection NIM supports multiple models for page element, table structure, and graphic element detection. This section provides an example of launching the NIM for the nemoretriever-page-elements-v2 model. To launch the NIM for another model, replace NIM_MODEL_NAME with the Model ID for the NIM from the Support Matrix - Models.
The following command launches a Docker container for the nemoretriever-page-elements-v2 model. For Docker versions 19.03 and later, the --runtime=nvidia option has the same effect as the --gpus all option.
# Choose a container name for bookkeeping
export NIM_MODEL_NAME=nvidia/nemoretriever-page-elements-v2
export CONTAINER_NAME=$(basename $NIM_MODEL_NAME)
# Choose a NIM Image from NGC
export IMG_NAME="nvcr.io/nim/$NIM_MODEL_NAME:1.2.0"
# Choose a path on your system to cache the downloaded models
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
# Start the NIM
docker run -it --rm --name=$CONTAINER_NAME \
--runtime=nvidia \
--gpus all \
--shm-size=16GB \
-e NGC_API_KEY \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-u $(id -u) \
-p 8000:8000 \
$IMG_NAME
Flags | Description |
---|---|
-it | Run the container interactively with a pseudo-terminal attached (see Docker docs). |
--rm | Delete the container after it stops (see Docker docs). |
--name=$CONTAINER_NAME | Give a name to the NIM container for bookkeeping (here, nemoretriever-page-elements-v2). |
--runtime=nvidia | Ensure NVIDIA drivers are accessible in the container. |
--gpus all | Expose all NVIDIA GPUs inside the container. See the configuration page for mounting specific GPUs. |
--shm-size=16GB | Allocate host memory for multi-GPU communication. Not required for single GPU models or GPUs with NVLink enabled. |
-e NGC_API_KEY | Provide the container with the token necessary to download adequate models and resources from NGC. See above. |
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" | Mount a cache directory from your system (~/.cache/nim in this example) inside the container (/opt/nim/.cache) so that downloaded models are reused across runs. |
-u $(id -u) | Use the same user as your system user inside the NIM container to avoid permission mismatches when downloading models in your local cache directory. |
-p 8000:8000 | Forward the port where the NIM server is published inside the container to access from the host system. The left-hand side of the mapping is the host port, and the right-hand side (8000) is the container port where the NIM server listens. |
$IMG_NAME | Name and version of the NIM container from NGC. The NIM server automatically starts if no argument is provided after this. |
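After you launch the container, you can confirm that it is running and follow its startup logs with standard Docker commands, as in this sketch.
# Confirm that the container is running
docker ps --filter "name=$CONTAINER_NAME"
# Follow the startup logs until the server reports that it is ready
docker logs -f $CONTAINER_NAME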
If you have an issue with permission mismatches when downloading models in your local cache directory, add the -u $(id -u) option to the docker run call to run under your current identity.
If you are running on a host with different types of GPUs, specify GPUs of the same type by using the --gpus argument to docker run. For example, --gpus '"device=0,2"'. The device IDs of 0 and 2 are examples only; replace them with the appropriate values for your system. Device IDs can be found by running nvidia-smi. More information can be found in GPU Enumeration.
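The following sketch uses standard nvidia-smi queries to list device IDs and model names so that you can select devices of the same type.
# List the index and model name of each visible GPU
nvidia-smi --query-gpu=index,name --format=csv
# Pass the indices of same-type GPUs to docker run, for example:
# docker run --gpus '"device=0,2"' ... $IMG_NAME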
GPU clusters with GPUs in Multi-Instance GPU (MIG) mode are currently not supported.
API Calls#
Note
The container might take a few seconds to start accepting requests after you launch it.
Confirm that the service is ready to handle inference requests by using the following code.
curl -X 'GET' 'http://localhost:8000/v1/health/ready'
If the service is ready, you see a response similar to the following.
{"object":"health-response","message":"Service is ready."}
After the service is ready, use the following code to run inference.
API_ENDPOINT="http://localhost:8000"
# Create JSON payload with base64 encoded image
IMAGE_SOURCE="https://assets.ngc.nvidia.com/products/api-catalog/nemo-retriever/object-detection/page-elements-example-1.jpg"
# IMAGE_SOURCE="path/to/your/image.jpg" # Uncomment to use a local file instead
# Encode the image to base64 (handles both URLs and local files)
if [[ $IMAGE_SOURCE == http* ]]; then
# Handle URL
BASE64_IMAGE=$(curl -s ${IMAGE_SOURCE} | base64 -w 0)
else
# Handle local file
BASE64_IMAGE=$(base64 -w 0 ${IMAGE_SOURCE})
fi
# Construct the full JSON payload
JSON_PAYLOAD='{
"input": [{
"type": "image_url",
"url": "data:image/jpeg;base64,'${BASE64_IMAGE}'"
}]
}'
# Send POST request to inference endpoint
echo "${JSON_PAYLOAD}" | \
curl -X POST "${API_ENDPOINT}/v1/infer" \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d @-
For more details, refer to the API Reference.
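To inspect the detection results more easily, you can pipe the response through a JSON formatter. The following sketch assumes jq is installed on the host; python3 -m json.tool works as well.
# Pretty-print the inference response (requires jq on the host)
echo "${JSON_PAYLOAD}" | \
curl -s -X POST "${API_ENDPOINT}/v1/infer" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @- | jq .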
Deploy on Multiple GPUs#
The NIM deploys a single model across as many GPUs as you specify, provided they are visible inside the Docker container. If you do not specify the number of GPUs, the NIM defaults to one GPU. When you use multiple GPUs, Triton distributes inference requests across them to keep them equally utilized.
Use the docker run --gpus
command-line argument to specify the number of GPUs that are available for deployment.
Example using all GPUs:
docker run --gpus all ...
Example using two GPUs:
docker run --gpus 2 ...
Example using specific GPUs:
docker run --gpus '"device=1,2"' ...
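To check that inference load is spread across the selected GPUs, you can watch per-GPU utilization while sending requests; the following is a sketch using standard nvidia-smi options.
# Report per-GPU utilization and memory use, refreshing every 5 seconds
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 5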
Deploy Alongside Other NIMs on the Same GPU#
You can deploy the Object Detection NIM alongside another NIM (for example, llama3-8b-instruct) on the same GPU (for example, A100 80GB, A100 40GB, or H100 80GB).
For more information about deployment, see Launch LLM NIMs from NGC, NIM Operator, GPU Operator with MIG, and Time-Slicing GPUs in Kubernetes.
Use the docker run --gpus command-line argument to specify the same GPU, as shown in the following code.
docker run --gpus '"device=1"' ... $IMG_NAME
docker run --gpus '"device=1"' ... $LLM_IMG_NAME
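The ellipses above stand in for the remaining flags. The following sketch shows one way the two commands might look in practice; the host port for the second container (8001) and the value of $LLM_IMG_NAME are assumptions, and the LLM NIM is assumed to also serve on container port 8000.
# Object Detection NIM on GPU 1, published on host port 8000
docker run -d --rm --runtime=nvidia --gpus '"device=1"' \
  -e NGC_API_KEY -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -p 8000:8000 $IMG_NAME
# A second NIM on the same GPU, published on host port 8001 to avoid a port clash
# (assumes the LLM NIM also listens on container port 8000)
docker run -d --rm --runtime=nvidia --gpus '"device=1"' \
  -e NGC_API_KEY \
  -p 8001:8000 $LLM_IMG_NAME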
Downloading NIM Models to Cache#
If model assets must be pre-fetched, such as in an air-gapped system, you can download the assets to the NIM cache without starting the server.
To download assets, first run list-model-profiles to determine the desired profile, and then run download-to-cache with that profile, as shown in the following example.
For details, see Optimization.
# Choose a container name for bookkeeping
export NIM_MODEL_NAME=nvidia/nemoretriever-page-elements-v2
export CONTAINER_NAME=$(basename $NIM_MODEL_NAME)
# Choose a NIM Image from NGC
export IMG_NAME="nvcr.io/nim/$NIM_MODEL_NAME:1.2.0"
# Choose a path on your system to cache the downloaded models
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
# List NIM model profiles and select the most appropriate one for your use case
docker run -it --rm --name=$CONTAINER_NAME \
-e NIM_CPU_ONLY=1 \
-u $(id -u) \
$IMG_NAME list-model-profiles
export NIM_MODEL_PROFILE=<selected profile>
# Start the NIM container with a command to download the model to the cache
docker run -it --rm --name=$CONTAINER_NAME \
--gpus all \
--shm-size=16GB \
-e NGC_API_KEY \
-e NIM_CPU_ONLY=1 \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-u $(id -u) \
$IMG_NAME download-to-cache --profiles $NIM_MODEL_PROFILE
# Start the NIM container in an airgapped environment and serve the model
docker run -it --rm --name=$CONTAINER_NAME \
--runtime=nvidia \
--gpus=all \
--shm-size=16GB \
--network=none \
-v $LOCAL_NIM_CACHE:/mnt/nim-cache:ro \
-u $(id -u) \
-e NIM_CACHE_PATH=/mnt/nim-cache \
-e NGC_API_KEY \
-p 8000:8000 \
$IMG_NAME
By default, the download-to-cache
command downloads the most appropriate model assets for the detected GPU. To override this behavior and download a specific model, set the NIM_MODEL_PROFILE
environment variable when launching the container. Use the list-model-profiles
command available within the NIM container to list all profiles. See Optimization for more details.
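After download-to-cache completes, you can check that the model assets are present in the local cache before transferring it to the air-gapped system; the following is a sketch using standard shell commands.
# Inspect the contents and total size of the populated cache
ls -R "$LOCAL_NIM_CACHE"
du -sh "$LOCAL_NIM_CACHE"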
Stopping the Container#
The following commands stop and remove the running container.
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
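Because the launch examples above use --rm, the container is removed automatically after it stops, so docker rm is only needed if you started the container without --rm. As a single-command alternative:
# Force-stop and remove the container in one step
docker rm -f $CONTAINER_NAME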