Configure NeMo Retriever Text Reranking NIM#
NeMo Text Retriever NIMs use Docker containers under the hood. Each NIM is its own Docker container, and there are several ways to configure it. The remainder of this section describes the various ways to configure a NIM container.
Use this documentation to learn how to configure NeMo Retriever Text Reranking NIM.
GPU Selection#
The NIM container is GPU-accelerated and uses NVIDIA Container Toolkit for access to GPUs on the host.
You can specify the `--gpus all` command-line argument to the `docker run` command if the host has one or more GPUs of the same model.
If the host has a combination of GPUs, such as an A6000 and a GeForce display GPU, run the container on compute-capable GPUs only.
Expose specific GPUs to the container by using either of the following methods:
- Specify the `--gpus` argument, such as `--gpus="device=1"`.
- Set the `NVIDIA_VISIBLE_DEVICES` environment variable, such as `-e NVIDIA_VISIBLE_DEVICES=1`.
Run the `nvidia-smi -L` command to list the device IDs to specify in the argument or environment variable:
```
GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
```
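For example, based on the preceding listing, either of the following commands restricts the container to GPU 0, the H100. This is a minimal sketch: `NIM_IMAGE` is a placeholder for the Text Reranking NIM container image, and other arguments that you normally pass, such as port mappings and API keys, are omitted.

```bash
# NIM_IMAGE is a placeholder for your Text Reranking NIM container image.
# Select GPU 0 with the --gpus argument.
docker run --rm --gpus="device=0" "$NIM_IMAGE"

# Equivalent selection with the NVIDIA_VISIBLE_DEVICES environment variable.
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 "$NIM_IMAGE"
```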
Refer to GPU Enumeration in the NVIDIA Container Toolkit documentation for more information.
Memory Footprint#
The memory footprint of the NeMo Retriever Text Reranking NIM depends on the model backend and loaded profiles.
For ONNX model profiles, memory is allocated dynamically based on incoming requests.
For TensorRT model profiles, all optimization profiles are loaded at startup and memory is pre-allocated based on the maximum input shapes. Triton dynamically selects the optimal profile at runtime based on actual input shapes. Refer to the support matrix for the approximate memory footprint by compute capability.
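Once the container is running, you can compare these expectations against the memory that each GPU actually reports, for example:

```bash
# Report per-GPU memory usage while the NIM container is running.
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv
```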
PID Limit#
In certain deployment or container runtime environments, default process and thread limits (PID limits) can interfere with NIM startup. These limits are set by Docker, Podman, Kubernetes, or the operating system.
If the PID limit is too low, you might see symptoms such as:
- NIM starts up partially, but fails to reach ready state, and then stalls.
- NIM starts up partially, but fails to reach ready state, and then crashes.
- NIM serves a small number of requests, and then fails.
To verify that PID limits are impacting the NIM container, you can remove or adjust the PID limit at the container, node, and operating system level. Removing the PID limit and then checking for success is a useful diagnostic step.
- To increase the PID limit in a `docker run` command, set `--pids-limit=-1`, as shown in the example after this list. For details, see docker container run.
- To increase the PID limit in a `podman run` command, set `--pids-limit=-1`. For details, see Podman pids-limit.
- To increase the PID limit in Kubernetes, set the `PodPidsLimit` on the kubelet on each node. For details, see your Kubernetes distribution's documentation.
- To increase the PID limit at the operating system level, see your OS-specific documentation.
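For example, the following `docker run` invocation removes the container's PID limit. This is a minimal sketch: `NIM_IMAGE` is a placeholder for the Text Reranking NIM container image, and the other arguments you normally pass are omitted; only `--pids-limit=-1` is the relevant addition here.

```bash
# NIM_IMAGE is a placeholder for your Text Reranking NIM container image.
# --pids-limit=-1 removes the per-container process and thread limit.
docker run --rm --gpus all --pids-limit=-1 "$NIM_IMAGE"
```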
Volumes#
The following table identifies the paths that are used in the container. Use this information to plan the local paths to bind mount into the container.
| Container Path | Description | Example |
|---|---|---|
|  | Specifies the path, relative to the root of the container, for downloaded models. The typical use for this path is to bind mount a directory on the host with this path inside the container. |  |
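As an illustration, a bind mount of a host cache directory typically looks like the following. This is a sketch under assumptions: the host directory `~/.cache/nim` and the container path `/opt/nim/.cache` are commonly used NIM cache locations but are not confirmed here, and `NIM_IMAGE` is a placeholder for the Text Reranking NIM container image. Confirm the actual model cache path for your NIM before relying on it.

```bash
# Create a host directory to hold downloaded models (host path is an assumption).
mkdir -p ~/.cache/nim

# NIM_IMAGE is a placeholder for your Text Reranking NIM container image.
# The container path /opt/nim/.cache is an assumed cache location; verify it for your NIM.
docker run --rm --gpus all \
  -v ~/.cache/nim:/opt/nim/.cache \
  "$NIM_IMAGE"
```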