Configure NVIDIA NIM for Object Detection#

NVIDIA NIM for Object Detection uses Docker containers under the hood. Each NIM is its own Docker container, and there are several ways to configure it. The remainder of this section describes the various ways to configure a NIM container.

Use this documentation to learn how to configure NVIDIA NIM for Object Detection.

GPU Selection#

The NIM container is GPU-accelerated and uses NVIDIA Container Toolkit for access to GPUs on the host.

You can specify the --gpus all command-line argument to the docker run command if the host has one or more of the same GPU model. If the host has a combination of GPUs, such as an A6000 and a GeForce display GPU, run the container on compute-capable GPUs only.

Expose specific GPUs to the container by using either of the following methods:

  • Specify the --gpus argument, such as --gpus="device=1".

  • Set the NVIDIA_VISIBLE_DEVICES environment variable, such as -e NVIDIA_VISIBLE_DEVICES=1.

Run the nvidia-smi -L command to list the device IDs to specify in the argument or environment variable:

GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
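For example, the following command exposes only GPU 0 from the listing above to the container. The image name is a placeholder for illustration; substitute the NIM for Object Detection image and any other arguments that your deployment requires:

docker run --rm --gpus="device=0" <nim-object-detection-image>

Setting -e NVIDIA_VISIBLE_DEVICES=0 instead of the --gpus argument has the same effect.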

Refer to GPU Enumeration in the NVIDIA Container Toolkit documentation for more information.

Optimization Mode#

The NVIDIA NIM for Object Detection can run in modes optimized for VRAM usage or performance when using a TensorRT model profile. You control the optimization mode by setting the NIM_TRITON_OPTIMIZATION_MODE environment variable to one of: default, perf_opt, or vram_opt.

  • default — The NIM loads one TensorRT engine profile that spans the full range of supported batch sizes. When you run in this mode, the NIM has relatively low VRAM usage; however, latency and throughput are degraded for small batch sizes, such as 1, 2, 3, and 4.

  • perf_opt — The NIM loads all TensorRT engine profiles except for the default profile. Together, these profiles cover the full range of supported batch sizes. When you run in this mode, the NIM has improved latency and throughput for small batch sizes, such as 1, 2, 3, and 4. However, VRAM usage is higher because multiple profiles are loaded.

  • vram_opt — The NIM loads only the first and smallest TensorRT engine profile. This mode has the smallest VRAM usage, but constrains the batch size to 1. It has the same effect as setting NIM_TRITON_MAX_BATCH_SIZE to 1 and NIM_TRITON_OPTIMIZATION_MODE to perf_opt.

When you specify both NIM_TRITON_OPTIMIZATION_MODE and NIM_TRITON_MAX_BATCH_SIZE, the following occurs:

  • default — Higher NIM_TRITON_MAX_BATCH_SIZE results in higher VRAM usage.

  • perf_opt — Profiles larger than NIM_TRITON_MAX_BATCH_SIZE are not used.

  • vram_opt — NIM_TRITON_MAX_BATCH_SIZE is ignored.
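For example, the following command starts the NIM in performance-optimized mode and caps the loaded profiles at a batch size of 8. The image name and the batch size value are illustrative placeholders, not recommendations:

docker run --rm --gpus all \
  -e NIM_TRITON_OPTIMIZATION_MODE=perf_opt \
  -e NIM_TRITON_MAX_BATCH_SIZE=8 \
  <nim-object-detection-image>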

PID Limit#

In certain deployment or container runtime environments, default process and thread limits (PID limits) can interfere with NIM startup. These limits are set by Docker, Podman, Kubernetes, or the operating system.

If the PID limit is too low, you might see symptoms such as:

  • NIM starts up partially, but fails to reach ready state, and then stalls.

  • NIM starts up partially, but fails to reach ready state, and then crashes.

  • NIM serves a small number of requests, and then fails.

To determine whether the PID limit is affecting the NIM container, remove or raise the limit at the container, node, or operating system level, and then check whether the NIM starts successfully. Removing the PID limit entirely is a useful diagnostic step.

  • To increase the PID limit in a docker run command, set --pids-limit=-1. For details, see docker container run.

  • To increase the PID limit in a podman run command, set --pids-limit=-1. For details, see Podman pids-limit.

  • To increase the PID limit in Kubernetes, set the PodPidsLimit on the kubelet on each node. For details, see your Kubernetes distribution specific documentation.

  • To increase the PID limit at the operating system level, see your OS-specific documentation.
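For example, the following command removes the container PID limit as a diagnostic step. The image name is a placeholder:

docker run --rm --gpus all --pids-limit=-1 <nim-object-detection-image>

If the NIM starts and stays healthy with the limit removed, reintroduce a finite limit that is high enough for your workload.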

Shared Memory Flag#

Tokenization uses Triton’s Python backend, which scales with the number of available CPU cores. You might need to increase the shared memory available to the microservice container.

Example providing 1g of shared memory:

docker run ... --shm-size=1g ...

Triton Ensemble Configuration#

The NVIDIA NIM for Object Detection enables you to configure the underlying Triton Ensemble Models by using environment variables. For most use cases, the default values for these variables are sufficient. However, for highly concurrent workloads in resource-constrained environments, you can tune the values of the following environment variables to improve the stability of the NIM.

| Variable | Description | Default Value |
|----------|-------------|---------------|
| NIM_TRITON_IDLE_BYTES_LIMIT | The threshold for idle VRAM memory (bytes) after which the Torch CUDA cache is emptied and all inter-process communication (IPC) files are closed. | 1GB |
| NIM_TRITON_FLUSH_INTERVAL | Determines after how many requests the NIM_TRITON_IDLE_BYTES_LIMIT is checked. | 1 (after every request) |
| NIM_TRITON_RATE_LIMIT | If set, configures Triton to rate limit the execution count throughout the ensemble model pipeline to the provided integer value. GPU-bound model inference is given priority, while other components of the ensemble model pipeline (pre-processors, post-processors, and so on) are given lower priority. | None |
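For example, the following command sets all three variables for a resource-constrained deployment. The image name and the specific values are illustrative placeholders, not tuning recommendations:

docker run --rm --gpus all \
  -e NIM_TRITON_IDLE_BYTES_LIMIT=536870912 \
  -e NIM_TRITON_FLUSH_INTERVAL=10 \
  -e NIM_TRITON_RATE_LIMIT=4 \
  <nim-object-detection-image>

This example limits idle VRAM to 512 MB, checks the limit every 10 requests, and rate limits execution in the ensemble pipeline to 4.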

Volumes#

The following table identifies the paths that are used in the container. Use this information to plan the local paths to bind mount into the container.

Container Path: /opt/nim/.cache (or the path set by the NIM_CACHE_PATH environment variable)

Description: Specifies the path, relative to the root of the container, for downloaded models.

The typical use for this path is to bind mount a directory on the host to this path inside the container. For example, to use ~/.cache/nim on the host, run mkdir -p ~/.cache/nim before you start the container. When you start the container, specify the -v ~/.cache/nim:/opt/nim/.cache -u $(id -u) arguments to the docker run command.

If you do not specify a bind or volume mount, such as the -v argument in the preceding example, the container downloads the model each time you start the container.

The -u $(id -u) argument runs the container with your user ID to avoid file system permission issues and errors.

Example: -v ~/.cache/nim:/opt/nim/.cache -u $(id -u)
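Putting this together, the following commands create the host cache directory and start the container with the cache bind mounted. The image name is a placeholder; substitute the NIM for Object Detection image and any other arguments that your deployment requires:

mkdir -p ~/.cache/nim
docker run --rm --gpus all \
  -v ~/.cache/nim:/opt/nim/.cache \
  -u $(id -u) \
  <nim-object-detection-image>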