Configuration

PaddleOCR NIM can be deployed as a container in any OCI-compatible environment (e.g. Docker, Kubernetes). This section details the various ways you can configure a NIM container.

GPU Selection

Passing --gpus all to docker run is acceptable in homogeneous environments with one or more identical GPUs.

In heterogeneous environments with a combination of GPUs, such as an A6000 + a GeForce display GPU, workloads should only run on compute-capable GPUs. Expose specific GPUs inside the container using either:

  • the --gpus flag (ex: --gpus="device=1")

  • the environment variable NVIDIA_VISIBLE_DEVICES (ex: -e NVIDIA_VISIBLE_DEVICES=1)

The device ID(s) to use as input(s) are listed in the output of nvidia-smi -L:

GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-7c8d2e5f-a910-4b3c-88de-c15f2a6e90d1)
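For example, to keep the workload off the display GPU in a listing like the one above, you could pin the container to device 0. This is a sketch; the image name and tag are placeholders, so substitute your actual PaddleOCR NIM image from NGC:

```shell
# Run the NIM on GPU 0 only, leaving GPU 1 (the GeForce display GPU) free.
# Either of the two mechanisms described above works:

# Option 1: the --gpus flag
docker run --rm \
  --gpus="device=0" \
  -e NGC_API_KEY \
  nvcr.io/nim/baidu/paddleocr:latest   # placeholder image name/tag

# Option 2: the NVIDIA_VISIBLE_DEVICES environment variable
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NGC_API_KEY \
  nvcr.io/nim/baidu/paddleocr:latest   # placeholder image name/tag
```

Device IDs can also be given as UUIDs from nvidia-smi -L, which is more robust when device enumeration order can change across reboots.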

Refer to the NVIDIA Container Toolkit documentation for more instructions.

Environment Variables

The following environment variables can be passed to a NIM container, each as a -e argument added to a docker run command:

  • NGC_API_KEY (required; no default)
    You must set this variable to the value of your personal NGC API key.

  • NIM_CACHE_PATH (optional; default: /opt/nim/.cache)
    Location inside the container where the container caches model artifacts.

  • NIM_LOGGING_JSONL (optional; default: 0)
    Set to 1 to enable JSON logging.

  • NIM_LOG_LEVEL (optional; default: DEFAULT)
    Log level of the PaddleOCR NIM. Possible values are DEFAULT, DEBUG, INFO, WARNING, ERROR, and CRITICAL. The DEBUG, INFO, WARNING, ERROR, and CRITICAL levels behave as described in the Python 3 logging documentation.

  • NIM_HTTP_API_PORT (optional; default: 8000)
    Port inside the container on which the NIM service listens. Make sure the container-side port passed to the -p/--publish flag of docker run matches this value (ex: -p 8000:$NIM_HTTP_API_PORT). The left-hand side of the : is the host port and does NOT have to match $NIM_HTTP_API_PORT. The right-hand side of the : is the port inside the container, which MUST match NIM_HTTP_API_PORT (or 8000 if not set).

  • NIM_MANIFEST_PROFILE (optional; no default)
    Override the automatically selected NIM optimization profile by specifying a profile ID from the manifest located at /opt/nim/etc/default/model_manifest.yaml. If not specified, the NIM attempts to select an optimal profile compatible with the available GPUs.

  • NIM_MANIFEST_ALLOW_UNSAFE (optional; default: 0)
    Set to 1 to enable selection of a model profile that is not included in the original model_manifest.yaml, or that is not detected to be compatible with the deployed hardware.

  • NIM_TRITON_LOG_VERBOSE (optional; default: 0)
    Set to 1 to enable verbose logging for the underlying Triton instance. Note that Triton logs are emitted to stderr.
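Several of these variables can be combined in a single invocation. The following is a sketch of a typical docker run command; the image name and tag are placeholders, so substitute your actual PaddleOCR NIM image from NGC:

```shell
# Export the required NGC API key so -e NGC_API_KEY picks it up from the host.
export NGC_API_KEY=<your-ngc-api-key>

# Start the NIM with JSON logging, INFO-level logs, and the default HTTP port.
# Host port 8000 (left of the :) maps to NIM_HTTP_API_PORT (right of the :).
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -e NIM_LOGGING_JSONL=1 \
  -e NIM_LOG_LEVEL=INFO \
  -e NIM_HTTP_API_PORT=8000 \
  -p 8000:8000 \
  nvcr.io/nim/baidu/paddleocr:latest   # placeholder image name/tag
```

If you change NIM_HTTP_API_PORT, only the right-hand side of the -p mapping needs to change to match it; the host-side port can stay whatever you prefer.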

Volumes

The following describes the paths inside the container into which local paths can be mounted.

  • /opt/nim/.cache (or $NIM_CACHE_PATH if set)
    Not required, but if this volume is not mounted, the container performs a fresh download of the model each time it is brought up.
    This is the directory into which models are downloaded inside the container. It is very important that this directory is writable from inside the container. This can be achieved by making the directory writable by others. For example, to use ~/.cache/nim as the host machine directory for caching models, first run mkdir -p ~/.cache/nim && chmod o+w ~/.cache/nim before running the docker run ... command.
    Docker argument example: -v ~/.cache/nim:/opt/nim/.cache
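Putting the cache mount together with the steps above, a persistent-cache deployment might look like the following sketch (the image name and tag are placeholders; substitute your actual PaddleOCR NIM image from NGC):

```shell
# Prepare a host directory the container can write to, so downloaded
# model artifacts survive container restarts.
mkdir -p ~/.cache/nim
chmod o+w ~/.cache/nim

# Mount the host directory at the default in-container cache path.
# If you override NIM_CACHE_PATH, mount to that path instead.
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -v ~/.cache/nim:/opt/nim/.cache \
  -p 8000:8000 \
  nvcr.io/nim/baidu/paddleocr:latest   # placeholder image name/tag
```

On subsequent starts, the container finds the cached artifacts in the mounted directory and skips the model download.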