# Environment Variables for NeMo Retriever Text Embedding NIM
Use this documentation to learn about the environment variables for NeMo Retriever Text Embedding NIM.
## Environment Variables
The following table identifies the environment variables that are used in the container.
Set environment variables with the `-e` command-line argument to the `docker run` command.
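For example, the following is a minimal sketch of starting the container with an environment variable passed through `-e`. The image name and tag are placeholders, the host port mapping is illustrative, and `NGC_API_KEY` is assumed to be the name of the API-key variable described in the table below.

```bash
# Illustrative sketch only: substitute the image name, tag, and environment
# variable names documented in the table below for your deployment.
export NGC_API_KEY="<paste your personal NGC API key>"

docker run -it --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/<embedding-model-image>:<tag>
```

Passing `-e NGC_API_KEY` without a value forwards the variable from the host environment; you can also set it explicitly with `-e NAME=value`.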
| Name | Description | Default Value |
|---|---|---|
|  | Set this variable to the value of your personal NGC API key. | None |
|  | Specifies the fully qualified path, in the container, for downloaded models. |  |
|  | Specifies the network port number, in the container, for gRPC access to the microservice. |  |
|  | Specifies the network port number, in the container, for HTTP access to the microservice. Refer to Publishing ports in the Docker documentation for more information about host and container network ports. |  |
|  | Specifies the number of worker threads to start for HTTP requests. |  |
|  | Specifies the network port number, in the container, for NVIDIA Triton Inference Server. |  |
|  | When set to |  |
|  | Specifies the logging level. The microservice supports the following values: DEBUG, INFO, WARNING, ERROR, and CRITICAL. |  |
|  | When set to |  |
|  | Set to |  |
|  | Specifies the fully qualified path, in the container, for the model manifest YAML file. |  |
|  | Specifies the model profile ID to use with the container. By default, the container attempts to automatically match the host GPU model and GPU count with the optimal model profile. | None |
|  | The number of model instances to deploy. | Unset (this value overrides a hardware-specific config value) |
|  | The number of tokenizer instances to use. |  |
|  | Specifies the model names used in the API. Specify multiple names in a comma-separated list. If you specify multiple names, the server responds to any of the names. The name in the model field of a response is the first name in this list. By default, the model is inferred from the | None |
|  | If set to a non-empty string, the | None |
|  | For the NVIDIA Triton Inference Server, sets the maximum queue delay time to allow other requests to join the dynamic batch. For more information, refer to the Triton User Guide. |  |
|  | Specifies the gRPC port number for NVIDIA Triton Inference Server. |  |
|  | When set to |  |
|  | Specifies the maximum batch size that the underlying Triton instance can process. The value must be less than or equal to the maximum batch size that was used to compile the engine. By default, the NIM uses the maximum possible batch size for a given model and GPU. To decrease the memory footprint of the server, choose a smaller maximum batch size. If the model uses the | None |
|  | Specifies the maximum sequence length that can be processed by the Triton server. By default, the NIM uses the maximum possible sequence length for a given model and GPU. To decrease the memory footprint of the server, choose a smaller maximum sequence length. Only discrete values are supported. Refer to the NIM's support matrix for valid values (and their estimated memory footprint). | None |
|  | Controls the performance mode of the NIM. When set to |  |
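As a sketch of how the port-related variables interact with Docker's port publishing, the following shows a container port set through an environment variable and published to a different host port with `-p`. The variable name `NIM_HTTP_API_PORT`, the port numbers, and the image reference are assumptions for illustration; use the variable names and defaults documented in the table above.

```bash
# Hypothetical example: NIM_HTTP_API_PORT is assumed to be the HTTP-port
# variable from the table above; 8080 and 8000 are illustrative port numbers.
docker run -it --rm --gpus all \
  -e NGC_API_KEY \
  -e NIM_HTTP_API_PORT=8000 \
  -p 8080:8000 \
  nvcr.io/nim/<embedding-model-image>:<tag>
```

In this sketch the microservice listens on port 8000 inside the container and is reachable on port 8080 on the host, following the host-to-container mapping described in Publishing ports in the Docker documentation.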