Custom LLM Providers#
Note
The time to complete this tutorial is approximately 15 minutes.
About Custom LLM Providers#
You can use LLM providers other than NVIDIA NIM with the NeMo Guardrails microservice.
The providers can be hosted locally or accessed through external services.
The following steps show how to configure additional LLM providers by editing the config.yml file and setting the required environment variables.
The microservice supports multiple LLM engines in a single configuration.
The microservice recognizes and integrates the specified LLM providers based on your config.yml and environment variables.
Only OpenAI-compatible LLM providers are supported.
Understanding Configuration File Changes#
The config.yml file, located at the root of your configuration store, identifies the LLM providers to use.
Define each model with the model_id, engine, model, and base_url fields. Provide optional configuration in the parameters field if needed, as shown in the sketch after the following example.
The following example defines three models and uses two LLM providers, nim and openai.
```yaml
models:
  - model_id: meta/llama-3.1-8b-instruct
    engine: nim
    model: meta/llama-3.1-8b-instruct
    base_url: https://integrate.api.nvidia.com/v1
  - model_id: davinci-002
    engine: openai
    model: davinci-002
    base_url: https://api.openai.com/v1
  - model_id: gpt-4o
    engine: openai
    model: gpt-4o
    base_url: https://api.openai.com/v1
```
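If a model needs provider-specific settings, pass them in the optional parameters field. The following sketch is illustrative only; the keys shown, temperature and max_tokens, are assumptions, and the accepted keys depend on the engine and provider.

```yaml
models:
  - model_id: gpt-4o
    engine: openai
    model: gpt-4o
    base_url: https://api.openai.com/v1
    # Illustrative keys only; the accepted parameters depend on the engine and provider.
    parameters:
      temperature: 0.2
      max_tokens: 512
```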
Each model is identified by the model_id field, which must be unique for each entry.
Unique IDs allow Guardrails clients to call the same model hosted at multiple URLs, such as a staging and a production cluster.
The following configuration demonstrates this concept: client requests can refer to either the staging/my-awesome-llm or prod/my-awesome-llm model to select between the staging and production clusters.
```yaml
models:
  - model_id: staging/my-awesome-llm
    engine: nim
    model: my-awesome-llm
    base_url: https://staging.models.com/v1
  - model_id: prod/my-awesome-llm
    engine: nim
    model: my-awesome-llm
    base_url: https://production.models.com/v1
```
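A client then selects a cluster by setting the model field of a request to the corresponding model_id. The following sketch reuses the completion endpoint shown later in this tutorial; the host name, port, and request body are placeholders for your deployment.

```bash
# Route the request to the staging cluster by model_id.
curl -X POST "http://guardrails-service:7331/v1/guardrail/completions" \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "staging/my-awesome-llm",
    "prompt": "what can you do for me?",
    "max_tokens": 16
  }'
```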
Using Environment Variables#
Guardrails simplifies authentication by using the engine field of the model configuration to look up API keys stored in environment variables as follows:

- When the engine value is set to nim, the NeMo Guardrails microservice gets the API key from the environment variable $NVIDIA_API_KEY.
- When the engine value is set to openai, the NeMo Guardrails microservice gets the API key from the environment variable $OPENAI_API_KEY.
- For other engine values, the NeMo Guardrails microservice gets the API key from the X-Model-Authorization header field in the client request. For more information, refer to Custom HTTP Headers.
Set the environment variable values for the container.
Identify the engine names in the config.yml file, such as nim or openai, and create environment variables as needed for each engine.
```bash
export OPENAI_API_KEY="<openai-api-key>"
```
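If your configuration also uses the nim engine, set the corresponding variable in the same way; the value shown is a placeholder.

```bash
export NVIDIA_API_KEY="<nvidia-api-key>"
```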
Example of Using an OpenAI Model#
Use the guardrails configuration endpoint to upload your configuration:
```bash
curl -X POST "http://guardrails-service:7331/v1/config" \
  -H "Content-Type: application/json" \
  -d '{
    "config_id": "openai-config",
    "config": {
      "models": [
        {
          "model_id": "davinci-002",
          "engine": "openai",
          "model": "davinci-002",
          "base_url": "https://api.openai.com/v1"
        }
      ]
    }
  }'
```
Run inference:
```bash
curl -X POST \
  "http://guardrails-service:7331/v1/guardrail/completions" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "X-Model-Authorization: $OPENAI_API_KEY" \
  -d '{
    "model": "davinci-002",
    "prompt": "what can you do for me?",
    "max_tokens": 16,
    "stream": false,
    "temperature": 1,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }'
```