Integrating with NeMo Guardrails

NeMo Guardrails uses the LangChain ChatNVIDIA connector to connect to a locally running NIM microservice, such as Llama 3.1 NemoGuard 8B ContentSafety. The content safety microservice exposes the standard OpenAI interface on the v1/completions and v1/chat/completions endpoints.
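Because the microservice speaks the standard OpenAI API, you can exercise the endpoint directly before wiring it into Guardrails. The following is a minimal sketch, not an official client; the base URL and model name are assumptions that mirror the configuration example below and must be adjusted to your deployment:

```python
import json
import urllib.request

# Assumed values matching the example Guardrails configuration;
# change these to match your running NIM deployment.
BASE_URL = "http://localhost:8000/v1"
MODEL = "llama-nemotron-safety-guard-v2"

def build_chat_request(messages, model=MODEL):
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {"model": model, "messages": messages}

def check_content_safety(user_message):
    """POST one user message to the locally running content safety NIM.

    Requires the microservice to be up at BASE_URL; returns the parsed
    JSON response in the standard OpenAI chat completions format.
    """
    body = json.dumps(build_chat_request(
        [{"role": "user", "content": user_message}])).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice you rarely call the endpoint yourself; NeMo Guardrails builds the safety prompt and parses the response for you, as described next.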

NeMo Guardrails handles the complexity of building the prompt template and parsing the content safety model responses, and provides a programmable method to build a chatbot with content safety rails.

To integrate NeMo Guardrails with the content safety microservice, create a config.yml file that is similar to the following example:

models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

  - type: "content_safety"
    engine: nim
    parameters:
      base_url: "http://localhost:8000/v1"
      model_name: "llama-nemotron-safety-guard-v2"

rails:
  input:
    flows:
      - content safety check input $model=content_safety

  • The engine field specifies nim.

  • The parameters.base_url field specifies the host and port of the Llama 3.1 NemoGuard 8B ContentSafety NIM.

  • The parameters.model_name field in the Guardrails configuration must match the model name served by the Llama 3.1 NemoGuard 8B ContentSafety NIM.

  • The rails definition specifies content_safety as the model to use for the input rail.
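Because parameters.model_name must match the name served by the NIM, a small sanity check of the configuration can catch mismatches before runtime. This sketch parses the example config with PyYAML (assumed to be installed) and verifies the relationships described above; it is an illustration, not part of NeMo Guardrails itself:

```python
import yaml  # PyYAML, assumed available

# The example Guardrails configuration from this page, inlined for the check.
CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

  - type: content_safety
    engine: nim
    parameters:
      base_url: "http://localhost:8000/v1"
      model_name: "llama-nemotron-safety-guard-v2"

rails:
  input:
    flows:
      - content safety check input $model=content_safety
"""

def validate(config_text):
    """Check that the content_safety model entry and input rail line up."""
    cfg = yaml.safe_load(config_text)
    safety = next(m for m in cfg["models"] if m["type"] == "content_safety")
    assert safety["engine"] == "nim"
    assert safety["parameters"]["base_url"].startswith("http")
    # Every flow that uses $model= must reference a configured model type.
    types = {m["type"] for m in cfg["models"]}
    for flow in cfg["rails"]["input"]["flows"]:
        if "$model=" in flow:
            assert flow.split("$model=")[1] in types
    # The returned name is what the NIM must actually serve.
    return safety["parameters"]["model_name"]
```

Whether the returned model name matches the deployment can then be confirmed against the NIM's v1/models endpoint.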

Refer to the NVIDIA NeMo Guardrails documentation for more information about the configuration file.