NVIDIA NeMo Guardrails Library Developer Guide
The NeMo Guardrails library is an open-source Python package for adding programmable guardrails to LLM-based applications. It intercepts inputs and outputs, applies configurable safety checks, and blocks or modifies content based on defined policies.
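For example, a minimal sketch of that flow, assuming a local `config/` directory that contains a valid `config.yml`:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration from a local directory
# (assumed here to contain a config.yml and any Colang flows).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Inputs and outputs pass through the configured rails
# before and after the underlying LLM call.
response = rails.generate(messages=[
    {"role": "user", "content": "Hello!"}
])
print(response["content"])
```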
About the NeMo Guardrails Library
Learn about the library and its capabilities in the following sections.
Add programmable guardrails to LLM applications with this open-source Python library.
Apply input, retrieval, dialog, execution, and output rails to protect LLM applications.
High-level explanation of how NeMo Guardrails works.
Connect to NVIDIA NIM, OpenAI, Azure, Anthropic, HuggingFace, and LangChain providers; a configuration sketch follows this list.
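As an illustrative sketch of provider configuration, a model can also be declared inline; the engine and model names below are examples, not recommendations:

```python
from nemoguardrails import LLMRails, RailsConfig

# Declare the application's main model in YAML. "openai" is one
# supported engine; the model name is only an example.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-4o
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
rails = LLMRails(config)
```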
Get Started
Follow these steps to start using the NeMo Guardrails library.
Install NeMo Guardrails with pip, configure your environment, and verify the installation.
Follow hands-on tutorials to deploy the Nemotron Content Safety, Nemotron Topic Control, and Nemotron Jailbreak Detect models; a configuration sketch follows.
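As a rough sketch of what those tutorials cover, the configuration below adds a content-safety input rail backed by an NVIDIA NIM endpoint; the engine, model, and flow names follow the library's NVIDIA integrations but should be verified against the tutorial you follow:

```python
from nemoguardrails import LLMRails, RailsConfig

# Example configuration pairing a main model with a content-safety
# input rail. The model and flow names below are assumptions; check
# them against the tutorial for the model you deploy.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-4o
  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety

rails:
  input:
    flows:
      - content safety check input $model=content_safety
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
rails = LLMRails(config)
response = rails.generate(messages=[{"role": "user", "content": "Hi there."}])
```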
Next Steps
Once you’ve completed the get-started tutorials, explore the following areas to deepen your understanding.
Configure YAML files, Colang flows, custom actions, and other components to control LLM behavior.
Run guardrailed inference using the Python API or Guardrails API server.
Measure accuracy and performance of dialog, fact-checking, moderation, and hallucination rails.
Debug guardrails with verbose mode, the `explain()` method, and generation log options; a short sketch follows this list.
Deploy guardrails using the local API server, Docker containers, or production microservices.
Integrate NeMo Guardrails with LangChain chains, runnables, and LangGraph workflows, as shown in the second sketch below.
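For debugging, one concrete pattern (a sketch assuming the same `config/` directory as above) is to call the `explain()` method after a generation to see which LLM calls were made:

```python
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)
rails.generate(messages=[{"role": "user", "content": "Hello!"}])

# Inspect the most recent generation: which LLM calls were made,
# with which prompts, and how long each took.
info = rails.explain()
info.print_llm_calls_summary()
print(info.colang_history)
```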
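And for LangChain, a guardrails configuration can wrap an existing chain through `RunnableRails`; in this sketch the prompt, model, and `config/` directory are placeholders:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

# An ordinary LangChain chain; the model choice is a placeholder.
prompt = ChatPromptTemplate.from_template("Answer briefly: {input}")
chain = prompt | ChatOpenAI() | StrOutputParser()

# Wrap the chain so inputs and outputs pass through the rails.
config = RailsConfig.from_path("./config")
guardrails = RunnableRails(config)
chain_with_guardrails = guardrails | chain

print(chain_with_guardrails.invoke({"input": "What is NeMo Guardrails?"}))
```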