NVIDIA NeMo Guardrails
User Guides

  • CLI
    • Guardrails CLI
    • Options
  • Colang Guide
    • Why a New Language
    • Concepts
    • Syntax
    • Conclusion
  • Guardrails Library
    • LLM Self-Checking
    • NVIDIA Models
    • Community Models and Libraries
    • Third-Party APIs
    • Other
  • Guardrails Process
    • Overview
    • Categories of Rails
    • The Guardrails Process
    • The Dialog Rails Flow
  • LLM Support
    • Evaluation experiments
    • LLM Support and Guidance
  • Python API
    • Basic usage
    • RailsConfig
    • Message Generation
    • Actions
    • Action Parameters
  • Server Guide
    • Guardrails Server
    • Actions Server
  • Advanced
    • AlignScore Deployment
    • Bot Message Instructions
    • Embedding Search Providers
    • Event-based API
    • Extract User-provided Values
    • Generation Options
    • Jailbreak Detection Deployment
    • Self-hosting Llama Guard using vLLM
    • Nested AsyncIO Loop
    • Prompt Customization
    • Streaming
    • NeMo Guardrails with Docker
    • Vertex AI Setup
    • Nemotron Safety Guard Deployment
    • Llama 3.1 NemoGuard 8B Topic Control Deployment
    • Blueprint with NemoGuard NIMs
  • Detailed Logging
    • Output Variables
  • Input Output Rails Only
    • Generation Options - Using only Input and Output Rails
  • Jailbreak Detection Heuristics
    • Using Jailbreak Detection Heuristics
  • LangChain
    • LangChain Integration
    • RunnableRails
    • LangGraph Integration
    • Chain-With-Guardrails
    • Runnable-As-Action
  • LLMs
    • NVIDIA AI Endpoints
    • Vertex AI
  • Multi Config API
    • Multi-config API

Copyright © 2023-2025, NVIDIA Corporation.