
NeMo Guardrails

Add safety, security, and control to your LLM applications.

NeMo Guardrails is an open-source toolkit for adding programmable guardrails to LLM-based conversational systems. Built by NVIDIA, it provides a flexible framework for defining the rules and controls that govern how your LLM interacts with users.

Get Started

Installation

Install NeMo Guardrails and set up your environment

Quickstart

Build your first guardrailed application in minutes

Core Concepts

Learn about the five types of guardrails

Examples

Explore real-world implementations

Why NeMo Guardrails?

Safety & Trust

Prevent harmful content and keep conversations on-topic

Controllable Dialog

Guide conversations along predefined paths

Secure Tool Integration

Safely connect LLMs to external services

Multi-Layer Protection

Apply guardrails at input, dialog, retrieval, execution, and output stages
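These stages map directly to rail types in a guardrails configuration. As a minimal sketch, a `config.yml` might enable input and output rails like this (the `self check input` and `self check output` flow names follow the built-in library's naming; treat the exact layout as an assumption, not a complete configuration):

```yaml
# config.yml — minimal sketch; dialog, retrieval, and execution
# rails are configured analogously under the `rails` key.
rails:
  input:
    flows:
      - self check input   # screen user messages before they reach the LLM
  output:
    flows:
      - self check output  # screen LLM responses before they reach the user
```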

Key Features

  • Five Types of Guardrails: Input, dialog, retrieval, execution, and output rails for comprehensive control
  • Colang DSL: Purpose-built language for defining conversational flows and guardrails
  • Built-in Library: Pre-built guardrails for jailbreak detection, content safety, fact-checking, and more
  • LLM Integration: Works with OpenAI, NVIDIA NIM, HuggingFace, and other providers
  • Async-First: Built on Python async for high-performance applications
  • LangChain Compatible: Seamlessly integrate with LangChain chains and agents
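As a taste of the Colang DSL mentioned above, here is a minimal sketch of a dialog rail that keeps conversations on-topic (Colang 1.0 syntax; the flow name, example utterances, and bot message are illustrative, not part of the built-in library):

```colang
define user ask off topic
  "Can you help me with my taxes?"
  "What's the weather like today?"

define bot refuse off topic
  "Sorry, I can only help with questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
```

At runtime, user messages are matched against the example utterances, and matching inputs are steered into the `off topic` flow instead of being passed to the LLM unguarded.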
