Guardrails Component
Overview
Guardrails are a security and safety component in GLChat that provides content filtering and safety checks for both user inputs and AI-generated responses. The guardrail system is built on top of NVIDIA's NeMo Guardrails framework and protects against harmful, inappropriate, or off-topic content.
What are Guardrails?
Guardrails act as a safety net for your AI chatbot, ensuring that:
User inputs are screened for harmful, inappropriate, or off-topic content before processing
AI responses are checked for safety violations before being sent to users
Content filtering is applied based on configurable business rules and universal safety standards
Compliance with regulatory requirements (COPPA, FERPA, HIPAA, etc.) is maintained
The system uses natural language processing and pattern matching to identify and block content that violates safety policies, while allowing legitimate business-related conversations to proceed normally.
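The two-stage flow above (screen the input, then check the generated response) can be sketched as follows. This is an illustrative sketch only: the class and method names (GuardrailManager, check_input, check_output, guard) and the pattern list are assumptions, not GLChat's actual API, and the real system delegates the analysis to NeMo Guardrails rather than to simple regexes.

```python
import re

# Example policy patterns -- placeholders, not GLChat's real rules.
BLOCKED_PATTERNS = [
    re.compile(r"\b(ssn|social security number)\b", re.IGNORECASE),
]


class GuardrailManager:
    """Hypothetical sketch of the input/output checking flow."""

    def check_input(self, text: str) -> bool:
        """Screen a user message before it enters the pipeline."""
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

    def check_output(self, text: str) -> bool:
        """Check an AI response before it is sent to the user."""
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

    def guard(self, user_input: str, generate) -> str:
        """Run both checks around a response-generating callable."""
        if not self.check_input(user_input):
            return "Sorry, I can't help with that request."
        response = generate(user_input)
        if not self.check_output(response):
            return "Sorry, I can't share that response."
        return response
```

A blocked input short-circuits before the model is ever called, which is why input screening comes first in the pipeline.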
How Guardrails Work
Core Architecture
The guardrail system consists of several key components:
GuardrailManager: The main component that orchestrates content checking
NeMo Guardrails: The underlying framework that performs the actual content analysis
Colang Configuration: Business logic that defines what content is allowed or blocked
Pipeline Integration: Seamless integration with the Standard RAG pipeline
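As an illustration of the Colang Configuration component, a minimal Colang flow might block an off-topic request and return a refusal. The topic names and canned messages below are examples for illustration, not GLChat's actual business rules.

```
define user ask off topic
  "What do you think about politics?"
  "Can you write my homework for me?"

define bot refuse off topic
  "I'm sorry, I can only help with questions about our products and services."

define flow
  user ask off topic
  bot refuse off topic
```

NeMo Guardrails matches incoming messages against the example utterances under the user definition and, when a flow fires, responds with the corresponding bot message instead of passing the request to the model.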
Content Checking Process