Simple Guardrail
This guide will walk you through adding a Guardrail component to your existing RAG pipeline using the gllm-guardrail library. You will learn how to validate inputs and block harmful content before it reaches your expensive AI models.
Guardrail functionality provides input validation and safety checks, preventing errors and protecting your system from malicious or malformed inputs.
Installation
# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-rag gllm-core gllm-generation gllm-inference gllm-pipeline gllm-retrieval gllm-misc gllm-datastore gllm-guardrail# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-rag gllm-core gllm-generation gllm-inference gllm-pipeline gllm-retrieval gllm-misc gllm-datastore gllm-guardrail# you can use a Conda environment
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-rag gllm-core gllm-generation gllm-inference gllm-pipeline gllm-retrieval gllm-misc gllm-datastore gllm-guardrailYou can either:
You can refer to the guide whenever you need explanation or want to clarify how each part works.
Follow along with each step to recreate the files yourself while learning about the components and how to integrate them.
Both options will work—choose based on whether you prefer speed or learning by doing!
Project Setup
Extend Your RAG Pipeline Project
Start with your completed RAG pipeline project from the Your First RAG Pipeline tutorial.
<project-name>/
├── data/
│ ├── <index>/...
├── modules/
│ ├── retriever.py
│ └── response_synthesizer.py
├── .env
├── pipeline.py # 👈 Will be updated with guardrail functionality1) Build the Guardrail Pipeline
Initialize the Guardrail Manager
In pipeline.py, initialize a GuardrailManager with a PhraseMatcherEngine to block specific banned keywords.
Create the guardrail step
Integrate the guardrail into your pipeline using the guard step. This step will run the guardrail check and only proceed to the next step if the content is safe.
How it works: The guard step calls guardrail_manager.check_content(). If is_safe is True, it continues to retrieve_step. If False, it stops execution.
Compose the final pipeline
Chain the guardrail step at the beginning of your pipeline.
2) Run the Pipeline
Test with Safe and Unsafe Inputs
Congratulations! You've successfully secured your RAG pipeline. Your application now automatically blocks harmful requests before they reach your language models.
Last updated