person-military-pointingSimple Guardrail

This guide will walk you through adding a Guardrail component to your existing RAG pipeline using the gllm-guardrail library. You will learn how to validate inputs and block harmful content before it reaches your expensive AI models.

Guardrail functionality provides input validation and safety checks, preventing errors and protecting your system from malicious or malformed inputs.

circle-info

This tutorial extends the Your First RAG Pipeline tutorial. Ensure you have followed the instructions to set up your repository.

chevron-rightPrerequisiteshashtag

This example specifically requires:

  1. Completion of the Your First RAG Pipelinearrow-up-right tutorial - this builds directly on top of it

  2. Completion of all setup steps listed on the Prerequisitesarrow-up-right page

  3. A working OpenAI API key configured in your environment variables

You should be familiar with these concepts and components:

  1. Components in Your First RAG Pipeline - Required foundation

  2. Guardrail Tutorial - Recommended reading


githubView full project code on GitHub

Installation

# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-rag gllm-core gllm-generation gllm-inference gllm-pipeline gllm-retrieval gllm-misc gllm-datastore gllm-guardrail

You can either:

  1. You can refer to the guide whenever you need explanation or want to clarify how each part works.

  2. Follow along with each step to recreate the files yourself while learning about the components and how to integrate them.

Both options will work—choose based on whether you prefer speed or learning by doing!

Project Setup

1

Extend Your RAG Pipeline Project

Start with your completed RAG pipeline project from the Your First RAG Pipeline tutorial.

<project-name>/
├── data/
│   ├── <index>/...
├── modules/
│   ├── retriever.py
│   └── response_synthesizer.py
├── .env
├── pipeline.py    # 👈 Will be updated with guardrail functionality

1) Build the Guardrail Pipeline

1

Initialize the Guardrail Manager

In pipeline.py, initialize a GuardrailManager with a PhraseMatcherEngine to block specific banned keywords.

2

Create the guardrail step

Integrate the guardrail into your pipeline using the guard step. This step will run the guardrail check and only proceed to the next step if the content is safe.

How it works: The guard step calls guardrail_manager.check_content(). If is_safe is True, it continues to retrieve_step. If False, it stops execution.

3

Compose the final pipeline

Chain the guardrail step at the beginning of your pipeline.

2) Run the Pipeline

1

Test with Safe and Unsafe Inputs


Congratulations! You've successfully secured your RAG pipeline. Your application now automatically blocks harmful requests before they reach your language models.

Last updated