RAGO Pipelines
To access this menu, go to the sidebar and select Pipeline & Preset > RAGO Pipelines.

You can manage all RAG (Retrieval-Augmented Generation) pipelines from this tab. This includes selecting a pipeline type, uploading a new pipeline, adding or editing presets, and configuring settings to control how your project retrieves and processes information.
Select Pipeline
Select the pipeline you want to use from the available options on the RAG Pipelines page. Each pipeline type offers different capabilities and is suited for specific use cases.
Pipeline Descriptions:
No-Op A basic project without a connected knowledge base. Ideal for simple conversations, testing, or as a default fallback pipeline.
Standard A project connected to a knowledge base that retrieves relevant information and generates responses based on the retrieved content. This Standard RAG pipeline focuses on retrieving semantically similar document chunks to support contextual answers.
Graph RAG An extended version of the Standard RAG pipeline that enhances traditional vector-based retrieval with knowledge graph integration. Graph RAG combines both vector-based chunks and structured graph entities or relationships extracted from documents to produce richer, more context-aware responses.
Available options may vary and could differ from the example shown below.
Upload Pipeline
You can upload a pipeline by clicking the "Upload Pipeline" button on the top right.

Preset List, Prompt Builder, and LMRP

After selecting a pipeline, click New Preset to create a preset configuration. Fill in the required fields below, then click Save to store your changes or Cancel to discard them.

General Information
Preset ID Enter a unique identifier used to register and reference this preset within the system.
Model Configuration
Supported Model IDs Select which LLMs (Large Language Models) this preset supports (e.g., GPT-4, Claude 3).
Use Cache Choose whether to enable caching for faster repeated retrievals.
Yes – Cache enabled.
No – Cache disabled.
Privacy & Data Protection
Anonymize Em Control whether PII masking is applied during embedding/retrieval.
Yes – PII will be masked.
No – No masking applied.
Anonymize Lm Control whether PII masking is applied during generation (when prompts are sent to the LLM).
Yes – PII will be masked.
No – No masking applied.
Support PII Anonymization Enable to hide or mask sensitive user data (PII) during the retrieval process.
Yes – PII will be masked.
No – No masking applied.
Retrieval & Knowledge Configuration
Augment Context Enable to allow the project to pull information from a knowledge base before generating responses.
Yes – Use knowledge base.
No – Skip retrieval step.
Chat History Limit Define how many previous messages the project retains in conversation memory.
Prompt Context Char Threshold Set the maximum amount of prior chat context (in characters) included when sending prompts to the model.
Reference Formatter Threshold Set the minimum similarity score (range 0.0–1.0) required for a source to be cited in responses. Higher value = only highly relevant sources; lower value = broader inclusion.
Reference Formatter Batch Size Define how many candidate references are evaluated in each batch.
Reference Formatter Type Choose the formatting style for displaying source references in generated answers.
Use Model Knowledge Allow the model to use its built-in knowledge when no relevant information is found in the knowledge base.
Yes – Fallback allowed.
No – Must only use knowledge base.
Search & Retrieval Behavior
Enable Smart Search Integration Integrate with the Smart Search Engine for enhanced semantic retrieval.
Yes – Smart Search enabled.
No – Smart Search disabled.
Normal Search Top K Number of top results to retrieve using Standard RAG search.
Rerank Kwargs Enter additional parameters (key-value pairs) to fine-tune reranking behavior.
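As an illustration, Rerank Kwargs might hold a small JSON object of parameter overrides; the key names below (top_n, score_threshold) are hypothetical and depend on the reranker your deployment uses:

```json
{
  "top_n": 5,
  "score_threshold": 0.5
}
```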
Rerank Type Choose the reranking method used to prioritize search results.
Search Types Choose which search methods your project can use. You can select one or both.
Normal – Standard RAG.
Web – Live web search.
SQL Search – Enables chat filtering based on the schema defined in the DPO Pipeline.
Essential Deep Research – A lighter version of deep research that is faster and more concise.
Comprehensive Deep Research – In-depth research with thorough analysis and explanations.
Smart Search Top K Number of top results to retrieve using Smart Search.
Vector Weight Set the weighting applied to retrieved vectors during ranking or scoring.
Web Search Top K Number of top results to retrieve using Web Search.
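Taken together, the search settings above might appear in a preset export roughly like the sketch below; the field names are illustrative assumptions based on the labels in this section, not the exact keys used by the system:

```json
{
  "search_types": ["normal", "web"],
  "normal_search_top_k": 10,
  "smart_search_top_k": 5,
  "web_search_top_k": 3,
  "vector_weight": 0.7
}
```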
Web Search Control
Web Search Blacklist List of blocked domains that the project must avoid.
Web Search Whitelist List of approved domains the project can access during web searches.
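For example, the two domain lists might be configured as follows; the domains and key names here are placeholders, not values shipped with the product:

```json
{
  "web_search_whitelist": ["docs.example.com", "en.wikipedia.org"],
  "web_search_blacklist": ["ads.example.net"]
}
```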
Document Processing
Support Multimodal Enable if the preset should support both text and image input.
Yes – Accepts text + image.
No – Text only.
Use DocProc Enable to process uploaded documents using the Document Processor (DocProc). This extracts and structures document content and supports file attachments for knowledge base ingestion.
Yes – Document processing active and attachments supported.
No – Document processing disabled.
Swirl Configuration
Autosuggest Providers Provide real-time search suggestions as users type. Example:
Enabled:
{ "[Unique Provider ID]": internal/web }
Disabled:
{}
Discovery Providers Specify providers responsible for generating suggested questions or follow-up queries.
Shingle Providers Generate short overlapping word sequences (shingles) to improve query understanding. Example:
Enabled:
{ "[Unique Provider ID]": internal/web }
Disabled:
{}
Swirl Providers Primary data sources queried when a user performs a search via the Swirl interface. Format:
{ [Unique Provider ID]: internal/web }
Internal – Local data sources (e.g., Elasticsearch, Chroma).
Web – External sources (e.g., Firecrawl, Google).
Web Swirl Providers Define external web-based providers specifically for web search queries.
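Putting the provider fields together, a Swirl section might look like the sketch below; the provider IDs are placeholders and the key names are assumptions derived from the field labels above:

```json
{
  "swirl_providers": { "provider-1": "internal", "provider-2": "web" },
  "autosuggest_providers": { "provider-1": "internal" },
  "shingle_providers": {},
  "web_swirl_providers": { "provider-2": "web" }
}
```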
Graph RAG Configuration
Llama Index Graph RAG Embedding Model Specify the embedding model used for Graph RAG operations.
Llama Index Graph RAG LLM Model Specify the language model used for reasoning in Graph RAG.
Graph RAG Implementation Define or select the Graph RAG implementation method used in your environment.
Memory Settings
Enable Memory Allow the pipeline to recall past interactions for context.
Retrieve Memory Threshold Minimum similarity score for retrieving past memory.
Retrieve Memory Top K Number of past memory entries to retrieve.
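As a rough sketch, the memory settings above could be expressed as the fragment below; the snake_case keys are assumptions for illustration, not guaranteed field names:

```json
{
  "enable_memory": true,
  "retrieve_memory_threshold": 0.75,
  "retrieve_memory_top_k": 3
}
```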
Safety & Guardrails
Allowed Topics List of topics the model is allowed to discuss.
Banned Phrase Words or phrases the model must avoid.
Core Restriction Categories
Enable Guardrails Activate safety filters to prevent unsafe or restricted outputs.
Guardrail Fallback Message Message shown if a response is blocked by guardrails.
Guardrail Mode Choose where guardrails are applied: Input, Output, Both, or Disabled.
Topic Safety Mode Enable topic-based safety checks during conversations.
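To show how these guardrail fields relate, here is a hypothetical configuration fragment; the key names, topics, and fallback text are illustrative only:

```json
{
  "enable_guardrails": true,
  "guardrail_mode": "Both",
  "guardrail_fallback_message": "Sorry, I can't help with that request.",
  "allowed_topics": ["billing", "product support"],
  "banned_phrase": []
}
```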
Others
Enable Live Chat
The available fields in the Add Preset form may vary depending on the selected pipeline type.
View and Edit Preset
To view or edit a preset, click the Edit icon on the Preset List page.

You can review and modify the preset details as needed. Once you’re done, click Save to apply the changes or Cancel to discard them.


The Prompt Builder Catalogue allows you to create and manage reusable prompt templates that define how the project and users interact with the model.
Each prompt template includes several default prompts, which may vary depending on the pipeline selected. The available fields and configurations will adjust automatically based on the chosen pipeline type.
Add New Prompt Builder Catalogue
To create a new prompt, slide the control to the right to open the Prompt Builder page.

Fill in the required fields as described below, then click Save to store the prompt or Cancel to discard the changes.

Prompt Name Enter a unique and descriptive name for the prompt so it can be easily identified and reused within the catalogue.
Prompt Grouping Type Choose how the prompt is applied across models:
No Grouping – The prompt is available to all models.
Scope – The prompt applies only to a specific model provider, model name, or both.
Template Group – The prompt applies only to a selected group of templates.
Scope (Visible only if “Scope” is selected as the grouping type)
Enter the model name or provider name to limit where the prompt will apply.
Template Group (Visible only if “Template Group” is selected as the grouping type)
Select a template group from the predefined list of available template groups.
System Instruction Provide instructions that guide the overall behavior or tone of the project. These are fixed rules or directives that shape how the system responds.
User Instruction Define the instructions or inputs that users can provide during conversations to guide responses for specific interactions.
View and Edit Prompt Builder Details
To view or edit a prompt builder entry, click the Edit icon on the Prompt Builder page.

You can modify the details as needed, then click Save to apply changes or Cancel to discard them.

Delete Prompt Builder
To delete a prompt, click the Trash icon in the Action column. A confirmation pop-up will appear; click Delete to confirm or Cancel to cancel the action.

You cannot delete a prompt that is still assigned to a project. Remove all prompt assignments before attempting deletion.

The LMRP (Language Model Request Processor) Catalogue allows you to create and manage configurations that define how prompts are processed and how responses are structured.
Edit LMRP Catalogue
You can only edit the LMRP Catalogue entries that are predefined and provided by the system. Fill in the required fields as described below, then click Save to apply the configuration or Cancel to discard the changes.

Name Enter a unique and descriptive name for the LMRP entry to identify it within the catalogue.
Scope Select where this prompt can be applied. You can restrict it to specific providers, specific models, or allow it to apply to all.
Prompt Builder System Template Provide system-level instructions that guide how the project behaves or responds overall. These are typically fixed rules or guidelines that shape consistent system behavior.
Prompt Builder User Template Provide user-level instructions or input templates that guide how the project responds during conversations or specific interactions.
LMRP Model Select the language model this LMRP entry will use for processing. Choose from the list of available LMRP models in the dropdown.
Output Parser Type Select how the chatbot should format its response:
JSON – Use this option if the output must follow a structured format.
None – Use this option if no specific output format is required.
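For example, with the JSON option selected, the model would be expected to return a structured object along the lines of the sketch below; the field names are purely illustrative and depend on the template you configure:

```json
{
  "answer": "The warranty period is 12 months.",
  "references": ["doc-123"]
}
```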
You can see the recommended LMRP config here.