Pipeline
A pipeline is a structured, deterministic execution path that retrieves relevant context and generates a grounded response. GLChat ships with six built-in pipelines: four standard presets (Standard RAG, No-Op, External Pipeline, and Datasaur External) and two specialized pipelines (Gemini Live and Deep Research).
Available Pipelines
Standard Pipelines
These are the preset pipelines available to every project configuration. They cover the most common deployment scenarios.
| Pipeline | Description | Typical use |
| --- | --- | --- |
| Standard RAG | Retrieves from a knowledge base, web, or database, then generates a grounded answer | Knowledge base Q&A, document search, web-augmented answers |
| No-Op | Skips retrieval entirely; the LLM responds directly, with no preprocessing step | General conversation when no knowledge base is configured |
| External Pipeline | Calls a configurable external webhook endpoint using an OpenAI-compatible interface | Connecting GLChat to any external LLM or service via HTTP |
| Datasaur External | Calls the Datasaur LLM API with built-in citation support and Datasaur-specific reference formatting | Chatbots powered by Datasaur's hosted LLM platform |
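Because the External Pipeline speaks an OpenAI-compatible interface, the request body sent to the webhook follows the familiar chat-completions shape. The sketch below builds such a payload; the endpoint URL, model name, and exact field set are illustrative assumptions, not GLChat's documented schema.

```python
import json

# Hypothetical webhook endpoint (assumption, not a real GLChat URL)
WEBHOOK_URL = "https://example.com/v1/chat/completions"

# OpenAI-compatible chat-completions payload; field names follow the
# OpenAI convention, but the precise fields GLChat forwards may differ.
payload = {
    "model": "my-external-model",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "Answer using the retrieved context."},
        {"role": "user", "content": "What is our refund policy?"},
    ],
    "stream": True,  # external services typically stream tokens back
}

# Serialize the body exactly as it would be POSTed to the webhook
body = json.dumps(payload)
```

Any HTTP service that accepts this request shape and returns an OpenAI-style response can act as a GLChat pipeline backend.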
Specialized Pipelines
These pipelines are not available as preset options. They are activated only when a specific feature flag or search type is present in the request, and each requires a dedicated backend handler configured via environment variables.
| Pipeline | Activation | Typical use |
| --- | --- | --- |
| Gemini Live | `enable_live: true` in the request and `BACKEND_GEMINI_LIVE_PIPELINE_HANDLER` configured | Interactive voice and live conversation scenarios |
| Deep Research | `search_type` set to `ESSENTIALS_DEEP_RESEARCH`, `COMPREHENSIVE_DEEP_RESEARCH`, or `INTERNAL_DEEP_RESEARCH`, and `BACKEND_ESSENTIALS_DEEP_RESEARCH_PIPELINE_HANDLER` configured | In-depth research queries requiring breadth and synthesis across multiple sources |
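The activation rules above amount to a simple dispatch: check the request flags, confirm the matching handler environment variable is set, and fall back to the standard path otherwise. This is a minimal sketch of that logic; the function name, return labels, and request field access are illustrative assumptions, not GLChat's actual internals.

```python
# Search types that route to the Deep Research pipeline (from the table above)
DEEP_RESEARCH_TYPES = {
    "ESSENTIALS_DEEP_RESEARCH",
    "COMPREHENSIVE_DEEP_RESEARCH",
    "INTERNAL_DEEP_RESEARCH",
}


def select_pipeline(request: dict, env: dict) -> str:
    """Pick a pipeline from request flags and configured handlers (sketch)."""
    # Gemini Live: feature flag plus its dedicated backend handler
    if request.get("enable_live") and env.get("BACKEND_GEMINI_LIVE_PIPELINE_HANDLER"):
        return "gemini_live"
    # Deep Research: matching search type plus its dedicated backend handler
    if (request.get("search_type") in DEEP_RESEARCH_TYPES
            and env.get("BACKEND_ESSENTIALS_DEEP_RESEARCH_PIPELINE_HANDLER")):
        return "deep_research"
    # Otherwise fall back to the standard preset pipelines
    return "standard"
```

Note that a flag alone is not enough: a request with `enable_live: true` but no configured handler still falls through to the standard path.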
Want to use or build a custom pipeline?
GLChat supports connecting your own external pipeline or building a fully custom one. For a step-by-step guide, see the Custom Pipeline Development Guide.