# Config

These are configuration options that can be passed in the `config` field of a request.

## Config Reference

| Key | Type | Default | Applies To | Description |
| --- | --- | --- | --- | --- |
| `search_type` | string | `"normal"` | Both | Search strategy: `"normal"`, `"search"`, or `"sql_search"` |
| `augment_context` | bool | `true` | Both | Whether to use knowledge base context. Set to `false` to skip retrieval entirely |
| `use_model_knowledge` | bool | `true` | standard-rag only | Whether the LLM is allowed to use its own parametric knowledge |
| `normal_search_top_k` | int | `20` | Both | Number of chunks to retrieve for normal search (min: 1) |
| `smart_search_top_k` | int | `20` | Both (when `search_type=search`) | Number of chunks to retrieve for smart search (min: 1) |
| `enable_hybrid_search` | bool | `true` | Both | Combines dense (vector) and sparse (BM25) search. Set to `false` for vector-only |
| `enable_mmr` | bool | `false` | Both | Enable Maximal Marginal Relevance reranking to improve chunk diversity |
| `lambda_mult` | float | `0.5` | Both (when `enable_mmr=true`) | MMR diversity parameter: 0.0 = maximum diversity, 1.0 = maximum relevance |
| `reference_formatter_type` | string | `"lm"` | standard-rag only | How references are formatted: `"lm"` (LM-based), `"similarity"`, or `"none"` |
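To illustrate what `enable_mmr` and `lambda_mult` control, here is a minimal greedy MMR sketch. This is an illustrative implementation only, not the service's actual reranker; the vectors and the `mmr_rerank` helper are invented for the example:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mmr_rerank(query_vec, chunk_vecs, lambda_mult=0.5, top_k=3):
    """Greedy MMR: repeatedly pick the chunk that maximizes
    lambda_mult * relevance - (1 - lambda_mult) * redundancy,
    where redundancy is the max similarity to already-selected chunks.
    Returns indices into chunk_vecs in selection order."""
    selected = []
    candidates = list(range(len(chunk_vecs)))
    while candidates and len(selected) < top_k:
        def score(i):
            relevance = cosine(query_vec, chunk_vecs[i])
            redundancy = max(
                (cosine(chunk_vecs[i], chunk_vecs[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# With lambda_mult near 1.0, near-duplicate chunks rank together;
# near 0.0, the reranker prefers chunks unlike those already chosen.
chunks = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(mmr_rerank([1.0, 0.0], chunks, lambda_mult=1.0))  # relevance order
print(mmr_rerank([1.0, 0.0], chunks, lambda_mult=0.0))  # diversity order
```

This matches the table's convention: `lambda_mult = 1.0` keeps pure relevance ordering, while `0.0` maximizes diversity among the selected chunks.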

## Search Type

| Value | Description |
| --- | --- |
| `normal` | Standard search using hybrid (dense + sparse) or vector-only (when `enable_hybrid_search: false`) |
| `search` | Smart Search: retrieves from external systems via BOSA connectors |
| `sql_search` | Structured query against a SQL-backed knowledge base |
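For instance, selecting the Smart Search path pairs `search_type` with its matching top-k option. This is an illustrative `config` fragment built only from the keys in the reference above, not a complete request:

```json
{
  "config": {
    "search_type": "search",
    "smart_search_top_k": 5
  }
}
```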

## Example: Advanced Config

### Standard RAG

```shell
curl -X POST "https://glchat.glair.ai/components/{chatbot_id}:standard-rag/run" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: {your_api_key}" \
  -d '{
    "inputs": {
      "query": "What is our refund policy?"
    },
    "config": {
      "search_type": "normal",
      "normal_search_top_k": 10,
      "enable_hybrid_search": false,
      "rerank_type": "none",
      "reference_formatter_type": "similarity"
    },
    "model_name": "Gemini 3.1 Flash Live Preview"
  }'
```
