Retrieval Pipeline Tool

Runs only the retrieval stage — queries the knowledge base and returns matched chunks. No LLM generation is performed.

Endpoint

POST /components/{chatbot_id}:retrieval-pipeline/run

Example Request

curl -X POST "https://glchat.glair.ai/components/{chatbot_id}:retrieval-pipeline/run" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: {your_user_api_key}" \
  -d '{
    "inputs": {
      "standalone_query": "What is our refund policy?"
    },
    "config": {}
  }'
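The same request can be sketched in Python using only the standard library. The base URL, path, and headers are taken from the curl example above; the function names and the payload helper are illustrative, not part of the official API.

```python
import json
import urllib.request

BASE_URL = "https://glchat.glair.ai"  # from the endpoint docs above

def build_payload(standalone_query, config=None):
    """Assemble the request body shown in the curl example."""
    return {
        "inputs": {"standalone_query": standalone_query},
        "config": config or {},
    }

def run_retrieval_pipeline(chatbot_id, api_key, query):
    """POST to the retrieval-pipeline endpoint and return the parsed JSON."""
    url = f"{BASE_URL}/components/{chatbot_id}:retrieval-pipeline/run"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(query)).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```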

Response

{
  "chunks": [
    {
      "id": "54652883-dc0e-4e08-928e",
      "content": "Returns are accepted within 30 days of purchase...",
      "metadata": { "source": "policy-doc.pdf", ...},
      "score": 0.71
    }
  ]
}
| Field | Type | Description |
| --- | --- | --- |
| chunks | list | Retrieved chunks, sorted by relevance score (descending) |
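A minimal sketch of consuming the response: the sample payload mirrors the example above, and since chunks arrive sorted by score (descending), the best match is always first. The 0.5 relevance cutoff is an illustrative assumption, not an API default.

```python
# Sample response, copied from the docs above.
response = {
    "chunks": [
        {
            "id": "54652883-dc0e-4e08-928e",
            "content": "Returns are accepted within 30 days of purchase...",
            "metadata": {"source": "policy-doc.pdf"},
            "score": 0.71,
        }
    ]
}

# Chunks are pre-sorted by score (descending), so the top match is first.
best = response["chunks"][0]
print(best["score"], best["metadata"]["source"])  # 0.71 policy-doc.pdf

# Keep only chunks above a relevance cutoff (the 0.5 value is an assumption).
relevant = [c for c in response["chunks"] if c["score"] >= 0.5]
```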

Inputs Schema

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| standalone_query | string | Yes | | Query used for knowledge base retrieval |
| retrieval_params | dict | No | {} | Low-level parameters forwarded to the vector store query |
| steps | list[dict] | No | [] | Custom processing steps to override the default retrieval flow |
| attachment_chunks | list[Chunk] | No | [] | Pre-loaded chunks from file attachments |
| search_chunk_results | list[Chunk] | No | [] | Pre-fetched search chunks to inject alongside retrieval |
| memory_results | list[Chunk] | No | [] | Memory chunks from the user's memory store |
| context | string | No | "" | Additional context string |
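The schema above can be sketched as a payload builder that fills in the documented defaults for the optional fields and rejects a missing query. The helper name and the validation behavior are assumptions for illustration; the API itself only requires the JSON shape.

```python
# Hypothetical helper that applies the documented defaults for the
# optional input fields; only `standalone_query` is required.
def build_inputs(standalone_query, **optional):
    if not standalone_query:
        raise ValueError("standalone_query is required")
    inputs = {
        "standalone_query": standalone_query,
        "retrieval_params": {},
        "steps": [],
        "attachment_chunks": [],
        "search_chunk_results": [],
        "memory_results": [],
        "context": "",
    }
    # Reject fields not listed in the schema (defensive check, an assumption).
    unknown = set(optional) - (set(inputs) - {"standalone_query"})
    if unknown:
        raise ValueError(f"unknown input fields: {sorted(unknown)}")
    inputs.update(optional)
    return inputs
```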
