GLChat Events & Streaming

This guide is intended for teams implementing an external pipeline.

Overview

This section explains how GLChat's external pipeline feature works under the hood, and how to implement real-time events that enhance user experience.

What You'll Learn

  • How GLChat communicates with external pipelines

  • Why OpenAI event format is required

  • How to implement thinking events (showing AI reasoning)

  • How to implement activity events (showing tool actions)

  • Complete event streaming architecture


How External Pipeline Works

The Request Flow

Key Insight: Why OpenAI Format?

GLChat uses the OpenAI LM Invoker internally to process ALL external pipeline responses.

What this means:

  • GLChat doesn't know or care which LLM you use (OpenAI, Anthropic, Google, etc.)

  • GLChat expects events in OpenAI's native event format

  • GLChat validates, parses, and renders events using OpenAI's schema

  • If you send custom event formats, they won't display correctly


⚠️ Note: Regardless of which LLM you use, always convert your events to OpenAI format before sending them to GLChat.

Why This Matters

Wrong Approach

Result: GLChat's OpenAI LM Invoker doesn't recognize this → Events not displayed

Correct Approach

Result: GLChat recognizes and displays the event
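To make the contrast concrete, here is a minimal sketch of both approaches. The internal event's field names follow the mapping later in this guide; the `item_id` and index values are hypothetical placeholders:

```python
import json

# Wrong: a custom event schema. GLChat's OpenAI LM Invoker does not
# recognize this shape, so nothing is displayed.
wrong_event = {"data_type": "thinking", "content": "Analyzing the question..."}

# Correct: the same information expressed as an OpenAI Responses
# streaming event, which GLChat can validate, parse, and render.
correct_event = {
    "type": "response.reasoning_summary_text.delta",
    "item_id": "rs_001",   # hypothetical reasoning item id
    "output_index": 0,
    "summary_index": 0,
    "delta": "Analyzing the question...",
}

def to_sse(event: dict) -> str:
    """Serialize an event as a Server-Sent Events data line."""
    return f"data: {json.dumps(event)}\n\n"

print(to_sse(correct_event))
```

The `to_sse` helper is reused conceptually throughout the steps below: every event, whatever its type, is JSON-serialized into a single `data:` line terminated by a blank line.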


Understanding Event Types

GLChat supports three main event types that enhance user experience:

| No | Event Type | What It Is | When to Use |
| --- | --- | --- | --- |
| 1 | Response Events (Text Output) | The actual text response from the LLM, streamed in chunks. | Always — this is your primary output. |
| 2 | Thinking Events (AI Reasoning) | Shows the AI's internal reasoning process. Available with thinking-enabled models. | When you want to show multi-step reasoning, planning, or analysis. |
| 3 | Activity Events (Tool Actions) | Shows discrete actions the AI performs, like web search. | When your LLM uses tools or external services. |

Complete Event Examples

Let's walk through a complete conversation with all event types.

User Query: "Explain to me how LLM works. Find on internet to fetch more details."


Step 1: Response Initialization
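As a sketch, the stream typically opens with a `response.created` event so GLChat knows a new response has begun. The response id below is a hypothetical placeholder:

```python
import json

# A minimal `response.created` event, sent before any thinking,
# activity, or text events.
init_event = {
    "type": "response.created",
    "response": {"id": "resp_001", "status": "in_progress"},
}

sse_line = f"data: {json.dumps(init_event)}\n\n"
print(sse_line)
```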


Step 2: Thinking Starts

Your LLM (with thinking enabled) begins reasoning.

Internal LLM Event:

You Convert to OpenAI Format:

SSE Event You Send:

What happens: The thinking start event is sent to GLChat
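A minimal sketch of this step (the internal `data_type` field follows the mapping later in this guide; `item_id` and index values are hypothetical):

```python
import json

# Hypothetical internal event from a thinking-enabled LLM.
internal = {"data_type": "thinking_start"}

# Converted to OpenAI's `response.reasoning_summary_part.added` event,
# which initializes the reasoning display with an empty summary part.
openai_event = {
    "type": "response.reasoning_summary_part.added",
    "item_id": "rs_001",
    "output_index": 0,
    "summary_index": 0,
    "part": {"type": "summary_text", "text": ""},
}

sse = f"data: {json.dumps(openai_event)}\n\n"
print(sse)
```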


Step 3: Thinking Content (Reasoning)

Your LLM shows its reasoning process.

Internal LLM Event:

You Convert to OpenAI Format:

SSE Event You Send:

What happens: Thinking event is sent to GLChat
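A sketch of the conversion for a single reasoning chunk (internal field names and ids are illustrative):

```python
import json

# Hypothetical internal reasoning chunk.
internal = {
    "data_type": "thinking",
    "content": "The user wants an overview of how LLMs work...",
}

# Converted to `response.reasoning_summary_text.delta`, which streams
# the chunk into the thinking panel. Send one such event per chunk.
openai_event = {
    "type": "response.reasoning_summary_text.delta",
    "item_id": "rs_001",
    "output_index": 0,
    "summary_index": 0,
    "delta": internal["content"],
}
print(f"data: {json.dumps(openai_event)}\n\n")
```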


Step 4: Thinking End

Your LLM ends its reasoning process.

Internal LLM Event:

You Convert to OpenAI Format:

SSE Event You Send:

What happens: Thinking end event is sent to GLChat

What user sees: Full thinking process on UI
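A sketch of the closing event (ids are hypothetical; the `part.text` placeholder stands in for the full accumulated reasoning):

```python
import json

# Hypothetical internal end-of-reasoning signal.
internal = {"data_type": "thinking_end"}

# Converted to `response.reasoning_summary_part.done`, which tells
# GLChat the reasoning display is complete.
openai_event = {
    "type": "response.reasoning_summary_part.done",
    "item_id": "rs_001",
    "output_index": 0,
    "summary_index": 0,
    "part": {
        "type": "summary_text",
        "text": "The user wants an overview of how LLMs work...",
    },
}
print(f"data: {json.dumps(openai_event)}\n\n")
```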


Step 5: Activity Event (Web Search)

Your LLM performs web search.

Internal LLM Event:

You Convert to OpenAI Format:

SSE Event You Send:

What happens: The activity event is sent to GLChat

What user sees: 🔍 Searching: "AI developments 2025"
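A sketch of the activity conversion (internal field names and the tool-call id are illustrative):

```python
import json

# Hypothetical internal activity event for a web search.
internal = {"data_type": "activity", "type": "search", "query": "AI developments 2025"}

# Converted to `response.output_item.done` carrying a completed
# web_search_call item; the `action.type` distinguishes search,
# open_page, and find_in_page activities.
openai_event = {
    "type": "response.output_item.done",
    "output_index": 1,
    "item": {
        "id": "ws_001",
        "type": "web_search_call",
        "status": "completed",
        "action": {"type": internal["type"], "query": internal["query"]},
    },
}
print(f"data: {json.dumps(openai_event)}\n\n")
```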


Step 6: Response Text (Final Output)

Your LLM generates the actual response.

Internal LLM Event:

SSE Event You Send:

What user sees: The text appears in the chat message
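A sketch of a single text chunk (ids and index values are hypothetical). Unlike thinking and activity events, text chunks need no reshaping beyond wrapping them in `response.output_text.delta`:

```python
import json

# Hypothetical internal text chunk.
internal = {
    "type": "response",
    "content": "Large language models are neural networks trained on",
}

# Converted to `response.output_text.delta`; GLChat appends each
# delta to the visible chat message as it arrives.
openai_event = {
    "type": "response.output_text.delta",
    "item_id": "msg_001",
    "output_index": 2,
    "content_index": 0,
    "delta": internal["content"],
}
print(f"data: {json.dumps(openai_event)}\n\n")
```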


Step 7: Completion

What happens: Generation ends
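As a sketch, the stream closes with a `response.completed` event; whether GLChat also expects the common `data: [DONE]` SSE sentinel is an assumption here, so both are shown:

```python
import json

# `response.completed` signals the end of generation.
done_event = {
    "type": "response.completed",
    "response": {"id": "resp_001", "status": "completed"},
}
print(f"data: {json.dumps(done_event)}\n\n")

# Many SSE consumers also expect a final [DONE] sentinel (assumption).
print("data: [DONE]\n\n")
```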


Event Conversion Mapping

Here's the complete mapping from internal LLM events to OpenAI format:

Thinking Events

| Internal Event | OpenAI Event | Purpose |
| --- | --- | --- |
| `data_type: "thinking_start"` | `response.reasoning_summary_part.added` | Initialize reasoning event |
| `data_type: "thinking"` | `response.reasoning_summary_text.delta` | Stream reasoning content |
| `data_type: "thinking_end"` | `response.reasoning_summary_part.done` | Complete reasoning display |

Activity Events

| Internal Event | OpenAI Event | Purpose |
| --- | --- | --- |
| `data_type: "activity"` + `type: "search"` | `response.output_item.done` with `web_search_call` | Show web search |
| `data_type: "activity"` + `type: "open_page"` | `response.output_item.done` with `web_search_call` | Show page opened |
| `data_type: "activity"` + `type: "find_in_page"` | `response.output_item.done` with `web_search_call` | Show in-page search |

Response Events

| Internal Event | OpenAI Event | Purpose |
| --- | --- | --- |
| `type: "response"` | `response.output_text.delta` | Stream text output |

Example
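The mapping above can be tied together in one converter. This is a sketch under the assumptions used throughout this guide: internal field names follow the tables, and all ids and index values are hypothetical placeholders:

```python
import json
from typing import Iterator

# Thinking events map one-to-one onto OpenAI reasoning-summary events.
EVENT_MAP = {
    "thinking_start": "response.reasoning_summary_part.added",
    "thinking": "response.reasoning_summary_text.delta",
    "thinking_end": "response.reasoning_summary_part.done",
}

def convert(internal: dict) -> dict:
    """Convert one internal pipeline event to OpenAI streaming format."""
    data_type = internal.get("data_type")
    if data_type in EVENT_MAP:
        event = {"type": EVENT_MAP[data_type], "item_id": "rs_001",
                 "output_index": 0, "summary_index": 0}
        if data_type == "thinking":
            event["delta"] = internal.get("content", "")
        else:
            event["part"] = {"type": "summary_text",
                             "text": internal.get("content", "")}
        return event
    if data_type == "activity":
        # search / open_page / find_in_page all ride on web_search_call.
        return {"type": "response.output_item.done", "output_index": 1,
                "item": {"id": "ws_001", "type": "web_search_call",
                         "status": "completed",
                         "action": {k: v for k, v in internal.items()
                                    if k != "data_type"}}}
    # Anything else is treated as a plain text chunk.
    return {"type": "response.output_text.delta", "item_id": "msg_001",
            "output_index": 2, "content_index": 0,
            "delta": internal.get("content", "")}

def stream(events: list) -> Iterator[str]:
    """Yield SSE lines for a sequence of internal events."""
    for e in events:
        yield f"data: {json.dumps(convert(e))}\n\n"
    yield "data: [DONE]\n\n"  # closing sentinel (assumption)

for line in stream([{"data_type": "thinking", "content": "Planning..."},
                    {"type": "response", "content": "LLMs are..."}]):
    print(line, end="")
```

Usage: feed `stream(...)` your pipeline's event sequence and write each yielded line to the HTTP response body with `Content-Type: text/event-stream`.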
