Aggregator Pattern

Agents contribute specialized outputs that are collected and synthesized by an aggregator agent into a single, well-formatted result.

Overview

Reach for this pattern when multiple agents (or tools) produce complementary information and you want a unified summary. Executive briefings, dashboards, and cross-team status reports are common fits.

Demo Scenario: Daily Briefing Synthesizer

This runnable example assembles a morning briefing by combining three agents, orchestrated with gllm-pipeline:

  • Time & calendar agent – pulls the current time and today's events

  • Weather agent – reports the local forecast

  • Synthesizer agent – stitches everything together into a friendly briefing

The two specialists run in parallel for faster execution, and their outputs are merged and passed to the synthesizer. The specialists' tools are mocks that return static values, so the demo works out of the box; swap them for real integrations to connect to live data.
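
For illustration, the sketch below shows the kind of static data the mock tools might return. The function names and values here are hypothetical; the real MockTimeTool, MockCalendarTool, and MockWeatherTool classes live under aggregator/tools/ and implement the glaip_sdk tool interface.

    from datetime import datetime

    # Hypothetical stand-ins for the mock tools' return values; the actual
    # classes live in aggregator/tools/mock_time_tool.py.
    def mock_current_time() -> str:
        return datetime(2025, 1, 15, 7, 30).strftime("%A, %B %d, %Y %I:%M %p")

    def mock_calendar_events() -> list[str]:
        return ["09:00 Team stand-up", "13:00 Design review", "16:00 1:1"]

    def mock_weather_forecast() -> str:
        return "Partly cloudy, high of 18 °C, 20% chance of rain"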

Diagram

time_calendar_agent + weather_agent (run in parallel) → merge partial outputs → synth_agent → final briefing

Implementation Steps

  1. Create specialist agents with tools

    from glaip_sdk import Agent
    from tools.mock_time_tool import MockTimeTool, MockCalendarTool, MockWeatherTool
    
    time_calendar_agent = Agent(
        name="time_calendar_agent",
        tools=[MockTimeTool, MockCalendarTool],
        model="openai/gpt-5-mini"
    )
    
    weather_agent = Agent(
        name="weather_agent",
        tools=[MockWeatherTool],
        model="openai/gpt-5-mini"
    )
    
    synth_agent = Agent(
        name="synth_agent",
        instruction="Synthesize a brief morning briefing...",
        model="openai/gpt-5-mini"
    )
  2. Build pipeline: parallel specialists → merge → synthesize

    from gllm_pipeline.steps import parallel, step, transform

    # time_calendar_step, weather_step, join_partials, and State are defined
    # in aggregator/main.py; a sketch of them follows after step 3.
    pipeline = (
        parallel(branches=[time_calendar_step, weather_step])
        | transform(
            join_partials,
            ["time_text", "weather_text"],
            "partials_text"
        )
        | step(
            component=synth_agent.to_component(),
            input_state_map={"query": "partials_text"},
            output_state="final_answer"
        )
    )
    pipeline.state_type = State
  3. Run the pipeline

    # `state` is the initial pipeline State (see aggregator/main.py and the sketch below)
    result = await pipeline.invoke(state)
    print(result['final_answer'])
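
The pipeline above references a few helpers that are defined in aggregator/main.py. The following is a minimal sketch of what they could look like, assuming State is a TypedDict, the specialist steps follow the same step(...) pattern as the synthesizer step, and transform passes the listed state fields to join_partials as positional arguments:

    from typing import TypedDict

    from gllm_pipeline.steps import step

    # Sketch only: consult aggregator/main.py for the actual definitions.
    class State(TypedDict, total=False):
        query: str
        time_text: str
        weather_text: str
        partials_text: str
        final_answer: str

    # Each specialist is wrapped as a component and writes to its own state field.
    time_calendar_step = step(
        component=time_calendar_agent.to_component(),
        input_state_map={"query": "query"},
        output_state="time_text",
    )
    weather_step = step(
        component=weather_agent.to_component(),
        input_state_map={"query": "query"},
        output_state="weather_text",
    )

    def join_partials(time_text: str, weather_text: str) -> str:
        """Merge the specialist outputs into one block of text for the synthesizer."""
        return f"Time & calendar:\n{time_text}\n\nWeather:\n{weather_text}"

    # Initial state for pipeline.invoke(...)
    state = State(query="Give me my morning briefing.")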

Full implementation: See aggregator/main.py for complete code with State definition and helper functions.

AgentComponent: See the Agent as Component guide for details on the .to_component() pattern.

How to Run

From the glaip/examples/multi-agent-system-patterns directory in the GL SDK Cookbook:

Ensure your .env contains:

Output

Notes

  • This example uses gllm-pipeline for orchestrating the multi-agent workflow with parallel execution.

  • Replace the mock tool scripts under aggregator/tools/ with real integrations to connect to live systems.

  • Add more specialists (finance, news, incidents) by adding more branches to the parallel() step; a sketch follows after these notes.

  • Combine this pattern with a router or scheduler for automated briefings.

  • To install gllm-pipeline: uv add gllm-pipeline-binary==0.4.13 (compatible with aip_agents and langgraph <0.3.x)
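
As a sketch of the "more specialists" note above: adding a hypothetical news specialist is one extra branch in parallel() and one extra field in the merge. Here news_step and news_text are placeholders you would define the same way as the existing specialists, and join_partials would need to accept the extra argument:

    pipeline = (
        parallel(branches=[time_calendar_step, weather_step, news_step])
        | transform(
            join_partials,
            ["time_text", "weather_text", "news_text"],  # join_partials now takes three inputs
            "partials_text"
        )
        | step(
            component=synth_agent.to_component(),
            input_state_map={"query": "partials_text"},
            output_state="final_answer"
        )
    )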
