Parallel
A concurrent workflow in which multiple agents process the same input simultaneously, each handling an independent subtask. Running the subtasks side by side reduces overall latency, and each agent's output is preserved separately, making the pattern ideal for comparative analysis, multi-model testing, or gathering multiple perspectives.
Overview
Use this pattern when subtasks do not depend on each other and you want faster responses by running them side by side. The outputs are displayed separately, preserving each agent's unique perspective without synthesis.
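The latency benefit can be sketched with plain asyncio; the subtask names here are hypothetical stand-ins for agent calls, not the gllm-pipeline API:

```python
import asyncio
import time

# Two independent subtasks (stand-ins for agent calls); each takes ~0.1 s.
async def subtask(name: str) -> str:
    await asyncio.sleep(0.1)
    return f"{name}: done"

async def run_parallel() -> tuple[list[str], float]:
    start = time.perf_counter()
    # gather() runs both coroutines concurrently, so total time is close
    # to the slowest subtask, not the sum of both.
    results = await asyncio.gather(subtask("flights"), subtask("hotels"))
    return list(results), time.perf_counter() - start

results, elapsed = asyncio.run(run_parallel())
print(results)   # outputs preserved separately, in branch order
```

Because neither subtask depends on the other, the total wall-clock time stays near the slowest branch rather than the sum of all branches.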
Demo Scenario: Trip Planning with Specialized Agents
Two travel specialists work in parallel on the same user request using gllm-pipeline:
Logistics agent – focuses on flights, hotels, and transportation
Activities agent – curates attractions, food, and experiences
The pipeline runs both specialists simultaneously and returns their outputs separately, allowing you to see each specialist's perspective distinctly.
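A minimal sketch of the scenario, assuming each specialist is a plain async function rather than a real gllm-pipeline agent:

```python
import asyncio

# Hypothetical specialist "agents": each answers the same request from
# its own angle (a real agent would call an LLM here).
async def logistics_agent(request: str) -> str:
    return f"[logistics] flights/hotels/transport for: {request}"

async def activities_agent(request: str) -> str:
    return f"[activities] attractions/food/experiences for: {request}"

async def plan_trip(request: str) -> dict[str, str]:
    # Both specialists receive the identical request and run concurrently;
    # their outputs are kept under separate keys, with no synthesis step.
    logistics, activities = await asyncio.gather(
        logistics_agent(request), activities_agent(request)
    )
    return {"logistics": logistics, "activities": activities}

plan = asyncio.run(plan_trip("3 days in Tokyo"))
```

Keeping the results under separate keys is what lets the caller display each specialist's perspective distinctly.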
Diagram
Implementation Steps
Create specialist agents
Build pipeline: parallel → merge
Run the pipeline
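The three steps above can be sketched without the library; `parallel_step` and `merge_step` are illustrative names for the fan-out and formatting stages, not the actual gllm-pipeline API:

```python
import asyncio
from typing import Any, Awaitable, Callable

State = dict[str, Any]
Agent = Callable[[str], Awaitable[str]]

# Step 1: create specialist agents (stubs standing in for real components).
async def logistics_agent(q: str) -> str:
    return f"flights and hotels for {q}"

async def activities_agent(q: str) -> str:
    return f"things to do in {q}"

# Step 2a: a parallel step fans the same query out to every branch,
# writing each branch's output to its own state key.
async def parallel_step(state: State, branches: dict[str, Agent]) -> State:
    outputs = await asyncio.gather(*(a(state["query"]) for a in branches.values()))
    return {**state, **dict(zip(branches.keys(), outputs))}

# Step 2b: a merge step formats the branch outputs without blending them.
def merge_step(state: State) -> State:
    report = "\n".join(f"{k}: {state[k]}" for k in ("logistics", "activities"))
    return {**state, "report": report}

# Step 3: run the pipeline end to end.
async def run(query: str) -> State:
    state: State = {"query": query}
    state = await parallel_step(
        state, {"logistics": logistics_agent, "activities": activities_agent}
    )
    return merge_step(state)

result = asyncio.run(run("Kyoto"))
```

The state-dict shape mirrors the idea that each branch writes to its own key, so the merge stage only formats and never overwrites one specialist with another.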
Full implementation: see `parallel/main.py` for the complete code, including the `State` definition and step configuration.
AgentComponent: see the Agent as Component guide for details on the `.to_component()` pattern.
How to Run
From the `gl-aip/examples/multi-agent-system-patterns` directory in the GL SDK Cookbook:
Ensure your `.env` contains:
Output
Notes
This example uses gllm-pipeline for orchestrating parallel execution of specialist agents.
The `parallel()` step automatically runs all branches concurrently for optimal performance.
Add more specialists by adding more branches to the `parallel()` step.
The `transform()` step provides a clean way to format and combine outputs while preserving each agent's perspective.
Unlike the Aggregator pattern, this pattern does not synthesize outputs: each agent's response remains distinct.
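The contrast with the Aggregator pattern can be illustrated with a small, hypothetical formatting function: outputs are labeled and concatenated, never blended into one synthesized answer:

```python
# Parallel-style formatting: label and concatenate each agent's response,
# keeping every perspective distinct rather than merging them.
def format_outputs(outputs: dict[str, str]) -> str:
    sections = [f"## {agent}\n{text}" for agent, text in outputs.items()]
    return "\n\n".join(sections)

formatted = format_outputs({
    "logistics": "Fly into HND; stay near Shinjuku.",
    "activities": "Visit Senso-ji; try an izakaya crawl.",
})
```

An aggregator would instead pass both texts to a synthesizer and return a single fused answer; here each section survives verbatim under its agent's label.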
To install gllm-pipeline:
`uv add gllm-pipeline-binary==0.4.13` (compatible with aip_agents and langgraph <0.3.x)
Related Documentation
Agents guide — Configure instructions and streaming renderers.
Automation & scripting — Capture transcripts or usage metrics in CI workflows.
Security & privacy — Apply tool-output and memory policies when sharing results downstream.