Deep Research

Overview

This guide shows how to perform deep research using the GL SDK, starting from simple, direct usage and gradually moving toward more advanced orchestration patterns.

At its core, the GL SDK provides a DeepResearcher Component that can be used on its own to execute deep research against different providers.

Optionally, this Component can be placed inside a Pipeline when you need additional logic such as context preparation or routing decisions. The Pipeline orchestrates when and under what conditions deep research is invoked, but does not define how deep research itself works internally.

You can:

  • use the DeepResearcher directly for straightforward research tasks, or

  • compose it inside a Pipeline to build richer flows around deep research

This page demonstrates both approaches.

1. Deep Research Hello World

This section shows the simplest way to perform deep research using the GL SDK, by invoking deep research directly without any Pipeline orchestration.

It demonstrates using DeepResearcher as a GL SDK Component, focusing on the core deep research capability with minimal setup and no additional control logic.

Each example uses the same research() interface while swapping out the underlying deep research provider. This allows different providers to be used interchangeably, without changing the calling logic that invokes the Component.

Installation

To run the examples below, you only need the GL SDK packages installed and valid credentials for the deep research provider you want to use.

The following commands install the required SDK from the internal package registry. Choose the command that matches your environment.
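The install commands were not captured on this page. A plausible shape for a Python environment is shown below; the package name `gl-sdk` and the index URL are placeholders, so substitute the actual package name and your organization's internal registry.

```shell
# Placeholder package name and index URL — replace with your
# organization's actual values.
pip install gl-sdk --index-url "https://pypi.internal.example.com/simple"
```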

Implementation

Below are minimal examples that perform deep research using the GL SDK.

Each example follows the same flow:

  1. define a research query

  2. invoke deep research using the same research() call

  3. receive streamed progress and final results via an event emitter

The only difference between examples is the underlying deep research provider being used. The calling code and usage pattern remain the same.

At runtime, the underlying provider executes the research and streams progress and final results back to your application.
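The calling pattern can be sketched in runnable form. The stub classes below are not part of the GL SDK; they only stand in for DeepResearcher and a provider backend so the `research()` call and the event stream can be illustrated:

```python
# Runnable sketch of the calling pattern only. DeepResearcher and the
# provider backends live in the GL SDK; the stubs below stand in for them.
from dataclasses import dataclass
from typing import Callable, Iterator


@dataclass
class ResearchEvent:
    kind: str        # "progress" or "result"
    payload: str


class StubProvider:
    """Stand-in for a provider backend (e.g. OpenAIDeepResearcher)."""

    def __init__(self, name: str) -> None:
        self.name = name

    def run(self, query: str) -> Iterator[ResearchEvent]:
        yield ResearchEvent("progress", f"[{self.name}] searching: {query}")
        yield ResearchEvent("result", f"[{self.name}] findings for: {query}")


class ResearcherSketch:
    """Mimics the DeepResearcher shape: one research() call, with the
    provider swapped at construction time."""

    def __init__(self, provider: StubProvider) -> None:
        self.provider = provider

    def research(self, query: str,
                 on_event: Callable[[ResearchEvent], None]) -> str:
        final = ""
        for event in self.provider.run(query):
            on_event(event)              # streamed progress and results
            if event.kind == "result":
                final = event.payload    # keep the final result
        return final


# Swapping providers leaves the calling code unchanged:
researcher = ResearcherSketch(StubProvider("provider-a"))
answer = researcher.research("history of offshore wind", on_event=print)
```

Because the provider is injected at construction time, switching backends touches only one line while the `research()` call site stays the same.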

πŸ“ More examples on GitHubarrow-up-right

2. Deep Research with Custom Prompt

This section shows how to influence how research results are presented, without changing how the research itself is executed.

Custom prompts allow you to:

  • adjust tone and writing style (formal, casual, technical, etc.)

  • provide domain-specific instructions

  • control formatting of the final output (news article, academic paper, etc.)

Implementation

The prompt affects how the final research output is written, but the research execution and reasoning strategy remain provider-defined.
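A minimal sketch of how a custom prompt might be bundled with a query. The `system_prompt` field name and the request shape are assumptions for illustration, not the GL SDK's actual `research()` signature:

```python
# Illustrative only: the "system_prompt" field name and request shape are
# assumptions, not the GL SDK's actual research() signature.
CUSTOM_PROMPT = (
    "You are a technical analyst. Write the final report in a formal tone, "
    "as an executive summary followed by bullet-point findings."
)


def build_research_request(query: str, prompt: str) -> dict:
    # Bundle the query with presentation instructions; the research
    # strategy itself stays provider-defined.
    return {"query": query, "system_prompt": prompt}


request = build_research_request("Q3 battery-storage market trends",
                                 CUSTOM_PROMPT)
```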

πŸ“ Complete code on GitHubarrow-up-right

3. Deep Research with MCP Integration

This section shows how to provide additional data sources to deep research by supplying MCP tools at invocation time.

MCP integration allows deep research to access private or non-public data (such as enterprise systems or personal data sources) during execution. It does not change how deep research performs research or reasoning internally.

Note: MCP integration is currently available only with OpenAIDeepResearcher, as it depends on provider support.

Prerequisites

The examples below assume that deep research is invoked directly using the GL SDK, with MCP tools passed in as part of the execution context.

Make sure you have:

  • An MCP server URL or MCP connector credentials

  • For MCP connectors (such as Google Calendar), an auth token obtained from the provider

Implementation

MCP tools extend what data deep research can access, but the execution flow and research strategy remain defined by the underlying provider.

Benefits:

  • Provide access to private or non-public data sources

  • Integrate enterprise systems into the research context

  • Enable deep research to reference additional data during execution

4. Deep Research Pipeline with Routing

This section demonstrates how to place the DeepResearcher Component inside a Pipeline to orchestrate when it is invoked, based on the characteristics of a user query.

Here, the Pipeline is responsible for:

  • inspecting the incoming request

  • deciding whether deep research is required

  • routing execution accordingly

The Pipeline does not define how deep research is executed internally. Deep research is invoked as an encapsulated step, and its internal reasoning remains provider-defined.

Setup

  1. Clone the repository

  2. Set UV authentication and install dependencies

    For Unix-based systems (Linux, macOS):

    For Windows:

  3. Prepare .env file
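The authentication commands in step 2 were not captured here and depend on your internal registry. A plausible shape, assuming `uv` reads the package index from an environment variable (the URL below is a placeholder, not the real registry):

```shell
# Unix-based systems (Linux, macOS) — placeholder index URL
export UV_INDEX_URL="https://pypi.internal.example.com/simple"
uv sync
```

```powershell
# Windows (PowerShell)
$env:UV_INDEX_URL = "https://pypi.internal.example.com/simple"
uv sync
```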

Implementation

In this example, the DeepResearcher Component is used as one step within a Pipeline. The Pipeline handles routing logic and context preparation, while deep research itself remains a standalone invocation.

Run the script

How it works:

  1. The Pipeline evaluates the user query and determines the appropriate execution path.

  2. If deep research is required, the Pipeline invokes the deep research step.

  3. Otherwise, the Pipeline routes the request to a simpler response path.

  4. The Pipeline returns the final result produced by the selected path.

The deep research step is treated as an encapsulated unit; the Pipeline does not break down or modify its internal execution.
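The routing idea can be sketched in plain Python. In the real Pipeline, the deep-research path would invoke the DeepResearcher Component; here both paths are stubbed so the control flow is runnable on its own, and the keyword heuristic is a toy stand-in for whatever classifier the Pipeline actually uses:

```python
# Sketch of the routing idea only. Both execution paths are stubbed;
# in a real Pipeline the deep-research path would invoke DeepResearcher.
def needs_deep_research(query: str) -> bool:
    # Toy heuristic standing in for the Pipeline's routing decision;
    # a real Pipeline might use an LLM-based classifier instead.
    keywords = ("compare", "analyze", "investigate", "in-depth")
    return len(query.split()) > 12 or any(k in query.lower() for k in keywords)


def run_pipeline(query: str) -> str:
    if needs_deep_research(query):
        return f"[deep-research path] {query}"   # would invoke DeepResearcher
    return f"[simple path] {query}"              # e.g. a single direct answer


print(run_pipeline("What is the capital of France?"))
print(run_pipeline("Compare grid-scale battery chemistries in depth"))
```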

πŸ“ Complete code on GitHubarrow-up-right

5. Deep Research Pipeline with Google Drive Integration

This section demonstrates Pipeline orchestration with additional data sources, using Google Drive as an example.

In this setup:

  • the Pipeline controls routing and execution flow

  • Google Drive access is provided via an MCP connector

  • deep research is invoked as an encapsulated step with additional data available during execution

Setup

  1. Clone the repository (if you haven't already)

  2. Set UV authentication and install dependencies

    For Unix-based systems (Linux, macOS):

    For Windows:

  3. Prepare .env file with Google Drive authentication

    Get the auth token from the OpenAI Connector Guide

    When generating the token, make sure to enable the following scopes:

    • userinfo.email

    • userinfo.profile

    • drive.readonly

    Add to .env:
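The exact `.env` entry was not captured here. A plausible shape, assuming the script reads the token from a single variable (the key name is an assumption; check the repository's example configuration for the key actually expected):

```
# Hypothetical key name for the Google Drive OAuth token
GOOGLE_DRIVE_AUTH_TOKEN=<your-oauth-access-token>
```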

Implementation

In this example, Google Drive is made available to deep research through an MCP connector, while the Pipeline determines when deep research should be invoked.

The Google Drive connector extends the data available during research, but does not change the execution flow or reasoning strategy of deep research itself.
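A sketch of how the Google Drive connector configuration might be assembled. The field names below are assumptions for illustration, not the GL SDK's actual connector schema; only the scope list comes from the setup steps above:

```python
# Field names are assumptions for illustration, not the GL SDK's actual
# connector schema.
import os


def google_drive_connector() -> dict:
    return {
        "type": "mcp_connector",
        "connector": "google_drive",
        # Token loaded from the environment (e.g. via a .env file):
        "auth_token": os.environ.get("GOOGLE_DRIVE_AUTH_TOKEN", ""),
        # Scopes the token must have been issued with:
        "scopes": ["userinfo.email", "userinfo.profile", "drive.readonly"],
    }


config = google_drive_connector()
```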

Run the script

Benefits:

  • Make documents stored in Google Drive available as research context

  • Combine private documents with public information during research

  • Integrate external data sources without changing research execution logic

πŸ“ Complete code on GitHubarrow-up-right
