Extend LM Capabilities with Tools

This guide will walk you through implementing tool calling in your applications using two different approaches.

Tool calling enables language models to execute external functions during conversations, allowing dynamic computation, data retrieval, and complex workflows beyond simple text generation.

For example, when asked "What is 15 + 25 then multiply by 2?", instead of guessing, the model calls your add and multiply functions to provide accurate results.

Prerequisites

This example specifically requires:

  1. Completion of all setup steps listed on the Prerequisites page.

  2. A working OpenAI API key configured in your environment variables.

You should also be familiar with the core concepts and components used here, such as the LM invoker, prompt builder, and LM request processor.

View full project code on GitHub

Installation

```shell
# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

You can either:

  1. Refer to this guide whenever you need an explanation or want to clarify how each part works.

  2. Follow along with each step and recreate the files yourself while learning about the components and how to integrate them.

Both options work; choose based on whether you prefer speed or learning by doing!

Project Setup

Step 1: Environment Configuration

Ensure you have a file named .env in your project directory with the following content:

OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"

Replace <YOUR_OPENAI_API_KEY> with your actual OpenAI API key.
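To confirm the key is actually reaching your process, you can load the file and check. The sketch below uses a minimal, illustrative stand-in for python-dotenv's `load_dotenv()` so it runs with the standard library only; in a real project you would typically use python-dotenv itself.

```python
import os
import tempfile

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: an illustrative stand-in for python-dotenv's load_dotenv()."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without a KEY=value shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Strip surrounding quotes; do not overwrite variables already set.
            os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))

# Demo with a throwaway file; in your project, just call load_env_file(".env").
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('OPENAI_API_KEY="sk-placeholder"\n')
    demo_path = f.name

os.environ.pop("OPENAI_API_KEY", None)
load_env_file(demo_path)
print(os.environ["OPENAI_API_KEY"])  # sk-placeholder
```

Failing fast here is preferable to a confusing authentication error on the first model call.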


There are two approaches to implement tool calling:

  1. LM Request Processor: Simplified approach with built-in tool execution handling

  2. LM Invoker with Execution Loop: Direct control over the tool calling process

Choose the approach that best fits your use case and complexity requirements.

Option 1: LM Request Processor

This approach simplifies tool calling by using the LM Request Processor, which handles tool execution automatically.

1) Define Tools and Components

Set up your tools and LMRP components:

Step 1: Import Libraries and Define Tools

Step 2: Configure LM Invoker with Tools

The LM invoker automatically handles tool registration and execution when using LMRP.
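As a rough sketch of what the tool definitions involve: plain Python functions plus OpenAI-style JSON schemas that describe them to the model. The schema-building helper and the commented-out invoker line use hypothetical names; the real gllm-inference registration API may differ, so check the SDK reference.

```python
# Illustrative sketch only: gllm-inference class and argument names may differ.
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def number_tool_schema(func) -> dict:
    """Build an OpenAI-style function schema for a two-number tool."""
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                "required": ["a", "b"],
            },
        },
    }

tool_schemas = [number_tool_schema(add), number_tool_schema(multiply)]
print(tool_schemas[0]["function"]["name"])  # add

# Registering the tools with the invoker might then look like (hypothetical names):
#   lm_invoker = OpenAILMInvoker(model_name="gpt-4o-mini", tools=[add, multiply])
```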

2) Set Up Prompt Builder and LMRP

Create the prompt builder and request processor:

Step 1: Create Prompt Builder

Step 2: Initialize LM Request Processor

The LMRP automatically handles the tool calling execution loop, making implementation much simpler than Option 2.
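To make the contrast with Option 2 concrete, here is a toy request processor that hides prompt building, model invocation, and tool execution behind a single `process()` call. It is a sketch of the *pattern* only, wired to a scripted stand-in model so it runs offline; the real LM Request Processor's constructor and method names may differ.

```python
class ToyRequestProcessor:
    """Toy stand-in for an LM request processor: one process() call
    wraps prompt building, model invocation, and tool execution."""

    def __init__(self, build_prompt, invoke_model, tools):
        self.build_prompt = build_prompt
        self.invoke_model = invoke_model
        self.tools = tools

    def process(self, query: str) -> str:
        messages = self.build_prompt(query)
        while True:
            response = self.invoke_model(messages)  # stand-in for an LM call
            if "final" in response:
                return response["final"]
            for call in response["tool_calls"]:
                result = self.tools[call["name"]](**call["args"])
                # Feed tool output back so the "model" can continue.
                messages.append({"role": "tool", "content": str(result)})

# Scripted model: first turn requests a tool, second returns the answer.
script = iter([
    {"tool_calls": [{"name": "add", "args": {"a": 10, "b": 20}}]},
    {"final": "The result is 30"},
])
proc = ToyRequestProcessor(
    build_prompt=lambda q: [{"role": "user", "content": q}],
    invoke_model=lambda msgs: next(script),
    tools={"add": lambda a, b: a + b},
)
answer = proc.process("What is 10 + 20?")
print(answer)  # The result is 30
```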

3) Process Requests with Tool Calling

Execute tool calling with a single method call:

Step 1: Process the Request

Step 2: Expected Output

The LMRP will automatically:

  1. Format the prompt using the prompt builder

  2. Send the request to the LM invoker

  3. Handle any tool calls the model makes

  4. Return the final response after all tool executions

For the query "What is 10 + 20 * 0 - 4?", the model will use the mathematical tools to calculate the correct result: 6
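Operator precedence is why the answer is 6 rather than 56. Traced with plain helper functions (illustrative, not SDK code):

```python
def add(a, b): return a + b
def multiply(a, b): return a * b
def subtract(a, b): return a - b

# 10 + 20 * 0 - 4: multiplication binds tighter than addition/subtraction.
step1 = multiply(20, 0)       # 0
step2 = add(10, step1)        # 10
result = subtract(step2, 4)   # 6
print(result)  # 6
```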

Option 2: LM Invoker with Execution Loop

This approach gives you full control over the tool calling execution flow and conversation management.

1) Define Your Tools

First, create the tools that your AI can use:

Step 1: Import Required Libraries

Step 2: Create Tool Functions

Define your tools using the @tool decorator:

The @tool decorator automatically generates the schema that the model needs to understand and call your functions.
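The sketch below shows one way such a decorator *can* derive a schema from a function's signature using `inspect`; it is not gllm-inference's actual implementation, just an illustration of the idea.

```python
import inspect

# Illustrative mapping from Python annotations to JSON schema types.
_PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(func):
    """Toy @tool decorator: attach a schema derived from the signature."""
    sig = inspect.signature(func)
    func.schema = {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": _PY_TO_JSON.get(p.annotation, "string")}
                for name, p in sig.parameters.items()
            },
            "required": list(sig.parameters),
        },
    }
    return func

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(add.schema["name"], add(15, 25))  # add 40
```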

2) Set Up the LM Invoker

Configure the LM invoker with your tools:

Step 1: Initialize the Invoker with Tools

The LM invoker will automatically register your tools with the model for calling.

3) Implement the Execution Loop

Create the execution loop that handles tool calling:

Step 1: Create the Execution Function

The execution loop handles multiple rounds of tool calling, allowing the model to use tool results for further reasoning and additional tool calls.
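The loop's shape can be sketched generically. The stub below scripts the model's turns so the example runs offline; in real code each "scripted" response would come from the LM invoker (whose actual method names in gllm-inference may differ).

```python
def add(a, b): return a + b
def multiply(a, b): return a * b

TOOLS = {"add": add, "multiply": multiply}

# Scripted model turns: two rounds of tool calls, then a final answer.
scripted = [
    {"tool_calls": [{"name": "add", "args": {"a": 15, "b": 25}}]},
    {"tool_calls": [{"name": "multiply", "args": {"a": 40, "b": 2}}]},
    {"final": "The result is 80"},
]

def run_loop(messages, max_rounds=5):
    """Repeatedly invoke the model, execute requested tools, and feed
    results back until the model returns a final answer."""
    for _ in range(max_rounds):
        response = scripted.pop(0)  # stand-in for an invoker call
        if "final" in response:
            return response["final"]
        for call in response["tool_calls"]:
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    raise RuntimeError("Exceeded max tool-calling rounds")

answer = run_loop([{"role": "user", "content": "What is 15 + 25 then multiply by 2?"}])
print(answer)  # The result is 80
```

Capping the rounds with `max_rounds` is a common safeguard against a model that keeps requesting tools indefinitely.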

Step 2: Set Up Prompt Builder

Configure the prompt builder to guide the model:

The system message is crucial for encouraging proper tool usage.
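A minimal sketch of what the prompt construction can look like, with a system message that steers the model toward the tools (the `build_messages` helper is hypothetical; the gllm-inference prompt builder API may differ):

```python
# The system message nudges the model to call tools rather than
# doing arithmetic "in its head".
SYSTEM_MESSAGE = (
    "You are a helpful assistant. When a question involves arithmetic, "
    "always use the provided tools instead of computing the answer yourself."
)

def build_messages(user_query: str) -> list[dict]:
    """Hypothetical prompt-builder helper producing chat-style messages."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("What is 15 + 25 then multiply by 2?")
print(messages[0]["role"])  # system
```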

4) Execute Tool Calling

Run the complete tool calling example:

Step 1: Run the Example

Step 2: Expected Flow

The execution will follow this pattern:

  1. User Query: "What is 15 + 25 then multiply by 2?"

  2. Model Analysis: Identifies need for addition and multiplication

  3. First Tool Call: add(15, 25) → Returns 40

  4. Second Tool Call: multiply(40, 2) → Returns 80

  5. Final Response: "The result is 80"

📂 Complete Guide Files

Option 1 Implementation

Option 2 Implementation
