Extend LM Capabilities with Tools
This guide will walk you through implementing tool calling in your applications using two different approaches.
Tool calling enables language models to execute external functions during conversations, allowing dynamic computation, data retrieval, and complex workflows beyond simple text generation.
For example, when asked "What is 15 + 25 then multiply by 2?", instead of guessing, the model calls your add and multiply functions to provide accurate results.
Installation
macOS / Linux:

```bash
# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

Windows (Command Prompt):

```bat
REM you can use a Conda environment
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

You can either:
- Refer to this guide whenever you need an explanation or want to clarify how each part works.
- Follow along with each step to recreate the files yourself while learning about the components and how to integrate them.

Both options work; choose based on whether you prefer speed or learning by doing!
Project Setup
Environment Configuration
Ensure you have a file named .env in your project directory with the following content:
```
OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
```

There are two approaches to implementing tool calling:

1. LM Request Processor: a simplified approach with built-in tool execution handling
2. LM Invoker with Execution Loop: direct control over the tool calling process
Choose the approach that best fits your use case and complexity requirements.
Option 1: LM Request Processor
This approach simplifies tool calling by using the LM Request Processor, which handles tool execution automatically.
1) Define Tools and Components
Set up your tools and LMRP components:
Import Libraries and Define Tools
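Here is a minimal sketch of what this step might contain. The `langchain_core.tools` import is an assumption (the `@tool` decorator mentioned in Option 2 may come from elsewhere in the SDK), and a `subtract` tool is included because the example query below also requires subtraction:

```python
from langchain_core.tools import tool  # assumed source of the @tool decorator

@tool
def add(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    return a + b

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b

@tool
def subtract(a: float, b: float) -> float:
    """Subtract b from a and return the difference."""
    return a - b
```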
Configure LM Invoker with Tools
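A sketch of the invoker configuration; the `OpenAILMInvoker` class name, module path, and constructor arguments are assumptions, so adjust them to match the SDK:

```python
from gllm_inference.lm_invoker import OpenAILMInvoker  # assumed module path and class name

lm_invoker = OpenAILMInvoker(
    model_name="gpt-4o-mini",         # any tool-capable model works here
    tools=[add, multiply, subtract],  # registered for tool calling
)
```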
The LM invoker automatically handles tool registration and execution when using LMRP.
2) Set Up Prompt Builder and LMRP
Create the prompt builder and request processor:
Create Prompt Builder
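For example (the `PromptBuilder` name, module path, and template keywords are assumptions):

```python
from gllm_inference.prompt_builder import PromptBuilder  # assumed module path

prompt_builder = PromptBuilder(
    system_template=(
        "You are a helpful assistant. Use the provided tools for any "
        "arithmetic instead of computing results yourself."
    ),
    user_template="{query}",
)
```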
Initialize LM Request Processor
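Then wire the pieces together. The `LMRequestProcessor` name and constructor below are likewise assumptions:

```python
from gllm_inference.request_processor import LMRequestProcessor  # assumed module path

lm_request_processor = LMRequestProcessor(
    prompt_builder=prompt_builder,
    lm_invoker=lm_invoker,
)
```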
3) Process Requests with Tool Calling
Execute tool calling with a single method call:
Process the Request
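A sketch of the call, assuming an async `process` method that accepts the prompt variables as keyword arguments:

```python
import asyncio

async def main():
    # The processor formats the prompt, invokes the model, runs any
    # requested tools, and returns the final answer.
    result = await lm_request_processor.process(query="What is 10 + 20 * 0 - 4?")
    print(result)

asyncio.run(main())
```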
Expected Output
The LMRP will automatically:

1. Format the prompt using the prompt builder
2. Send the request to the LM invoker
3. Handle any tool calls the model makes
4. Return the final response after all tool executions
For the query "What is 10 + 20 * 0 - 4?", the model will use the mathematical tools to calculate the correct result: 6
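The final response might read something like this (exact wording varies by model):

```
The result of 10 + 20 * 0 - 4 is 6.
```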
Option 2: LM Invoker with Execution Loop
This approach gives you full control over the tool calling execution flow and conversation management.
1) Define Your Tools
First, create the tools that your AI can use:
Import Required Libraries
Create Tool Functions
Define your tools using the @tool decorator:
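A sketch of the tool definitions, assuming the `@tool` decorator comes from LangChain (`langchain_core.tools`); swap in the SDK's own decorator if it differs:

```python
from langchain_core.tools import tool  # assumed source of the @tool decorator

@tool
def add(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    return a + b

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b
```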
2) Set Up the LM Invoker
Configure the LM invoker with your tools:
Initialize the Invoker with Tools
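As in Option 1, the class name, module path, and constructor arguments below are assumptions:

```python
from gllm_inference.lm_invoker import OpenAILMInvoker  # assumed module path and class name

lm_invoker = OpenAILMInvoker(
    model_name="gpt-4o-mini",  # any tool-capable model works here
    tools=[add, multiply],
)
```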
The LM invoker will automatically register your tools with the model for calling.
3) Implement the Execution Loop
Create the execution loop that handles tool calling:
Create the Execution Function
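The sketch below shows the general shape of such a loop. The response attributes used here (`tool_calls`, `name`, `args`, `id`) and the `append_tool_result` helper are assumptions about the SDK's interfaces, shown for illustration only:

```python
async def execute_with_tools(messages, max_iterations=10):
    """Invoke the LM repeatedly, running requested tools until the model
    returns a final text answer (or the iteration budget is exhausted)."""
    tool_map = {t.name: t for t in (add, multiply)}

    for _ in range(max_iterations):
        response = await lm_invoker.invoke(messages)  # assumed async entry point

        # No tool calls means the model has produced its final answer.
        if not getattr(response, "tool_calls", None):
            return response

        # Execute each requested tool and feed the result back to the model.
        for tool_call in response.tool_calls:
            result = tool_map[tool_call.name].invoke(tool_call.args)
            messages = append_tool_result(messages, tool_call.id, result)  # hypothetical helper

    raise RuntimeError("Tool execution loop did not converge")
```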
Set Up Prompt Builder
Configure the prompt builder to guide the model:
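For example (the `PromptBuilder` name, module path, and template keywords are assumptions):

```python
from gllm_inference.prompt_builder import PromptBuilder  # assumed module path

prompt_builder = PromptBuilder(
    system_template=(
        "You are a precise math assistant. Always use the provided tools "
        "to perform arithmetic instead of computing answers yourself."
    ),
    user_template="{query}",
)
```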
The system message is crucial for encouraging proper tool usage.
4) Execute Tool Calling
Run the complete tool calling example:
Run the Example
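A sketch of the entry point, assuming the prompt builder exposes a `format` method that assembles the message list:

```python
import asyncio

async def main():
    # Build the initial messages, then hand them to the execution loop.
    messages = prompt_builder.format(query="What is 15 + 25 then multiply by 2?")
    response = await execute_with_tools(messages)
    print(response)

asyncio.run(main())
```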
Expected Flow
The execution will follow this pattern:
1. User Query: "What is 15 + 25 then multiply by 2?"
2. Model Analysis: Identifies the need for addition and multiplication
3. First Tool Call: `add(15, 25)` → Returns `40`
4. Second Tool Call: `multiply(40, 2)` → Returns `80`
5. Final Response: "The result is 80"
📂 Complete Guide Files
Option 1 Implementation
Option 2 Implementation