Language Model (LM) Invoker
What’s an LM Invoker?
The LM invoker is a unified interface designed to help you interact with language models to generate outputs based on the provided inputs. In this tutorial, you'll learn how to invoke a language model using OpenAILMInvoker in just a few lines of code. You can also explore other types of LM Invokers, available here.
Prerequisites
This example specifically requires completion of all setup steps listed on the Prerequisites page.
Installation
```bash
# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-inference
```
On Windows (Command Prompt), use:
```bat
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-inference"
```
Quickstart
Initialization and Invoking
Let’s jump into a basic example using OpenAILMInvoker. We’ll ask the model a simple question and print the output.
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))
print(f"output: {output}")
```
Output:
Understanding LM Invoker Output Type
Starting from v0.6.0, the output of the LM Invoker will be an LMOutput object only.
Depending on how you configure the LM Invoker, the result of invoke(...) may be either a plain string or an LMOutput object:
String → returned when you don’t request any extra features.
LMOutput → returned whenever the output contains more than just plain text, such as when features like structured output, tool calling, or thinking are used. In these cases, you can still access the generated string through output.text, while also taking advantage of the additional attributes exposed by the enabled feature. Both cases are handled in the sketch below.
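A minimal sketch of handling both return types (the import path for LMOutput is an assumption; check where LMOutput lives in your installed version):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.schema import LMOutput  # assumed import path

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))

if isinstance(output, LMOutput):
    # Extra features were enabled: the generated text is still available.
    print(output.text)
else:
    # No extra features were requested: the output is a plain string.
    print(output)
```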
Message Roles
Modern LMs understand context better when you structure inputs like a real conversation. That’s where message roles come in. You can simulate multi-turn chats, set instructions, or give memory to the model through a structured message format.
Example 1: Passing a system message
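A minimal sketch of passing a system message (the Message class, its import path, and the role-based constructors are assumptions; consult the API reference for the exact message format):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
# NOTE: the import path and role-based constructors below are assumptions.
from gllm_inference.schema import Message

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

messages = [
    # The system message sets the model's overall behavior.
    Message.system("You are a poet. Answer every question with a short rhyme."),
    Message.user("What is the capital city of Indonesia?"),
]
output = asyncio.run(lm_invoker.invoke(messages))
print(output)
```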
Output:
Example 2: Simulating a multi-turn conversation
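A minimal sketch of replaying earlier turns as conversation history (again assuming the hypothetical Message class and role constructors above):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
# NOTE: the import path and role-based constructors below are assumptions.
from gllm_inference.schema import Message

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

messages = [
    Message.user("My name is Budi."),
    Message.assistant("Nice to meet you, Budi! How can I help you today?"),
    Message.user("What is my name?"),
]
output = asyncio.run(lm_invoker.invoke(messages))
print(output)  # with the history above, the model can answer "Budi"
```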
Output:
Multimodal Input
Our LM Invokers support attachments (images, documents, etc.). This lets you send rich content and ask the model to analyze or describe it.
Loading Attachments
An Attachment is a content object that can be loaded in the following ways:
using a remote file (URL)
using a local file
using a data URL
using raw bytes
We can load the attachment in the following ways:
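The constructor names and import path below are assumptions; consult the Attachment API reference for the exact loaders.
```python
# NOTE: the import path and constructor names are assumptions.
from gllm_inference.schema import Attachment

from_url = Attachment.from_url("https://example.com/picture.png")      # remote file (URL)
from_path = Attachment.from_path("path/to/local/picture.png")          # local file
from_data_url = Attachment.from_data_url("data:image/png;base64,...")  # data URL
from_bytes = Attachment.from_bytes(b"...", mime_type="image/png")      # raw bytes
```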
Example 1: Describe an image
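A minimal sketch, assuming the hypothetical Attachment loaders above and that a prompt and an attachment can be passed together as a list (check the API reference for the exact multimodal input format):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.schema import Attachment  # assumed import path

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

image = Attachment.from_url("https://example.com/picture.png")  # hypothetical URL
output = asyncio.run(lm_invoker.invoke(["Describe this image.", image]))
print(output)
```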
Output:
Example 2: Analyze a PDF
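The same pattern works for documents; here is a sketch under the same assumptions, using a hypothetical local PDF path:
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.schema import Attachment  # assumed import path

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

document = Attachment.from_path("path/to/report.pdf")  # hypothetical local file
output = asyncio.run(lm_invoker.invoke(["Summarize this document.", document]))
print(output)
```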
Output:
Supported Attachment Types
Each LM might support different input types. As of now, OpenAILMInvoker supports images and documents. You can find more about the supported types for each LM Invoker here.
Structured Output
In many real-world applications, we don't just want natural language outputs — we want structured data that our programs can parse and use directly.
You can define your expected output using:
A Pydantic BaseModel class (recommended).
A JSON schema dictionary compatible with Pydantic's schema format.
When structured output is enabled, structured output results are stored in the outputs attribute of the LMOutput object and can be accessed via the structured_outputs property. The output type depends on the input schema:
Pydantic BaseModel class → The output will be a Pydantic BaseModel instance.
JSON schema dict → The output will be a Python dictionary.
Using a Pydantic BaseModel (Recommended)
You can define your expected output format as a Pydantic class. This ensures strong type safety and makes the output easier to work with in Python.
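A minimal sketch (the response_schema parameter name is an assumption; check the API reference for how the schema is actually passed):
```python
import asyncio

from pydantic import BaseModel

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

class Country(BaseModel):
    name: str
    capital_city: str

# NOTE: `response_schema` is an assumed parameter name.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, response_schema=Country)
output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))
print(output.structured_outputs)  # a Country instance, per the rules above
```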
Output:
Using a JSON Schema Dictionary
Alternatively, you can define the structure using a JSON schema dictionary.
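A sketch under the same assumptions, building the schema with Pydantic's model_json_schema method as recommended below:
```python
import asyncio

from pydantic import BaseModel

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

class Country(BaseModel):
    name: str
    capital_city: str

# Generate a Pydantic-compatible JSON schema dictionary.
json_schema = Country.model_json_schema()

# NOTE: `response_schema` is an assumed parameter name.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, response_schema=json_schema)
output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))
print(output.structured_outputs)  # a plain Python dictionary this time
```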
Output:
If a JSON schema is used, it must still be compatible with Pydantic's JSON schema format, especially for complex schemas. For this reason, it is recommended to generate the JSON schema using Pydantic's model_json_schema method.
Tool Calling
Tool calling means letting a language model call external functions to help it solve a task. It allows the AI to interact with external functions and APIs during the conversation, enabling dynamic computation, data retrieval, and complex workflows.
Think of it as:
The LM is smart at reading and reasoning, but when it needs to calculate or get external data, it picks up the phone and calls your "tool".
For more information about tool definitions, please refer to this guide.
LM Invocation with Tool
Let's try to integrate a simple math operation tool into our LM invoker!
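A minimal sketch, assuming a plain Python function can be passed through the tools parameter (see the tool definition guide linked above for the exact format):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# NOTE: passing a plain function through `tools` is an assumption here.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=[multiply])
output = asyncio.run(lm_invoker.invoke("What is 12.5 multiplied by 8?"))
print(output.tool_calls)  # assumed attribute holding the requested tool calls
```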
Output:
When the LM Invoker is invoked with tool calling capability, the model will return the tool calls. In this case, we still need to execute the tools and feed the results back to the LM invoker ourselves. If you'd like to handle this looping process automatically, please refer to the LM Request Processor component.
Native Tools
Native tools are a specific set of tools that allow the language model to execute certain built-in capabilities during invocation, enabling dynamic computation, data retrieval, and complex workflows. Like user-defined tools, native tools can be enabled by passing them through the LM invoker's tools parameter.
Each type of native tool is only available for certain LM invokers. The available native tools are listed below, followed by a short sketch of enabling one:
Code interpreter — Writes and runs Python code in a sandboxed environment.
Web search — Searches the web for relevant information.
MCP Server — Uses remote MCP servers to give models new capabilities.
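As a sketch, enabling a native tool might look like the following; the "web_search" identifier and its exact form are assumptions, so consult each invoker's documentation for the supported values:
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

# NOTE: the "web_search" identifier is an assumption.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=["web_search"])
output = asyncio.run(lm_invoker.invoke("Summarize today's top technology news."))
print(output.text)
```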
Thinking
Certain language model providers and models support thinking. Thinking allows models to produce an internal chain of thought before responding to the user. This enables the model to perform advanced tasks such as complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows.
When thinking is enabled, thinking results are stored in the outputs attribute of the LMOutput object and can be accessed via the thinkings property.
Let's try to perform thinking by using OpenAI's gpt-5-nano model:
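A minimal sketch (the thinking parameter name and value are assumptions; check the API reference for each invoker's exact thinking configuration):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

# NOTE: `thinking=True` is an assumed parameter.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, thinking=True)
output = asyncio.run(
    lm_invoker.invoke("How many prime numbers are there between 1 and 100?")
)
print(output.thinkings)  # the thinking results, per the section above
print(output.text)       # the final answer
```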
Output:
Output Analytics
Output analytics enables you to collect detailed metrics and insights about your language model invocations. When output analytics is enabled, the output includes the following extra attributes:
token_usage: Input and output token counts.
duration: Time taken to generate the output.
finish_details: Additional details about how the generation finished.
To enable output analytics, you simply need to set the output_analytics parameter to True.
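For example, assuming output_analytics is passed at initialization:
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, output_analytics=True)
output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))

print(output.token_usage)     # input and output token counts
print(output.duration)        # time taken to generate the output
print(output.finish_details)  # details about how the generation finished
```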
Output:
Retry & Timeout
Retry & timeout functionality provides robust error handling and reliability for language model interactions. It allows you to automatically retry failed requests and set time limits for operations, ensuring your applications remain responsive and resilient to network issues or API failures.
Retry & timeout can be configured via the RetryConfig class' parameters:
max_retries: Maximum number of retry attempts (defaults to 3).
timeout: Maximum time in seconds to wait for each request (defaults to 30.0 seconds). To disable the timeout, set this parameter to 0.0.
Let's try to apply it to our LM invoker!
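A minimal sketch (the RetryConfig import path and the retry_config parameter name are assumptions; check the API reference for the exact usage):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
# NOTE: the import path for RetryConfig is an assumption.
from gllm_inference.schema import RetryConfig

retry_config = RetryConfig(max_retries=5, timeout=10.0)

# NOTE: `retry_config` as the parameter name is an assumption as well.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, retry_config=retry_config)
output = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))
print(output)
```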
Output Transformer
Output transformers allow you to transform the raw output from the language model into a different format or structure. This is useful when you want to post-process the model's output before returning it to your application.
The LM Invoker supports output transformation through the output_transformer parameter, which can be configured during initialization.
As an example, let's use the JSON output transformer to automatically parse JSON outputs:
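A sketch of the idea (the JSONOutputTransformer class and its import path are assumptions; check the API reference for the transformers that actually ship with gllm-inference):
```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
# NOTE: the class name and import path below are assumptions.
from gllm_inference.output_transformer import JSONOutputTransformer

lm_invoker = OpenAILMInvoker(
    OpenAILM.GPT_5_NANO,
    output_transformer=JSONOutputTransformer(),
)
output = asyncio.run(
    lm_invoker.invoke("List three Indonesian cities as a JSON array of strings.")
)
print(output)  # e.g. a parsed Python list instead of a raw JSON string
```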
Output:
Extra Capabilities
Some LM invokers also provide additional capabilities that are useful in certain cases:
Batch Invocation — To manage batch requests for cheaper but slower invocations.
File Management — To manage uploaded files on the provider's server side. These files can then be used as inputs during invocations.
Data Store Management — To manage built-in data stores to be used as internal knowledge base. This allows the LM invoker to perform built-in RAG (Retrieval-Augmented Generation).
Troubleshooting
If you encounter errors, refer to the Troubleshooting Guide for detailed explanations of common errors and how to resolve them.