Language Model (LM) Invoker
What’s an LM Invoker?
Installation
# you can use a Conda environment

# macOS/Linux:
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-inference

# Windows (cmd):
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-inference"

Quickstart
Initialization and Invoking
import asyncio
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
response = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))
print(f"Response: {response}")Understanding LM Invoker Output Type
[Table: Feature | Attribute | Type]
Message Roles
Example 1: Passing a system message
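The example code for this subsection is not reproduced in this extract, so here is a minimal sketch. It assumes `invoke` also accepts a list of `(role, content)` message pairs and that a leading `"system"` message steers the model's behavior; the exact message format is an assumption, not the library's confirmed API.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

# Assumption: a prompt may also be a list of (role, content) pairs, where a
# leading "system" message sets the model's behavior.
messages = [
    ("system", "You are a geography tutor. Answer in one short sentence."),
    ("user", "What is the capital city of Indonesia?"),
]

response = asyncio.run(lm_invoker.invoke(messages))
print(f"Response: {response}")
```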
Example 2: Simulating a multi-turn conversation
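A hedged sketch of a multi-turn conversation, under the same assumed `(role, content)` message format as the previous example: earlier user and assistant turns are replayed so the model answers the latest question in context.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

# Assumption: earlier user and assistant turns are replayed as role-tagged
# messages so the model answers the newest question with that history in view.
conversation = [
    ("user", "What is the capital city of Indonesia?"),
    ("assistant", "The capital city of Indonesia is Jakarta."),
    ("user", "Roughly how many people live there?"),
]

response = asyncio.run(lm_invoker.invoke(conversation))
print(f"Response: {response}")
```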
Multimodal Input
Loading Attachments
Example 1: Describe an image
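A hedged sketch of describing an image. The `Attachment` class, its import path, and `Attachment.from_path` are assumptions standing in for however gllm-inference actually loads attachments (see Loading Attachments above); the prompt structure reuses the assumed `(role, content)` message format.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
# Assumption: an attachment helper for loading local files; the real import
# path, class name, and loader method may differ.
from gllm_inference.schema import Attachment

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

# Assumption: a user message may mix text and attachments in a single turn.
prompt = [
    ("user", ["Describe this image in two sentences.", Attachment.from_path("photo.jpg")]),
]

response = asyncio.run(lm_invoker.invoke(prompt))
print(f"Response: {response}")
```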
Example 2: Analyze a PDF
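The same pattern, sketched for a PDF: only the file and the instruction change. The `Attachment` helper remains an assumption rather than the confirmed loading API.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.schema import Attachment  # Assumption: see the image example above.

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

prompt = [
    ("user", ["Summarize the key findings of this report.", Attachment.from_path("report.pdf")]),
]

response = asyncio.run(lm_invoker.invoke(prompt))
print(f"Response: {response}")
```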
Supported Attachment Types
Structured Output
Using a Pydantic BaseModel (Recommended)
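A hedged sketch of structured output with a Pydantic model. The `response_schema` constructor argument is a placeholder for however the invoker is actually told which schema to enforce; the parameter name is an assumption.

```python
import asyncio

from pydantic import BaseModel

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM


class CityInfo(BaseModel):
    """Schema the model's answer should conform to."""
    city: str
    country: str
    population: int


# Assumption: the target schema is passed when constructing the invoker; the
# parameter name below is a placeholder.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, response_schema=CityInfo)

response = asyncio.run(lm_invoker.invoke("Give me basic facts about Jakarta."))
print(response)
```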
Using a JSON Schema Dictionary
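Under the same assumption about the schema parameter, a plain JSON Schema dictionary can be sketched in place of the Pydantic model:

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

# Assumption: a plain JSON Schema dictionary may be used instead of a
# Pydantic model, passed via the same placeholder parameter as above.
city_schema = {
    "title": "CityInfo",
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "country": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "country", "population"],
}

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, response_schema=city_schema)

response = asyncio.run(lm_invoker.invoke("Give me basic facts about Jakarta."))
print(response)
```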
Tool Calling
Tool Definition
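A sketch of a tool definition as a plain typed Python function with a docstring. Whether gllm-inference expects plain functions, decorated tools, or schema dictionaries is not confirmed here, so treat the shape below as illustrative.

```python
# A tool is a capability the model may ask to call. This sketch declares one
# as a plain typed Python function with a docstring; the exact tool format
# gllm-inference accepts is an assumption.
def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    # A real implementation would call a weather API; this is a stub.
    return f"It is currently sunny and 31°C in {city}."
```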
LM Invocation with Tool
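A hedged sketch of invoking the model with the tool registered. The `tools` constructor argument and the idea that the output carries the model's requested tool calls are assumptions about the API, not confirmed behavior.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM


def get_weather(city: str) -> str:
    """Return a short weather summary for the given city (stub)."""
    return f"It is currently sunny and 31°C in {city}."


# Assumption: tools are registered through a `tools` argument on the invoker;
# the parameter name and accepted tool format are placeholders.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=[get_weather])

result = asyncio.run(lm_invoker.invoke("What's the weather in Jakarta right now?"))

# Assumption: when the model decides to call a tool, the returned object
# carries the requested call(s) rather than a plain text answer.
print(result)
```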
Output Analytics
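A hedged sketch of reading analytics from an invocation result. The attribute names (`token_usage`, `duration`) are illustrative guesses, so the sketch reads them defensively with `getattr`.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
result = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))

# Assumption: the output object may expose usage metadata such as token counts
# and timing; the attribute names below are guesses, read defensively.
print(getattr(result, "token_usage", "token usage not exposed on this output"))
print(getattr(result, "duration", "duration not exposed on this output"))
```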
Retry & Timeout
Reasoning
Web Search
Code Interpreter

MCP Server Integration
Batch Processing
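The subsections below cover the batch job lifecycle: creating a job, checking its status, retrieving results, listing jobs, and cancelling one. As a rough orientation only, here is a sketch of that lifecycle in which every batch method name is a placeholder assumption rather than the library's confirmed API.

```python
import asyncio

from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM


async def batch_lifecycle() -> None:
    lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
    prompts = [
        "What is the capital city of Indonesia?",
        "What is the capital city of Japan?",
    ]

    # Every batch method name below is a placeholder; the real calls are
    # documented in the subsections that follow.
    job_id = await lm_invoker.create_batch_job(prompts)       # Create a batch job
    status = await lm_invoker.get_batch_job_status(job_id)    # Get a batch job status

    if status == "completed":
        results = await lm_invoker.retrieve_batch_job_results(job_id)  # Retrieve results
        print(results)
    else:
        await lm_invoker.cancel_batch_job(job_id)              # Cancel a batch job

    print(await lm_invoker.list_batch_jobs())                  # List batch jobs


asyncio.run(batch_lifecycle())
```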
Create a Batch Job
Get a Batch Job Status
Retrieve Batch Job Results
List Batch Jobs
Cancel a Batch Job