Quickstart with LM Invoker

What’s an LM Invoker?

The LM invoker is a utility module designed to help you send prompts to a language model and receive its responses. In this quickstart, you'll learn how to invoke a language model using OpenAILMInvoker in just a few lines of code. You can also explore the other available types of LM invokers here.

Prerequisites

Before running this example, complete all setup steps listed on the Prerequisites page.

Installation

# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ "gllm-inference"

Invoking OpenAILMInvoker

Let’s jump into a basic example using OpenAILMInvoker. We’ll ask the model a simple question and print the response.

import asyncio
from gllm_inference.lm_invoker import OpenAILMInvoker

# Initialize the language model
# (assumes a valid OpenAI API key is available, e.g. via the OPENAI_API_KEY environment variable)
lm_invoker = OpenAILMInvoker("gpt-4.1-nano")

# Send a prompt and get a response
response = asyncio.run(lm_invoker.invoke("What is the capital city of Indonesia?"))

print(f"Response: {response}")

Expected Output

Response: The capital city of Indonesia is Jakarta.

That’s it! You've just made your first successful language model call using OpenAILMInvoker. Fast, clean, and ready to scale into more complex use cases!
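Note that `asyncio.run` only works from synchronous code. If you are already inside a running event loop (for example, in a web framework handler), `await` the `invoke` coroutine directly instead. A minimal sketch of that pattern, using a hypothetical stand-in coroutine in place of the real invoker so it runs without credentials:

```python
import asyncio

# Stand-in for lm_invoker.invoke(...); with the real invoker you would write:
#     response = await lm_invoker.invoke("What is the capital city of Indonesia?")
async def fake_invoke(prompt: str) -> str:
    await asyncio.sleep(0)  # simulate network latency
    return f"Echo: {prompt}"

async def main() -> str:
    # Inside a running event loop, await the coroutine directly.
    # Calling asyncio.run() here would raise a RuntimeError.
    return await fake_invoke("What is the capital city of Indonesia?")

result = asyncio.run(main())
print(result)  # → Echo: What is the capital city of Indonesia?
```

The same `await` call works unchanged in any async context, so the quickstart code scales naturally into async applications.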

You can also see the other supported language models here.
