LM Request Processor (LMRP)


What’s an LMRP?

The LM Request Processor (LMRP) is an orchestrator module that wraps a prompt builder, an LM invoker, and optionally an output parser to perform end-to-end LM invocation in a single process. In this tutorial, you'll learn how to use the LMRequestProcessor in just a few lines of code.

Prerequisites

This tutorial requires that you first complete all of the setup steps listed on the Prerequisites page.

Installation

# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-inference

Quickstart

Let’s jump into a basic example using LMRequestProcessor. This basic LMRP usage will only utilize a simple PromptBuilder and an OpenAILMInvoker.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

# 1. Build the prompt from a system template and a user template.
prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    user_template="What is the capital city of Indonesia?",
)

# 2. Create the LM invoker that calls the OpenAI model.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

# 3. Wrap both components in an LMRequestProcessor and run it end to end.
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)
response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")

Expected Output

Using Output Parser

Optionally, the LMRP can also utilize an output parser. In this case, the LMRP must be configured so that the language model produces a response in a format compatible with the output parser (e.g., through the prompt template). In the example below, we instruct the language model to answer in JSON format since we're using the JSONOutputParser.
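
Here is a minimal sketch that extends the quickstart with a JSONOutputParser. The import path and the position of the output parser argument are assumptions based on the pattern above; check the API reference for the exact signature.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.output_parser import JSONOutputParser  # assumed import path
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    # Instruct the model to answer in JSON so the JSONOutputParser can parse it.
    system_template="Answer in JSON format with a single `answer` key.",
    user_template="What is the capital city of Indonesia?",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
output_parser = JSONOutputParser()

# Assumption: the output parser is passed as the third argument to LMRequestProcessor.
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker, output_parser)
response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")  # expected to be a parsed object, e.g. {"answer": "Jakarta"}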

Expected Output

Using Prompt Variables

The LMRP also supports passing prompt variables to the prompt builder. Let's try it out!
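
A minimal sketch of this is shown below, assuming the templates use {placeholder} syntax and that process() forwards keyword arguments to the prompt builder as prompt variables; consult the API reference for the exact mechanism.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    # Assumption: {country} is a prompt variable that gets filled in at process time.
    user_template="What is the capital city of {country}?",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# Assumption: prompt variables are passed to process() as keyword arguments.
response = asyncio.run(lm_request_processor.process(country="Indonesia"))
print(f"Response: {response}")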

Expected Output

Adding History

The LMRP also supports passing history to the prompt builder. Let's try it out!
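
Here is a minimal sketch, assuming prior turns are passed to process() via a history argument as a list of (role, content) pairs; the actual message type may differ, so check the API reference.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    user_template="Which country was I asking about?",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# Assumption: history is a list of (role, content) pairs passed via a `history` argument.
history = [
    ("user", "What is the capital city of Indonesia?"),
    ("assistant", "Arrr, the capital city of Indonesia be Jakarta, matey!"),
]
response = asyncio.run(lm_request_processor.process(history=history))
print(f"Response: {response}")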

Expected Output

Adding Extra Contents

The LMRP also supports passing extra contents to the prompt builder. Let's try it out!
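
A minimal sketch is shown below, assuming extra contents (e.g., file attachments) are passed to process() via an extra_contents argument; the exact parameter name and accepted content types are assumptions, so check the API reference.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    system_template="You are a helpful assistant.",
    user_template="Summarize the attached document.",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# Assumption: extra contents are passed via an `extra_contents` argument.
response = asyncio.run(
    lm_request_processor.process(extra_contents=["path/to/document.pdf"])
)
print(f"Response: {response}")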

Expected Output

Automatic Tool Execution

When tools are provided to the language model, the LMRP can automatically execute the tools until the desired final response is produced. Let's test it out!
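
Here is a minimal sketch, assuming tools can be plain Python functions registered on the LM invoker via a tools parameter; the real tool interface may differ, so check the API reference.

import asyncio
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

def get_weather(city: str) -> str:
    """Return a dummy weather report for the given city."""
    return f"It is sunny and 31°C in {city}."

prompt_builder = PromptBuilder(
    system_template="You are a helpful assistant.",
    user_template="What is the weather like in Jakarta right now?",
)

# Assumption: tools are registered on the LM invoker via a `tools` parameter.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=[get_weather])
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# The LMRP keeps invoking the model and executing the requested tools
# until a final answer is produced.
response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")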

Expected Output

If we don't want the LMRP to automatically execute the tools, we can set the auto_execute_tools param to False. In this case, the LMRP will directly return the ToolCall objects produced by the language model. Let's try this by changing the following line from the above example:
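
A minimal sketch of that change, assuming auto_execute_tools is a constructor parameter of LMRequestProcessor (check the API reference for where this parameter actually lives):

# Assumption: auto_execute_tools is accepted by the LMRequestProcessor constructor.
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker, auto_execute_tools=False)

With this change, process() is expected to return the ToolCall objects rather than a final text answer.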

Expected Output

Using build_lm_request_processor()

build_lm_request_processor() offers a simpler way to initialize an LMRP: you pass the essential parameters directly, without manually creating individual components like prompt builders and LM invokers.

The build_lm_request_processor() function is a convenience helper that automatically creates and configures all the necessary components for you:
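
Here is a minimal sketch; the import path, model identifier format, and parameter names are assumptions, so check the API reference for the exact signature.

import asyncio
from gllm_inference.builder import build_lm_request_processor  # assumed import path

# Assumption: the builder takes a model identifier plus the prompt templates directly.
lm_request_processor = build_lm_request_processor(
    model="openai/gpt-5-nano",
    system_template="Talk like a pirate.",
    user_template="What is the capital city of Indonesia?",
)
response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")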

This single function call automatically:

  1. Creates a PromptBuilder with your templates

  2. Sets up the appropriate LMInvoker for your model

  3. Combines them into a complete LMRequestProcessor


Congratulations! We've successfully completed the tutorial on using the LMRP!
