# LM Request Processor (LMRP)
## What’s an LMRP?

An LM Request Processor (LMRP) bundles a prompt builder, a language model invoker, and an optional output parser into a single component, so that one `process` call builds the prompt, invokes the model, and parses the response.
## Installation

On Linux or macOS:

```bash
# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-inference
```

On Windows (Command Prompt):

```batch
# you can use a Conda environment
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

## Quickstart
```python
import asyncio

from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

# Build the prompt from a fixed system template and user template.
prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    user_template="What is the capital city of Indonesia?",
)

# Use an OpenAI model to answer the request.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)

# The LMRP chains prompt building and model invocation into a single call.
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# `process` is a coroutine, so it must be run inside an event loop.
response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")
```

## Using Output Parser
## Using Prompt Variables
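Instead of hard-coding the question, the templates can carry placeholders that are filled in at request time. The `{country}` placeholder syntax and passing the value as a keyword argument to `process` are assumptions patterned on common Python format-string templating; check the SDK for the actual contract.

```python
import asyncio

from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

# NOTE: the `{country}` placeholder and the keyword argument to `process`
# below are assumptions, not confirmed API.
prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    user_template="What is the capital city of {country}?",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# The variable is substituted into the template at request time.
response = asyncio.run(lm_request_processor.process(country="Indonesia"))
print(f"Response: {response}")
```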
## Adding History
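To make the exchange multi-turn, prior messages can be supplied alongside the new request so the model can resolve references like "what about...". The `history` keyword and the `(role, content)` tuple format below are assumptions; the SDK may use dedicated message objects instead.

```python
import asyncio

from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    system_template="Talk like a pirate.",
    user_template="And what about Malaysia?",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# NOTE: the `history` keyword and the (role, content) tuple format are
# assumptions; check the SDK for the actual message representation.
history = [
    ("user", "What is the capital city of Indonesia?"),
    ("assistant", "Arr, 'tis Jakarta, matey!"),
]
response = asyncio.run(lm_request_processor.process(history=history))
print(f"Response: {response}")
```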
## Adding Extra Contents
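Extra contents let a request carry material beyond the text templates, such as a document or image attachment. The `extra_contents` keyword and passing a plain file path are assumptions inferred from the section title; the SDK may expect a dedicated attachment type.

```python
import asyncio

from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

prompt_builder = PromptBuilder(
    system_template="You summarize documents concisely.",
    user_template="Summarize the attached file in one paragraph.",
)
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO)
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

# NOTE: the `extra_contents` keyword and the plain file path are assumptions;
# the attachment is sent to the model alongside the built prompt.
response = asyncio.run(
    lm_request_processor.process(extra_contents=["./report.pdf"])
)
print(f"Response: {response}")
```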
## Automatic Tool Execution
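With automatic tool execution, the LMRP runs any tool calls the model emits and feeds the results back to the model until a final answer is produced. The sketch below assumes plain Python functions can be registered via a `tools` keyword on the invoker; the real SDK may require a dedicated tool wrapper.

```python
import asyncio

from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.request_processor import LMRequestProcessor

def get_capital(country: str) -> str:
    """Return the capital city of a country (toy lookup for this example)."""
    return {"Indonesia": "Jakarta", "Malaysia": "Kuala Lumpur"}.get(country, "unknown")

prompt_builder = PromptBuilder(
    system_template="Use the available tools to answer.",
    user_template="What is the capital city of Indonesia?",
)

# NOTE: registering a bare function via a `tools` keyword is an assumption;
# if the model requests `get_capital`, the LMRP executes it and returns the
# result to the model before producing the final answer.
lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=[get_capital])
lm_request_processor = LMRequestProcessor(prompt_builder, lm_invoker)

response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")
```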
## Use LM Request Processor Builder (Recommended)
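A builder can collapse the three-object setup from the quickstart into a single call. The `build_lm_request_processor` function, its import path, and its arguments below are assumptions inferred from the section title; consult the SDK for the actual builder API.

```python
import asyncio

# NOTE: this import path and function name are assumptions, not confirmed API.
from gllm_inference.builder import build_lm_request_processor

# NOTE: the `model_id` string format is a hypothetical illustration.
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-5-nano",
    system_template="Talk like a pirate.",
    user_template="What is the capital city of Indonesia?",
)

response = asyncio.run(lm_request_processor.process())
print(f"Response: {response}")
```

Compared with wiring the prompt builder, invoker, and parser by hand, a builder keeps the configuration in one place, which is presumably why this page marks it as the recommended approach.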