Produce Consistent Output from LM
This guide walks you through creating structured output responses using the LM Request Processor (LMRP) with response schemas.
Structured output allows you to receive LM responses in a predefined, consistent format (Pydantic BaseModel/JSON). Instead of getting unstructured text, you get validated Python objects that are ready to use in your application.
Installation
You can use a Conda environment. On macOS/Linux:

```bash
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

On Windows:

```bash
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
```

You can either:
- Refer to the guide whenever you need an explanation or want to clarify how each part works.
- Follow along with each step to recreate the files yourself while learning about the components and how to integrate them.

Both options work; choose based on whether you prefer speed or learning by doing!
Project Setup
Environment Configuration
Ensure you have a file named .env in your project directory with the following content:
OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"Option 1: Using LM Invoker's Response Schema
1) Define Your Response Schema
The response schema defines the exact structure you want the AI to return. We'll use Pydantic models to define this structure:
Import Required Libraries
Start by importing the necessary dependencies:
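A minimal import block might look like the following. The gllm-inference import paths and class names (OpenAILMInvoker, PromptBuilder, LMRequestProcessor) are assumptions based on the component names used in this guide; check your installed version's API reference for the exact locations.

```python
import asyncio  # assumed: the request processor is invoked asynchronously
import os

from dotenv import load_dotenv  # python-dotenv, for reading the .env file
from pydantic import BaseModel, Field

# Assumed import paths; verify against the gllm-inference API reference.
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.prompt_builder import PromptBuilder
from gllm_inference.request_processor import LMRequestProcessor

load_dotenv()  # makes OPENAI_API_KEY from .env available via os.environ
```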
Create Your Pydantic Models
Define the structure for individual activities and the complete response:
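For instance, the models below describe a list of suggested activities. Only the top-level ActivityList name is fixed by this guide (it appears in the expected output later); the Activity fields are illustrative, so shape them to your use case.

```python
class Activity(BaseModel):
    """A single suggested activity (illustrative fields)."""

    name: str = Field(description="Short name of the activity.")
    description: str = Field(description="One-sentence description of the activity.")


class ActivityList(BaseModel):
    """The complete structured response returned by the LM."""

    activities: list[Activity] = Field(description="The suggested activities.")
```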
🧠 These models define exactly what fields the AI response must include and their data types.
2) Configure the LM Invoker
The LM invoker handles communication with the language model and enforces the response schema:
Set up the LM Invoker with Response Schema
The response_schema parameter ensures the AI response matches your Pydantic model exactly.
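A sketch of the setup, continuing from the imports above. The OpenAILMInvoker class name, model name, and constructor arguments are assumptions; the response_schema parameter is the piece this option depends on.

```python
lm_invoker = OpenAILMInvoker(
    model_name="gpt-4o-mini",  # assumed: any model that supports structured output
    api_key=os.environ["OPENAI_API_KEY"],
    response_schema=ActivityList,  # enforce the Pydantic model on the LM response
)
```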
3) Create the Prompt Builder
The prompt builder formats your prompts consistently:
Define Your Prompt Templates
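The templates might look like the following; the PromptBuilder constructor is an assumption, but the {question} placeholder is the part that matters.

```python
system_template = "You are a helpful assistant that suggests activities."
user_template = "Suggest a few activities for this request: {question}"

# Assumed constructor signature; check the actual PromptBuilder API.
prompt_builder = PromptBuilder(
    system_template=system_template,
    user_template=user_template,
)
```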
🧠 The {question} placeholder will be replaced with actual user input when processing requests.
4) Build the LM Request Processor
The LM request processor combines your prompt builder and LM invoker into a complete processing pipeline:
Create the Request Processor
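A sketch of the wiring, assuming the constructor below; the point is that the processor is built from the prompt builder and the invoker.

```python
lm_request_processor = LMRequestProcessor(
    prompt_builder=prompt_builder,
    lm_invoker=lm_invoker,
)
```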
This creates a complete pipeline that will:
1. Format your prompt using the prompt builder
2. Send it to the LM invoker with schema enforcement
3. Return structured, validated results
🧠 The LM Request Processor automatically handles the entire workflow, making structured output generation seamless.
5) Process Requests and Get Structured Output
Now you can process requests and receive structured responses:
Process a Request
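For example, assuming an asynchronous process method that accepts the prompt placeholders as keyword arguments (both assumptions; a synchronous call would drop the await):

```python
async def main() -> None:
    # {question} in the user template is filled from the keyword argument.
    result = await lm_request_processor.process(
        question="What can I do on a rainy day?"
    )
    print(result)


asyncio.run(main())
```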
Expected Output Structure
The response will be a validated ActivityList object:
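The exact content depends on the model; an illustrative result could print like this:

```python
ActivityList(
    activities=[
        Activity(name="Read a book", description="Catch up on a novel indoors."),
        Activity(name="Bake cookies", description="Try a new recipe in the kitchen."),
    ]
)
```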
Access Individual Fields
You can access specific data from the structured response:
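Since the result is a plain Pydantic object, this is ordinary attribute access:

```python
# Iterate over the validated activities.
for activity in result.activities:
    print(f"{activity.name}: {activity.description}")

# Or index into the list directly.
print(result.activities[0].name)
```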
Option 2: Using JSON Output Parser
This approach uses the JSON Output Parser to handle structured output parsing after the LM generates a response. Instead of enforcing the schema at the LM level, it relies on prompt instructions and post-processing.
1) Define Your Response Schema
The response schema definition remains the same as Option 1:
Import Required Libraries
Create Your Pydantic Models
🧠 The same Pydantic models work for both approaches - the difference is in how they're applied.
2) Configure the JSON Output Parser
Create the output parser that will handle the JSON parsing and validation:
Set up the JSON Output Parser
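A sketch; the JSONOutputParser import path and constructor are assumptions based on the component name used in this guide. Passing the Pydantic model is shown here as an optional validation step.

```python
# Assumed import path; verify against the gllm-inference API reference.
from gllm_inference.output_parser import JSONOutputParser

# Optionally pass the model so the parsed JSON is validated into ActivityList.
output_parser = JSONOutputParser(schema=ActivityList)
```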
3) Configure the LM Invoker
Unlike Option 1, the LM invoker doesn't need a response schema parameter:
Set up the LM Invoker
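The same invoker setup as Option 1, minus the schema (class and argument names remain assumptions):

```python
lm_invoker = OpenAILMInvoker(
    model_name="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    # no response_schema here: structure comes from the prompt and the parser
)
```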
🧠 Notice there's no response_schema parameter - the structure is enforced through prompting and parsing.
4) Create the Prompt Builder with Schema Instructions
The prompt must instruct the model to return JSON in the expected format:
Define Your Prompt Templates with Schema
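One possible set of templates; again the PromptBuilder constructor is an assumption, while the {schema} and {question} placeholders are the essential parts.

```python
system_template = (
    "You are a helpful assistant that suggests activities.\n"
    "Respond ONLY with valid JSON that matches this schema:\n"
    "{schema}"
)
user_template = "Suggest a few activities for this request: {question}"

prompt_builder = PromptBuilder(
    system_template=system_template,
    user_template=user_template,
)
```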
🧠 The {schema} placeholder will be filled with the actual JSON schema at runtime.
5) Build the LM Request Processor with Output Parser
Include the output parser in the request processor configuration:
Create the Request Processor
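The same wiring as Option 1, with the output parser added (constructor assumed):

```python
lm_request_processor = LMRequestProcessor(
    prompt_builder=prompt_builder,
    lm_invoker=lm_invoker,
    output_parser=output_parser,  # parses and validates the raw JSON text
)
```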
This creates a pipeline that will:
1. Format your prompt with schema instructions
2. Send it to the LM invoker
3. Parse and validate the JSON response using the output parser
🧠 The output parser handles both JSON parsing and optional Pydantic model validation.
6) Process Requests with Schema Parameter
Pass the schema as a prompt parameter when processing requests:
Process a Request with Schema
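For example, filling {schema} with the JSON schema of the target model (Pydantic v2's model_json_schema() is standard; the process signature remains an assumption):

```python
import json


async def main() -> None:
    result = await lm_request_processor.process(
        question="What can I do on a rainy day?",
        # Fill the {schema} placeholder with the model's JSON schema.
        schema=json.dumps(ActivityList.model_json_schema(), indent=2),
    )
    print(result)


asyncio.run(main())
```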
Expected Output Structure
The response will contain parsed JSON data that matches your schema structure:
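An illustrative parsed result (actual content varies by model):

```json
{
  "activities": [
    {"name": "Read a book", "description": "Catch up on a novel indoors."},
    {"name": "Bake cookies", "description": "Try a new recipe in the kitchen."}
  ]
}
```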
📂 Complete Guide Files
Option 1: Using LM Invoker's Response Schema
Option 2: Using JSON Output Parser