Produce Consistent Output from LM
Installation
macOS / Linux
# you can use a Conda environment
pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference

Windows
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference

Project Setup
Set your OpenAI API key as an environment variable, for example in a .env file:

OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"

Option 1: Using LM Invoker's Response Schema
1) Define Your Response Schema
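A minimal sketch of a response schema, assuming a Pydantic model is used; the field names below are illustrative, not taken from the guide:

from pydantic import BaseModel

class MovieReview(BaseModel):
    title: str    # illustrative field
    rating: int   # illustrative field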
2) Configure the LM Invoker
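A sketch of configuring an OpenAI LM invoker with the schema attached; the import path, class name, and response_schema parameter are assumptions here, so verify them against the gllm-inference reference:

from gllm_inference.lm_invoker import OpenAILMInvoker  # assumed import path

lm_invoker = OpenAILMInvoker(
    model_name="gpt-4o-mini",      # any supported OpenAI model
    response_schema=MovieReview,   # schema defined in step 1
)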
3) Create the Prompt Builder
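A sketch of a prompt builder; the PromptBuilder class and its system_template/user_template parameters are assumptions based on the step name:

from gllm_inference.prompt_builder import PromptBuilder  # assumed import path

prompt_builder = PromptBuilder(
    system_template="You are a helpful assistant that writes movie reviews.",
    user_template="Write a short review of {movie}.",
)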
4) Build the LM Request Processor
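A sketch of wiring the prompt builder and the LM invoker into a request processor; the LMRequestProcessor name and constructor arguments are assumptions:

from gllm_inference.request_processor import LMRequestProcessor  # assumed import path

lm_request_processor = LMRequestProcessor(
    prompt_builder=prompt_builder,
    lm_invoker=lm_invoker,
)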
5) Process Requests and Get Structured Output
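A sketch of processing a request; the async process() method and the dict of prompt variables are assumptions:

import asyncio

async def main():
    # Prompt variables fill the placeholders in the prompt templates.
    result = await lm_request_processor.process({"movie": "Inception"})
    print(result)  # expected to conform to the MovieReview schema

asyncio.run(main())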
Option 2: Using JSON Output Parser
1) Define Your Response Schema
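As in Option 1, the schema can be defined as a Pydantic model; a minimal sketch with illustrative fields:

from pydantic import BaseModel

class MovieReview(BaseModel):
    title: str
    rating: int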
2) Configure the JSON Output Parser
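A sketch of creating the output parser; the JSONOutputParser class name and import path are assumptions based on the step name:

from gllm_inference.output_parser import JSONOutputParser  # assumed import path

# Parses the model's raw JSON text into a Python object.
output_parser = JSONOutputParser()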
3) Configure the LM Invoker
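A sketch of the LM invoker for this option; unlike Option 1, no response schema is attached, since the structure is requested through the prompt and recovered by the output parser (class name and parameters are assumptions):

from gllm_inference.lm_invoker import OpenAILMInvoker  # assumed import path

lm_invoker = OpenAILMInvoker(model_name="gpt-4o-mini")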
4) Create the Prompt Builder with Schema Instructions
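A sketch of a prompt builder whose system template instructs the model to answer in JSON matching the schema; the {schema} placeholder convention is an assumption:

from gllm_inference.prompt_builder import PromptBuilder  # assumed import path

prompt_builder = PromptBuilder(
    system_template=(
        "You are a helpful assistant that writes movie reviews. "
        "Respond only with JSON that matches this schema: {schema}"
    ),
    user_template="Write a short review of {movie}.",
)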
5) Build the LM Request Processor with Output Parser
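A sketch of the request processor with the output parser attached, so the raw completion is parsed into structured data; the output_parser argument is an assumption:

from gllm_inference.request_processor import LMRequestProcessor  # assumed import path

lm_request_processor = LMRequestProcessor(
    prompt_builder=prompt_builder,
    lm_invoker=lm_invoker,
    output_parser=output_parser,
)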
6) Process Requests with Schema Parameter
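A sketch of processing a request while passing the schema as a prompt variable, so the model sees the expected JSON format; the variable name and the process() signature are assumptions:

import asyncio
import json

async def main():
    result = await lm_request_processor.process({
        "movie": "Inception",
        "schema": json.dumps(MovieReview.model_json_schema()),  # Pydantic v2
    })
    print(result)  # parsed dict expected to match the MovieReview schema

asyncio.run(main())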
📂 Complete Guide Files
Option 1: Using LM Invoker's Response Schema
Option 2: Using JSON Output Parser