LM-Based Router
Installation
pip install gllm-pipeline gllm-inference
Basic Usage
Step 1: Set up the LM Request Processor
import asyncio
from gllm_inference.request_processor import build_lm_request_processor
from gllm_pipeline.router import LMBasedRouter
# Create an LM request processor
lm_processor = build_lm_request_processor(
    lm_invoker_kwargs={
        "model_id": "openai/gpt-5-nano",
        "credentials": "<YOUR_OPENAI_API_KEY>"
    },
    prompt_builder_kwargs={
        "system_template": "You are a customer support routing assistant.",
        "user_template": "Route this query to the appropriate department: {source}"
    }
)
Step 2: Create the router
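A minimal sketch of creating the router, assuming the LMBasedRouter constructor accepts the LM request processor together with a mapping of route names to natural-language descriptions (the routes argument and the example departments below are illustrative assumptions, not confirmed by this page):
from gllm_pipeline.router import LMBasedRouter

# Assumed constructor arguments: an LM request processor and a mapping of
# route names to descriptions. Adjust to match the actual LMBasedRouter API.
router = LMBasedRouter(
    lm_request_processor=lm_processor,
    routes={
        "billing": "Questions about invoices, payments, and refunds.",
        "technical_support": "Bug reports, outages, and troubleshooting.",
        "general": "Anything that does not fit the other departments.",
    },
)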
Step 3: Route queries
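As a rough illustration of routing a query, assuming the router exposes an asynchronous route method that returns the name of the selected route (the method name and return value are assumptions):
async def main():
    # Assumed method: route() takes the user query and returns a route name.
    route = await router.route("I was charged twice for my subscription.")
    print(route)  # expected to be something like "billing"

asyncio.run(main())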
Advanced Configuration
Custom Output Parsing
Multi-Step Routing
Complete Example
Configuration Options
LM Model Selection
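The model used for the routing decision is selected through the model_id passed to build_lm_request_processor in Step 1, so a cheaper or stronger model can be swapped in the same way (the alternative model id below is only illustrative):
lm_processor = build_lm_request_processor(
    lm_invoker_kwargs={
        # Illustrative alternative; use any supported "provider/model" id.
        "model_id": "openai/gpt-4.1-mini",
        "credentials": "<YOUR_OPENAI_API_KEY>"
    },
    prompt_builder_kwargs={
        "system_template": "You are a customer support routing assistant.",
        "user_template": "Route this query to the appropriate department: {source}"
    }
)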
Route Filtering
Best Practices
Troubleshooting
See Also