# Agent

## Overview
An Agent in the AI Agent Platform is an intelligent autonomous entity that can perform tasks, make decisions, and interact with various tools and services. Agents are created through the platform's backend API and executed via the runner service, which spawns individual A2A (Agent-to-Agent) servers for each agent.
## What is an Agent?

An AI Agent is a software program that:

- **Perceives** its environment through inputs and conversation history
- **Reasons** about problems using large language models (OpenAI, Google, etc.)
- **Acts** by executing tools and making API calls
- **Streams** responses in real time through server-sent events
- **Maintains** conversation context across interactions
## Agent Architecture

### Agent Types

#### 1. Config Agents

Standard agents configured through the API:

```json
{
  "name": "customer_support_agent",
  "type": "config",
  "framework": "langchain",
  "instruction": "You are a helpful customer support agent",
  "description": "Handles customer inquiries",
  "tools": ["tool-1", "tool-2"],
  "agent_config": {
    "lm_provider": "openai",
    "lm_name": "gpt-4o",
    "lm_hyperparameters": {
      "temperature": 0.7,
      "max_tokens": 1000
    }
  }
}
```
#### 2. A2A Agents

Agents that communicate with external A2A endpoints:

```json
{
  "type": "a2a",
  "framework": "langchain",
  "a2a_profile": {
    "url": "https://api.example.com/agent-endpoint"
  }
}
```
#### 3. Hierarchical Agents

Agents with sub-agents for complex workflows:

```json
{
  "name": "supervisor_agent",
  "type": "config",
  "framework": "langgraph",
  "instruction": "Coordinate tasks between specialized agents",
  "tools": ["coordination_tool"],
  "agents": ["sub-agent-id-1", "sub-agent-id-2"]
}
```
## Agent Creation

### API Endpoint

```
POST /agents/
```

### Request Format

```json
{
  "name": "my_agent",
  "type": "config",
  "framework": "langchain",
  "version": "1.0.0",
  "description": "Agent description",
  "instruction": "You are a helpful assistant",
  "metadata": {
    "environment": "production",
    "category": "support"
  },
  "tools": ["tool-id-1", "tool-id-2"],
  "mcps": ["mcp-id-1"],
  "agents": ["sub-agent-id"],
  "agent_config": {
    "lm_provider": "openai",
    "lm_name": "gpt-4o",
    "lm_hyperparameters": {
      "temperature": 0.7,
      "max_tokens": 1000,
      "top_p": 1.0
    },
    "lm_base_url": "https://api.openai.com/v1",
    "lm_api_key": "your-api-key"
  }
}
```
### Response Format

```json
{
  "success": true,
  "data": {
    "id": "agent-uuid"
  },
  "message": "Agent created successfully"
}
```
## Agent Execution

### Running an Agent

```
POST /agents/{agent_id}/run
```

### Request Format

```json
{
  "input": "What's the weather like today?",
  "chat_history": [
    {
      "role": "user",
      "content": "Hello"
    },
    {
      "role": "assistant",
      "content": "Hi! How can I help you?"
    }
  ]
}
```
### Streaming Response Format

The agent returns server-sent events (SSE) with JSON data:

```
data: {"content": "I'll help you check the weather.", "type": "text"}
data: {"content": "Using weather tool...", "type": "tool_use"}
data: {"content": "The weather is sunny, 72°F", "type": "text"}
data: {"status": "completed", "summary": {"chunks_count": 15, "total_tokens": 120}}
```
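Each event is a `data:` line carrying a JSON payload. A minimal Python sketch of parsing such a stream (the helper name `parse_sse_events` is illustrative, not part of the platform SDK):

```python
import json

def parse_sse_events(raw: str) -> list[dict]:
    """Extract the JSON payload from each `data:` line of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

# Two events shaped like the documented stream above
stream = (
    'data: {"content": "The weather is sunny, 72\\u00b0F", "type": "text"}\n'
    'data: {"status": "completed", "summary": {"chunks_count": 15, "total_tokens": 120}}\n'
)
events = parse_sse_events(stream)
```

Dispatching on the `type` field (`text`, `tool_use`) or the terminal `status` key lets a client render text chunks and tool activity differently.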
## Agent Configuration

### Supported Frameworks

- `langchain`: Standard LangChain agents
- `langgraph`: Graph-based workflow agents
- `google_adk`: Google AI Development Kit agents

### Supported Providers

- `openai`: OpenAI models (GPT-4, GPT-3.5, etc.)
- `google`: Google models (Gemini, etc.)
- `anthropic`: Anthropic models (Claude, etc.)

### Model Configuration

```json
{
  "agent_config": {
    "lm_provider": "openai",
    "lm_name": "gpt-4o",
    "lm_hyperparameters": {
      "temperature": 0.7,
      "max_tokens": 1000,
      "top_p": 1.0
    },
    "lm_base_url": "https://api.openai.com/v1",
    "lm_api_key": "your-api-key"
  }
}
```
## Agent Management

### List Agents

```
GET /agents/
```

Query parameters:

- `agent_type`: Filter by type (`config`, `a2a`)
- `framework`: Filter by framework
- `name`: Partial name match
- `metadata.{key}`: Filter by metadata fields
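These filters combine into an ordinary query string. A small illustrative helper (`build_agents_query` is a sketch, not a platform API):

```python
from urllib.parse import urlencode

def build_agents_query(agent_type=None, framework=None, name=None, metadata=None):
    """Assemble the query string for GET /agents/ from the documented filters."""
    params = {}
    if agent_type:
        params["agent_type"] = agent_type
    if framework:
        params["framework"] = framework
    if name:
        params["name"] = name
    # Metadata filters use dotted keys, e.g. metadata.environment
    for key, value in (metadata or {}).items():
        params[f"metadata.{key}"] = value
    return urlencode(params)

query = build_agents_query(agent_type="config", framework="langchain",
                           metadata={"environment": "production"})
```

Appending the result to `GET /agents/?` would, for example, return only `config` agents on the `langchain` framework tagged `environment=production`.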
### Get Agent Details

```
GET /agents/{agent_id}
```

### Update Agent

```
PUT /agents/{agent_id}
```

### Delete Agent (Soft Delete)

```
DELETE /agents/{agent_id}
```

### Restore Agent

```
POST /agents/{agent_id}/restore
```
## Agent Lifecycle in Runner Service

### 1. Agent Spawning

When an agent is executed:

1. The runner service receives the execution request
2. The Agent Lifecycle Manager spawns an A2A agent server
3. The agent server starts on an available port (8002-9000)
4. The agent registers with the service registry

### 2. Agent Execution

1. A LangChain agent is created with the configured tools and settings
2. The agent processes the input together with the conversation history
3. The response is streamed back via server-sent events
4. The agent remains active for future requests

### 3. Agent Management

- **Auto-termination**: Agents shut down after an idle timeout
- **Health monitoring**: Regular health checks
- **Resource management**: Port allocation and cleanup
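The auto-termination behavior amounts to simple idle-time bookkeeping per agent process. Everything in the sketch below (the class name, the 300-second timeout) is an illustrative assumption, not the runner's actual implementation:

```python
import time

IDLE_TIMEOUT_SECONDS = 300  # assumed value; the platform's real timeout may differ

class AgentProcess:
    """Toy model of the runner's per-agent bookkeeping (ports 8002-9000)."""

    def __init__(self, port: int):
        self.port = port
        self.last_used = time.monotonic()

    def touch(self) -> None:
        """Record activity so the agent is not considered idle."""
        self.last_used = time.monotonic()

    def is_idle(self, now=None) -> bool:
        """True once no request has touched this agent within the timeout."""
        now = time.monotonic() if now is None else now
        return (now - self.last_used) > IDLE_TIMEOUT_SECONDS

agent = AgentProcess(port=8002)
```

A periodic reaper loop would call `is_idle()` on each tracked process and shut down (and free the port of) any agent that returns true.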
## Tools Integration

### Custom Tools

Upload Python files decorated with `@tool_plugin`:

```python
from gllm_plugin.tools import tool_plugin
from langchain_core.tools import BaseTool

@tool_plugin(name="my_tool", version="1.0.0")
class MyTool(BaseTool):
    name: str = "my_custom_tool"
    description: str = "Does something useful"

    def _run(self, input: str) -> str:
        return f"Processed: {input}"
```
### Native Tools

Platform-provided tools:

- Web search tools
- Data analysis tools
- File operations
- Email sending

### MCP Tools

Model Context Protocol integration:

- External MCP servers
- Standardized tool interface
- Dynamic tool discovery

### BOSA Tools

Enterprise system integration:

- User services
- Document management
- Workflow automation
- Analytics
## Error Handling

### Common Error Types

- **Agent not found**: HTTP 404
- **Validation errors**: HTTP 422
- **Execution failures**: HTTP 500
- **Authentication errors**: HTTP 401

### Error Response Format

```json
{
  "success": false,
  "error": {
    "type": "ValidationError",
    "message": "Invalid agent configuration",
    "details": {
      "field": "instruction",
      "issue": "required field missing"
    }
  }
}
```
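Clients can branch on this envelope rather than on the HTTP status alone. A hedged sketch (the names `AgentAPIError` and `unwrap_response` are illustrative, not part of any SDK):

```python
class AgentAPIError(Exception):
    """Raised when the API returns the documented error envelope."""

    def __init__(self, error_type, message, details=None):
        super().__init__(f"{error_type}: {message}")
        self.error_type = error_type
        self.details = details or {}

def unwrap_response(payload: dict):
    """Return the `data` field on success; raise AgentAPIError otherwise."""
    if payload.get("success"):
        return payload.get("data")
    err = payload.get("error", {})
    raise AgentAPIError(err.get("type", "UnknownError"),
                        err.get("message", ""),
                        err.get("details"))
```

Funneling every response through one helper like this keeps `type`, `message`, and `details` available to calling code instead of losing them in ad-hoc status checks.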
## Best Practices

### 1. Agent Design

- **Clear instructions**: Provide specific, actionable system prompts
- **Appropriate tools**: Select tools that match the agent's purpose
- **Reasonable limits**: Set appropriate token limits and timeouts
- **Error handling**: Design for graceful failure scenarios

### 2. Performance

- **Tool selection**: Attach only the tools the agent needs, to reduce latency
- **Caching**: Leverage conversation history for context
- **Streaming**: Use streaming for a better user experience
- **Resource management**: Monitor agent resource usage

### 3. Security

- **API keys**: Store model API keys securely
- **Input validation**: Validate all user inputs
- **Access control**: Use API key authentication
- **Audit logging**: Track agent usage and actions
## Integration Examples

### Python Client

```python
import requests
import json

# Create agent
agent_data = {
    "name": "my_agent",
    "type": "config",
    "framework": "langchain",
    "instruction": "You are a helpful assistant",
    "agent_config": {
        "lm_provider": "openai",
        "lm_name": "gpt-4o"
    }
}

response = requests.post(
    "https://api.ai-agent-platform.com/agents/",
    json=agent_data,
    headers={"X-API-Key": "your-api-key"}
)
agent_id = response.json()["data"]["id"]

# Run agent and stream the response
run_data = {
    "input": "Hello, how are you?",
    "chat_history": []
}

response = requests.post(
    f"https://api.ai-agent-platform.com/agents/{agent_id}/run",
    json=run_data,
    headers={"X-API-Key": "your-api-key"},
    stream=True
)
for line in response.iter_lines():
    if line.startswith(b"data: "):
        data = json.loads(line[6:])
        print(data.get("content", ""))
```
### JavaScript Client

```javascript
// Create agent
const agentData = {
  name: "my_agent",
  type: "config",
  framework: "langchain",
  instruction: "You are a helpful assistant",
  agent_config: {
    lm_provider: "openai",
    lm_name: "gpt-4o"
  }
};

const createResponse = await fetch("/agents/", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-Key": "your-api-key"
  },
  body: JSON.stringify(agentData)
});
const { data: { id: agentId } } = await createResponse.json();

// Run agent with streaming
const runData = {
  input: "Hello, how are you?",
  chat_history: []
};

const response = await fetch(`/agents/${agentId}/run`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-API-Key": "your-api-key"
  },
  body: JSON.stringify(runData)
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  const lines = chunk.split('\n');
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      console.log(data.content);
    }
  }
}
```
## Next Steps

- Learn about **Tools** to extend agent capabilities
- Explore **Memory** for conversation context
- Understand **A2A** for agent-to-agent communication
- Check out the **SDK documentation** for framework-specific details