Tool
What is a Tool?
A Tool in GLLM Core is a Model Context Protocol (MCP)–style callable that an LLM agent can use to interact with the outside world.
Conceptually, a Tool is:
- Named: identified by a `name` and optional `title`.
- Described: has a human-readable `description` and optional `annotations`.
- Schema-first: exposes structured `input_schema` and `output_schema`.
- Backed by a function: optionally wraps a Python callable (`func`) that does the actual work.
- Async-aware: can wrap both synchronous and asynchronous functions, with a unified `invoke()` API.
In code, Tools live in `gllm_core.schema.tool.Tool` and are typically created via the `@tool` decorator.
Installation
```bash
# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-core
```

On Windows (cmd):

```bat
REM you can use a Conda environment
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-core"
```

Quickstart
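The original quickstart code did not survive extraction. The sketch below is a self-contained stand-in: it defines a minimal `Tool` class and `tool` decorator that mimic the surface described in this document (the real ones live in `gllm_core.schema.tool` and also build Pydantic schemas), so it runs without the package installed.

```python
import asyncio
import inspect
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class Tool:
    """Minimal stand-in for gllm_core.schema.tool.Tool (sketch only)."""

    name: str
    description: str
    func: Optional[Callable[..., Any]] = None

    def __call__(self, *args: Any, **kwargs: Any) -> Any:
        # Direct call: preserves the original function's call semantics.
        if self.func is None:
            raise ValueError(f"Tool '{self.name}' has no implementation")
        return self.func(*args, **kwargs)

    async def invoke(self, **kwargs: Any) -> Any:
        # Unified async entrypoint for both sync and async implementations.
        if self.func is None:
            raise ValueError(f"Tool '{self.name}' has no implementation")
        if inspect.iscoroutinefunction(self.func):
            return await self.func(**kwargs)
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, lambda: self.func(**kwargs))


def tool(func: Callable[..., Any]) -> Tool:
    """Stand-in for the @tool decorator: wraps a function in a Tool."""
    return Tool(name=func.__name__, description=(func.__doc__ or "").strip(), func=func)


@tool
def fetch_weather(city: str) -> dict:
    """Fetch a (fake) current weather report for a city."""
    return {"city": city, "temp_c": 21}


print(fetch_weather("Jakarta"))                            # direct call
print(asyncio.run(fetch_weather.invoke(city="Jakarta")))   # standardized invoke
```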
This example shows the core ideas:
- `@tool` wraps a function and returns a `Tool` instance.
- The original call semantics are preserved (`fetch_weather(...)`).
- `invoke()` provides a standard async entrypoint for agents and infrastructure.
- Input and output schemata are derived from type hints and the docstring.
The @tool Decorator
The `@tool` decorator converts a regular function into a `Tool`.
Key behaviors:
Name resolution
- If `name` is passed, it becomes the Tool identifier.
- Otherwise, the function's `__name__` is used.
Description resolution
- If `description` is passed, it is used directly.
- Otherwise, the function docstring is cleaned and used.
- Parameter docs inside a Google-style `Arguments:` section are parsed and attached to individual fields in the input schema.
Title resolution
- `title` is optional display text for UI clients.
- If omitted, consumers can fall back to the `name` or annotations.
Internally, `@tool`:

- Inspects the function signature via `inspect.signature`.
- Collects type hints with `typing.get_type_hints`.
- Builds a Pydantic input model using `_build_field_definitions`.
- Builds an optional Pydantic output model from the return type (if not `-> None`).
- Constructs a `Tool` instance with these schemata and the function implementation.
- Copies key metadata (`__name__`, `__qualname__`, `__module__`, `__doc__`, `__wrapped__`) onto the Tool instance so it still behaves like a function for introspection and IDEs.
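The signature-inspection steps above can be sketched with the stdlib alone. This is not the library's implementation (which builds Pydantic models via `_build_field_definitions`); it only shows how `inspect.signature` and `typing.get_type_hints` yield a JSON Schema, with `*args`/`**kwargs` excluded as described. The `fetch_weather` function and the type mapping are illustrative assumptions.

```python
import inspect
from typing import get_type_hints

# Simplified Python-type -> JSON Schema type mapping (illustrative only).
_JSON_TYPES = {str: "string", int: "integer", float: "number",
               bool: "boolean", dict: "object", list: "array"}


def build_input_schema(func) -> dict:
    """Derive a JSON Schema for a function's parameters from its signature."""
    sig = inspect.signature(func)
    hints = get_type_hints(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        # *args / **kwargs are excluded from the input schema, as noted above.
        if param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
            continue
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name), "string")}
        if param.default is param.empty:
            required.append(name)  # no default -> required argument
    return {"type": "object", "properties": properties, "required": required}


def fetch_weather(city: str, units: str = "metric") -> dict:
    """Hypothetical tool function used only to exercise the sketch."""
    ...


print(build_input_schema(fetch_weather))  # required: ['city']
```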
Input and Output Schemata
Every Tool carries two schema fields:
- `input_schema`
- `output_schema`
These fields can be either:
- JSON Schema–style dictionaries, or
- Pydantic `BaseModel` subclasses.
The Tool validators normalize them:
- If a Pydantic model class is provided, `model_json_schema()` is called and the internal value becomes a JSON Schema dict.
- If a dict is provided, it is used as-is.
- For `output_schema`, `None` is also allowed to represent "no structured output".
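The normalization logic can be sketched as follows. The real validators live on the `Tool` model and call Pydantic's `model_json_schema()`; this stand-in duck-types that method so the sketch runs without Pydantic installed, and `FakeModel` is a hypothetical placeholder for a `BaseModel` subclass.

```python
from typing import Any, Optional


def normalize_schema(schema: Any) -> Optional[dict]:
    """Normalize a schema field to a JSON Schema dict (or None)."""
    if schema is None:
        return None  # allowed for output_schema: "no structured output"
    if isinstance(schema, type) and hasattr(schema, "model_json_schema"):
        # Pydantic model class -> JSON Schema dict.
        return schema.model_json_schema()
    if isinstance(schema, dict):
        return schema  # already a JSON Schema dict: used as-is
    raise TypeError(f"Unsupported schema type: {type(schema)!r}")


class FakeModel:
    """Stands in for a pydantic.BaseModel subclass in this sketch."""

    @classmethod
    def model_json_schema(cls) -> dict:
        return {"type": "object", "properties": {"city": {"type": "string"}}}


print(normalize_schema(FakeModel))
print(normalize_schema({"type": "object"}))
print(normalize_schema(None))
```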
When you use `@tool`:

- An input model named `<func_name>_input` is created with one field per parameter (excluding `*args`/`**kwargs`).
- An output model named `<func_name>_output` is created with a single `result` field if the function has a non-`None` return type.
Example:
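A sketch of what the derived schemata might look like for a hypothetical tool function `fetch_weather(city: str) -> dict` (the field details are illustrative, not the library's exact output):

```json
{
  "fetch_weather_input": {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"]
  },
  "fetch_weather_output": {
    "type": "object",
    "properties": {"result": {"type": "object"}},
    "required": ["result"]
  }
}
```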
These schemata are crucial for MCP clients and LLM agents:
- They define what arguments are allowed/required.
- They define what structure the result will have.
- They enable automatic validation, form building, and documentation.
Calling a Tool
The Tool class supports two primary ways to execute its underlying function.
Direct call:
- If the underlying `func` is async, this returns a coroutine and you `await` it.
- If `func` is sync, it returns the result directly (no coroutine).
Standardized invoke call:
- Works uniformly for both sync and async implementations.
- For async functions, `invoke` simply awaits the function.
- For sync functions, `invoke` runs the function in a thread executor using the current event loop.
- Logs both the invocation parameters and the result via the Tool's logger.
`invoke` is the preferred surface for agents and orchestration code, because it:

- Is always async.
- Accepts keyword arguments that are expected to match `input_schema`.
- Provides consistent logging and error handling.
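The dispatch and logging behavior described above can be approximated as a free function (in the library it is a method on `Tool`; the logger name and messages here mirror the ones this document mentions but are assumptions):

```python
import asyncio
import inspect
import logging

# The real Tool uses logging.getLogger with the fully-qualified class path.
logger = logging.getLogger("gllm_core.schema.tool.Tool")
logger.setLevel(logging.DEBUG)


async def invoke(func, name: str, **params):
    """Standalone approximation of Tool.invoke() dispatch and logging."""
    if func is None:
        raise ValueError(f"Tool '{name}' has no implementation")
    logger.debug("Invoking tool '%s' with params: %s", name, params)
    try:
        if inspect.iscoroutinefunction(func):
            # Async implementation: simply await it.
            result = await func(**params)
        else:
            # Sync implementation: run in a thread executor on the current loop.
            loop = asyncio.get_running_loop()
            result = await loop.run_in_executor(None, lambda: func(**params))
    except Exception:
        logger.exception("Tool '%s' raised an error", name)
        raise  # re-raise after logging, as described above
    logger.debug("Tool '%s' returned: %s", name, result)
    return result


def add(a: int, b: int) -> int:
    return a + b


print(asyncio.run(invoke(add, "add", a=2, b=3)))  # → 5
```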
LangChain and Google ADK Adapters
The Tool class provides two constructors for external ecosystems:
- `Tool.from_langchain(langchain_tool)`
- `Tool.from_google_adk(function_declaration, func=None)`
These are thin wrappers around adapter functions in `gllm_core.adapters.tool`:

- `from_langchain_tool()`
- `from_google_function()`
The adapters are responsible for:
- Validating that external definitions have a valid name/description/schema.
- Translating their argument specifications into JSON Schema.
- Creating a `Tool` instance that looks the same as those built via `@tool`.
This keeps your internal agent and MCP tooling code agnostic to whether a Tool came from:
- A local Python function via `@tool`.
- A LangChain Tool object.
- A Google ADK function declaration.
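To illustrate the adapter responsibilities without requiring LangChain, here is a stand-in sketch: `FakeLangChainTool` and `from_langchain_like` are hypothetical names, and the real `from_langchain_tool()` works with actual LangChain objects and returns a `Tool` instance rather than a dict.

```python
from dataclasses import dataclass


@dataclass
class FakeLangChainTool:
    """Duck-typed stand-in for a LangChain tool definition."""

    name: str
    description: str
    args: dict  # argument spec, already JSON Schema-shaped for this sketch


def from_langchain_like(lc_tool) -> dict:
    """Mirror the adapter steps: validate, translate, and build a Tool shape."""
    # 1. Validate that the external definition has a usable name/description.
    if not lc_tool.name or not lc_tool.description:
        raise ValueError("External tool must define a name and description")
    # 2. Translate the argument spec into a JSON Schema input_schema, and
    # 3. return the same shape a @tool-built Tool would carry.
    return {
        "name": lc_tool.name,
        "description": lc_tool.description,
        "input_schema": {"type": "object", "properties": lc_tool.args},
    }


lc = FakeLangChainTool("search", "Search the web.", {"query": {"type": "string"}})
print(from_langchain_like(lc))
```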
Logging and Error Handling
Each Tool instance exposes a private `_logger` property:

- Uses `logging.getLogger` with the fully-qualified class path.
- Applies a class-level `_log_level` (default `DEBUG`).
`invoke()` uses this logger to:

- Log debug information before execution (`Invoking tool 'name' with params: ...`).
- Log the result after successful completion.
- Log errors if the underlying function raises, then re-raise the exception.
Typical failure modes:
- Missing implementation: if `func` is `None`, both `__call__` and `invoke` raise `ValueError` indicating the Tool has no implementation.
- Type mismatches: upstream validation is expected to be done using the tool's JSON Schema; incorrect arguments passed directly to `invoke` may result in `TypeError` or domain-specific errors from the function body.
This pattern keeps Tools transparent to debuggers and logs, while still letting you treat them as simple callables.
Designing Good Tools
Some practical guidelines when authoring tools with `@tool`:

Type everything

- Add full type hints to all parameters and the return value.
- This ensures accurate schemata for agents and UIs.

Write Google-style docstrings

- Use an `Arguments:` section so `_extract_param_doc` can attach descriptions to individual fields.
- Keep parameter descriptions concise and action-oriented.

Avoid `*args` and `**kwargs` in Tool interfaces

- They are ignored when building the input schema.
- Prefer explicit, named parameters for clarity.

Return structured data

- Use dicts or Pydantic models for results; avoid unstructured strings when possible.
- This makes it easier for agents to reason about outputs.

Keep side effects clear

- Tools are typically small, focused operations with a clear purpose.
- Document external side effects (e.g., network calls, file writes) in the description.
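A function following all of these guidelines might look like the sketch below (`convert_currency` and its fixed demo rates are invented for illustration; in real use it would be decorated with `@tool`, omitted here so the sketch runs without gllm_core installed):

```python
def convert_currency(amount: float, from_code: str, to_code: str) -> dict:
    """Convert an amount between two currencies using fixed demo rates.

    Arguments:
        amount: Amount of money to convert.
        from_code: ISO 4217 code of the source currency (e.g. "USD").
        to_code: ISO 4217 code of the target currency (e.g. "EUR").
    """
    # Hypothetical fixed rates; a real tool would fetch live rates and
    # document that network call in its description, per the
    # "keep side effects clear" guideline.
    rates_to_usd = {"USD": 1.0, "IDR": 0.000061, "EUR": 1.08}
    usd = amount * rates_to_usd[from_code]
    converted = usd / rates_to_usd[to_code]
    # Structured result instead of an unstructured string.
    return {"amount": round(converted, 2), "currency": to_code}


print(convert_currency(100.0, "USD", "EUR"))
```

Every parameter is typed, each has a one-line description in the `Arguments:` section, and the result is a small dict an agent can inspect field by field.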
By following these guidelines, your Tools will be:
- Easier for LLM agents to understand and call correctly.
- More interoperable across MCP-compatible runtimes.
- Simpler to adapt from or into other ecosystems like LangChain or Google ADK.