Code Interpreter
Supported by: OpenAILMInvoker
What is Code Interpreter?
Code interpreter is a native tool that allows the language model to write and run Python code in a sandboxed environment to solve complex problems in domains like data analysis, coding, and math. When enabled, code execution results are stored in the outputs attribute of the LMOutput object and can be accessed via the code_exec_results property.
The code interpreter tool can be enabled in several ways:
import asyncio
from gllm_inference.lm_invoker import OpenAILMInvoker
from gllm_inference.model import OpenAILM
from gllm_inference.schema import NativeTool, NativeToolType

# Option 1: as a string
code_interpreter_tool = "code_interpreter"

# Option 2: as an enum
code_interpreter_tool = NativeToolType.CODE_INTERPRETER

# Option 3: as a dictionary (useful for providing custom kwargs)
code_interpreter_tool = {"type": "code_interpreter", **kwargs}

# Option 4: as a native tool object (useful for providing custom kwargs)
code_interpreter_tool = NativeTool.code_interpreter(**kwargs)

lm_invoker = OpenAILMInvoker(OpenAILM.GPT_5_NANO, tools=[code_interpreter_tool])
Since OpenAI models internally recognize the code interpreter as the Python tool, it's recommended to explicitly instruct the model to use the Python tool when using code interpreter, so that code execution is triggered more reliably. Let's try it on a simple math problem!
Output:
What's awesome about the code interpreter is that it can produce more than just text! In the example below, let's try creating a histogram using the code interpreter. We're going to save any generated attachment to our local path.
Output:
Below is the generated histogram, now saved to our local path. What an awesome way to use a language model!
