Code Interpreter
The code interpreter lets your LLM run Python code directly — perfect for data analysis, file generation, and visualizations. Instead of the LLM just telling you what the code should be, it can actually run it and return the result.
What We’ll Build
We’ll make:
A data analyst assistant that can run Python.
A prompt asking it to create a histogram from a list of numbers.
Code that saves the generated chart as a .png file.
Step-by-Step
1. Create the LLM Invoker with Code Interpreter Enabled
from gllm_inference.lm_invoker import OpenAILMInvoker

lm_invoker = OpenAILMInvoker(
    model_name="gpt-4.1",
    code_interpreter=True,
)
💡 The code_interpreter=True flag gives GPT access to a Python execution sandbox — think of it like a built-in Jupyter Notebook that runs on demand.
2. Build the Prompt
We’ll send messages in chat format:
prompt = [
    ("system", ["You are a data analyst. Use the python tool to generate a file."]),
    ("user", ["Show a histogram of the following data: [1, 2, 1, 4, 1, 2, 4, 2, 3, 1]"]),
]
System message → tells GPT its role (a data analyst).
User message → our request (make a histogram).
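The prompt is a plain list of (role, contents) tuples, where contents is a list of message parts (here, just text). A quick structural check of the shape we are sending:

```python
prompt = [
    ("system", ["You are a data analyst. Use the python tool to generate a file."]),
    ("user", ["Show a histogram of the following data: [1, 2, 1, 4, 1, 2, 4, 2, 3, 1]"]),
]

# Each entry is a (role, list-of-content-parts) pair.
for role, contents in prompt:
    assert role in {"system", "user"}
    assert isinstance(contents, list) and all(isinstance(part, str) for part in contents)

print([role for role, _ in prompt])  # -> ['system', 'user']
```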
3. Invoke the LLM
import asyncio
response = asyncio.run(lm_invoker.invoke(prompt))
Here:
The LLM writes Python code to plot the histogram.
The code_interpreter runs that code.
Any files created (like .png) will be attached to response.code_exec_results.
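invoke() is a coroutine, which is why the call is wrapped in asyncio.run(). The pattern can be exercised offline with a hypothetical stub invoker in place of the real one (the real call needs an API key and network access):

```python
import asyncio

class StubInvoker:
    """Hypothetical stand-in for OpenAILMInvoker, so the async flow runs offline."""
    async def invoke(self, prompt):
        await asyncio.sleep(0)  # simulate the network round-trip
        return f"handled {len(prompt)} messages"

prompt = [
    ("system", ["You are a data analyst."]),
    ("user", ["Show a histogram of [1, 2, 1, 4]."]),
]

# asyncio.run() starts an event loop, awaits the coroutine, and returns its result.
response = asyncio.run(StubInvoker().invoke(prompt))
print(response)  # -> handled 2 messages
```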
4. Save the Generated File
If the LLM produced a file, we can save it locally:
if response.code_exec_results:
    response.code_exec_results[0].output[0].write_to_file("histogram.png")
💡 write_to_file() takes the output from the interpreter and saves it to disk.
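The snippet above saves only the first output of the first result. If a run can create several files, you may want to loop over everything; here is a sketch with hypothetical stub classes standing in for the real result objects (the attribute names output and write_to_file mirror the snippet above):

```python
from pathlib import Path

class StubOutput:
    """Hypothetical stand-in for one file produced by the interpreter."""
    def __init__(self, data: bytes):
        self.data = data

    def write_to_file(self, path: str) -> None:
        Path(path).write_bytes(self.data)

class StubExecResult:
    """Hypothetical stand-in for one entry in code_exec_results."""
    def __init__(self, output):
        self.output = output

# Pretend one execution produced two files.
code_exec_results = [StubExecResult([StubOutput(b"fake png 1"), StubOutput(b"fake png 2")])]

# Save every output of every result, not just the first of the first.
for i, result in enumerate(code_exec_results):
    for j, out in enumerate(result.output):
        out.write_to_file(f"output_{i}_{j}.png")
```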
Example Run
Prompt:
Show a histogram of the following data: [1, 2, 1, 4, 1, 2, 4, 2, 3, 1]
Behind the scenes:
GPT writes Python code using matplotlib.
The code interpreter runs it.
A histogram image is generated.
The file is saved as histogram.png.
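The generated code itself is not shown to you, but it would be roughly equivalent to this matplotlib sketch (a guess at what the model writes, not a capture of it):

```python
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # off-screen rendering, as a sandbox would do
import matplotlib.pyplot as plt

data = [1, 2, 1, 4, 1, 2, 4, 2, 3, 1]

plt.hist(data, bins=4)  # four distinct values, so four bins
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.savefig("histogram.png")

assert Path("histogram.png").exists()
```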
Result:
(histogram.png saved in your project folder)
📊 Opening it will show a bar chart of the values.
Summary Table
code_interpreter=True → Enables Python execution inside GPT
prompt → Defines the role and request
invoke() → Sends the request to GPT
code_exec_results → Holds the output files from the Python run
write_to_file() → Saves the output locally