Message Roles
Now that you've successfully sent a simple prompt to your language model, let's take it a step further — with message roles!
Modern LMs understand context better when you structure inputs like a real conversation. That's where message roles come in. With a structured message format, you can simulate multi-turn chats, set instructions, or give the model memory of earlier turns.
Message Format
Here’s how it works:
[
(<role>, [<content>, <content>]),
(<role>, [<content>, <content>]),
...
]
Format breakdown
It's a list of messages
Each message is a tuple:
(role, [list_of_content])
Valid roles are:
"system" – for instructions
"user" – for user prompts
"assistant" – for assistant replies
Each content list can hold multiple entries (e.g., for multimodal inputs), but we'll stick to simple text for now.
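To make the shape concrete, here's a minimal sketch of a prompt where one message carries two text entries in its content list. This only illustrates the structure; how an invoker combines multiple entries within a single message is up to the invoker itself.

```python
# A prompt is a list of (role, [content, ...]) tuples.
# The second message's content list holds two text entries.
prompt = [
    ("system", ["You are a helpful assistant."]),
    ("user", ["Here is my draft:", "Please shorten it to one sentence."]),
]

# Walk the structure: each message is a role plus a list of content entries.
for role, contents in prompt:
    print(role, len(contents))
```

Running this prints `system 1` and `user 2`, confirming each message is just a role paired with its own content list.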
Example 1: Passing a system message
import asyncio
from gllm_inference.lm_invoker import OpenAILMInvoker
prompt = [
("system", ["Talk like a pirate."]),
("user", ["Hi, there! How are you doing?"])
]
lm_invoker = OpenAILMInvoker("gpt-4.1-nano")
response = asyncio.run(lm_invoker.invoke(prompt))
print(f"Response: {response}")
Output:
Response: Ahoy, matey! I be doin' well, savvy? How be ye farin' on this fine day?
Example 2: Simulating a multi-turn conversation
prompt = [
("system", ["Reply with concise answers!"]),
("user", ["I'm craving some fried chicken!"]),
("assistant", ["That sounds good!"]),
("user", ["Do you think it's healthy?"]),
]
lm_invoker = OpenAILMInvoker("gpt-4.1-nano")
response = asyncio.run(lm_invoker.invoke(prompt))
print(f"Response: {response}")
Output:
Response: Fried chicken can be enjoyed occasionally, but it's typically high in calories, fat, and sodium, so it's not considered very healthy if eaten frequently.
💡 Tip: Add previous user and assistant messages to simulate long conversations with memory!
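One simple way to keep that memory is a small helper that appends each completed exchange to the running message list before the next call. The helper below is our own sketch (`add_turn` is not part of gllm_inference); the resulting list is exactly the prompt format shown above, so it can be passed straight to `lm_invoker.invoke(...)`.

```python
def add_turn(history, user_text, assistant_text):
    """Append one completed user/assistant exchange to the message list."""
    history.append(("user", [user_text]))
    history.append(("assistant", [assistant_text]))
    return history

# Start from the system instruction and replay the earlier exchange:
history = [("system", ["Reply with concise answers!"])]
add_turn(history, "I'm craving some fried chicken!", "That sounds good!")

# Add the new question on the end, then pass `history` to invoke(...):
history.append(("user", ["Do you think it's healthy?"]))
print(len(history))  # 4 messages: system, user, assistant, user
```

After each response, call `add_turn` again with the latest question and answer, and the conversation grows naturally turn by turn.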