Build conversational agents that remember context across messages. This pattern maintains conversation history in memory, enabling chatbots, assistants, and any multi-turn interaction.
## How It Works
The Dedalus SDK's `runner.run()` accepts a `messages` array. By appending each user message and replacing the history with `result.to_input_list()` after each turn, you get a persistent conversation:
- Append the new user message to the history
- Run the model with the full history
- Update the history with `result.to_input_list()`
## Multi-turn Chat
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner


async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    messages: list[dict] = []

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            break

        messages.append({"role": "user", "content": user_input})

        result = await runner.run(
            model="openai/gpt-4o",
            messages=messages,
        )

        messages = result.to_input_list()
        print(f"Assistant: {result.final_output}\n")


asyncio.run(main())
```
## Key Concepts
The Dedalus SDK uses the OpenAI message format:
```python
[
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "What did I just say?"},
]
```
After each `runner.run()`, call `result.to_input_list()` to get the complete conversation history, including tool calls and assistant responses. This preserves the full context for the next turn.
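To make "including tool calls" concrete, here is what a history might look like after a turn in which the model invoked a tool. The `get_weather` tool, its arguments, and the call ID are hypothetical illustrations of the OpenAI tool-calling message shape, not output from the SDK:

```python
# Illustrative history after a tool-using turn (OpenAI message format).
# The "get_weather" tool, its arguments, and "call_123" are hypothetical.
history = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        # The assistant's tool request is itself a message in the history.
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_123",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }
        ],
    },
    # The tool's result, linked back to the request by tool_call_id.
    {"role": "tool", "tool_call_id": "call_123", "content": "18°C, clear"},
    {"role": "assistant", "content": "It's 18°C and clear in Paris."},
]

roles = [m["role"] for m in history]
```

Passing this full list back on the next turn lets the model see its own tool calls and their results, not just the final text replies.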
## Persisting to Disk
For conversations that survive restarts, save the history to a JSON file:
```python
import asyncio
import json
from pathlib import Path

from dedalus_labs import AsyncDedalus, DedalusRunner

HISTORY_FILE = Path("chat_history.json")


def load_messages() -> list[dict]:
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []


def save_messages(messages: list[dict]) -> None:
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))


async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    messages = load_messages()

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            break

        messages.append({"role": "user", "content": user_input})

        result = await runner.run(
            model="openai/gpt-4o",
            messages=messages,
        )

        messages = result.to_input_list()
        save_messages(messages)
        print(f"Assistant: {result.final_output}\n")


asyncio.run(main())
```
## Storage Options
| Storage | Use Case |
|---|---|
| In-memory | Single session, no persistence needed |
| JSON file | Local development, single user |
| SQLite | Local apps, moderate scale |
| Redis | High-performance, distributed |
| PostgreSQL | Production, with JSONB columns |
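As a sketch of the SQLite option, the JSON-file helpers above can be swapped for a small table keyed by conversation ID, so one database holds many independent histories. The schema, table name, and helper names here are illustrative choices, not part of the SDK:

```python
import json
import sqlite3


# Minimal sketch: one row per conversation, the history stored as a JSON
# blob. The "conversations" table and its columns are illustrative only.
def init_db(path: str = "chat.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS conversations ("
        "id TEXT PRIMARY KEY, messages TEXT NOT NULL)"
    )
    return conn


def save_messages(conn: sqlite3.Connection, conv_id: str, messages: list[dict]) -> None:
    # Upsert: insert a new conversation or overwrite its stored history.
    conn.execute(
        "INSERT INTO conversations (id, messages) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET messages = excluded.messages",
        (conv_id, json.dumps(messages)),
    )
    conn.commit()


def load_messages(conn: sqlite3.Connection, conv_id: str) -> list[dict]:
    row = conn.execute(
        "SELECT messages FROM conversations WHERE id = ?", (conv_id,)
    ).fetchone()
    return json.loads(row[0]) if row else []
```

In the chat loop, you would call `load_messages(conn, conv_id)` at startup and `save_messages(conn, conv_id, messages)` after each `result.to_input_list()`, exactly where the JSON-file version reads and writes its file.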
Last modified on March 10, 2026