Build conversational agents that remember context across messages. This pattern maintains conversation history in memory, enabling chatbots, assistants, and any multi-turn interaction.

How It Works

The Dedalus SDK's runner.run() accepts a messages list. By appending each user message and replacing the history with result.to_input_list() after each turn, you maintain a persistent conversation:
  1. Append the new user message to history
  2. Run the model with the full history
  3. Update history using result.to_input_list()

Multi-turn Chat

```python
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    messages: list[dict] = []  # conversation history, newest last

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            break

        # Step 1: append the new user message to history
        messages.append({"role": "user", "content": user_input})

        # Step 2: run the model with the full history
        result = await runner.run(
            model="openai/gpt-4o",
            messages=messages,
        )

        # Step 3: replace history with the updated conversation
        messages = result.to_input_list()
        print(f"Assistant: {result.final_output}\n")

asyncio.run(main())
```

Key Concepts

Message Format

The Dedalus SDK uses the OpenAI message format:
```json
[
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "What did I just say?"}
]
```

Persistence with to_input_list()

After each runner.run(), call result.to_input_list() to get the complete conversation history including tool calls and assistant responses. This preserves the full context for the next turn.

Persisting to Disk

For conversations that survive restarts, save to JSON:
```python
import asyncio
import json
from pathlib import Path
from dedalus_labs import AsyncDedalus, DedalusRunner

HISTORY_FILE = Path("chat_history.json")

def load_messages() -> list[dict]:
    """Restore conversation history from disk, or start fresh."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_messages(messages: list[dict]) -> None:
    """Write the full conversation history to disk."""
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    messages = load_messages()

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            break

        messages.append({"role": "user", "content": user_input})

        result = await runner.run(
            model="openai/gpt-4o",
            messages=messages,
        )

        # Persist after every turn so a crash or restart loses nothing
        messages = result.to_input_list()
        save_messages(messages)
        print(f"Assistant: {result.final_output}\n")

asyncio.run(main())
```

Storage Options

| Storage | Use Case |
| --- | --- |
| In-memory | Single session, no persistence needed |
| JSON file | Local development, single user |
| SQLite | Local apps, moderate scale |
| Redis | High-performance, distributed |
| PostgreSQL | Production, with JSONB columns |
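
As one step up from a JSON file, here is a minimal sketch of SQLite-backed storage using the same list-of-dicts message format shown above. The table name, schema, and `conversation_id` key are illustrative choices, not part of the Dedalus SDK:

```python
import json
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    """Create the conversations table if it does not exist yet."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS conversations ("
        "  id TEXT PRIMARY KEY,"
        "  messages TEXT NOT NULL"
        ")"
    )

def load_messages(conn: sqlite3.Connection, conversation_id: str) -> list[dict]:
    """Return the stored history for this conversation, or an empty list."""
    row = conn.execute(
        "SELECT messages FROM conversations WHERE id = ?", (conversation_id,)
    ).fetchone()
    return json.loads(row[0]) if row else []

def save_messages(
    conn: sqlite3.Connection, conversation_id: str, messages: list[dict]
) -> None:
    """Upsert the full history as a JSON blob keyed by conversation id."""
    conn.execute(
        "INSERT INTO conversations (id, messages) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET messages = excluded.messages",
        (conversation_id, json.dumps(messages)),
    )
    conn.commit()
```

In the chat loop above, you would call `load_messages(conn, conversation_id)` at startup and `save_messages(conn, conversation_id, messages)` after each `result.to_input_list()`, exactly where the JSON-file version calls its helpers. Keying by conversation id also makes multi-user support straightforward.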