The core of the Dedalus SDK: send a message, get a response. Works with any model from any provider.

Start with chat

import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()  # load environment variables (e.g. your Dedalus API key) from .env

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    response = await runner.run(
        input=(
            "I want to find the nearest basketball games in January in San Francisco.\n\n"
            "For now, do NOT make up events. Instead:\n"
            "1) Ask any clarifying questions you need.\n"
            "2) Propose a short plan for how you would find events.\n"
            "3) List the fields you'd extract for each event (for a table later)."
        ),
        model="anthropic/claude-opus-4-5",
    )

    print(response.final_output)

if __name__ == "__main__":
    asyncio.run(main())
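The prompt above asks the model to list the fields it would extract per event "for a table later". When you get to that step, a small dataclass keeps the schema explicit and easy to flatten into rows. This is a sketch under assumptions: the field names (`name`, `date`, `venue`, `city`, `price`) are illustrative choices, not anything defined by the SDK.

```python
from dataclasses import dataclass, asdict

# Hypothetical schema for the event fields the prompt asks the model to
# propose -- the field names are illustrative, not defined by the SDK.
@dataclass
class Event:
    name: str
    date: str   # ISO date string, e.g. "2026-01-15"
    venue: str
    city: str
    price: str  # keep as text; listings often read "from $40"

def to_row(event: Event) -> dict:
    """Flatten an Event into a dict suitable for a table row."""
    return asdict(event)

game = Event("Warriors vs Lakers", "2026-01-15", "Chase Center",
             "San Francisco", "from $40")
print(to_row(game))
```

Parsing the model's free-text answer into `Event` instances is up to you; asking the model to reply in JSON matching this schema is a common next step.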

Next steps

  • Add actions: Tools — Let the model call your functions
  • Connect external tools: MCP Servers — Use hosted MCP servers
  • Stream the workflow: Streaming — Show progress in real time