Quick examples you can run immediately. Each demonstrates a core pattern with the Dedalus SDK.
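Each example assumes the dedalus_labs SDK and python-dotenv are installed, and that a Dedalus API key is available in your environment; every script calls load_dotenv(), so a local .env file works as well.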

Hello World

The minimal viable request:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    response = await runner.run(
        input="What's the capital of France?",
        model="openai/gpt-4o-mini"
    )

    print(response.final_output)

if __name__ == "__main__":
    asyncio.run(main())

MCP Server

Connect to hosted tools through MCP servers. Here, a web search:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="Who won Wimbledon 2025?",
        model="openai/gpt-4o-mini",
        mcp_servers=["tsion/brave-search-mcp"]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

Local Tools

Define functions as tools. The SDK handles schema generation:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

def calculate_tip(amount: float, percentage: float = 18.0) -> float:
    """Calculate tip for a bill."""
    return amount * (percentage / 100)

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="What's a 20% tip on $85?",
        model="openai/gpt-4o-mini",
        tools=[calculate_tip]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

Streaming

Show output as it is generated:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dedalus_labs.utils.stream import stream_async
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    # Note: run() is not awaited here; with stream=True it returns a stream
    # that stream_async consumes.
    result = runner.run(
        input="Write a short story about a robot",
        model="openai/gpt-4o-mini",
        stream=True
    )

    await stream_async(result)

if __name__ == "__main__":
    asyncio.run(main())

Multiple MCP Servers

Combine capabilities from different servers. This snippet drops into the same async main() scaffold as the examples above:
result = await runner.run(
    input="Search for the latest AI news and get the weather in San Francisco",
    model="openai/gpt-4o-mini",
    mcp_servers=["tsion/brave-search-mcp", "joerup/open-meteo-mcp"]
)
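
Local functions and hosted servers can also be mixed in a single call. A minimal sketch, assuming the tools and mcp_servers parameters compose the same way they behave individually; the celsius_to_fahrenheit helper is hypothetical:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    # Hypothetical combination: a local conversion tool alongside a hosted weather server
    result = await runner.run(
        input="What's the current temperature in San Francisco, in Fahrenheit?",
        model="openai/gpt-4o-mini",
        tools=[celsius_to_fahrenheit],
        mcp_servers=["joerup/open-meteo-mcp"]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())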

Sync Client (Python)

For scripts that don't need async:
from dedalus_labs import Dedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

def main():
    client = Dedalus()
    runner = DedalusRunner(client)

    response = runner.run(
        input="Explain recursion in one sentence.",
        model="openai/gpt-4o-mini"
    )

    print(response.final_output)

if __name__ == "__main__":
    main()

Next Steps

For more complex patterns, see the Use Cases section.