Agents become useful when they can do things beyond generating text. Tools let them call functions, query databases, make API requests—anything you can express in code.

How It Works

Define a function with type hints and a docstring, then pass it to runner.run() in the tools list. The SDK extracts the schema automatically and handles execution when the model decides to use the tool.
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="Calculate (15 + 27) * 2",
        model="openai/gpt-4o-mini",
        tools=[add, multiply]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
The model sees the tool schemas, decides which to call, and the Runner executes them. Multi-step reasoning happens automatically—if a calculation requires calling add then multiply, the Runner handles the loop.
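Conceptually, that loop works like the sketch below. This is an illustrative model of tool dispatch, not the SDK's actual internals: the Runner maps each tool call the model emits to the matching function, executes it, and feeds the result back until the model produces a final answer.

```python
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

# Tools are looked up by function name, mirroring how a runner dispatches calls.
TOOLS = {fn.__name__: fn for fn in (add, multiply)}

def run_loop(tool_calls):
    """Execute a scripted sequence of (name, args) tool calls in order."""
    result = None
    for name, args in tool_calls:
        result = TOOLS[name](**args)
    return result

# For "(15 + 27) * 2" the model would call add first, then multiply:
print(run_loop([("add", {"a": 15, "b": 27}),
                ("multiply", {"a": 42, "b": 2})]))  # 84
```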

Tool Requirements

Good tools have:
  • Type hints on all parameters and return values
  • Docstrings that explain what the tool does (the model reads these)
  • Clear names that indicate purpose
# Good: typed, documented, clear name
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city. Returns temperature and conditions."""
    return {"temp": 22, "conditions": "sunny"}

# Bad: no types, no docs, unclear name
def do_thing(x):
    return some_api_call(x)
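To see why the hints and docstring matter, here is a rough sketch of the kind of schema that can be derived from a function signature. The field layout follows the common function-calling convention; the exact schema the SDK produces may differ, and the type-mapping table here is illustrative.

```python
import inspect

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city. Returns temperature and conditions."""
    return {"temp": 22, "conditions": "sunny"}

# Map Python annotations to JSON Schema type names (illustrative subset).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def extract_schema(fn):
    """Build a rough function-calling schema from type hints and the docstring."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

schema = extract_schema(get_weather)
print(schema["name"])                    # get_weather
print(schema["parameters"]["required"])  # ['city']
```

Note how do_thing would yield an empty description and untyped parameters, leaving the model to guess when and how to call it.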

Async Tools

Tools can be async. The Runner awaits them automatically:
# `db` is an application-defined async database handle created elsewhere
async def fetch_user(user_id: int) -> dict:
    """Fetch user profile from database."""
    async with db.connection() as conn:
        return await conn.fetchone("SELECT * FROM users WHERE id = $1", user_id)
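A self-contained version of the same idea, with the database round trip simulated by asyncio.sleep (the returned user record is fabricated for illustration):

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    """Fetch a user profile (async I/O simulated for illustration)."""
    await asyncio.sleep(0.01)  # stand-in for a real database round trip
    return {"id": user_id, "name": f"user-{user_id}"}

# The Runner awaits async tools for you; called directly, you await them yourself:
print(asyncio.run(fetch_user(7)))  # {'id': 7, 'name': 'user-7'}
```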

Combining with MCP Servers

Local tools and MCP servers work together. Use local tools for custom logic, MCP servers for common capabilities:
def calculate_discount(price: float, percentage: float) -> float:
    """Calculate discounted price."""
    return price * (1 - percentage / 100)

result = await runner.run(
    input="Find the price of AirPods Pro and calculate a 15% discount",
    model="openai/gpt-4o-mini",
    tools=[calculate_discount],
    mcp_servers=["tsion/brave-search-mcp"]
)

Model Selection

Tool-calling quality varies by model. For reliable multi-step tool use, openai/gpt-4o-mini and openai/gpt-4.1 handle complex tool chains well; older or smaller models may struggle with multi-step reasoning.

Next Steps