Agents become useful when they can do things beyond generating text. Tools let them call functions, query databases, make API requests—anything you can express in code.
## How It Works
Define a function with type hints and a docstring. Pass it to runner.run(). The Dedalus SDK extracts the schema automatically and handles execution when the model decides to use it.
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()


def as_bullets(items: list[str]) -> str:
    """Format items as a bulleted list."""
    return "\n".join(f"• {item}" for item in items)


async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input=(
            "Take the following events and call as_bullets with a list of items (one per event).\n\n"
            "Events:\n"
            "- Warriors vs Lakers — San Francisco — 2026-01-18\n"
            "- Warriors vs Suns — San Francisco — 2026-01-22\n"
            "- Warriors vs Celtics — San Francisco — 2026-01-29\n\n"
            "Return only the list."
        ),
        model="openai/gpt-5.2",
        tools=[as_bullets],
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
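The schema the SDK extracts comes from the function's signature and docstring. A rough sketch of that extraction using only the standard library (this is not the SDK's actual code, and the exact schema field names are assumptions):

```python
import inspect
from typing import get_type_hints

# Assumed mapping from Python annotations to JSON-schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number",
              bool: "boolean", list: "array", dict: "object"}


def extract_schema(fn) -> dict:
    """Build a tool schema from a function's type hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = {}
    for name, hint in hints.items():
        # list[str] -> list, dict[str, int] -> dict, etc.
        origin = getattr(hint, "__origin__", hint)
        params[name] = {"type": PY_TO_JSON.get(origin, "string")}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": params},
    }


def as_bullets(items: list[str]) -> str:
    """Format items as a bulleted list."""
    return "\n".join(f"• {item}" for item in items)


schema = extract_schema(as_bullets)
```

This is why type hints and docstrings matter: without them, there is nothing to put in the schema the model sees.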
The model sees the tool schemas, decides which to call, and the Runner executes them. Multi-step reasoning happens automatically—the Runner keeps calling the model until it can complete the task.
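That loop can be sketched in plain Python. Everything below (the message format, `fake_model`, `run_loop`) is a toy stand-in to show the shape of the loop, not the Runner's implementation:

```python
import json

# Toy sketch of a tool loop: call the model, execute any requested tool,
# feed the result back, repeat until the model returns plain text.
def run_loop(model, tools, messages):
    registry = {fn.__name__: fn for fn in tools}
    while True:
        reply = model(messages)              # hypothetical model call
        if "tool_call" not in reply:
            return reply["content"]          # final answer, loop ends
        call = reply["tool_call"]
        result = registry[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": json.dumps(result)})


def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


# Fake model: requests a tool once, then answers using the tool result.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "arguments": {"a": 2, "b": 3}}}
    return {"content": f"The sum is {messages[-1]['content']}"}


print(run_loop(fake_model, [add], [{"role": "user", "content": "2+3?"}]))
# → The sum is 5
```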
Good tools typically have:
- Type hints on all parameters and return values
- Docstrings that explain what the tool does (the model reads these)
- Clear names that indicate purpose
```python
# Good: typed, documented, clear name
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city. Returns temperature and conditions."""
    return {"temp": 22, "conditions": "sunny"}


# Bad: no types, no docs, unclear name
def do_thing(x):
    return some_api_call(x)
```
Tools can be async. The Runner awaits them automatically:
```python
async def fetch_user(user_id: int) -> dict:
    """Fetch user profile from database."""
    async with db.connection() as conn:
        return await conn.fetchone("SELECT * FROM users WHERE id = $1", user_id)
```
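A runner can treat sync and async tools uniformly by checking whether the call returned an awaitable. A minimal sketch of that dispatch (assumed behavior, not the SDK's internals):

```python
import asyncio
import inspect


async def call_tool(fn, *args, **kwargs):
    """Invoke a tool, awaiting the result only if it is awaitable."""
    result = fn(*args, **kwargs)
    if inspect.isawaitable(result):   # async tools return a coroutine
        result = await result
    return result


def shout(text: str) -> str:
    """Uppercase a string (sync tool)."""
    return text.upper()


async def fetch_greeting(name: str) -> str:
    """Pretend to fetch a greeting (async tool)."""
    await asyncio.sleep(0)
    return f"hello, {name}"


async def main():
    print(await call_tool(shout, "hi"))            # → HI
    print(await call_tool(fetch_greeting, "ada"))  # → hello, ada


asyncio.run(main())
```

Because both kinds of tool go through the same path, you can mix sync and async functions in the same `tools` list.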
## Agents as Tools

Wrap a specialized agent as a tool. The coordinator delegates specific tasks to specialists without giving up conversation control.
This differs from handoffs:
- Handoffs: New agent takes over the conversation with full history
- Agent as tool: Specialist receives specific input, returns output, coordinator continues
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner


async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    # Specialist: wrap another runner call as a tool
    async def research_specialist(query: str) -> str:
        """Deep research on a topic. Use for questions requiring thorough analysis."""
        result = await runner.run(
            input=query,
            model="openai/gpt-5.2",  # Stronger model for research
            instructions="You are a research analyst. Be thorough and cite sources.",
            mcp_servers=["windsor/brave-search-mcp"],  # Web search via Brave Search MCP
        )
        return result.final_output

    async def code_specialist(spec: str) -> str:
        """Generate production code from specifications."""
        result = await runner.run(
            input=spec,
            model="anthropic/claude-opus-4-5",  # Strong at code
            instructions="Write clean, tested, production-ready code.",
        )
        return result.final_output

    # Coordinator: cheap model that delegates to specialists
    result = await runner.run(
        input="Research quantum computing breakthroughs in 2025, then write a Python simulator for a basic quantum gate",
        model="openai/gpt-4o-mini",
        tools=[research_specialist, code_specialist],
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```
When to use this pattern:
| Scenario | Why Agent-as-Tool |
|---|---|
| Vision/OCR tasks | Text-only coordinator delegates images to vision model |
| Code generation | Fast model triages, strong model writes code |
| Domain specialists | Generic router → specialized instructions/model |
| Cost optimization | Cheap coordinator, expensive specialists only when needed |
## Model Selection
Tool calling quality varies by model. For reliable multi-step tool use, prefer openai/gpt-5.2 or openai/gpt-4.1, which handle complex tool chains well; older or smaller models may struggle with multi-step reasoning.
## Next steps
- Combine with MCP servers: MCP Servers — Use local tools for custom logic + hosted tools for external capabilities
- Return typed data: Structured Outputs — Validate and parse JSON into schemas
- Control execution: Policies — Dynamically modify behavior at runtime
- See full examples: Use Cases — End-to-end agent patterns