Different models excel at different tasks. GPT handles reasoning and tool use well. Claude writes better prose. Specialized models exist for code, math, and domain-specific work. Handoffs let agents route subtasks to the right model. If you’ve already built an MCP + tools workflow, handoffs let you keep a fast “coordinator” model most of the time and route to stronger models only when needed.
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input=(
            "Find me the nearest basketball games in January in San Francisco, then write a concise plan for attending."
        ),
        model=["openai/gpt-5.2", "anthropic/claude-opus-4-5"],
        mcp_servers=["windsor/ticketmaster-mcp"],  # Discover events via Ticketmaster
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

When to Use Handoffs

Handoffs shine when a task has distinct phases that require different capabilities:
  • Research → Writing: GPT gathers information, Claude writes the final piece
  • Analysis → Code: A reasoning model plans the approach, a code model implements it
  • Triage → Specialist: A general model routes to domain-specific models
For simple tasks where one model handles everything, stick to a single model.
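The triage → specialist pattern above can be sketched as plain routing logic. This is a minimal illustration, not part of the Dedalus API: in practice the runner performs the handoff when you pass it a model list, and a coordinator model (not keyword matching) does the triage. The `classify` and `route` helpers below are hypothetical.

```python
# Hypothetical sketch of triage -> specialist routing.
# In the real workflow, triage is done by the coordinator model itself.
SPECIALISTS = {
    "writing": "anthropic/claude-opus-4-5",  # strong prose
    "code": "openai/gpt-5-codex",            # code generation
    "general": "openai/gpt-5.2",             # reasoning / tool use
}

def classify(task: str) -> str:
    """Naive keyword triage, standing in for a coordinator model."""
    lowered = task.lower()
    if any(word in lowered for word in ("write", "draft", "essay")):
        return "writing"
    if any(word in lowered for word in ("implement", "function", "bug")):
        return "code"
    return "general"

def route(task: str) -> str:
    """Map a task to the specialist model that should handle it."""
    return SPECIALISTS[classify(task)]
```

The same shape generalizes to the other patterns: research → writing is just two routed phases run in sequence, with the first phase's output fed to the second as input.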

Model Strengths

A rough guide to model selection:
| Task | Good models |
| --- | --- |
| Tool calling, reasoning | openai/gpt-5.2, xai/grok-4-1-fast-reasoning |
| Writing, creative work | anthropic/claude-opus-4-5 |
| Code generation | anthropic/claude-opus-4-5, openai/gpt-5-codex |
| Fast, cheap responses | gpt-5-mini |
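One way to act on this table is cost-based escalation: default to the fast, cheap model and hand off to a stronger one only when the task looks demanding. The sketch below is an illustrative assumption, not a Dedalus feature; the keyword threshold and model choices are placeholders for whatever escalation signal your workflow uses.

```python
# Hedged sketch: prefer the cheap model, escalate on hard-looking tasks.
# Keywords and model names are illustrative assumptions.
HARD_KEYWORDS = ("prove", "refactor", "multi-step")

def pick_model(task: str) -> str:
    """Return a cheap default, or a stronger reasoning model for hard tasks."""
    if any(keyword in task.lower() for keyword in HARD_KEYWORDS):
        return "openai/gpt-5.2"  # stronger reasoning model
    return "gpt-5-mini"          # fast, cheap default
```

Passing both models as an ordered list to `runner.run` (as in the example above) achieves a similar effect without hand-rolled dispatch.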

Next steps

  • Add multimodality: Images & Vision — Add image generation/vision to your workflow
  • See workflows: Use Cases — Multi-capability patterns
Last modified on February 28, 2026