MCP (Model Context Protocol) standardizes how AI agents interact with external services. Build a server once and it works with any MCP-compatible client. dedalus_mcp is our Python framework for building these servers. TypeScript support is coming in Q1 2026.
from dedalus_mcp import MCPServer, tool

@tool(description="Add two numbers")
def add(a: int, b: int) -> int:
    return a + b

server = MCPServer("calculator")
server.collect(add)

if __name__ == "__main__":
    import asyncio
    asyncio.run(server.serve())
Type hints become JSON Schema automatically. That’s it.

With Dedalus SDK

MCP integration requires no extra setup: pass servers directly to the mcp_servers parameter.
from dedalus_labs import AsyncDedalus, DedalusRunner

client = AsyncDedalus()
runner = DedalusRunner(client)

# Hosted MCP server (marketplace slug)
response = await runner.run(
    input="Search for authentication docs",
    model="anthropic/claude-sonnet-4-20250514",
    mcp_servers=["your-org/your-server"],
)

# Local MCP server URL
response = await runner.run(
    input="Search for authentication docs",
    model="anthropic/claude-sonnet-4-20250514",
    mcp_servers=["http://localhost:8000/mcp"],
)
That's the whole integration: the SDK handles connection, tool discovery, and execution.
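Under the hood, MCP is JSON-RPC 2.0, and "tool discovery" is a `tools/list` request that returns each tool's name and input schema. A rough sketch of the message shapes (simplified; see the MCP specification for the full schema):

```python
# Sketch of the JSON-RPC 2.0 messages behind MCP tool discovery.
# Field details are simplified for illustration.
import json

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server's response advertises each tool's name, description, and
# input schema -- this is what lets the SDK wire tools to the model
# without any manual configuration.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "add",
                "description": "Add two numbers",
                "inputSchema": {
                    "type": "object",
                    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
                },
            }
        ]
    },
}

print(json.dumps(list_request))
```

The SDK performs this exchange for every entry in mcp_servers, then exposes the discovered tools to the model.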

Server primitives

MCP servers expose three types of capabilities:
| Primitive | Control | Description |
| --- | --- | --- |
| Tools | Model | Functions the LLM calls during reasoning |
| Resources | Model/User | Data the LLM can read for context |
| Prompts | User | Message templates users select and render |
Tools are model-controlled: the LLM decides when to call them. Prompts are user-controlled: users choose which prompt to run. Resources can be either.
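The control model maps onto the MCP request methods for each primitive. The method names below follow the MCP specification; the mapping itself is an illustrative sketch:

```python
# Sketch: each primitive corresponds to MCP request methods, and the
# "control" column above determines who initiates them.
PRIMITIVES = {
    "tools":     {"initiator": "model",      "methods": ["tools/list", "tools/call"]},
    "resources": {"initiator": "model/user", "methods": ["resources/list", "resources/read"]},
    "prompts":   {"initiator": "user",       "methods": ["prompts/list", "prompts/get"]},
}

for name, info in PRIMITIVES.items():
    print(f"{name}: initiated by {info['initiator']} via {', '.join(info['methods'])}")
```

In practice this means a tool call happens mid-reasoning without user involvement, while a prompt is only rendered when the user explicitly selects it.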

Additional capabilities

| Capability | How |
| --- | --- |
| Progress | ctx.progress() for long-running tasks |
| Logging | ctx.info(), ctx.debug(), etc. |
| Cancellation | ctx.cancelled flag |
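The pattern for combining these in a long-running tool can be sketched with a stand-in Context class. The real ctx object is provided by the framework; only the method names above are taken from the table, and the Context stub here is a hypothetical illustration.

```python
# Sketch of progress reporting and cancellation in a long-running tool,
# using a stand-in Context class. The real ctx comes from dedalus_mcp.
class Context:
    def __init__(self):
        self.cancelled = False
        self.events = []  # records calls so the pattern is observable

    def progress(self, current, total):
        self.events.append(("progress", current, total))

    def info(self, message):
        self.events.append(("info", message))

def process_items(ctx, items):
    """A long-running tool that reports progress and honors cancellation."""
    done = []
    for i, item in enumerate(items):
        if ctx.cancelled:  # check the cancellation flag on each iteration
            ctx.info("cancelled early")
            break
        done.append(item * 2)
        ctx.progress(i + 1, len(items))
    return done

ctx = Context()
print(process_items(ctx, [1, 2, 3]))  # [2, 4, 6]
```

Checking ctx.cancelled inside the loop is what makes cancellation cooperative: the client sets the flag, and the tool exits at the next safe point.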
