```python
from dedalus_mcp import MCPServer, tool


@tool(description="Add two numbers")
def add(a: int, b: int) -> int:
    return a + b


server = MCPServer("calculator")
server.collect(add)

if __name__ == "__main__":
    import asyncio

    asyncio.run(server.serve())
```
Type hints become JSON Schema automatically. Register tools with `collect()`. The same pattern works for resources and prompts; a sketch follows under Server primitives.
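For instance, the `add` tool above surfaces to clients as a standard MCP tool definition. A rough sketch of the generated schema (the exact output of `dedalus_mcp` may differ in detail):

```python
# Rough sketch of the tool definition a client would see for `add`;
# the exact schema dedalus_mcp generates may differ in detail.
ADD_TOOL_DEF = {
    "name": "add",
    "description": "Add two numbers",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}
```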
**Server name must match your slug.** The name in `MCPServer("my-server")` must match your deployment slug and `ctx.dispatch()` calls. This ensures OAuth callbacks and request routing work correctly.
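One way to keep the names from drifting is to define the slug once. A minimal sketch; the `ctx.dispatch()` call shape below is an assumption, and only the name-matching rule comes from this page:

```python
from dedalus_mcp import MCPServer

# Define the name once so the server name, deployment slug, and any
# ctx.dispatch() target stay in sync.
SLUG = "my-server"  # placeholder; use your actual deployment slug

server = MCPServer(SLUG)


async def call_self(ctx):
    # Hypothetical call shape: the real ctx.dispatch() signature is not
    # documented here, only the requirement that the name matches SLUG.
    return await ctx.dispatch(SLUG)
```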
## With Dedalus SDK

MCP integration is trivial. Pass servers directly to `mcp_servers`:
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner


async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    # Hosted MCP server (marketplace slug)
    response = await runner.run(
        input="Search for authentication docs",
        model="anthropic/claude-sonnet-4-20250514",
        mcp_servers=["your-org/your-server"],
    )

    # Local MCP server URL
    response = await runner.run(
        input="Search for authentication docs",
        model="anthropic/claude-sonnet-4-20250514",
        mcp_servers=["http://localhost:8000/mcp"],
    )


asyncio.run(main())
```
That’s it. The SDK handles connection, tool discovery, and execution.
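Since `mcp_servers` is a list, mixing a hosted slug and a local URL in one run should presumably work as well (an untested assumption, shown with the same placeholders as above):

```python
async def main_mixed():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    # Assumption: the list can mix marketplace slugs and local URLs.
    response = await runner.run(
        input="Search for authentication docs",
        model="anthropic/claude-sonnet-4-20250514",
        mcp_servers=["your-org/your-server", "http://localhost:8000/mcp"],
    )
    return response
```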
## Server primitives
MCP servers expose three types of capabilities:
| Primitive | Control | Description |
|---|---|---|
| Tools | Model | Functions the LLM calls during reasoning. |
| Resources | Model/User | Data the LLM can read for context. |
| Prompts | User | Message templates users select and render. |
Tools are model-controlled: the LLM decides when to call them. Prompts are user-controlled: users choose which prompt to run. Resources can be either.
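A sketch of what registration might look like for the other two primitives, assuming `dedalus_mcp` exposes `resource` and `prompt` decorators analogous to `@tool` (the decorator names and signatures here are assumptions, not confirmed API):

```python
from dedalus_mcp import MCPServer, resource, prompt  # assumed exports

server = MCPServer("calculator")


@resource(uri="docs://readme", description="Project README")  # assumed shape
def readme() -> str:
    with open("README.md") as f:
        return f.read()


@prompt(description="Summarize a document")  # assumed shape
def summarize(doc: str) -> str:
    return f"Summarize the following document:\n\n{doc}"


server.collect(readme, summarize)  # collect() is shown above for tools
```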
## Additional capabilities

| Capability | How |
|---|---|
| Progress | `ctx.progress()` for long-running tasks |
| Logging | `ctx.info()`, `ctx.debug()`, etc. |
| Cancellation | `ctx.cancelled` flag |
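Tying these together, a sketch of a long-running tool (how `ctx` is injected and the exact method signatures are assumptions; only the attribute names come from the table above):

```python
from dedalus_mcp import tool


@tool(description="Process a batch of items")
async def process_batch(items: list[str], ctx) -> str:
    # `ctx` injection is assumed; only the attribute names below come
    # from the capability table.
    for i, item in enumerate(items):
        if ctx.cancelled:                 # client requested cancellation
            ctx.info("batch cancelled")
            return "cancelled"
        ctx.debug(f"processing {item}")
        ctx.progress(i / len(items))      # fractional progress (assumed arg)
        ...                               # actual work goes here
    return "done"
```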
## Next