Tools let agents call your Python functions. Decorate, register, serve.

Basic tool

from dedalus_mcp import MCPServer, tool

@tool(description="Add two numbers")
def add(a: int, b: int) -> int:
    return a + b

server = MCPServer("math")
server.collect(add)
The description tells the LLM what the tool does. Type hints become JSON Schema.
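The schema generation itself is handled by the library, but the core idea can be sketched with stdlib introspection. This is a simplified illustration, not dedalus_mcp's actual implementation:

```python
import inspect
from typing import get_type_hints

def add(a: int, b: int) -> int:
    return a + b

# Minimal mapping from Python primitives to JSON Schema type names
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def param_schema(fn):
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "type": "object",
        "properties": {name: {"type": TYPE_MAP[t]} for name, t in hints.items()},
        # Parameters without defaults are required
        "required": [
            name for name, p in inspect.signature(fn).parameters.items()
            if p.default is inspect.Parameter.empty
        ],
    }

schema = param_schema(add)
```

This is roughly what the LLM receives alongside the description when deciding how to call the tool.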

Async tools

import httpx

@tool(description="Fetch user data")
async def get_user(user_id: str) -> dict:
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"https://api.example.com/users/{user_id}")
        return resp.json()
Prefer async for I/O. Sync tools run in a thread pool so they don’t block.
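The thread-pool behavior for sync tools is the same pattern as `asyncio.to_thread`: blocking work is offloaded so the event loop keeps serving other requests. A standalone sketch of that mechanism:

```python
import asyncio
import time

def slow_sync(x: int) -> int:
    # Blocking work; would stall the event loop if awaited directly
    time.sleep(0.01)
    return x * 2

async def main():
    # Run three blocking calls concurrently on worker threads
    results = await asyncio.gather(
        *(asyncio.to_thread(slow_sync, i) for i in range(3))
    )
    return results

out = asyncio.run(main())  # [0, 2, 4]
```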

Type inference

Type hints become JSON Schema automatically:
from typing import Literal
from pydantic import BaseModel

class SearchFilters(BaseModel):
    category: str | None = None
    min_price: float = 0.0

@tool(description="Search products")
def search(
    query: str,
    limit: int = 10,
    sort: Literal["relevance", "price", "date"] = "relevance",
    filters: SearchFilters | None = None,
) -> list[dict]:
    ...
Supported: primitives, list, dict, Literal, Enum, Optional, Pydantic models, dataclasses, unions, nested models. Required parameters have no default. Optional parameters have one.

Decorator options

@tool(
    name="find_products",           # Override function name
    description="Search catalog",   # Required
    tags={"search", "catalog"},     # For filtering
)
def search_products_impl(query: str) -> list[dict]:
    ...

Structured returns

Return any JSON-serializable value:
@tool(description="Analyze text")
def analyze(text: str) -> dict:
    return {"word_count": len(text.split()), "char_count": len(text)}
Dicts become TextContent with JSON. Strings pass through. For explicit control:
from mcp.types import CallToolResult, TextContent

@tool(description="Custom result")
def custom() -> CallToolResult:
    return CallToolResult(
        content=[TextContent(type="text", text="Custom message")],
        isError=False,
    )
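The dict-to-JSON, string-pass-through behavior described above amounts to something like this (a sketch of the likely serialization step, not the library's code):

```python
import json

def wrap_result(value):
    # Strings pass through; other JSON-serializable values become JSON text
    if isinstance(value, str):
        return value
    return json.dumps(value)

wrapped = wrap_result({"word_count": 2})
```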

Context access

Logging, progress, and cancellation via get_context():
from dedalus_mcp import tool, get_context

@tool(description="Process files")
async def process_files(paths: list[str]) -> dict:
    ctx = get_context()
    
    await ctx.info("Starting", data={"count": len(paths)})
    
    async with ctx.progress(total=len(paths)) as tracker:
        results = []
        for path in paths:
            if ctx.cancelled:
                break
            result = await process(path)
            results.append(result)
            await tracker.advance(1)
    
    return {"processed": len(results)}
Check ctx.cancelled in loops. When true, clean up and return.
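The `ctx.cancelled` check is cooperative cancellation: the loop polls a flag and exits cleanly rather than being killed mid-step. The same pattern with a plain `asyncio.Event` standing in for the context flag:

```python
import asyncio

async def worker(items, cancel: asyncio.Event):
    done = []
    for item in items:
        if cancel.is_set():  # mirrors the ctx.cancelled check
            break            # clean up and return partial results
        await asyncio.sleep(0)
        done.append(item)
    return done

async def main():
    cancel = asyncio.Event()
    full = await worker([1, 2, 3], cancel)
    cancel.set()
    nothing = await worker([1, 2, 3], cancel)
    return full, nothing

full, nothing = asyncio.run(main())
```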

Allow-lists

Restrict visible tools:
server = MCPServer("gated")
server.collect(add, multiply, divide)
server.allow_tools({"add", "multiply"})  # divide is hidden
Calling a hidden tool returns “unknown tool”.
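The allow-list acts as a filter over the registered set; hidden tools stay registered but are invisible and uncallable. A minimal sketch of that gating logic:

```python
registered = {"add", "multiply", "divide"}
allowed = {"add", "multiply"}

def visible_tools():
    # Only tools in both sets are listed to the client
    return registered & allowed

def call_tool(name):
    if name not in visible_tools():
        return "unknown tool"
    return f"called {name}"
```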

Error handling

Raise exceptions normally:
@tool(description="Divide")
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
The error message goes to the LLM. Use clear messages that help it recover.
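A server converting exceptions into error results for the model might look like this (a sketch of the general pattern, not dedalus_mcp's dispatcher):

```python
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def safe_call(fn, *args):
    # Exceptions become error payloads instead of crashing the server
    try:
        return {"isError": False, "result": fn(*args)}
    except Exception as exc:
        return {"isError": True, "message": str(exc)}

ok = safe_call(divide, 6, 2)
err = safe_call(divide, 1, 0)
```

A message like "Cannot divide by zero" lets the model retry with different arguments instead of giving up.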

Testing

Test tools as normal functions:
def test_add():
    assert add(2, 3) == 5
For tools using context, test via integration or separate the logic:
@tool(description="Process with logging")
async def process(data: str) -> dict:
    ctx = get_context()
    await ctx.info("Processing")
    return do_work(data)  # Test do_work separately

def test_do_work():
    assert do_work("input") == {"result": "output"}
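For async tools whose bodies don't touch the context, one approach is to drive them with `asyncio.run` directly in the test (shown with a hypothetical tool body, not an API from the library):

```python
import asyncio

async def fetch_length(data: str) -> dict:
    # Hypothetical async tool body with no context calls
    await asyncio.sleep(0)
    return {"length": len(data)}

def test_fetch_length():
    result = asyncio.run(fetch_length("input"))
    assert result == {"length": 5}

test_fetch_length()
```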