# Use the llms.txt file Source: https://docs.dedaluslabs.ai/ai-optimizations/llms-txt Give your AI assistant instant access to Dedalus documentation. Point your AI assistant to our `llms.txt` for instant access to all Dedalus documentation and examples. ## Access URL For a directory of links to each part of the documentation: ```text theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/llms.txt ``` or for the entire documentation in one file: ```text theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/llms-full.txt ``` ## Usage Tell your AI assistant: > "Use the documentation at [https://docs.dedaluslabs.ai/llms.txt](https://docs.dedaluslabs.ai/llms.txt) to help me with Dedalus" Your AI will instantly understand: * All API endpoints and parameters * Complete code examples * Best practices and patterns * Troubleshooting guides # MCP server for our docs Source: https://docs.dedaluslabs.ai/ai-optimizations/using-mintlify-mcp Make your LLM an expert at Dedalus. The Dedalus documentation MCP server is automatically available at: ```text theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/mcp ``` ### Install with npx The simplest way to add the Dedalus MCP server to your AI assistant: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} npx mint-mcp add dedaluslabs.ai ``` Or if using a subdomain: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} npx mint-mcp add docs.dedaluslabs.ai ``` ### Add to Claude Desktop To add the MCP server to Claude Desktop: 1. Open Claude Desktop 2. Go to **Settings** → **Developer** → **Connectors** 3. Click **Add MCP Server** 4. Enter the server URL: `https://docs.dedaluslabs.ai/mcp` 5. 
Provide your API key if prompted ### Add to Cursor IDE For Cursor users, add to your `mcp.json` configuration: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "dedalus-docs": { "url": "https://docs.dedaluslabs.ai/mcp", "apiKey": "your-api-key-here" } } } ``` [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # Quickstart Source: https://docs.dedaluslabs.ai/api/index Unified API for chat completions, embeddings, audio, and image generation across multiple AI providers ## Welcome to the Dedalus API The Dedalus API provides a unified interface to interact with multiple AI model providers through a single, OpenAI-compatible API. Connect to models from OpenAI, Anthropic, Google, xAI, Mistral, DeepSeek, and more. ## Base URL ``` https://api.dedaluslabs.ai ``` ## Authentication All API endpoints require authentication using Bearer tokens. Include your API key in the `Authorization` header: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} Authorization: Bearer YOUR_API_KEY ``` Or use the `X-API-Key` header: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} X-API-Key: YOUR_API_KEY ``` Get your API key from the [Dedalus Dashboard](https://www.dedaluslabs.ai/dashboard/api-keys). 
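Either header form works for every endpoint. A minimal request sketch in Python (the `httpx` client and the `openai/gpt-4o` model name are illustrative choices, not requirements):

```python
import os

def auth_headers(api_key: str, use_x_api_key: bool = False) -> dict:
    """Build authentication headers; the API accepts either form."""
    if use_x_api_key:
        return {"X-API-Key": api_key}
    return {"Authorization": f"Bearer {api_key}"}

def chat(prompt: str) -> dict:
    import httpx  # imported here so the header helper has no dependencies

    response = httpx.post(
        "https://api.dedaluslabs.ai/v1/chat/completions",
        headers=auth_headers(os.environ["DEDALUS_API_KEY"]),
        json={
            "model": "openai/gpt-4o",  # any model from /v1/models
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60.0,
    )
    response.raise_for_status()
    return response.json()
```

The same headers work for the other endpoints as well, including `/v1/embeddings` and `/v1/ocr`.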
## Key Features * **Multi-Provider Support**: Access models from OpenAI, Anthropic, Google, xAI, and more through a single API * **MCP Integration**: Connect to Model Context Protocol servers for enhanced tool calling * **Streaming Support**: Real-time response streaming for all chat endpoints * **Tool Calling**: Execute functions and tools during conversations * **Multi-Model Routing**: Intelligent handoffs between different models ## SDKs Use our official SDKs for easy integration: Python: install with `uv pip install dedalus-labs`. TypeScript: install with `bun install dedalus-labs`. [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # OCR Source: https://docs.dedaluslabs.ai/api/ocr POST /v1/ocr Extract text from PDFs and images ## Overview The OCR endpoint extracts text from documents and images, returning clean markdown. Powered by Mistral's OCR model. **Supported formats:** PDF, PNG, JPEG, WebP ## Quick Start ```bash cURL theme={"theme":{"light":"github-light","dark":"github-dark"}} curl -X POST https://api.dedaluslabs.ai/v1/ocr \ -H "Authorization: Bearer $DEDALUS_API_KEY" \ -H "Content-Type: application/json" \ -d '{ "model": "mistral-ocr-latest", "document": { "type": "document_url", "document_url": "https://arxiv.org/pdf/1706.03762" } }' ``` ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import httpx import os response = httpx.post( "https://api.dedaluslabs.ai/v1/ocr", headers={"Authorization": f"Bearer {os.environ['DEDALUS_API_KEY']}"}, json={ "model": "mistral-ocr-latest", "document": { "type": "document_url", "document_url": "https://arxiv.org/pdf/1706.03762" } }, timeout=120.0 ) for page in response.json()["pages"]: print(f"Page {page['index']}:\n{page['markdown'][:200]}...") ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} const response = await fetch("https://api.dedaluslabs.ai/v1/ocr", { method: "POST", headers: { 
Authorization: `Bearer ${process.env.DEDALUS_API_KEY}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: "mistral-ocr-latest", document: { type: "document_url", document_url: "https://arxiv.org/pdf/1706.03762", }, }), }); const data = await response.json(); for (const page of data.pages) { console.log(`Page ${page.index}:\n${page.markdown.slice(0, 200)}...`); } ``` For local files, encode as base64 data URI: `data:application/pdf;base64, {base64_data}` ## Response ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "pages": [ { "index": 0, "markdown": "# Attention Is All You Need\n\nAshish Vaswani, Noam Shazeer...\n\n# Abstract\n\nThe dominant sequence transduction models..." }, { "index": 1, "markdown": "## 1 Introduction\n\nRecurrent neural networks..." } ], "model": "mistral-ocr-latest" } ``` ## Use Cases ### Invoice Processing Extract line items, totals, and dates from invoices for automated bookkeeping. ### Receipt Scanning Parse receipts for expense tracking—amounts, vendors, dates extracted as structured text. ### Document Digitization Convert scanned documents to searchable, editable markdown while preserving tables and formatting. ## Parameters | Parameter | Type | Required | Description | | ----------------------- | ------ | -------- | ---------------------------------------- | | `model` | string | No | OCR model. Default: `mistral-ocr-latest` | | `document.type` | string | Yes | Always `document_url` | | `document.document_url` | string | Yes | HTTPS URL or data URI | ## Limits * **Max file size:** 50 MB * **Max pages:** 1,000 per document * **Timeout:** 120 seconds # Response Schemas Source: https://docs.dedaluslabs.ai/api/schemas Reference for all API response objects and their structure This page documents all response schemas returned by the Dedalus API. All responses follow OpenAI-compatible formats. *** ## Dedalus Runner Response object returned by the `DedalusRunner` for non-streaming tool execution runs. 
Final text output from the conversation after all tool executions complete List of all tool execution results from the run Name of the tool that was executed The result returned by the tool execution The step number when this tool was executed Error message if the tool execution failed Total number of steps (LLM calls) used during the run List of tool names that were called during the run Full conversation history including system prompts, user messages, assistant responses, and tool calls/results. Useful for debugging, logging, or continuing conversations. Optional list of detected intents (when `return_intent=true`) Alias for `final_output` (legacy compatibility) Alias for `final_output` (legacy compatibility) Returns a copy of the full conversation history (`messages`) for use in follow-up runs. Enables multi-turn conversations by passing the result to subsequent `runner.run()` calls. ```python Example theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import Dedalus, DedalusRunner client = Dedalus(api_key="YOUR_API_KEY") runner = DedalusRunner(client) def get_weather(location: str) -> str: """Get the current weather for a location.""" return f"The weather in {location} is sunny and 72°F" result = runner.run( input="What's the weather like in San Francisco?", tools=[get_weather], model="openai/gpt-5-nano", max_steps=5 ) # Access result properties print(result.final_output) # "The weather in San Francisco is sunny and 72°F" print(result.steps_used) # e.g., 2 print(result.tools_called) # ["get_weather"] print(result.tool_results) # [{"name": "get_weather", "result": "The weather...", "step": 1}] ``` ```python Accessing Message History theme={"theme":{"light":"github-light","dark":"github-dark"}} import json # Print the full conversation history for msg in result.messages: role = msg.get("role") content = msg.get("content", "") if role == "user": print(f"User: {content}") elif role == "assistant": if msg.get("tool_calls"): tools = 
[tc["function"]["name"] for tc in msg["tool_calls"]] print(f"Assistant: [calling {', '.join(tools)}]") else: print(f"Assistant: {content}") elif role == "tool": print(f"Tool Result: {content[:100]}...") # Store message history to JSON for logging/debugging with open("conversation_log.json", "w") as f: json.dump(result.messages, f, indent=2) # Continue the conversation with message history follow_up = runner.run( messages=result.to_input_list(), # Pass previous conversation input="What about New York?", # Add new user message tools=[get_weather], model="openai/gpt-5-nano" ) ``` ```json Example Response theme={"theme":{"light":"github-light","dark":"github-dark"}} { "final_output": "The weather in San Francisco is sunny and 72°F", "tool_results": [ { "name": "get_weather", "result": "The weather in San Francisco is sunny and 72°F", "step": 1 } ], "steps_used": 2, "tools_called": ["get_weather"], "messages": [ { "role": "user", "content": "What's the weather like in San Francisco?" }, { "role": "assistant", "tool_calls": [ { "id": "call_abc123", "type": "function", "function": { "name": "get_weather", "arguments": "{\"location\": \"San Francisco\"}" } } ] }, { "role": "tool", "tool_call_id": "call_abc123", "content": "The weather in San Francisco is sunny and 72°F" }, { "role": "assistant", "content": "The weather in San Francisco is sunny and 72°F" } ], "intents": null } ``` *** ## Chat Completions The complete response object for non-streaming chat completions. Unique identifier for the chat completion Object type, always `chat.completion` Unix timestamp (seconds) when the completion was created The model used for completion (e.g., `openai/gpt-5-nano`) List of completion choices Index of this choice The generated message Role of the message author (`assistant`, `tool`, etc.) 
The content of the message List of tool calls made by the model Why the generation stopped: `stop`, `length`, `tool_calls`, `content_filter` Log probability information for tokens Token usage statistics Number of tokens in the prompt Number of tokens in the completion Total tokens used (prompt + completion) System fingerprint for reproducibility ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "id": "chatcmpl-abc123", "object": "chat.completion", "created": 1677652288, "model": "openai/gpt-5-nano", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! I'm doing well, thank you for asking." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 13, "completion_tokens": 12, "total_tokens": 25 } } ``` *** Streamed response chunks for streaming completions (`stream=true`). Unique identifier for the chat completion Object type, always `chat.completion.chunk` Unix timestamp when the chunk was created The model being used List of chunk choices Index of this choice Incremental content delta Role (only in first chunk) Incremental content string Incremental tool call updates Reason for completion (only in final chunk): `stop`, `length`, `tool_calls`, `content_filter`, or `null` ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "id": "chatcmpl-abc123", "object": "chat.completion.chunk", "created": 1677652288, "model": "openai/gpt-5-nano", "choices": [ { "index": 0, "delta": { "content": "Hello" }, "finish_reason": null } ] } ``` *** ## Embeddings Response object for embedding creation requests. 
Object type, always `list` List of embedding objects Object type, always `embedding` The embedding vector (array of floats) Index of this embedding The model used to generate embeddings Token usage information Number of tokens in the input Total tokens processed ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "object": "list", "data": [ { "object": "embedding", "embedding": [0.0023064255, -0.009327292, -0.0028842222], "index": 0 } ], "model": "openai/text-embedding-3-small", "usage": { "prompt_tokens": 8, "total_tokens": 8 } } ``` *** ## Models Response object for listing available models. Includes rich metadata about capabilities and routing. Object type, always `list` List of model objects Model identifier with provider prefix (e.g., `openai/gpt-4o`, `anthropic/claude-opus-4-5`) Provider name: `openai`, `anthropic`, `google`, `xai`, `deepseek`, `mistral`, etc. ISO 8601 timestamp when the model was created Human-readable display name (optional) Model description (optional) Model capabilities Supports text generation via chat completions Supports image input / multimodal Can generate images Supports audio input/output Supports tool/function calling Supports structured JSON output Supports streaming responses Supports extended reasoning (e.g., o1, o3, Claude thinking) Maximum input context window in tokens Maximum output tokens Provider-specific metadata Model status: `enabled`, `disabled`, `preview`, `deprecated` Which upstream API this model uses (e.g., `openai/chat/completions`, `anthropic/messages`) ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "object": "list", "data": [ { "id": "openai/gpt-4o", "provider": "openai", "created_at": "1970-01-01T00:00:00Z", "display_name": null, "description": null, "capabilities": { "text": true, "vision": null, "image_generation": null, "audio": null, "tools": null, "structured_output": null, "streaming": null, "thinking": null, "input_token_limit": null, 
"output_token_limit": null }, "provider_info": { "status": "enabled", "upstream_api": "openai/chat/completions" } }, { "id": "openai/o1", "provider": "openai", "created_at": "1970-01-01T00:00:00Z", "capabilities": { "text": true, "thinking": true }, "provider_info": { "status": "enabled", "upstream_api": "openai/chat/completions" } }, { "id": "anthropic/claude-opus-4-5", "provider": "anthropic", "created_at": "1970-01-01T00:00:00Z", "capabilities": { "text": true, "vision": true, "tools": true }, "provider_info": { "status": "enabled", "upstream_api": "anthropic/messages" } } ] } ``` *** ## Images Response object for image generation requests. Unix timestamp when the images were generated List of generated image objects URL of the generated image (when `response_format="url"`) Base64-encoded image data (when `response_format="b64_json"`) The revised prompt used to generate the image (may differ from input for safety) ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "created": 1677652288, "data": [ { "url": "https://images.example.com/abc123.png", "revised_prompt": "A cute baby sea otter floating on its back in calm blue water" } ] } ``` *** ## Audio Response object for audio transcription requests. The transcribed text from the audio file ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "text": "Hello, this is a test of audio transcription." } ``` *** Response object for audio translation requests (always translates to English). The translated text from the audio file (in English) ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "text": "Hello, this is a test of audio translation." } ``` *** ## Errors All endpoints may return errors with this structure. 
Error information object Human-readable error message Error type: `invalid_request_error`, `authentication_error`, `rate_limit_error`, `server_error` Specific error code for programmatic handling Parameter that caused the error (if applicable) ```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}} { "error": { "message": "Invalid API key provided", "type": "authentication_error", "code": "invalid_api_key" } } ``` # Use docs programmatically Source: https://docs.dedaluslabs.ai/api/use-these-docs Connect Dedalus documentation to your AI tools and workflows We want to make our documentation as accessible as possible. We've included several ways for you to use these docs programmatically through AI assistants, code editors, and direct integrations, such as Model Context Protocol (MCP). ## Quick access options On any page in our documentation, you'll find a contextual menu dropdown in the top right corner with quick access options including our `llms.txt`, MCP server connection, and other integrations such as ChatGPT and Claude. Quick access menu showing Copy page, View as Markdown, Open in ChatGPT, Open in Claude, and Copy MCP Server options ## Use our MCP server Our documentation includes a built-in **Model Context Protocol (MCP) server** that lets AI applications query the latest docs in real-time. The Dedalus docs MCP server is available at: ```txt theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/mcp ``` Once connected, you can ask your AI assistant questions about Dedalus SDK, MCP servers, and our platform, and it will search our documentation to provide accurate, current answers. 
### Connect with Claude Code If you're using Claude Code, run this command in your terminal to add the server to your current project: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus https://docs.dedaluslabs.ai/mcp ``` **Project (local) scoped** The command above adds the MCP server only to your current project/working directory. To add the MCP server globally and access it in all projects, add the user scope by adding `--scope user` to the command: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus --scope user https://docs.dedaluslabs.ai/mcp ``` ### Connect with Claude Desktop 1. Open Claude Desktop 2. Go to **Settings** → **Developer** → **Connectors** 3. Click **Add MCP Server** 4. Add our MCP server URL: `https://docs.dedaluslabs.ai/mcp` ### Connect with Codex CLI If you're using OpenAI Codex CLI, run this command in your terminal to add the server globally: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} codex mcp add dedalus-docs --url https://docs.dedaluslabs.ai/mcp ``` ### Connect with Cursor Install in one click: Add to Cursor Or add this configuration to `.cursor/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with VS Code Install in one click: Install in VS Code Or add this configuration to `.vscode/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "servers": { "docs-dedalus": { "type": "http", "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with Antigravity Add the following to your MCP settings configuration file: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "serverUrl": "https://docs.dedaluslabs.ai/mcp" } } } ``` ## Learn more Have questions or feedback? 
Join our [Discord community](https://discord.gg/RuDhZKnq5R) or [email us](mailto:support@dedaluslabs.ai). # Create Chat Completion Source: https://docs.dedaluslabs.ai/api/v1/create-chat-completion /openapi.json post /v1/chat/completions Generate a model response. Supports streaming, tools, and MCP servers. # Create Embeddings Source: https://docs.dedaluslabs.ai/api/v1/create-embeddings /openapi.json post /v1/embeddings Create embeddings using the configured provider. # Create Image Source: https://docs.dedaluslabs.ai/api/v1/create-image /openapi.json post /v1/images/generations Generate images from text prompts. Pure image generation models only (DALL-E, GPT Image). For multimodal models like gemini-2.5-flash-image, use /v1/chat/completions. # Create Speech Source: https://docs.dedaluslabs.ai/api/v1/create-speech /openapi.json post /v1/audio/speech Generate speech audio from text. Generates audio from the input text using text-to-speech models. Supports multiple voices and output formats including mp3, opus, aac, flac, wav, and pcm. Returns streaming audio data that can be saved to a file or streamed directly to users. # Create Transcription Source: https://docs.dedaluslabs.ai/api/v1/create-transcription /openapi.json post /v1/audio/transcriptions Transcribe audio into text. Transcribes audio files using OpenAI's Whisper model. Supports multiple audio formats including mp3, mp4, mpeg, mpga, m4a, wav, and webm. Maximum file size is 25 MB. 
Args: file: Audio file to transcribe (required) model: Model ID to use (e.g., "openai/whisper-1") language: ISO-639-1 language code (e.g., "en", "es") - improves accuracy prompt: Optional text to guide the model's style response_format: Format of the output (json, text, srt, verbose_json, vtt) temperature: Sampling temperature between 0 and 1 Returns: Transcription object with the transcribed text # Create Translation Source: https://docs.dedaluslabs.ai/api/v1/create-translation /openapi.json post /v1/audio/translations Translate audio into English. Translates audio files in any supported language to English text using OpenAI's Whisper model. Supports the same audio formats as transcription. Maximum file size is 25 MB. Args: file: Audio file to translate (required) model: Model ID to use (e.g., "openai/whisper-1") prompt: Optional text to guide the model's style response_format: Format of the output (json, text, srt, verbose_json, vtt) temperature: Sampling temperature between 0 and 1 Returns: Translation object with the English translation # Use docs programmatically Source: https://docs.dedaluslabs.ai/contextual/use-these-docs Connect Dedalus documentation to your AI tools and workflows We want to make our documentation as accessible as possible. We've included several ways for you to use these docs programmatically through AI assistants, code editors, and direct integrations, such as Model Context Protocol (MCP). ## Quick access options On any page in our documentation, you'll find a contextual menu dropdown in the top right corner with quick access options including our `llms.txt`, MCP server connection, and other integrations such as ChatGPT and Claude. Quick access menu showing Copy page, View as Markdown, Open in ChatGPT, Open in Claude, and Copy MCP Server options ## Use our MCP server Our documentation includes a built-in **Model Context Protocol (MCP) server** that lets AI applications query the latest docs in real-time. 
The Dedalus docs MCP server is available at: ```txt theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/mcp ``` Once connected, you can ask your AI assistant questions about Dedalus SDK, MCP servers, and our platform, and it will search our documentation to provide accurate, current answers. ### Connect with Claude Code If you're using Claude Code, run this command in your terminal to add the server to your current project: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus https://docs.dedaluslabs.ai/mcp ``` **Project (local) scoped** The command above adds the MCP server only to your current project/working directory. To add the MCP server globally and access it in all projects, add the user scope by adding `--scope user` to the command: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus --scope user https://docs.dedaluslabs.ai/mcp ``` ### Connect with Claude Desktop 1. Open Claude Desktop 2. Go to **Settings** → **Developer** → **Connectors** 3. Click **Add MCP Server** 4. 
Add our MCP server URL: `https://docs.dedaluslabs.ai/mcp` ### Connect with Codex CLI If you're using OpenAI Codex CLI, run this command in your terminal to add the server globally: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} codex mcp add dedalus-docs --url https://docs.dedaluslabs.ai/mcp ``` ### Connect with Cursor Install in one click: Add to Cursor Or add this configuration to `.cursor/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with VS Code Install in one click: Install in VS Code Or add this configuration to `.vscode/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "servers": { "docs-dedalus": { "type": "http", "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with Antigravity Add the following to your MCP settings configuration file: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "serverUrl": "https://docs.dedaluslabs.ai/mcp" } } } ``` ## Learn more Have questions or feedback? Join our [Discord community](https://discord.gg/RuDhZKnq5R) or [email us](mailto:support@dedaluslabs.ai). *** # Getting Started Source: https://docs.dedaluslabs.ai/dmcp/authorization Protect MCP servers with DAuth or external OAuth MCP servers can require OAuth 2.1 tokens. Choose between **DAuth** (Dedalus Auth) for managed authentication with credential isolation, or bring your own authorization server. ## DAuth (Dedalus Auth) DAuth is Dedalus's managed authorization system. It provides OAuth 2.1 token issuance with a key security property: **credentials never leave a sealed execution boundary**. ### Why DAuth? Traditional credential handling exposes secrets to your application code. DAuth isolates credentials in a secure enclave—your MCP server receives an opaque connection handle, not raw API keys. 
* **Credentials never exposed** — Encrypted client-side, decrypted only in a sealed execution boundary * **Opaque handles** — Your code references connections by handle, never sees raw secrets * **Sender-constrained tokens** — Tokens are cryptographically bound to the client; stolen tokens are unusable * **Networkless execution** — Credential decryption and API calls happen entirely within an isolated enclave; raw secrets never traverse the network Learn how credential isolation and sealed execution protect your secrets. ### Quick Start ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer from dedalus_mcp.server import AuthorizationConfig server = MCPServer( "protected-server", authorization=AuthorizationConfig( enabled=True, required_scopes=["read"], ), ) ``` By default, `authorization_servers` points to `https://as.dedaluslabs.ai` (the DAuth control plane). For a complete working example, see the production-ready server with GitHub and Supabase integrations. Unauthenticated requests get `401` with a `WWW-Authenticate` challenge pointing to the protected resource metadata. ### Server-level Scopes All requests must present these scopes. Scope names are arbitrary strings you define—common patterns are `read`/`write` for general access or `resource:action` (e.g., `files:delete`) for fine-grained control. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} authorization=AuthorizationConfig( enabled=True, required_scopes=["read", "write"], # Required for all tools ) ``` ### Per-tool Scopes Gate sensitive tools with additional scope requirements: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import os from dedalus_mcp import tool @tool(description="List files") def list_files(path: str) -> list[str]: return os.listdir(path) # No extra scopes needed @tool(description="Delete file", required_scopes=["files:delete"]) def delete_file(path: str) -> dict: os.remove(path) return {"deleted": path} ``` A token with `read` can call `list_files`. Calling `delete_file` without `files:delete` returns an error: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "isError": true, "content": [ { "type": "text", "text": "Tool \"delete_file\" requires scopes: ['files:delete']. Missing: ['files:delete']" } ] } ``` ### Configuration Options ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} AuthorizationConfig( enabled=True, authorization_servers=["https://as.dedaluslabs.ai"], # DAuth (default) required_scopes=["read"], metadata_path="/.well-known/oauth-protected-resource", cache_ttl=300, fail_open=False, ) ``` | Option | Default | Description | | ----------------------- | --------------------------------------- | ------------------------------------ | | `enabled` | `False` | Enable authorization enforcement | | `authorization_servers` | `["https://as.dedaluslabs.ai"]` | DAuth or custom OAuth AS URLs | | `required_scopes` | `[]` | Scopes required for all requests | | `metadata_path` | `/.well-known/oauth-protected-resource` | PRM endpoint path | | `cache_ttl` | `300` | Cache duration for metadata | | `fail_open` | `False` | Allow requests when validation fails | ### Access Claims in Tools Inspect the authenticated user in your tools: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from 
dedalus_mcp import tool, get_context @tool(description="Get current user") def whoami() -> dict: ctx = get_context() auth = ctx.auth # AuthorizationContext or None if auth is None: return {"user": "anonymous"} return { "subject": auth.subject, "scopes": auth.scopes, "claims": auth.claims, } ``` ### DPoP Support DAuth uses DPoP (Demonstrating Proof-of-Possession) by default. Tokens are cryptographically bound to the client's key—even if a token is stolen, it's useless without the corresponding private key. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} server = MCPServer( "dpop-server", authorization=AuthorizationConfig( enabled=True, dpop_required=True, ), ) ``` ### Environment variables Remember to add these variables to your environment. DAuth works natively with the Dedalus SDK, so an API key is required. Get your API key from the [dashboard](https://www.dedaluslabs.ai/dashboard/api-keys). ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # Dedalus Platform (for _client.py testing) DEDALUS_API_KEY=dsk-live-... DEDALUS_API_URL=https://api.dedaluslabs.ai DEDALUS_AS_URL=https://as.dedaluslabs.ai ``` ## External Authorization Servers Use your own OAuth 2.1 provider instead of DAuth. This is useful when integrating with existing identity infrastructure. External authorization servers don't provide the sealed execution model. Your MCP server will handle credentials directly. 
### Custom Provider ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer from dedalus_mcp.server import AuthorizationConfig, JWTValidatorConfig, JWTValidator server = MCPServer( "my-server", authorization=AuthorizationConfig( enabled=True, authorization_servers=["https://auth.mycompany.com"], required_scopes=["api:access"], ), ) # Configure JWT validation for your provider jwt_config = JWTValidatorConfig( jwks_uri="https://auth.mycompany.com/.well-known/jwks.json", issuer="https://auth.mycompany.com", audience="https://my-mcp-server.example.com", ) server.set_authorization_provider(JWTValidator(jwt_config)) ``` ### Auth0 ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} jwt_config = JWTValidatorConfig( jwks_uri="https://YOUR_DOMAIN.auth0.com/.well-known/jwks.json", issuer="https://YOUR_DOMAIN.auth0.com/", audience="https://my-mcp-api", ) ``` ### Okta ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} jwt_config = JWTValidatorConfig( jwks_uri="https://YOUR_DOMAIN.okta.com/oauth2/default/v1/keys", issuer="https://YOUR_DOMAIN.okta.com/oauth2/default", audience="api://my-mcp-server", ) ``` ### Keycloak ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} jwt_config = JWTValidatorConfig( jwks_uri="https://keycloak.example.com/realms/myrealm/protocol/openid-connect/certs", issuer="https://keycloak.example.com/realms/myrealm", audience="my-mcp-client", ) ``` *** ## Testing Test authorization with the full server and client: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import pytest from dedalus_mcp import MCPServer, tool from dedalus_mcp.server import AuthorizationConfig from dedalus_mcp.client import MCPClient, BearerAuth @tool(description="Delete file", required_scopes=["files:delete"]) def delete_file(path: str) -> dict: return {"deleted": path} @pytest.fixture async def protected_server(): server = MCPServer( "test", 
        authorization=AuthorizationConfig(enabled=True, required_scopes=["read"]),
    )
    server.collect(delete_file)
    task = asyncio.create_task(server.serve())
    await asyncio.sleep(0.1)
    yield server
    task.cancel()


async def test_with_valid_token(protected_server):
    # Use a test token with required scopes
    client = await MCPClient.connect(
        "http://127.0.0.1:8000/mcp",
        auth=BearerAuth(access_token="test-token-with-scopes"),
    )
    result = await client.call_tool("delete_file", {"path": "/tmp/test.txt"})
    await client.close()
```

# Bearer Auth
Source: https://docs.dedaluslabs.ai/dmcp/client/bearer-auth

Authenticate with Bearer tokens

Bearer tokens are the simplest way to authenticate to protected APIs and MCP servers. They work well for API keys, service accounts, and other non-interactive workflows.

## With Dedalus SDK (DAuth)

If you're using the Dedalus runner / marketplace, you typically declare a **Connection** schema (what secrets you need) and then bind it to real values at runtime.

### Step 1: Define a connection

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_mcp import Connection, SecretKeys

x = Connection(
    name="x",
    secrets=SecretKeys(token="X_BEARER_TOKEN"),
    base_url="https://api.x.com",  # your API provider base URL
    auth_header_format="Bearer {api_key}",
)
```

### Step 2: Bind credentials

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os

from dedalus_mcp import SecretValues

# SecretValues binds actual values to the Connection schema.
x_secrets = SecretValues(x, token=os.getenv("X_BEARER_TOKEN", "")) ``` ### Step 3: Pass to your runner ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import AsyncDedalus, DedalusRunner async def main(): client = AsyncDedalus() runner = DedalusRunner(client) response = await runner.run( input="Find trending topics", model="anthropic/claude-sonnet-4-20250514", mcp_servers=["windsor/x-api-mcp"], credentials=[x_secrets], ) ``` ### Step 4: Environment variables ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # Dedalus credentials DEDALUS_API_KEY=dsk_your_key DEDALUS_AS_URL=https://as.dedaluslabs.ai # Your external API credentials X_BEARER_TOKEN=AAAA... ``` ## `auth_header_format` reference `auth_header_format` controls how the server formats the `Authorization` header when calling **external APIs** through dispatch. The format string **must include `{api_key}`**. | API | Format | Header sent | | ----------- | ------------------ | -------------------------------- | | X (Twitter) | `Bearer {api_key}` | `Authorization: Bearer AAAA...` | | GitHub | `token {api_key}` | `Authorization: token ghp_...` | | Slack | `Bearer {api_key}` | `Authorization: Bearer xoxb-...` | **When to omit `auth_header_format`:** * APIs that authenticate via **query params** or **request body** * APIs with custom auth mechanisms (not a standard auth header) **How to find the right format:** * Check the external API’s auth docs * Look at the API’s 401 response for hints * Most modern APIs use `Bearer {api_key}` ## When to use **Bearer tokens work well for:** * API keys and service tokens * Service-to-service calls * CI/CD pipelines * Backend integrations **Use OAuth instead for:** * User-facing apps * Delegated access (“act on behalf of a user”) * Consent flows and refresh tokens ## Standalone `dedalus_mcp` client If you’re connecting directly to an MCP server that expects a Bearer token, pass `BearerAuth` into `MCPClient.connect(...)`: 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import BearerAuth, MCPClient client = await MCPClient.connect( "http://127.0.0.1:8000/mcp", auth=BearerAuth(access_token="your-token"), ) ``` This sends: ```http theme={"theme":{"light":"github-light","dark":"github-dark"}} Authorization: Bearer your-token ``` # Elicitation Source: https://docs.dedaluslabs.ai/dmcp/client/elicitation Gather user input during tool execution Servers can request user input during tool execution via elicitation. The client handler collects input matching a JSON schema and returns it to the server. ## Handler ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import ClientCapabilitiesConfig, open_connection from dedalus_mcp.types import ( ElicitRequestParams, ElicitResult, ErrorData, ) async def elicitation_handler( context: object, params: ElicitRequestParams, ) -> ElicitResult | ErrorData: """Handle elicitation requests from the server.""" print(f"\nServer requests input: {params.message}") # Collect values for each field in the schema content = {} schema = params.requestedSchema properties = schema.get("properties", {}) required = schema.get("required", []) for field, field_schema in properties.items(): field_type = field_schema.get("type", "string") is_required = field in required value = input(f"{field} ({field_type}): ") if not value and is_required: return ElicitResult(action="decline") # Type coercion if field_type == "boolean": content[field] = value.lower() in ("true", "yes", "1", "y") elif field_type == "integer": content[field] = int(value) elif field_type == "number": content[field] = float(value) else: content[field] = value return ElicitResult(action="accept", content=content) ``` ## Usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} capabilities = ClientCapabilitiesConfig(elicitation=elicitation_handler) async with open_connection( 
url="http://127.0.0.1:8000/mcp", transport="streamable-http", capabilities=capabilities, ) as client: # Server can now request user input during tool execution result = await client.call_tool("deploy", {"env": "production"}) ``` ## Actions The handler returns one of three actions: | Action | Description | | --------- | ------------------------------------------------- | | `accept` | Submit collected content to the server | | `decline` | Reject the request without canceling the workflow | | `cancel` | Terminate the operation entirely | ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # Accept with content return ElicitResult(action="accept", content={"confirmed": True}) # Decline (user refused) return ElicitResult(action="decline") # Cancel (abort operation) return ElicitResult(action="cancel") ``` ## Auto-Accept Example For non-interactive environments, auto-accept with defaults: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} async def elicitation_handler(context, params): """Auto-accept with default values for testing.""" content = {} properties = params.requestedSchema.get("properties", {}) for field, schema in properties.items(): field_type = schema.get("type", "string") if field_type == "boolean": content[field] = True elif field_type in ("integer", "number"): content[field] = 0 else: content[field] = "auto-filled" return ElicitResult(action="accept", content=content) ``` # Overview Source: https://docs.dedaluslabs.ai/dmcp/client/index Programmatic client for interacting with MCP servers `MCPClient` is an async Python client for talking to any MCP server. It handles the MCP handshake (`initialize`), transport setup, and session management—so you can focus on the operations you want to perform. 
Use `MCPClient` when you want **explicit, predictable control**, like: * **Testing MCP servers** during development * **Building applications** that need reliable MCP interactions * **Building higher-level clients** (including "agentic" flows) on top of a typed protocol layer ## Quick start Connect → do work → close. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_mcp.client import MCPClient async def main(): # Local dev default when running `await server.serve()` client = await MCPClient.connect("http://127.0.0.1:8000/mcp") try: tools = await client.list_tools() result = await client.call_tool("add", {"a": 5, "b": 3}) print(result) finally: await client.close() asyncio.run(main()) ``` Prefer `async with` if you want guaranteed cleanup: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import MCPClient async with await MCPClient.connect("http://127.0.0.1:8000/mcp") as client: tools = await client.list_tools() ``` ## Connection ### Connect ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import MCPClient client = await MCPClient.connect("http://127.0.0.1:8000/mcp") ``` ### Protocol info After connecting, `initialize_result` is populated: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} print(f"Server: {client.initialize_result.serverInfo.name}") print(f"Protocol: {client.initialize_result.protocolVersion}") print(f"Session: {client.session_id}") ``` ### Close ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await client.close() ``` ### Alternative: `open_connection(...)` `open_connection(...)` is a convenience wrapper that yields a connected `MCPClient` and cleans up automatically: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: tools = await 
client.list_tools()
```

### Ping

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.ping()
```

## Operations

Most client code falls into one of these buckets:

* [Tools](/dmcp/client/tools): list and call server-side functions
* [Resources](/dmcp/client/resources): list and read data sources
* [Prompts](/dmcp/client/prompts): list and get message templates

## Client capabilities (server → client)

Some MCP features are initiated by the server. If you want to support them, you configure handlers when connecting.

* [Sampling](/dmcp/client/sampling): handle LLM completion requests from servers
* [Elicitation](/dmcp/client/elicitation): gather user input during tool execution
* [Roots](/dmcp/client/roots): advertise filesystem boundaries to servers
* [Logging](/dmcp/client/logging): receive log messages from servers

## Authentication

For protected MCP servers:

* [Bearer Auth](/dmcp/client/bearer-auth): API keys and service tokens
* [OAuth](/dmcp/client/oauth): user consent and delegated access via browser flow

### DPoP authentication

For servers using DPoP (RFC 9449) sender-constrained tokens:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_mcp.auth.dpop import generate_dpop_keypair
from dedalus_mcp.client import DPoPAuth, MCPClient

dpop_key, _ = generate_dpop_keypair()
auth = DPoPAuth(access_token="eyJ...", dpop_key=dpop_key)

# Local dev:
client = await MCPClient.connect("http://127.0.0.1:8000/mcp", auth=auth)

# Production:
# client = await MCPClient.connect("", auth=auth)
```

# Logging
Source: https://docs.dedaluslabs.ai/dmcp/client/logging

Receive log messages from servers

Servers can send log messages to clients during tool execution. This provides visibility into server operations and helps with debugging.
## Handler ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import ClientCapabilitiesConfig, open_connection from dedalus_mcp.types import LoggingMessageNotificationParams def logging_handler(params: LoggingMessageNotificationParams) -> None: """Handle log messages from the server.""" level = params.level.upper() data = params.data print(f"[{level}] {data}") ``` ## Usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} capabilities = ClientCapabilitiesConfig(logging=logging_handler) async with open_connection( url="http://127.0.0.1:8000/mcp", transport="streamable-http", capabilities=capabilities, ) as client: # Server logs will be routed to your handler result = await client.call_tool("process", {"file": "data.csv"}) ``` ## Log Levels Servers can send logs at different levels: | Level | Description | | --------- | ------------------------------ | | `debug` | Detailed debugging information | | `info` | General operational messages | | `warning` | Warning conditions | | `error` | Error conditions | ## Structured Logging Route to your logging framework: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import logging logger = logging.getLogger("mcp.server") def logging_handler(params: LoggingMessageNotificationParams) -> None: level = getattr(logging, params.level.upper(), logging.INFO) logger.log(level, params.data) ``` ## Full Example ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_mcp.client import ClientCapabilitiesConfig, open_connection from dedalus_mcp.types import LoggingMessageNotificationParams def logging_handler(params: LoggingMessageNotificationParams) -> None: level = params.level.upper() print(f"[SERVER {level}] {params.data}") async def main(): capabilities = ClientCapabilitiesConfig(logging=logging_handler) async with open_connection( url="http://127.0.0.1:8000/mcp", transport="streamable-http", 
        capabilities=capabilities,
    ) as client:
        print(f"Connected: {client.initialize_result.serverInfo.name}")

        tools = await client.list_tools()
        print(f"Available tools: {[t.name for t in tools.tools]}")

        # Call a tool - logs will appear via handler
        result = await client.call_tool("analyze", {"input": "test"})
        print(f"Result: {result.content}")

asyncio.run(main())
```

# OAuth
Source: https://docs.dedaluslabs.ai/dmcp/client/oauth

Authenticate with OAuth browser flow

Use OAuth for MCP servers that require user consent, like Gmail, Google Calendar, or other services with delegated access.

## OAuth Flow

The OAuth flow is triggered when you call an MCP server that requires user authentication:

1. The SDK calls the MCP server. If no valid token exists, the server returns `401` with a `WWW-Authenticate` header.
2. The SDK fetches `/.well-known/oauth-protected-resource` (RFC 9728) to discover the authorization server and supported scopes.
3. The SDK raises `AuthenticationError` containing a `connect_url`—the full OAuth authorization URL.
4. Your app opens the user's browser to the `connect_url`. The user logs in and grants (or denies) the requested scopes.
5. Upon approval, the authorization code is exchanged for tokens using PKCE, and DAuth stores the tokens server-side.
6. The user returns to your app and triggers a retry. The SDK re-sends the request, now with valid credentials.
7. The access token is automatically included for subsequent requests to the MCP server. If it expires, DAuth automatically uses the refresh token to obtain a new access token.
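The discovery document fetched in step 2 is standard RFC 9728 protected-resource metadata. It looks roughly like this (field names per the RFC; the values here are illustrative, not real endpoints):

```python
import json

# Illustrative RFC 9728 protected-resource metadata; values are made up.
raw = """
{
  "resource": "https://mcp.example.com",
  "authorization_servers": ["https://as.dedaluslabs.ai"],
  "scopes_supported": ["gmail.readonly", "gmail.send"]
}
"""
metadata = json.loads(raw)

# The SDK uses authorization_servers to build the connect_url it raises.
print(metadata["authorization_servers"][0])  # https://as.dedaluslabs.ai
print(metadata["scopes_supported"])
```

You normally never fetch this yourself; the SDK performs the discovery and surfaces the result through `AuthenticationError`.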
## How It Works ```mermaid theme={"theme":{"light":"github-light","dark":"github-dark"}} sequenceDiagram participant U as User participant S as SDK participant M as MCP Server participant A as Auth Server U->>S: Request S->>M: Call tool M-->>S: 401 Unauthorized S->>M: GET /.well-known/oauth-protected-resource M-->>S: {authorization_servers, scopes} S-->>U: AuthenticationError (connect_url) U->>A: Open browser, login + consent A-->>U: Redirect (tokens stored) U->>S: Retry S->>M: Call tool + token M-->>S: Response ``` ## OAuth Retry Helper Handle the OAuth flow with a retry wrapper: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import webbrowser from collections.abc import Awaitable, Callable from typing import TypeVar from dedalus_labs import AuthenticationError T = TypeVar("T") async def with_oauth_retry(fn: Callable[[], Awaitable[T]]) -> T: """Run async function, handling OAuth browser flow if needed.""" try: return await fn() except AuthenticationError as e: body = e.body if isinstance(e.body, dict) else {} url = body.get("connect_url") or body.get("detail", {}).get("connect_url") if not url: raise print("\nOpening browser for OAuth...") print("If the browser does not open, visit:\n") print(url) webbrowser.open(url) input("\nPress Enter after completing OAuth...") return await fn() ``` ## Full Example: DedalusRunner ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import webbrowser from collections.abc import Awaitable, Callable from typing import TypeVar from dotenv import load_dotenv load_dotenv() from dedalus_labs import AsyncDedalus, AuthenticationError, DedalusRunner T = TypeVar("T") async def with_oauth_retry(fn: Callable[[], Awaitable[T]]) -> T: try: return await fn() except AuthenticationError as e: body = e.body if isinstance(e.body, dict) else {} url = body.get("connect_url") or body.get("detail", {}).get("connect_url") if not url: raise webbrowser.open(url) input("\nPress Enter after 
completing OAuth...") return await fn() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await with_oauth_retry( lambda: runner.run( input="List my recent emails and summarize them", model="openai/gpt-4.1", mcp_servers=["anny_personal/gmail-mcp"], ) ) print(result.output) if result.mcp_results: for r in result.mcp_results: print(f"{r.tool_name} ({r.duration_ms}ms): {r.result}") asyncio.run(main()) ``` ## Full Example: Raw Client For single requests with full control over API response: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import webbrowser from collections.abc import Awaitable, Callable from typing import TypeVar from dotenv import load_dotenv load_dotenv() from dedalus_labs import AsyncDedalus, AuthenticationError T = TypeVar("T") async def with_oauth_retry(fn: Callable[[], Awaitable[T]]) -> T: try: return await fn() except AuthenticationError as e: body = e.body if isinstance(e.body, dict) else {} url = body.get("connect_url") or body.get("detail", {}).get("connect_url") if not url: raise webbrowser.open(url) input("\nPress Enter after completing OAuth...") return await fn() async def main(): client = AsyncDedalus() async def do_request(): return await client.chat.completions.create( model="openai/gpt-4.1", messages=[ { "role": "user", "content": "List my recent emails and summarize them", } ], mcp_servers=["anny_personal/gmail-mcp"], ) resp = await with_oauth_retry(do_request) print(resp.choices[0].message.content) if resp.mcp_tool_results: for r in resp.mcp_tool_results: print(f"{r.tool_name} ({r.duration_ms}ms): {r.result}") asyncio.run(main()) ``` ## Environment ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # .env DEDALUS_API_KEY=dsk-live-... DEDALUS_API_URL=https://api.dedaluslabs.ai DEDALUS_AS_URL=https://as.dedaluslabs.ai ``` No OAuth credentials needed client-side. The MCP server handles OAuth configuration, and DAuth manages token storage. 
## When to Use **OAuth works for:** * User-facing applications * Delegated access (acting on behalf of users) * Services like Gmail, Google Calendar, Linear, GitHub **Use [Bearer Auth](/dmcp/client/bearer-auth) instead for:** * API keys and service tokens * Backend integrations without user context * Service-to-service calls # Prompts Source: https://docs.dedaluslabs.ai/dmcp/client/prompts List and render prompts from MCP servers # Prompts > List and render prompts from MCP servers Prompts are reusable message templates exposed by the server. A prompt can accept **string-valued arguments** and returns a sequence of messages you can feed into an LLM conversation. **If you’re new to this:** think of a prompt as a **pre-written chat script** stored on the server (like “summarize” or “code review”). Your client can **list** the available scripts, then **render** one into actual messages (optionally filling in arguments like `"language": "python"`). ## List prompts Discover available prompts: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: prompts = await client.list_prompts() for p in prompts.prompts: print(f"{p.name}: {p.description}") ``` ## Prompt schema Each prompt includes: | Field | Type | Description | | ------------- | -------------- | --------------------------- | | `name` | `str` | Prompt identifier | | `description` | `str \| None` | What the prompt does | | `arguments` | `list \| None` | Required/optional arguments | ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} for p in prompts.prompts: print(f"Name: {p.name}") print(f"Description: {p.description}") if p.arguments: for arg in p.arguments: required = "(required)" if arg.required else "(optional)" print(f" - {arg.name}: {arg.description} {required}") ``` ## Get a prompt Render a prompt with arguments: ```python 
theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await client.get_prompt("summarize", {"style": "brief"}) print(result.messages) ``` **Note**: MCP prompt arguments are strings. If a server expects structured input (lists/dicts), it will usually ask you to pass a JSON string and parse it server-side. ## Response structure A rendered prompt returns a `GetPromptResult`: | Field | Type | Description | | ------------- | ------------- | ------------------------- | | `messages` | `list` | Rendered message sequence | | `description` | `str \| None` | Optional description | Each message has a `role` and a `content` block. Many prompts return text content; some may return non-text content blocks. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import types result = await client.get_prompt("code-review", {"language": "python"}) for msg in result.messages: content = msg.content if isinstance(content, types.TextContent): text = content.text else: text = f"<{content.type} content>" print(f"[{msg.role}] {text}") ``` ## Example: Code assistant ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_mcp.client import MCPClient from dedalus_mcp import types async def main(): client = await MCPClient.connect("http://127.0.0.1:8000/mcp") try: prompts = await client.list_prompts() print("Available prompts:") for p in prompts.prompts: print(f" - {p.name}: {p.description}") result = await client.get_prompt( "explain-code", { "language": "python", "code": "def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)", }, ) print("\nRendered prompt:") for msg in result.messages: if isinstance(msg.content, types.TextContent): print(f"[{msg.role}] {msg.content.text}") else: print(f"[{msg.role}] <{msg.content.type} content>") finally: await client.close() asyncio.run(main()) ``` ## Prompts without arguments Some prompts don’t require arguments: ```python 
theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await client.get_prompt("greeting") # or explicitly: result = await client.get_prompt("greeting", {}) ``` ## Context manager ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: prompts = await client.list_prompts() result = await client.get_prompt("analyze", {"data": "sample"}) ``` # Resources Source: https://docs.dedaluslabs.ai/dmcp/client/resources List and read resources from MCP servers Resources are **read-only data** exposed by the server. A resource might be a file, a generated report, or any other content the server can provide without side effects. ## List resources ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: resources = await client.list_resources() for r in resources.resources: print(f"{r.uri}: {r.name}") ``` ## Resource schema Each resource includes: | Field | Type | Description | | ------------- | ------------- | -------------------------------------------------- | | `uri` | `str` | Resource identifier (e.g. `resource://config/app`) | | `name` | `str` | Human-readable name | | `description` | `str \| None` | What the resource contains | | `mimeType` | `str \| None` | Content type (e.g. `application/json`) | ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} for r in resources.resources: print(f"URI: {r.uri}") print(f"Name: {r.name}") print(f"Type: {r.mimeType}") ``` ## Read resources Read a specific resource by URI: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await client.read_resource("resource://config/app") ``` `result` is a `ReadResourceResult` with a `contents` list. 
It can contain `TextResourceContents` (text) or `BlobResourceContents` (base64 blob).

## Response structure (text vs binary)

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import base64

from dedalus_mcp import types

result = await client.read_resource("resource://data/report")

if not result.contents:
    print("No contents (resource missing or empty)")
else:
    item = result.contents[0]
    if isinstance(item, types.TextResourceContents):
        print(item.text)
    elif isinstance(item, types.BlobResourceContents):
        data = base64.b64decode(item.blob)  # blob is base64 text
        with open("output.bin", "wb") as f:
            f.write(data)
```

## Example: Configuration reader

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import json

from dedalus_mcp.client import open_connection
from dedalus_mcp import types

async with open_connection("http://127.0.0.1:8000/mcp") as client:
    resources = await client.list_resources()
    print("Available resources:")
    for r in resources.resources:
        print(f"  - {r.uri} ({r.mimeType})")

    result = await client.read_resource("resource://config/app")
    if not result.contents or not isinstance(result.contents[0], types.TextResourceContents):
        raise RuntimeError("Expected text config resource")

    data = json.loads(result.contents[0].text)
    print(f"App config: {data}")
```

## Resource templates

Resource templates are patterns a server can publish (like `resource://users/`) to advertise what kinds of resource URIs exist. They don't create resources by themselves: to read something, you still call `read_resource(...)` with a concrete URI (like `resource://users/123`), and it only works if the server actually serves that exact URI.

## Context manager

Again, `open_connection(...)` is an async context manager, so you don't have to remember to call `await client.close()`: when the `async with` block exits, it automatically closes the underlying connection for you.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: resources = await client.list_resources() result = await client.read_resource("resource://docs/readme") ``` # Roots Source: https://docs.dedaluslabs.ai/dmcp/client/roots Advertise filesystem boundaries to servers Roots inform servers about filesystem resources the client has access to. Servers can use this information to adjust behavior or restrict operations to safe boundaries. ## Configure Roots ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from pathlib import Path from dedalus_mcp.client import ClientCapabilitiesConfig, open_connection from dedalus_mcp.types import Root initial_roots = [ Root(uri=Path.cwd().as_uri(), name="Project Directory"), Root(uri=Path("/tmp").as_uri(), name="Temporary Files"), ] capabilities = ClientCapabilitiesConfig( enable_roots=True, initial_roots=initial_roots, ) async with open_connection( url="http://127.0.0.1:8000/mcp", transport="streamable-http", capabilities=capabilities, ) as client: # List advertised roots roots = await client.list_roots() for root in roots: print(f"{root.name}: {root.uri}") ``` ## Dynamic Updates Update roots during the session: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} async with open_connection( url="http://127.0.0.1:8000/mcp", transport="streamable-http", capabilities=capabilities, ) as client: # Add a new root new_roots = initial_roots + [ Root(uri=Path.home().as_uri(), name="Home Directory"), ] await client.update_roots(new_roots, notify=True) # Verify update roots = await client.list_roots() print(f"Now advertising {len(roots)} roots") ``` ## Root Structure ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.types import Root Root( uri="file:///path/to/directory", # File URI name="Human-readable name", # Optional display name ) 
```

| Field  | Type  | Description                                  |
| ------ | ----- | -------------------------------------------- |
| `uri`  | `str` | File URI (e.g., `file:///home/user/project`) |
| `name` | `str` | Optional human-readable name                 |

## Security

Roots establish security boundaries. Servers should:

* Only access files within advertised roots
* Reject operations targeting paths outside root boundaries
* Use roots to scope file searches and operations

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from urllib.parse import unquote, urlparse

# Server-side: check if a path is within the advertised roots
def is_path_allowed(path: Path, roots: list[Root]) -> bool:
    for root in roots:
        # Parse the file URI rather than string-stripping the scheme
        root_path = Path(unquote(urlparse(str(root.uri)).path))
        if path.is_relative_to(root_path):
            return True
    return False
```

# Sampling
Source: https://docs.dedaluslabs.ai/dmcp/client/sampling

Handle LLM completion requests from servers

MCP servers can request LLM completions from clients during tool execution. This enables servers to delegate AI reasoning to the client, which controls which model is used.

## Handler

To support sampling, register a sampling handler when you connect.
The handler receives the server’s request (`CreateMessageRequestParams`) and should return either: * `CreateMessageResult` (success), or * `ErrorData` (failure) Here’s a complete example using **Anthropic**: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from anthropic import AsyncAnthropic from dedalus_mcp import types anthropic = AsyncAnthropic() async def sampling_handler( _ctx: object, params: types.CreateMessageRequestParams, ) -> types.CreateMessageResult | types.ErrorData: try: messages = [{"role": m.role, "content": m.content.text} for m in params.messages] resp = await anthropic.messages.create( model="claude-sonnet-4-20250514", messages=messages, max_tokens=params.maxTokens, ) return types.CreateMessageResult( model=resp.model, role="assistant", content=types.TextContent(type="text", text=resp.content[0].text), stopReason="end_turn", ) except Exception as e: return types.ErrorData(code=types.INTERNAL_ERROR, message=str(e)) ``` ## Usage Enable sampling by passing the handler in `ClientCapabilitiesConfig` when connecting: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} capabilities = ClientCapabilitiesConfig(sampling=sampling_handler) async with open_connection( url="http://127.0.0.1:8000/mcp", capabilities=capabilities, ) as client: # If the server calls sampling/createMessage during this tool run, # your sampling_handler will be invoked. result = await client.call_tool("analyze", {"data": "..."}) ``` ## Error handling When something goes wrong inside your handler, return an `ErrorData` (don’t raise). The server will receive this as an MCP error response to its sampling request. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import types async def sampling_handler(context: object, params: types.CreateMessageRequestParams): try: # ... call LLM ... 
return types.CreateMessageResult( model="claude-3-5-sonnet-20241022", role="assistant", content=types.TextContent(type="text", text="ok"), stopReason="end_turn", ) except Exception as e: return types.ErrorData(code=types.INTERNAL_ERROR, message=str(e)) ``` # Tools Source: https://docs.dedaluslabs.ai/dmcp/client/tools List and call tools on MCP servers Tools are server-side functions that a client can execute with arguments. With `MCPClient`, the basic flow is: ## List tools Discover available tools on the server: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: tools = await client.list_tools() for tool in tools.tools: print(f"{tool.name}: {tool.description}") ``` ## Tool schema Each tool includes: | Field | Type | Description | | ------------- | ------ | ------------------------- | | `name` | `str` | Tool identifier | | `description` | `str` | What the tool does | | `inputSchema` | `dict` | JSON Schema for arguments | ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} for tool in tools.tools: print(f"Name: {tool.name}") print(f"Description: {tool.description}") print(f"Parameters schema: {tool.inputSchema}") ``` ## Call tools Execute a tool with arguments: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await client.call_tool("add", {"a": 5, "b": 3}) # Tool results are content blocks; most tools return TextContent. 
print(result.content[0].text)
```

## Error handling

If the server returns a JSON-RPC error (common when a tool raises), `call_tool(...)` raises `McpError`:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from mcp.shared.exceptions import McpError

try:
    result = await client.call_tool("divide", {"a": 10, "b": 0})
    print(result.content[0].text)
except McpError as e:
    print(f"Tool call failed: {e}")
```

## Example: Calculator

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio

from dedalus_mcp.client import MCPClient
from mcp.shared.exceptions import McpError

async def main():
    client = await MCPClient.connect("http://127.0.0.1:8000/mcp")
    try:
        tools = await client.list_tools()
        print("Available tools:")
        for tool in tools.tools:
            print(f"  - {tool.name}")

        add_result = await client.call_tool("add", {"a": 5, "b": 3})
        print(f"5 + 3 = {add_result.content[0].text}")

        mul_result = await client.call_tool("multiply", {"a": 4, "b": 7})
        print(f"4 * 7 = {mul_result.content[0].text}")
    except McpError as e:
        print(f"Tool call failed: {e}")
    finally:
        await client.close()

asyncio.run(main())
```

## Context manager

`open_connection(...)` is an async context manager, so you don't have to remember to call `await client.close()`: when the `async with` block exits, the underlying connection is closed automatically.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import open_connection async with open_connection("http://127.0.0.1:8000/mcp") as client: tools = await client.list_tools() result = await client.call_tool("greet", {"name": "World"}) print(result.content[0].text) ``` # Connections Source: https://docs.dedaluslabs.ai/dmcp/connections How connections are named and resolved ## Single-Connection Servers For single-connection servers, name is optional and dispatch auto-resolves: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} gmail = Connection(secrets=SecretKeys(token="GMAIL_TOKEN")) # No name needed async def _req(method, path, body=None): ctx = get_context() return await ctx.dispatch(HttpRequest(method=method, path=path, body=body)) ``` ## Connection Naming Connection names are derived from your server's slug: `windsor/gmail-mcp` -> `gmail-mcp`. If you hardcode a different name in `dispatch("gmail", ...)`, it will fail. Use auto-dispatch for single-connection servers. ## Multi-Connection Servers When you have multiple connections, specify the target explicitly: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await ctx.dispatch("gmail", HttpRequest(...)) await ctx.dispatch("calendar", HttpRequest(...)) ``` Each connection requires a separate OAuth flow. Pass `connection_name` when creating the session: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # API creates session with explicit connection name session = await admin_api.create_oauth_session( server_id=deployment_id, scopes=["gmail.readonly"], connection_name="gmail", # Stored with this name ) ``` ## Debugging Connection not found? 
Check what's in the JWT: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} @tool(description="Debug") async def debug() -> dict: ctx = get_context() return {"connections": list(ctx.runtime.get("connections", {}).keys())} ``` # DAuth Architecture Source: https://docs.dedaluslabs.ai/dmcp/dauth-architecture How DAuth protects credentials with sealed execution DAuth provides credential isolation through a sealed execution model. Your MCP server never sees raw credentials—only opaque handles that reference secrets stored and decrypted within a secure boundary. ## The Flow ```mermaid theme={"theme":{"light":"github-light","dark":"github-dark"}} sequenceDiagram participant U as SDK participant P as Dedalus participant CP as DAuth participant M as MCP Server participant S as Sealed Enclave Note over U: 1) Encrypt credentials client-side U->>U: Encrypt credentials Note over U,P: 2) Send request with encrypted credentials U->>P: Request + encrypted credentials Note over P,CP: 3) Send scoped request P->>CP: Scoped request CP-->>P: Scoped token Note over P,M: 4) Call MCP server with scoped token P->>M: Tool call + scoped token M->>M: Validate token Note over M,S: 5) Execute in sealed boundary M->>S: Execute request S->>S: External API S-->>M: Result M-->>P: Tool result P-->>U: Response ``` ## Step-by-Step ### 1. Client-Side Encryption Credentials are encrypted on your device before transmission. Plaintext secrets never travel over the network. ### 2. Request with Encrypted Credentials The SDK sends your request along with encrypted credentials to Dedalus. ### 3. Scoped Token Issuance DAuth stores the encrypted credentials and issues a scoped token that: * Is bound to specific MCP servers * Is cryptographically bound to your client's key (DPoP) * Can only be used for authorized operations ### 4. MCP Server Receives Token Your MCP server receives the scoped token and validates it against DAuth's public keys. The server never sees raw credentials. ### 5. 
Sealed Execution When the MCP server needs to call an external API (GitHub, Slack, etc.), it dispatches to a **sealed enclave**: * Decrypts credentials using hardware-backed keys * Calls the external API via TLS connection * Returns only the response * Scrubs credentials from memory immediately The response flows back through the MCP server to your application. At no point did your code have access to raw secrets. ## Security Properties | Property | What It Means | | ----------------------------- | ------------------------------------------------------ | | **Client-side encryption** | Credentials encrypted before leaving your device | | **Scoped tokens** | Tokens are limited to specific servers and connections | | **Sealed execution** | Decryption happens in isolated hardware boundary | | **Sender-constrained (DPoP)** | Stolen tokens are useless without the private key | | **No credential persistence** | Secrets decrypted on-demand, scrubbed after use | ## Why This Matters Traditional architectures require your application to handle credentials directly: ``` User → App → [credentials in memory] → External API → User ``` With DAuth: ``` User → Encrypted Token → Scoped Request → Sealed Boundary (App or External API) → User ``` Your application code, logs, and error traces never contain raw secrets. Even if your MCP server is compromised, attackers cannot extract credentials—they exist only within the sealed execution boundary. ## Using DAuth See [Authorization](/dmcp/authorization) for implementation details: * Enable DAuth with `AuthorizationConfig` * Configure server-level and per-tool scopes * Access authenticated user claims in your tools # Debugging Source: https://docs.dedaluslabs.ai/dmcp/debugging Debug MCP servers MCP involves two processes talking over HTTP or stdio. Here's how to debug them. 
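Before reaching for MCP-level tooling, it can help to confirm something is listening at all. A minimal standard-library probe is a quick first check (the URL below is the default endpoint used throughout these docs; the helper name is illustrative):

```python
# probe.py - check whether anything is listening and speaking HTTP
# before debugging at the MCP layer.
import urllib.error
import urllib.request

def probe(url: str = "http://127.0.0.1:8000/mcp") -> str:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        # Any HTTP response (even 405/406) means a server is listening.
        return f"HTTP {e.code}"
    except OSError as e:
        # Connection refused, DNS failure, timeout: nothing usable there.
        return f"connection failed: {e}"
```

If this reports `connection failed`, fix the process or port before touching handlers or capabilities.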
## The tmux pattern Run server and client in split terminals: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} tmux new-session -d -s mcp tmux split-window -h tmux send-keys -t mcp:0.0 'python server.py' C-m sleep 2 tmux send-keys -t mcp:0.1 'python client.py' C-m tmux attach -t mcp ``` See both outputs side by side. Kill when done: `tmux kill-session -t mcp` ## Structured logging Log from inside tools: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process data") async def process(data: str) -> dict: ctx = get_context() await ctx.debug("Starting", data={"length": len(data)}) # ... work ... await ctx.info("Done", data={"result": 42}) return {"ok": True} ``` ## Verbose mode ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await server.serve(verbose=True, log_level="debug") ``` Or via environment: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} LOG_LEVEL=debug python server.py ``` ## Client-side log capture ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp.client import MCPClient, ClientCapabilitiesConfig def log_handler(params): print(f"[{params.level}] {params.data}") config = ClientCapabilitiesConfig(logging=log_handler) client = await MCPClient.connect(url, capabilities=config) ``` ## Common issues **"Client does not advertise the sampling capability"** Pass a sampling handler: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} config = ClientCapabilitiesConfig(sampling=sampling_handler) client = await MCPClient.connect(url, capabilities=config) ``` **"Tool mutation attempted after server startup"** Enable dynamic tools: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} server = MCPServer("my-server", allow_dynamic_tools=True) ``` **Connection refused** Check the port: `lsof -i :8000` Kill orphans: `lsof -ti :8000 | xargs kill -9` 
**Wrong URL**

The endpoint is `http://127.0.0.1:8000/mcp`, not just `:8000`.

## When stuck

1. Strip to a minimal reproduction
2. Check MCP spec compliance
3. Test with MCP Inspector: `npx @modelcontextprotocol/inspector`
4. Read the server logs

# Deploy

Source: https://docs.dedaluslabs.ai/dmcp/deploy

Host your MCP server and share it with others.

Deploy your server to the Dedalus platform. Once deployed, you can:

* **Access it from anywhere** — No local server required.
* **Share with others** — Let anyone use your MCP server.
* **Monetize** — In the future, earn a revenue share when others use your server.

1. Go to [dedaluslabs.ai](https://dedaluslabs.ai) and click **Dashboard**.
2. Click `Add Server` to create a new deployment.
3. Select your GitHub repository. Dedalus pulls from your repo on each deploy.
4. Configure your server:
   * **Environment Variables**: Your API keys (e.g., `OPENAI_API_KEY`). Encrypted and only accessible to your server.
   * **Required Credentials**: Fields users must provide (e.g., Supabase key, X API key). Users supply their own credentials at runtime.
5. Click `Deploy` when ready.
6. Once deployed, click `Publish` to list your server on the Dedalus MCP marketplace. In the future, when others use your server, you earn a revenue share on every API call.

Your server is now live. Use your slug in the Dedalus SDK:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
mcp_servers=["your-org/your-server"]
```

Pro users get a server URL to add your server to Cursor, Claude, or any MCP client in one click.

## Tips

Your repository should follow this structure:

```
my-server/
├── main.py           # Required: server entry point
├── pyproject.toml    # Required: dependencies
├── tools/            # Optional: organize tools in a folder
│   ├── __init__.py
│   ├── search.py
│   └── fetch.py
└── ...
``` You can define tools directly in `main.py` or split them into a `tools/` folder for larger servers. For OAuth servers, the `name` parameter in `Connection("my-server", ...)` must match your deployment slug exactly (not including the org prefix). Same applies to `ctx.dispatch("my-server")`. This ensures OAuth callbacks route correctly. **Environment Variables** are your secrets (e.g., `OPENAI_API_KEY`). They're encrypted and only accessible to your server. **Required Credentials** are fields users must provide when connecting to your server (e.g., their own API keys). Users supply these at runtime. If your build fails, check the build logs in your dashboard. Common issues: * Missing dependencies in `pyproject.toml` * Environment variables not set # Examples Source: https://docs.dedaluslabs.ai/dmcp/examples Production MCP server examples Real-world MCP servers you can deploy today. Each example shows credential setup, tool implementation, and server configuration. ## GitHub + Supabase A multi-connection MCP server exposing GitHub and Supabase tools. ### Environment ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # .env DEDALUS_API_KEY=dsk-live-... # Supabase SUPABASE_URL=https://xxx.supabase.co SUPABASE_SECRET_KEY=eyJ... # GitHub GITHUB_TOKEN=ghp_... 
GITHUB_BASE_URL=https://api.github.com ``` ### Server ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # server.py import os from dedalus_mcp import MCPServer from dedalus_mcp.server import TransportSecuritySettings from dedalus_mcp.auth import Connection, SecretKeys # Define connections (credentials provided by client at runtime) github = Connection( name="github", secrets=SecretKeys(token="GITHUB_TOKEN"), auth_header_format="token {api_key}" ) supabase = Connection( name="supabase", secrets=SecretKeys(token="SUPABASE_SECRET_KEY"), auth_header_format="Bearer {api_key}" ) def create_server() -> MCPServer: return MCPServer( name="example-dedalus-mcp", connections=[github, supabase], http_security=TransportSecuritySettings(enable_dns_rebinding_protection=False), streamable_http_stateless=True, authorization_server=os.getenv("DEDALUS_AS_URL", "https://as.dedaluslabs.ai"), ) ``` ### GitHub tools ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # gh.py from dataclasses import dataclass from typing import Any from dedalus_mcp import HttpMethod, HttpRequest, get_context, tool from dedalus_mcp.types import ToolAnnotations @dataclass(frozen=True) class GhResult: """GitHub API result.""" success: bool data: Any = None error: str | None = None @dataclass(frozen=True) class GhUser: """GitHub user profile.""" login: str name: str | None bio: str | None public_repos: int followers: int @dataclass(frozen=True) class GhRepo: """GitHub repository summary.""" name: str full_name: str stars: int language: str | None updated_at: str async def _gh_request(method: HttpMethod, path: str, body: Any = None) -> GhResult: """Execute GitHub API request.""" ctx = get_context() resp = await ctx.dispatch("github", HttpRequest(method=method, path=path, body=body)) if resp.success: return GhResult(success=True, data=resp.response.body) return GhResult(success=False, error=resp.error.message if resp.error else "Request failed") @tool( description="Get 
the authenticated GitHub user's profile", tags=["user", "read"], annotations=ToolAnnotations(readOnlyHint=True), ) async def gh_whoami() -> GhResult: result = await _gh_request(HttpMethod.GET, "/user") if result.success and result.data: user = GhUser( login=result.data["login"], name=result.data.get("name"), bio=result.data.get("bio"), public_repos=result.data.get("public_repos", 0), followers=result.data.get("followers", 0), ) return GhResult(success=True, data=user) return result @tool( description="List repositories for the authenticated user", tags=["repos", "read"], annotations=ToolAnnotations(readOnlyHint=True), ) async def gh_list_repos(per_page: int = 10) -> GhResult: result = await _gh_request(HttpMethod.GET, f"/user/repos?per_page={per_page}&sort=updated") if result.success and isinstance(result.data, list): repos = [ GhRepo( name=r["name"], full_name=r["full_name"], stars=r.get("stargazers_count", 0), language=r.get("language"), updated_at=r.get("updated_at", ""), ) for r in result.data ] return GhResult(success=True, data=repos) return result ``` *** ## X (Twitter) API Read-only X API tools using OAuth 2.0 App-Only authentication. ### Environment ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # .env X_BEARER_TOKEN=AAAA... # App-only bearer token X_API_KEY=... # Optional: for user-context endpoints X_API_KEY_SECRET=... # Optional: for user-context endpoints ``` Free tier limits: 100 tweet reads, 500 writes/month. For production, consider Basic (\$100/mo) or Pro tier. 
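The read tools below clamp `max_results` into the endpoint's accepted window and URL-encode the query before building the request path. As a standalone sketch of that path-building step (the helper name and default field list are illustrative, not part of the SDK):

```python
from urllib.parse import quote

def build_search_path(query: str, max_results: int = 10,
                      tweet_fields: str = "id,text,author_id") -> str:
    # The recent-search endpoint accepts 10-100 results per page, so clamp first.
    max_results = max(10, min(100, max_results))
    return (
        f"/2/tweets/search/recent?query={quote(query)}"
        f"&tweet.fields={tweet_fields}&max_results={max_results}"
    )
```

Clamping before the request avoids a 400 from the API when callers pass out-of-range values.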
### Connection ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # x.py from dataclasses import dataclass from typing import Any from dedalus_mcp import HttpMethod, HttpRequest, get_context, tool from dedalus_mcp.auth import Connection, SecretKeys from dedalus_mcp.types import ToolAnnotations x = Connection( name="x", secrets=SecretKeys(token="X_BEARER_TOKEN"), auth_header_format="Bearer {api_key}", ) DEFAULT_TWEET_FIELDS = "id,text,author_id,created_at,public_metrics" DEFAULT_USER_FIELDS = "id,name,username,description,public_metrics" @dataclass(frozen=True) class XResult: """X API result.""" success: bool data: Any = None meta: dict | None = None error: str | None = None @dataclass(frozen=True) class XUser: """X user profile.""" id: str name: str username: str description: str | None followers_count: int following_count: int @dataclass(frozen=True) class XTweet: """X tweet.""" id: str text: str author_id: str created_at: str retweet_count: int like_count: int async def _x_request(path: str) -> XResult: """Execute X API request.""" ctx = get_context() resp = await ctx.dispatch("x", HttpRequest(method=HttpMethod.GET, path=path)) if resp.success: body = resp.response.body or {} return XResult(success=True, data=body.get("data"), meta=body.get("meta")) return XResult(success=False, error=resp.error.message if resp.error else "Request failed") ``` ### Tools ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} @tool( description="Get an X user by their username", tags=["user", "read"], annotations=ToolAnnotations(readOnlyHint=True), ) async def x_get_user_by_username(username: str) -> XResult: result = await _x_request(f"/2/users/by/username/{username}?user.fields={DEFAULT_USER_FIELDS}") if result.success and result.data: metrics = result.data.get("public_metrics", {}) user = XUser( id=result.data["id"], name=result.data["name"], username=result.data["username"], description=result.data.get("description"), 
followers_count=metrics.get("followers_count", 0), following_count=metrics.get("following_count", 0), ) return XResult(success=True, data=user) return result @tool( description="Search recent tweets (last 7 days)", tags=["search", "read"], annotations=ToolAnnotations(readOnlyHint=True), ) async def x_search_recent(query: str, max_results: int = 10) -> XResult: from urllib.parse import quote max_results = max(10, min(100, max_results)) result = await _x_request( f"/2/tweets/search/recent?query={quote(query)}&tweet.fields={DEFAULT_TWEET_FIELDS}&max_results={max_results}" ) if result.success and result.data: tweets = [ XTweet( id=t["id"], text=t["text"], author_id=t["author_id"], created_at=t.get("created_at", ""), retweet_count=t.get("public_metrics", {}).get("retweet_count", 0), like_count=t.get("public_metrics", {}).get("like_count", 0), ) for t in result.data ] return XResult(success=True, data=tweets, meta=result.meta) return result @tool( description="Get a user's recent tweets", tags=["tweet", "read"], annotations=ToolAnnotations(readOnlyHint=True), ) async def x_get_user_tweets(user_id: str, max_results: int = 10) -> XResult: max_results = max(5, min(100, max_results)) result = await _x_request( f"/2/users/{user_id}/tweets?tweet.fields={DEFAULT_TWEET_FIELDS}&max_results={max_results}" ) if result.success and result.data: tweets = [ XTweet( id=t["id"], text=t["text"], author_id=t["author_id"], created_at=t.get("created_at", ""), retweet_count=t.get("public_metrics", {}).get("retweet_count", 0), like_count=t.get("public_metrics", {}).get("like_count", 0), ) for t in result.data ] return XResult(success=True, data=tweets, meta=result.meta) return result ``` *** ## Gmail (OAuth 2.0) Gmail MCP server with true OAuth 2.0 user authentication. 
### Environment ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # .env OAUTH_ENABLED=true OAUTH_AUTHORIZE_URL=https://accounts.google.com/o/oauth2/auth OAUTH_TOKEN_URL=https://oauth2.googleapis.com/token OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com OAUTH_CLIENT_SECRET=your-client-secret OAUTH_SCOPES_AVAILABLE=https://www.googleapis.com/auth/gmail.readonly,https://www.googleapis.com/auth/gmail.modify OAUTH_BASE_URL=https://gmail.googleapis.com DEDALUS_API_KEY=dsk-live-... DEDALUS_API_URL=https://api.dedaluslabs.ai DEDALUS_AS_URL=https://as.dedaluslabs.ai ``` ### Setup 1. **Enable Gmail API**: Go to [Gmail API](https://console.cloud.google.com/apis/library/gmail.googleapis.com) and click "Enable" 2. **Create OAuth credentials**: Go to [APIs & Services → Credentials](https://console.cloud.google.com/apis/credentials) and: * Click "Create Credentials" → "OAuth client ID" * Application type: "Web application" * Add authorized redirect URIs for your deployment * Copy the Client ID and Client Secret to your `.env` 3. 
**Configure consent screen**: Set up the OAuth consent screen with the Gmail scopes listed above ### Server ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # server.py import os from dedalus_mcp import MCPServer from dedalus_mcp.server import TransportSecuritySettings from gmail import gmail, gmail_tools def create_server() -> MCPServer: return MCPServer( name="gmail-mcp", connections=[gmail], http_security=TransportSecuritySettings(enable_dns_rebinding_protection=False), streamable_http_stateless=True, authorization_server=os.getenv("DEDALUS_AS_URL", "https://as.dedaluslabs.ai"), ) async def main() -> None: server = create_server() server.collect(*gmail_tools) await server.serve(port=8080) ``` ### Connection ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # gmail.py import os from dataclasses import dataclass from typing import Any from dedalus_mcp import HttpMethod, HttpRequest, get_context, tool from dedalus_mcp.auth import Connection, OAuthConfig from dedalus_mcp.types import ToolAnnotations gmail = Connection( name="gmail", oauth=OAuthConfig( client_id=os.getenv("OAUTH_CLIENT_ID"), client_secret=os.getenv("OAUTH_CLIENT_SECRET"), authorize_url=os.getenv("OAUTH_AUTHORIZE_URL"), token_url=os.getenv("OAUTH_TOKEN_URL"), scopes=os.getenv("OAUTH_SCOPES_AVAILABLE", "").split(","), ), auth_header_format="Bearer {access_token}", ) @dataclass(frozen=True) class GmailResult: """Gmail API result.""" success: bool data: Any = None error: str | None = None @dataclass(frozen=True) class GmailMessage: """Gmail message summary.""" id: str thread_id: str snippet: str | None label_ids: list[str] async def _gmail_request(method: HttpMethod, path: str) -> GmailResult: """Execute Gmail API request.""" ctx = get_context() resp = await ctx.dispatch("gmail", HttpRequest(method=method, path=path)) if resp.success: return GmailResult(success=True, data=resp.response.body) return GmailResult(success=False, error=resp.error.message if 
resp.error else "Request failed")
```

### Tools

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
@tool(
    description="List recent emails",
    tags=["email", "read"],
    annotations=ToolAnnotations(readOnlyHint=True),
)
async def gmail_list_messages(max_results: int = 10, query: str = "") -> GmailResult:
    from urllib.parse import quote

    path = f"/gmail/v1/users/me/messages?maxResults={max_results}"
    if query:
        # URL-encode the search query so spaces and operators survive
        path += f"&q={quote(query)}"
    result = await _gmail_request(HttpMethod.GET, path)
    if result.success and result.data:
        messages = [
            GmailMessage(
                id=m["id"],
                thread_id=m.get("threadId", ""),
                snippet=None,
                label_ids=[],
            )
            for m in result.data.get("messages", [])
        ]
        return GmailResult(success=True, data=messages)
    return result

@tool(
    description="Get email details by ID",
    tags=["email", "read"],
    annotations=ToolAnnotations(readOnlyHint=True),
)
async def gmail_get_message(message_id: str) -> GmailResult:
    result = await _gmail_request(HttpMethod.GET, f"/gmail/v1/users/me/messages/{message_id}")
    if result.success and result.data:
        msg = result.data
        message = GmailMessage(
            id=msg["id"],
            thread_id=msg.get("threadId", ""),
            snippet=msg.get("snippet"),
            label_ids=msg.get("labelIds", []),
        )
        return GmailResult(success=True, data=message)
    return result
```

***

## Running the examples

All examples follow the same pattern:

```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
# Clone the repo
git clone https://github.com/dedalus-labs/example-dedalus-mcp
cd example-dedalus-mcp

# For the main example (GitHub + Supabase)
cp .env.example .env
# Edit .env with your credentials
uv sync
uv run python src/main.py

# For X
cd x-mcp
cp .env.example .env
uv sync
uv run python src/main.py

# For Gmail
cd gmail-mcp
cp .env.example .env
uv sync
uv run python src/main.py
```

Server runs on `http://127.0.0.1:8080/mcp`.

# Welcome

Source: https://docs.dedaluslabs.ai/dmcp/index

Connect to MCP servers or build your own with DAuth — a simple, performant framework.
We believe MCP is the interface between models and the world. To serve that role, it must be:

1. **Secure**: Credentials handled correctly and securely.
2. **Modular**: Tools that compose, instead of being locked to one server.
3. **Spec-faithful**: Every behavior traceable.

Dedalus MCP delivers all three. We don't bundle CLI scaffolding or opinionated middleware. We integrate with your existing stack.

## What is MCP?

Model Context Protocol (MCP) is a standard for AI agents to interact with external services. Instead of hardcoding integrations, you expose **tools** (functions), **resources** (data), and **prompts** (templates) that any MCP-compatible client—Claude, GPT, Cursor—can discover and use.

Think of it as a universal API for agent capabilities: you define what's available, the model decides when to use it.

## Why Dedalus

### Security by Default

Other frameworks leave authentication as an exercise for the server builder. We built [DAuth](/dmcp/authorization):

* **Zero-trust**: Dedalus never sees raw API keys or access tokens.
* **Hardware enclave**: Credentials are validated momentarily, then zeroed from memory.
* **Host-blind**: Credentials are encrypted client-side before leaving your device.
* **Intent-based**: Define access intents (e.g., `slack_read`) to prevent permission hijacking.

Production-grade auth in a few lines of code, not weeks of infrastructure work.

### Modular Servers

Most MCP frameworks couple tools to servers:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from fastmcp import FastMCP

mcp = FastMCP(name="CalculatorServer")

# Tool is bound to this server instance
@mcp.tool
def add(a: int, b: int) -> int:
    return a + b
```

Dedalus decouples decoration from registration:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_mcp import tool

# @tool attaches metadata. That's it.
@tool
def add(a: int, b: int) -> int:
    return a + b

# collect() registers.
# Same tool, multiple servers.
server_a.collect(add)
server_b.collect(add)
```

Tools become portable units. No global state. Tests stay isolated. Functions work everywhere.

### MCP Observability and Versioning

MCP has multiple spec versions with real behavioral differences. Unlike other frameworks, Dedalus MCP tracks exactly which version your client negotiated.

### Fast and Production Ready

Dedalus MCP weighs in at **122 KB**; FastMCP is 8.6 MB, roughly **70× larger**. We ship **code**, not dependencies.

# Creative Patterns

Source: https://docs.dedaluslabs.ai/dmcp/patterns

Unconventional ways to use MCP primitives

MCP's three primitives (tools, resources, prompts) are building blocks. Combine them creatively.

## Sequential Thinking

Keep the model on track during complex reasoning. The model calls this tool repeatedly, building a chain of thoughts it can revise or branch.

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dataclasses import dataclass, field

from dedalus_mcp import MCPServer, tool

@dataclass
class ThinkingState:
    thoughts: list[dict] = field(default_factory=list)
    branches: dict[str, list[dict]] = field(default_factory=dict)

state = ThinkingState()

@tool(description="""Step-by-step reasoning with revision support.
Use when:
- Breaking down complex problems
- Planning that might need course correction
- Analysis where the full scope isn't clear initially

You can revise previous thoughts, branch into alternatives, or extend beyond your initial estimate.""")
def think(
    thought: str,
    thought_number: int,
    total_thoughts: int,
    next_thought_needed: bool,
    is_revision: bool = False,
    revises_thought: int | None = None,
    branch_id: str | None = None,
) -> dict:
    entry = {
        "number": thought_number,
        "thought": thought,
        "is_revision": is_revision,
        "revises": revises_thought,
    }
    if branch_id:
        state.branches.setdefault(branch_id, []).append(entry)
    else:
        state.thoughts.append(entry)
    return {
        "thought_number": thought_number,
        "total_thoughts": total_thoughts,
        "next_thought_needed": next_thought_needed,
        "history_length": len(state.thoughts),
        "branches": list(state.branches.keys()),
    }

server = MCPServer("reasoning")
server.collect(think)
```

The model decides when to think, revise, or branch. You just provide the infrastructure.

## Context Rehydration

Persist important context to a database. After model compaction (when the context window fills up), fetch it back instantly.

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from datetime import datetime

from dedalus_mcp import MCPServer, tool, resource

# Could be Redis, SQLite, Postgres, etc.
memory_store: dict[str, dict] = {}

@tool(description="Save important context for later retrieval")
def remember(key: str, content: str, tags: list[str] | None = None) -> dict:
    memory_store[key] = {
        "content": content,
        "tags": tags or [],
        "saved_at": datetime.now().isoformat(),
    }
    return {"saved": key}

@tool(description="Retrieve previously saved context")
def recall(key: str) -> dict:
    if key not in memory_store:
        return {"error": f"No memory for key: {key}"}
    return memory_store[key]

@tool(description="Find memories by tag")
def search_memories(tag: str) -> list[dict]:
    return [
        {"key": k, **v}
        for k, v in memory_store.items()
        if tag in v.get("tags", [])
    ]

@resource(uri="memory://index", description="All saved memory keys")
def memory_index() -> dict:
    return {
        "keys": list(memory_store.keys()),
        "count": len(memory_store),
    }

server = MCPServer("memory")
server.collect(remember, recall, search_memories, memory_index)
```

Start conversations with: "Check memory://index for context from previous sessions."

## Live Data Feeds

Resources can push updates. Build dashboards, monitoring, or real-time collaboration.

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from datetime import datetime

from dedalus_mcp import MCPServer, resource

metrics = {"cpu": 0.0, "memory": 0.0, "requests": 0}

@resource(uri="system://metrics", description="Live system metrics")
def get_metrics() -> dict:
    return {"timestamp": datetime.now().isoformat(), **metrics}

server = MCPServer("monitoring")
server.collect(get_metrics)

async def update_metrics():
    while True:
        # get_cpu_usage() and friends are placeholders for your own collectors
        metrics["cpu"] = get_cpu_usage()
        metrics["memory"] = get_memory_usage()
        metrics["requests"] = get_request_count()
        await server.notify_resource_updated("system://metrics")
        await asyncio.sleep(5)
```

Subscribed clients receive `notifications/resources/updated` when data changes.

## Persona Switching

Prompts define behavior. Let users switch the model's persona on demand.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dataclasses import dataclass from dedalus_mcp import MCPServer, prompt, Message @dataclass(frozen=True) class PersonaArgs: """Arguments for persona prompts.""" context: str | None = None verbosity: str = "normal" focus_area: str | None = None @prompt("persona/architect", description="Senior software architect") def architect_persona(args: PersonaArgs) -> list[Message]: context = f" Current context: {args.context}" if args.context else "" return [ Message(role="assistant", content=f"""You are a senior software architect with 20 years of experience. You think in systems, not features. You ask clarifying questions before proposing solutions. You consider maintainability, scalability, and team dynamics.{context}"""), ] @prompt("persona/reviewer", description="Strict code reviewer") def reviewer_persona(args: PersonaArgs) -> list[Message]: focus = f" Focus especially on {args.focus_area}." if args.focus_area else "" return [ Message(role="assistant", content=f"""You are a meticulous code reviewer. You catch bugs others miss. You insist on tests. You're constructive but don't let things slide.{focus}"""), ] @prompt("persona/rubber-duck", description="Patient debugging companion") def rubber_duck_persona(args: PersonaArgs) -> list[Message]: return [ Message(role="assistant", content="""You help by asking questions, not giving answers. When someone explains their problem, ask what they've tried. Help them think through it systematically."""), ] server = MCPServer("personas") server.collect(architect_persona, reviewer_persona, rubber_duck_persona) ``` Users select `prompts/get persona/architect` to shift behavior mid-conversation. ## Guardrails Tools can validate and constrain model behavior. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_mcp import MCPServer, tool

ALLOWED_PATHS = ["/app/data", "/app/config"]
BLOCKED_PATTERNS = ["password", "secret", "api_key"]

@tool(description="Read a file (with safety checks)")
def safe_read(path: str) -> dict:
    # Path validation (resolve symlinks and ".." before checking in production)
    if not any(path.startswith(allowed) for allowed in ALLOWED_PATHS):
        return {"error": f"Access denied: {path}"}
    with open(path, encoding="utf-8") as f:
        content = f.read()
    # Content filtering
    for pattern in BLOCKED_PATTERNS:
        if pattern in content.lower():
            return {"error": "Content contains sensitive data"}
    return {"content": content}

@tool(description="Execute a command (restricted)")
def safe_exec(command: str) -> dict:
    allowed = ["ls", "cat", "grep", "find"]
    parts = command.split()
    if not parts or parts[0] not in allowed:
        return {"error": f"Command not allowed: {command}"}
    # Execute safely...
    return {"output": "..."}
```

The model can only do what you allow.

## Workflow Orchestration

Chain tools into multi-step workflows with checkpoints.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, tool, resource from enum import Enum class WorkflowStatus(Enum): PENDING = "pending" RUNNING = "running" COMPLETED = "completed" FAILED = "failed" workflows: dict[str, dict] = {} @tool(description="Start a new workflow") def start_workflow(workflow_id: str, steps: list[str]) -> dict: workflows[workflow_id] = { "status": WorkflowStatus.RUNNING.value, "steps": steps, "current_step": 0, "results": [], } return {"workflow_id": workflow_id, "status": "started"} @tool(description="Complete current step and advance") def complete_step(workflow_id: str, result: str) -> dict: wf = workflows.get(workflow_id) if not wf: return {"error": "Workflow not found"} wf["results"].append(result) wf["current_step"] += 1 if wf["current_step"] >= len(wf["steps"]): wf["status"] = WorkflowStatus.COMPLETED.value return { "step_completed": wf["current_step"], "next_step": wf["steps"][wf["current_step"]] if wf["current_step"] < len(wf["steps"]) else None, "status": wf["status"], } @resource(uri="workflows://active", description="All active workflows") def active_workflows() -> dict: return { wid: wf for wid, wf in workflows.items() if wf["status"] == WorkflowStatus.RUNNING.value } ``` The model manages complex multi-step processes with clear state. ## Audit Trail Log everything the model does for compliance or debugging. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, tool, resource from datetime import datetime audit_log: list[dict] = [] def log_action(action: str, details: dict): audit_log.append({ "timestamp": datetime.now().isoformat(), "action": action, **details, }) @tool(description="Perform a sensitive operation") def sensitive_operation(operation: str, target: str) -> dict: log_action("sensitive_operation", {"operation": operation, "target": target}) # ... do the thing ... 
return {"status": "completed"} @resource(uri="audit://log", description="Complete audit trail") def get_audit_log() -> list[dict]: return audit_log @resource(uri="audit://recent", description="Last 10 actions") def recent_actions() -> list[dict]: return audit_log[-10:] ``` ## Mix and Match The real power is combining patterns: * **Sequential thinking + memory**: Save reasoning chains for later reference * **Guardrails + audit trail**: Log blocked attempts * **Live feeds + workflows**: Monitor workflow progress in real-time * **Personas + prompts**: Context-aware behavior switching Build what your use case needs. # Quickstart Source: https://docs.dedaluslabs.ai/dmcp/quickstart Build and deploy an MCP server in 5 minutes. ## Install ```bash pip theme={"theme":{"light":"github-light","dark":"github-dark"}} pip install dedalus-mcp dedalus-labs ``` ```bash uv theme={"theme":{"light":"github-light","dark":"github-dark"}} uv add dedalus-mcp dedalus-labs ``` ## API Key Claim your API key from the [dashboard](https://www.dedaluslabs.ai/dashboard/api-keys) and set it: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} export DEDALUS_API_KEY="your-api-key" ``` ## Create a server ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # server.py from dedalus_mcp import MCPServer, tool @tool(description="Add two numbers") def add(a: int, b: int) -> int: return a + b @tool(description="Multiply two numbers") def multiply(a: int, b: int) -> int: return a * b server = MCPServer("calculator") server.collect(add, multiply) if __name__ == "__main__": import asyncio asyncio.run(server.serve()) ``` Run it: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} python server.py ``` Server starts on `http://127.0.0.1:8000/mcp`. 
## Test with a client ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # client.py from dedalus_mcp.client import MCPClient import asyncio async def main(): client = await MCPClient.connect("http://127.0.0.1:8000/mcp") # List available tools tools = await client.list_tools() print([t.name for t in tools.tools]) # ['add', 'multiply'] # Call a tool result = await client.call_tool("add", {"a": 2, "b": 3}) print(result.content[0].text) # "5" await client.close() asyncio.run(main()) ``` ## Deploy Ready to host your server remotely? Deploy to the Dedalus platform to access it from anywhere, share with others, or monetize your work. Step-by-step deployment walkthrough with screenshots. ## Next Steps Most secure MCP auth framework in the industry. Examples for production servers (e.g., GitHub and Supabase). # Context Source: https://docs.dedaluslabs.ai/dmcp/server/context Access MCP request features inside handlers The `Context` object gives you access to request-scoped utilities while a tool/resource/prompt is executing—primarily **logging**, **progress**, and **request metadata**. Use `get_context()` to fetch the active context. **Note**: `get_context()` only works inside an active MCP request handler. Calling it elsewhere raises `LookupError`. 
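That `LookupError` mirrors Python's `contextvars`, the standard mechanism for request-scoped state. An illustrative pure-Python sketch of the pattern (not dedalus_mcp internals, which may differ):

```python
from contextvars import ContextVar

# A request-scoped slot: unset outside of a handler
_active: ContextVar[dict] = ContextVar("active_request")

def get_ctx() -> dict:
    # ContextVar.get() with no default raises LookupError when unset
    return _active.get()

def handle_request(payload: str) -> str:
    token = _active.set({"request_id": "req-1", "payload": payload})
    try:
        return f"handled by {get_ctx()['request_id']}"
    finally:
        _active.reset(token)

print(handle_request("ping"))  # handled by req-1
try:
    get_ctx()  # outside a handler: raises LookupError
except LookupError:
    print("no active context")
```

The same rule applies to your tools: read the context inside the handler body, never at module import time.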
## Get Context ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process data with logging") async def process(data: str) -> str: ctx = get_context() await ctx.info("Processing", data={"bytes": len(data)}) return "done" ``` ## Auto-injection (tools + dependencies) In tools (and dependency callables), parameters annotated as `Context` can be **auto-injected** by the framework—no need to call `get_context()` manually: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import Context, tool @tool(description="Same as process(), but Context is injected") async def process_injected(data: str, ctx: Context) -> str: await ctx.info("Processing", data={"bytes": len(data)}) return "done" ``` ## Available features | Feature | API | What it does | | -------------------------------- | ------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- | | Logging | `ctx.debug()`, `ctx.info()`, `ctx.warning()`, `ctx.error()`, `ctx.log()` | Send log messages to the client | | Request metadata | `ctx.request_id`, `ctx.session_id`, `ctx.progress_token` | Identify the current request/session and progress token | | Server/runtime access | `ctx.server`, `ctx.runtime` | Access runtime wiring (if present) | | Auth context | `ctx.auth_context` | Access the auth context (if authorization is enabled) | | Progress | `ctx.report_progress(...)`, `ctx.progress(...)` | Emit progress notifications (if the client provided a progress token) | | Dispatch (optional) | `ctx.dispatch(...)` | Send authenticated HTTP requests via the configured dispatch backend (if configured) | | Connection resolution (optional) | `ctx.resolve_client(...)` | Resolve a connection handle into a client via the configured resolver (if configured) | ## Request metadata ```python 
theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Show request metadata") async def my_tool() -> dict: ctx = get_context() return { "request_id": ctx.request_id, "session_id": ctx.session_id, # may be None (e.g. stdio) "progress_token": ctx.progress_token, # may be None if client didn't request progress } ``` ## Authorization context If authorization is enabled, `ctx.auth_context` may be set; otherwise it's `None`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Show auth context presence") async def whoami() -> dict: ctx = get_context() if ctx.auth_context is None: return {"user": "anonymous"} return {"auth_context": "present"} ``` ## Progress example ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import anyio from dedalus_mcp import get_context, tool @tool(description="Process files with progress reporting") async def process_files(paths: list[str]) -> dict: ctx = get_context() processed = 0 async with ctx.progress(total=len(paths)) as tracker: for _path in paths: await anyio.sleep(0.01) processed += 1 await tracker.advance(1) return {"processed": processed} ``` # Elicitation Source: https://docs.dedaluslabs.ai/dmcp/server/elicitation Request user input during tool execution Elicitation lets your server ask the client to collect **structured input from the user** while a tool is running—confirmations, missing parameters, or a small "form". 
## Basic usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool, types @tool(description="Deploy to environment") async def deploy(env: str) -> str: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") result = await server.request_elicitation( types.ElicitRequestParams( message=f"Deploy to {env}?", requestedSchema={ "type": "object", "properties": { "confirm": {"type": "boolean"}, }, "required": ["confirm"], }, ) ) if result.action == "accept" and result.content and result.content.get("confirm"): return f"Deployed to {env}" return "Deployment cancelled" ``` ## Parameters `request_elicitation(...)` takes an `ElicitRequestParams` with: * **`message: str`**: text shown to the user * **`requestedSchema: dict`**: a restricted JSON Schema object describing the expected fields ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} params = types.ElicitRequestParams( message="Enter configuration", requestedSchema={ "type": "object", "properties": { "name": {"type": "string"}, "replicas": {"type": "integer"}, "dry_run": {"type": "boolean"}, }, "required": ["name"], }, ) result = await ctx.server.request_elicitation(params) ``` **Schema limitations (enforced by Dedalus MCP):** * top-level `type` must be `"object"` * `properties` must be a non-empty object * property `type` must be one of: `"string"`, `"number"`, `"integer"`, `"boolean"` * nested objects/arrays are not supported ## Response actions The client returns an `ElicitResult` with: * `action`: `"accept" | "decline" | "cancel"` * `content`: optional mapping (present when `action == "accept"`) ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await ctx.server.request_elicitation(params) match result.action: case "accept": data = result.content or {} return f"Got: {data}" case "decline": return "User declined" case "cancel": return "User cancelled" ``` 
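The schema restrictions above are easy to check before sending a request. A small pre-flight validator sketch (plain Python, not part of the dedalus_mcp API):

```python
ALLOWED_TYPES = {"string", "number", "integer", "boolean"}

def validate_elicit_schema(schema: dict) -> list[str]:
    """Return violations of the Dedalus MCP elicitation schema rules."""
    errors: list[str] = []
    if schema.get("type") != "object":
        errors.append('top-level "type" must be "object"')
    props = schema.get("properties")
    if not isinstance(props, dict) or not props:
        errors.append('"properties" must be a non-empty object')
        return errors
    for name, spec in props.items():
        if spec.get("type") not in ALLOWED_TYPES:
            errors.append(f'property "{name}" has unsupported type {spec.get("type")!r}')
    return errors

# A valid confirmation schema passes with no errors
assert validate_elicit_schema({
    "type": "object",
    "properties": {"confirm": {"type": "boolean"}},
}) == []

# Arrays are rejected: nested objects/arrays are not supported
assert validate_elicit_schema({
    "type": "object",
    "properties": {"items": {"type": "array"}},
})
```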
## Example: Progressive disclosure Collect complex information step-by-step: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool, types @tool(description="Create new project") async def create_project() -> str: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") name = await server.request_elicitation( types.ElicitRequestParams( message="Project name:", requestedSchema={ "type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"], }, ) ) if name.action != "accept" or not name.content: return "Cancelled" project_type = await server.request_elicitation( types.ElicitRequestParams( message="Project type (web/api/cli):", requestedSchema={ "type": "object", "properties": {"type": {"type": "string"}}, "required": ["type"], }, ) ) if project_type.action != "accept" or not project_type.content: return "Cancelled" confirm = await server.request_elicitation( types.ElicitRequestParams( message=f"Create {project_type.content['type']} project '{name.content['name']}'?", requestedSchema={ "type": "object", "properties": {"confirm": {"type": "boolean"}}, "required": ["confirm"], }, ) ) if confirm.action == "accept" and confirm.content and confirm.content.get("confirm"): return f"Created {name.content['name']}" return "Cancelled" ``` ## Error handling Elicitation requires an active MCP session and a client that advertises the `elicitation` capability. If not, `request_elicitation(...)` raises `McpError` (typically `METHOD_NOT_FOUND`), and timeouts raise `McpError` as well. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from mcp.shared.exceptions import McpError from dedalus_mcp import types try: result = await ctx.server.request_elicitation(params) except McpError as e: return f"Elicitation unavailable: {e}" ``` # Logging Source: https://docs.dedaluslabs.ai/dmcp/server/logging Send log messages to connected clients Logging lets your server send debug/info/warning/error messages to MCP clients while handling a request. This is helpful for visibility during tool execution and for debugging. **Note**: Clients decide how (or whether) to display these logs. ## Basic usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process data") async def process(data: str) -> str: ctx = get_context() await ctx.info("Processing", data={"bytes": len(data)}) # ... your work ... await ctx.info("Processing complete") return "done" ``` ## Log levels ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await ctx.debug("Detailed debugging info") await ctx.info("General operational messages") await ctx.warning("Warning conditions") await ctx.error("Error conditions") ``` | Method | Level | Use case | | --------------- | ------- | ------------------------------ | | `ctx.debug()` | DEBUG | Detailed debugging information | | `ctx.info()` | INFO | General operational messages | | `ctx.warning()` | WARNING | Warning conditions | | `ctx.error()` | ERROR | Error conditions | ## Example: Data pipeline ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Run data pipeline") async def run_pipeline(source: str) -> dict: ctx = get_context() await ctx.info("Starting pipeline", data={"source": source}) # Load await ctx.debug("Loading data...") data = load_data(source) # your code await ctx.info("Loaded records", data={"count": len(data)}) # Transform await 
ctx.debug("Transforming data...") try: transformed = transform(data) # your code except ValueError as e: await ctx.warning("Transform warning", data={"error": str(e)}) transformed = fallback_transform(data) # your code # Save await ctx.debug("Saving results...") try: save(transformed) # your code await ctx.info("Pipeline complete", data={"records": len(transformed)}) except OSError as e: await ctx.error("Save failed", data={"error": str(e)}) raise return {"records": len(transformed)} ``` ## Example: Batch processing ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process items in batch") async def batch_process(items: list[str]) -> dict: ctx = get_context() results = {"success": 0, "failed": 0} await ctx.info("Starting batch", data={"items": len(items)}) for i, item in enumerate(items, start=1): await ctx.debug("Processing item", data={"index": i, "total": len(items), "item": item}) try: process_item(item) # your code results["success"] += 1 except Exception as e: await ctx.warning("Item failed", data={"item": item, "error": str(e)}) results["failed"] += 1 if results["failed"]: await ctx.warning("Batch completed with failures", data=results) else: await ctx.info("Batch completed successfully", data=results) return results ``` ## Structured logging Pass structured fields using `data=`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await ctx.info( "Request processed", data={ "duration_ms": 150, "items_processed": 42, }, ) ``` **Tip**: Avoid using the key `"msg"` inside `data`—Dedalus MCP uses `"msg"` internally for the main message text. 
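Since `"msg"` is reserved, a tiny helper can strip it from caller-supplied fields before logging. A hypothetical convenience, not part of the library:

```python
RESERVED_KEYS = {"msg"}

def log_fields(fields: dict) -> dict:
    # Drop keys that would collide with Dedalus MCP's internal message field
    return {k: v for k, v in fields.items() if k not in RESERVED_KEYS}

assert log_fields({"msg": "oops", "duration_ms": 150}) == {"duration_ms": 150}
```

Then call `await ctx.info("Request processed", data=log_fields(fields))` when the fields come from user input or other dynamic sources.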
# Overview Source: https://docs.dedaluslabs.ai/dmcp/server/overview Build MCP servers with dedalus_mcp ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, tool @tool(description="Add two numbers") def add(a: int, b: int) -> int: return a + b server = MCPServer("calculator") server.collect(add) if __name__ == "__main__": import asyncio asyncio.run(server.serve()) ``` Type hints become JSON Schema automatically. Register tools with `collect()`. Same pattern works for resources and prompts. **Server name must match your slug.** The `name` in `MCPServer("my-server")` must match your deployment slug and `ctx.dispatch()` calls. This ensures OAuth callbacks and request routing work correctly. ## With Dedalus SDK MCP integration is trivial. Pass servers directly to `mcp_servers`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import AsyncDedalus, DedalusRunner client = AsyncDedalus() runner = DedalusRunner(client) # Hosted MCP server (marketplace slug) response = await runner.run( input="Search for authentication docs", model="anthropic/claude-sonnet-4-20250514", mcp_servers=["your-org/your-server"], ) # Local MCP server URL response = await runner.run( input="Search for authentication docs", model="anthropic/claude-sonnet-4-20250514", mcp_servers=["http://localhost:8000/mcp"], ) ``` That's it. The SDK handles connection, tool discovery, and execution. ## Server primitives MCP servers expose three types of capabilities: | Primitive | Control | Description | | --------------------------------------- | ---------- | ------------------------------------------ | | [**Tools**](/dmcp/server/tools) | Model | Functions the LLM calls during reasoning. | | [**Resources**](/dmcp/server/resources) | Model/User | Data the LLM can read for context. | | [**Prompts**](/dmcp/server/prompts) | User | Message templates users select and render. 
| Tools are model-controlled: the LLM decides when to call them. Prompts are user-controlled: users choose which prompt to run. Resources can be either. ## Additional capabilities | Capability | How | | ---------------- | --------------------------------------- | | **Progress** | `ctx.progress()` for long-running tasks | | **Logging** | `ctx.info()`, `ctx.debug()`, etc. | | **Cancellation** | `ctx.cancelled` flag | ## Next Build an MCP server in 5 minutes Expose functions to AI agents Reusable message templates Expose data for agents to read X, GitHub, Gmail production servers # Progress Source: https://docs.dedaluslabs.ai/dmcp/server/progress Report progress of long-running operations Progress reporting lets your server send progress updates to the client while a tool is running. Clients can use these updates to show spinners, progress bars, and "still working" UI during slow operations. **Note**: `ctx.report_progress(...)` only sends a notification if the client provided a progress token for this request. If the client didn't request progress, it's a no-op. 
## Basic usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process files") async def process_files(files: list[str]) -> str: ctx = get_context() for i, file in enumerate(files, start=1): await ctx.report_progress(i, total=len(files), message=f"Processing {file}") process_file(file) # your code; if this blocks, offload or make it async return f"Processed {len(files)} files" ``` ## Parameters ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} await ctx.report_progress( progress=50, # current progress value total=100, # optional total (for percentage) message="Halfway done...", # optional status message ) ``` | Parameter | Type | Description | | ---------- | ---------------------- | ------------------------------------------- | | `progress` | `int \| float` | Current progress value | | `total` | `int \| float \| None` | Optional total value; enables percentage UI | | `message` | `str \| None` | Optional human-friendly status text | ## Example: File download ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Download files") async def download_files(urls: list[str]) -> dict: ctx = get_context() downloaded: list[str] = [] for i, url in enumerate(urls, start=1): await ctx.report_progress(i - 1, total=len(urls), message=f"Downloading {url}") await ctx.info("Downloading", data={"url": url}) path = await download(url) # your code downloaded.append(path) await ctx.report_progress(i, total=len(urls)) return {"files": downloaded} ``` ## Example: Data processing pipeline ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Process large dataset") async def process_dataset(dataset_id: str) -> dict: ctx = get_context() await ctx.report_progress(0, total=100, message="Loading dataset") data = 
load_dataset(dataset_id) # your code await ctx.report_progress(30, total=100, message="Transforming data") total_items = len(data) if total_items: for i, item in enumerate(data, start=1): # map item progress into the 30..70 range progress = 30 + int((i / total_items) * 40) await ctx.report_progress(progress, total=100) transform(item) # your code await ctx.report_progress(70, total=100, message="Saving results") save_results(data) # your code await ctx.report_progress(100, total=100, message="Done") return {"processed": len(data)} ``` ## Example: Indeterminate progress If you don't know the total up front, omit `total` and send periodic updates: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="Search until found") async def search(query: str) -> str: ctx = get_context() pages_searched = 0 while True: pages_searched += 1 await ctx.report_progress(pages_searched, message=f"Searched {pages_searched} pages") result = search_page(query, pages_searched) # your code if result: return result if pages_searched > 100: return "Not found" ``` ## Tips * **Prefer async work**: progress updates are most useful when your tool is doing I/O (`async def`). If you do CPU-heavy or blocking work, consider offloading it so progress notifications can still flow. * **Use `message` sparingly**: short messages like "Downloading…", "Transforming…", "Saving…" are easiest for clients to display. * **Don't spam updates**: sending progress on every tiny step can be noisy. For very large loops, you may want to report every N items. # Prompts Source: https://docs.dedaluslabs.ai/dmcp/server/prompts Expose reusable message templates to MCP clients with the @prompt decorator Prompts are user-controlled message templates. Unlike tools (which the LLM calls), prompts are selected by users and rendered into conversation messages. Prompts flow through the MCP protocol like this: 1. 
A client discovers prompts via `prompts/list` (each prompt may include `arguments` and metadata). 2. A client renders a prompt via `prompts/get` with **string-valued** `arguments`. 3. The server executes your prompt renderer. 4. The server returns a `GetPromptResult` containing `messages` (and optional `description`) per MCP spec. Define prompts with `@prompt(...)` and register them with `server.collect(...)` (or inside `with server.binding(): ...`). ### Decorator signature * **`prompt(...)`**: `prompt(name: str, *, description=None, title=None, arguments=None, icons=None, meta=None)` Your renderer is called like: * **`fn(arguments: dict[str, str] | None)`** → returns messages / mapping / `GetPromptResult` / `None` *** ## Basic prompt ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, prompt, types @prompt( "code-review", description="Review code for issues", arguments=[ types.PromptArgument(name="language", required=False), types.PromptArgument(name="focus", required=False), ], ) def code_review(arguments: dict[str, str] | None): args = arguments or {} language = args.get("language", "python") focus = args.get("focus") focus_text = f" Focus on: {focus}." if focus else "" return [ ("assistant", "You are a senior code reviewer."), ("user", f"Review the following {language} code.{focus_text}"), ] server = MCPServer("assistant") server.collect(code_review) ``` The description tells the client/LLM what the prompt is for. `arguments=[...]` defines what the client should pass to `prompts/get`. *** ## Arguments (required vs optional) Dedalus MCP treats arguments as required **only** if you mark them `required=True` in the decorator's `arguments=[...]`. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, prompt, types @prompt( "translate", description="Translate text", arguments=[ types.PromptArgument(name="text", required=True), types.PromptArgument(name="target_lang", required=False), ], ) def translate(arguments: dict[str, str] | None): args = arguments or {} text = args["text"] target = args.get("target_lang", "English") return [("user", f"Translate this to {target}: {text}")] server = MCPServer("assistant") server.collect(translate) ``` If required args are missing, Dedalus raises an MCP error with `INVALID_PARAMS`. *** ## Complex argument values (lists/dicts) MCP prompt arguments are strings. If you need structured values, pass JSON strings and parse them yourself: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import json from dedalus_mcp import prompt, types @prompt( "summarize", description="Summarize a document", arguments=[types.PromptArgument(name="focus_areas_json", required=False)], ) def summarize(arguments: dict[str, str] | None): args = arguments or {} focus_areas = json.loads(args.get("focus_areas_json", "[]")) return [("user", f"Summarize this document. Focus on: {focus_areas}")] ``` *** ## Return formats Prompt renderers can return: * **A list/iterable of messages**, where each item can be: * a `(role, content)` tuple * a mapping like `{"role": "...", "content": ...}` * a `PromptMessage` instance * **A mapping** with explicit control: * **required**: `"messages"` * **optional**: `"description"` * **`GetPromptResult`** * **`None`** (produces zero messages) **Not supported**: returning a raw `str` (Dedalus raises `TypeError` so you always provide role + content). 
### Explicit control (mapping) ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import prompt @prompt("status", description="Daily status template") def status(arguments: dict[str, str] | None): return { "description": "Status template", "messages": [ ("assistant", "You summarize daily status reports."), ("user", "Write yesterday/today/blockers."), ], } ``` *** ## Message content For message `content`, you can use: * a `str` (auto-coerced to text content) * a full content-block mapping (e.g. `{"type": "text", "text": "..."}`) * a content-block instance from `dedalus_mcp.types` (e.g. `TextContent`, `ImageContent`, etc.) *** ## Async prompts ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import prompt @prompt("db-summary", description="Summarize database state") async def db_summary(arguments: dict[str, str] | None): # await fetch_db_stats(...) return [ ("assistant", "You analyze database metrics."), ("user", "Summarize current DB health and recent anomalies."), ] ``` Prefer `async def` for I/O. *** ## Decorator options ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import prompt @prompt( "analyze", description="Analyze code", title="Code Analysis", icons=[{"type": "url", "url": "https://example.com/icon.png"}], meta={"category": "code"}, arguments=[{"name": "language", "required": False}], ) def analyze(arguments: dict[str, str] | None): language = (arguments or {}).get("language", "python") return [("user", f"Analyze this {language} code for bugs, style, and security.")] ``` *** ## Context access If a prompt is rendered during an MCP request, it can access context via `get_context()` (for logging, progress, etc.). **Note**: `get_context()` only works inside an active MCP request handler; calling it outside a request raises `LookupError`. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import prompt, get_context @prompt("generate-report", description="Generate a report request") async def generate_report(arguments: dict[str, str] | None): ctx = get_context() await ctx.info("Rendering prompt", data={"args": arguments or {}}) return [("user", "Generate a concise weekly report for this project.")] ``` *** ## Testing Test prompt renderers like normal functions: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} def test_code_review_prompt(): msgs = code_review({"language": "python", "focus": "error handling"}) assert len(msgs) == 2 ``` Integration-style test via the server API (mirrors `prompts/get`): ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import pytest from dedalus_mcp import MCPServer @pytest.mark.asyncio async def test_prompt_rendering(): server = MCPServer("test") server.collect(code_review) result = await server.invoke_prompt("code-review", arguments={"language": "rust"}) assert result.messages ``` # Resources Source: https://docs.dedaluslabs.ai/dmcp/server/resources Expose data for AI agents to read with the @resource decorator Resources are **read-only data** the client/LLM can pull in as context. Unlike tools (actions), resources are meant to provide information **without side effects**. Clients can also subscribe to resource URIs and receive change notifications. Resources flow through the MCP protocol like this: 1. A client discovers resources via `resources/list` (each resource includes `uri`, optional `name`, optional `mimeType`, etc.). 2. A client reads a resource via `resources/read` with a `uri`. 3. The server executes your resource handler. 4. The server returns a `ReadResourceResult` containing `contents` per MCP spec. Define resources with `@resource(...)` and register them with `server.collect(...)` (or inside `with server.binding(): ...`). 
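The read round trip in steps 2–4 looks roughly like this on the wire (payload shapes follow the MCP spec; values are illustrative):

```python
# Client -> server: resources/read request (JSON-RPC 2.0)
read_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "resource://config/app"},
}

# Server -> client: ReadResourceResult with one text content item
read_result = {
    "contents": [
        {
            "uri": "resource://config/app",
            "mimeType": "application/json",
            "text": '{"debug": true, "version": "1.2.0"}',
        }
    ]
}

assert read_result["contents"][0]["mimeType"] == "application/json"
```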
### Decorator signature * **`resource(...)`**: `resource(uri: str, *, name=None, description=None, mime_type=None)` Your handler is called like: * **`fn()`** → returns `str` (text) or `bytes` (binary). (`async def` is also supported.) *** ## Basic resource ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, resource @resource("resource://config/app", name="config", description="Application config", mime_type="application/json") def app_config() -> str: return '{"debug": true, "version": "1.2.0"}' server = MCPServer("config-server") server.collect(app_config) ``` The decorator defines the resource URI. `collect()` registers it. Clients list resources with `resources/list` and read them with `resources/read`. *** ## Text vs binary Return `str` for text: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import resource @resource("resource://docs/readme", mime_type="text/markdown") def readme() -> str: with open("README.md", "r", encoding="utf-8") as f: return f.read() ``` Return `bytes` for binary (Dedalus encodes it as base64 in the MCP response): ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import resource @resource("resource://assets/logo", mime_type="image/png") def logo() -> bytes: with open("logo.png", "rb") as f: return f.read() ``` *** ## Async resources ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import anyio from dedalus_mcp import resource @resource("resource://api/users", description="Active users", mime_type="application/json") async def users() -> str: await anyio.sleep(0.1) # simulate I/O return '{"users": ["ada", "grace"]}' ``` Prefer `async def` for I/O. (Like tools, sync handlers run inline.) 
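As noted under Text vs binary above, `bytes` returns are base64-encoded for you. The transformation the framework applies is roughly (plain Python):

```python
import base64

def to_blob(data: bytes) -> str:
    # bytes resource results travel as base64 "blob" fields in the MCP response
    return base64.b64encode(data).decode("ascii")

# The PNG magic bytes encode to the familiar "iVBOR..." prefix
assert to_blob(b"\x89PNG") == "iVBORw=="
```

Clients decode the blob back to bytes, so your handler never deals with base64 directly.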
*** ## MIME types ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import json from dedalus_mcp import resource @resource("resource://data/report", mime_type="application/json") def report() -> str: return json.dumps({"users": 100, "active": 42}) ``` Common defaults: * `text/plain` (default when returning `str` and no `mime_type` is provided) * `application/octet-stream` (default when returning `bytes` and no `mime_type` is provided) *** ## Subscriptions Clients can subscribe to resource changes via `resources/subscribe` and unsubscribe via `resources/unsubscribe`. When your underlying data changes, notify subscribers: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer server = MCPServer("live-data") server.collect(users) # When the data behind this URI changes: await server.notify_resource_updated("resource://api/users") ``` Subscribed clients receive `notifications/resources/updated`. *** ## Decorator options ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import resource @resource( "resource://config/app", name="app-config", # Human-friendly name (shown in resources/list) description="App settings", # Shown to clients mime_type="application/json", ) def config() -> str: return '{"debug": false}' ``` *** ## Error handling If your resource handler raises, Dedalus returns a **text fallback** resource: * `mimeType="text/plain"` * `text="Resource error: "` If a URI is not registered, `resources/read` returns an empty `contents` list. 
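If you would rather return a structured error payload than rely on the text fallback, catch exceptions inside the handler yourself. A sketch of that pattern (`load_settings` is a hypothetical lookup that may fail; the `@resource(...)` decorator is omitted to keep the snippet self-contained):

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import json

def load_settings() -> dict:
    # Stand-in for a lookup that can fail (missing file, bad JSON, ...).
    raise FileNotFoundError("settings.json not found")

def app_settings() -> str:
    try:
        return json.dumps(load_settings())
    except Exception as exc:
        # Still valid JSON, so mime_type="application/json" holds even on failure.
        return json.dumps({"error": str(exc)})
```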
*** ## Testing Test resource handlers as normal functions: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import json def test_config_resource(): data = json.loads(app_config()) assert data["version"] == "1.2.0" ``` Integration-style test via the server API (mirrors `resources/read`): ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import pytest from dedalus_mcp import MCPServer @pytest.mark.asyncio async def test_resource_read(): server = MCPServer("test") server.collect(app_config) result = await server.invoke_resource("resource://config/app") assert result.contents[0].text == '{"debug": true, "version": "1.2.0"}' ``` *** ## Resource templates Use `@resource_template(...)` to advertise **URI patterns** via `resources/templates/list` (for client discovery and completions). Dedalus MCP currently registers templates for listing, but **does not automatically route `resources/read` to a template function**; you still need to register concrete resource URIs with `@resource(...)`. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import json from dedalus_mcp import resource_template @resource_template( "user-profile", uri_template="resource://users/{user_id}", description="User profile by ID", ) async def user_profile(user_id: str) -> str: user = await fetch_user(user_id) return json.dumps(user) ``` # Roots Source: https://docs.dedaluslabs.ai/dmcp/server/roots Access client filesystem boundaries Roots are filesystem boundaries **advertised by the client**. Servers can use roots to understand what parts of the client's filesystem are intended to be in-scope (for example, "this project folder"), and to **enforce guardrails** when reading or writing files. In MCP, Roots are a **client capability** (`roots/list`). Dedalus MCP provides a server-side `RootsService` that: * fetches roots from the client, * caches a per-session snapshot, and * offers a `RootGuard` helper for path checks.
## Basic usage Fetch the latest roots for the current session, then use them: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool @tool(description="List client roots") async def list_roots() -> list[str]: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") roots = await server.roots.refresh(ctx.session) # fetch from client return [f"{r.name}: {r.uri}" for r in roots] ``` You don't need to re-fetch every time; you can read the cached snapshot: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} roots = ctx.server.roots.snapshot(ctx.session) ``` *** ## Root structure Each root contains: | Field | Type | Description | | ------ | ----- | ------------------------------------ | | `uri` | `str` | Root URI (typically a `file://` URI) | | `name` | `str` | Human-readable name | *** ## Example: Safe file operations (RootGuard) Use `RootGuard` to check whether a path is inside one of the allowed roots: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from pathlib import Path from dedalus_mcp import get_context, tool @tool(description="Read a file, restricted to roots") async def safe_read(filepath: str) -> str: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") # Make sure we have an up-to-date snapshot await server.roots.refresh(ctx.session) guard = server.roots.guard(ctx.session) target = Path(filepath).expanduser().resolve() if not guard.within(target): raise ValueError("Path is outside allowed roots") return target.read_text(encoding="utf-8") ``` *** ## Example: Project discovery (file:// roots) If your client roots are `file://...` URIs, you can walk them to discover projects.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from pathlib import Path from urllib.parse import urlparse, unquote from dedalus_mcp import get_context, tool def file_uri_to_path(uri: str) -> Path: parsed = urlparse(uri) if parsed.scheme != "file": raise ValueError(f"Unsupported root scheme: {parsed.scheme!r}") return Path(unquote(parsed.path)).expanduser().resolve() @tool(description="Find project roots by marker files") async def find_projects() -> list[dict]: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") roots = await server.roots.refresh(ctx.session) projects: list[dict] = [] for root in roots: root_path = file_uri_to_path(str(root.uri)) for marker in ["package.json", "pyproject.toml", "Cargo.toml"]: if (root_path / marker).exists(): projects.append( { "root": root.name, "path": str(root_path), "type": marker, } ) return projects ``` *** ## Example: Scoped search Search only within roots (and log what you're doing): ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from pathlib import Path from urllib.parse import urlparse, unquote from dedalus_mcp import get_context, tool def file_uri_to_path(uri: str) -> Path: parsed = urlparse(uri) if parsed.scheme != "file": raise ValueError(f"Unsupported root scheme: {parsed.scheme!r}") return Path(unquote(parsed.path)).expanduser().resolve() @tool(description="Search for files within roots") async def search_files(pattern: str) -> list[str]: ctx = get_context() server = ctx.server if server is None: raise RuntimeError("No active server in context") roots = await server.roots.refresh(ctx.session) await ctx.info("Searching roots", data={"roots": len(roots), "pattern": pattern}) matches: list[str] = [] for root in roots: await ctx.debug("Searching root", data={"root": root.name, "uri": str(root.uri)}) root_path = file_uri_to_path(str(root.uri)) for match in root_path.rglob(pattern): matches.append(str(match)) await 
ctx.info("Search complete", data={"matches": len(matches)}) return matches ``` *** ## Notes * **Caching**: `server.roots.snapshot(session)` returns the cached roots. `await server.roots.refresh(session)` updates the cache by calling the client. * **Client-driven updates**: If the client sends `roots/list_changed`, Dedalus MCP updates the snapshot (debounced) for that session automatically. * **Security**: Roots are guidance plus a boundary for your own checks. If you're doing file I/O, always enforce a guard (`RootGuard.within(...)`) before reading/writing. # Sampling Source: https://docs.dedaluslabs.ai/dmcp/server/sampling Request LLM completions from the client Sampling lets your server ask the client to run an LLM and return a completion **while a tool is executing**. This enables tools to leverage AI for analysis and generation without the client having to orchestrate multiple tool calls. ## Basic usage ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool, types @tool(description="Analyze data with Gaussian assumptions") async def analyze(data: str) -> str: ctx = get_context() params = types.CreateMessageRequestParams( messages=[ types.SamplingMessage( role="user", content=types.TextContent(type="text", text=f"Analyze this data with Gaussian assumptions and report the estimators: {data}"), ) ], maxTokens=400, ) result = await ctx.server.request_sampling(params) return result.content.text ``` ## Parameters Sampling requests are expressed as `CreateMessageRequestParams` (field names match the MCP schema, e.g. `maxTokens`, `systemPrompt`).
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} params = types.CreateMessageRequestParams( messages=[ types.SamplingMessage( role="user", content=types.TextContent(type="text", text="Analyze this data"), ) ], systemPrompt="You are an expert analyst", temperature=0.7, # 0.0 = deterministic, 1.0 = creative maxTokens=1024, # maximum output tokens ) result = await ctx.server.request_sampling(params) ``` | Parameter | Type | Description | | ------------------ | ------------------------------------------------ | ---------------------------------------------------------- | | `messages` | `list[SamplingMessage]` | Prompt or conversation messages | | `systemPrompt` | `str \| None` | Instructions for the LLM | | `temperature` | `float \| None` | Randomness/creativity | | `maxTokens` | `int` | Maximum output tokens (**required**) | | `model` | `str \| None` | Optional model hint | | `stopSequences` | `list[str] \| None` | Stop strings | | `includeContext` | `"none" \| "thisServer" \| "allServers" \| None` | Whether the client should include additional context | | `modelPreferences` | `ModelPreferences \| None` | Model selection hints (client may ignore) | | `metadata` | `dict[str, object] \| None` | Opaque metadata; Dedalus will add a `requestId` if missing | ## Response `request_sampling(...)` returns a `CreateMessageResult`. 
Most clients return `TextContent`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} result = await ctx.server.request_sampling(params) print(result.content.text) ``` ## Multi-turn conversations Pass a list of messages for multi-turn context: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import types params = types.CreateMessageRequestParams( messages=[ types.SamplingMessage(role="user", content=types.TextContent(type="text", text="What is Python?")), types.SamplingMessage(role="assistant", content=types.TextContent(type="text", text="A programming language.")), types.SamplingMessage(role="user", content=types.TextContent(type="text", text="What are its main features?")), ], maxTokens=200, ) result = await ctx.server.request_sampling(params) ``` ## Example: Code review ````python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import get_context, tool, types @tool(description="Review code for issues in the repo") async def review_code(code: str, language: str) -> str: ctx = get_context() params = types.CreateMessageRequestParams( messages=[ types.SamplingMessage( role="user", content=types.TextContent( type="text", text=f"Review this {language} code:\n\n```{language}\n{code}\n```", ), ) ], systemPrompt="You are an expert code reviewer. Be concise and actionable.", temperature=0.2, maxTokens=500, ) result = await ctx.server.request_sampling(params) return result.content.text ```` ## Error handling Sampling requires the client to advertise the sampling capability. 
If the client doesn't support sampling, `request_sampling(...)` raises `McpError` (typically `METHOD_NOT_FOUND`): ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from mcp.shared.exceptions import McpError from dedalus_mcp import get_context, tool, types @tool(description="Analyze with AI under Gaussian assumptions") async def analyze_with_fallback(data: str) -> str: ctx = get_context() params = types.CreateMessageRequestParams( messages=[types.SamplingMessage(role="user", content=types.TextContent(type="text", text=f"Analyze: {data}"))], maxTokens=256, ) try: result = await ctx.server.request_sampling(params) return result.content.text except McpError as e: return f"Sampling unavailable: {e}" ``` # Tools Source: https://docs.dedaluslabs.ai/dmcp/server/tools Expose functions to AI agents with the @tool decorator Tools let agents call your Python functions. Decorate, register, serve. Tools are the core building blocks that allow an MCP client to invoke your Python functions via the MCP protocol: 1. A client discovers tools via `tools/list` (each tool includes `inputSchema` and optional `outputSchema`). 2. A client calls a tool via `tools/call` with `arguments` matching the schema. 3. The server executes your callable. 4. The server returns a `CallToolResult` containing `content` (and optionally `structuredContent`) per MCP spec. ## Basic tool ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, tool @tool(description="Add two numbers") def add(a: int, b: int) -> int: return a + b server = MCPServer("math") server.collect(add) ``` The description tells the LLM what the tool does. Type hints become JSON Schema.
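For the `add` tool above, the advertised `inputSchema` looks roughly like this (illustrative; the schema Dedalus actually emits may include additional metadata such as titles or descriptions):

```json theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "type": "object",
  "properties": {
    "a": { "type": "integer" },
    "b": { "type": "integer" }
  },
  "required": ["a", "b"]
}
```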
## Async tools ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import anyio from dedalus_mcp import tool @tool(description="Fetch user data (simulated I/O)") async def get_user(user_id: str) -> dict: await anyio.sleep(0.1) return {"user_id": user_id, "status": "ok"} ``` Prefer async for I/O. **Important**: in Dedalus MCP, **sync tools run inline** (they are not automatically moved to a thread pool). If you need concurrency for blocking work, use `async def` and offload explicitly. ## Type inference Type hints become JSON Schema automatically: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from typing import Literal from pydantic import BaseModel from dedalus_mcp import tool class SearchFilters(BaseModel): category: str | None = None min_price: float = 0.0 @tool(description="Search products") def search( query: str, limit: int = 10, sort: Literal["relevance", "price", "date"] = "relevance", filters: SearchFilters | None = None, ) -> list[dict]: return [{"query": query, "limit": limit, "sort": sort, "filters": filters.model_dump() if filters else None}] ``` Supported: primitives, `list`, `dict`, `Literal`, `Enum`, optionals/unions, Pydantic models, dataclasses, nested models. Required parameters have no default. Optional parameters have one. 
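The required/optional split can be read straight off the function signature. A standard-library sketch of the idea (not the actual Dedalus implementation):

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import inspect

def search(query: str, limit: int = 10) -> list:
    return []

# Parameters without a default value become "required" in the JSON Schema.
params = inspect.signature(search).parameters
required = [name for name, p in params.items() if p.default is inspect.Parameter.empty]
print(required)  # ['query']
```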
## Decorator options ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import tool @tool( name="find_products", # Override tool name description="Search catalog", # Tool description tags={"search", "catalog"}, # For filtering/metadata ) def search_products_impl(query: str) -> list[dict]: return [{"id": "p_1", "name": "Widget", "query": query}] ``` ## Structured returns Return JSON-serializable values: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import tool @tool(description="Analyze text") def analyze(text: str) -> dict: return {"word_count": len(text.split()), "char_count": len(text)} ``` For explicit control, return `CallToolResult`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import tool from dedalus_mcp.types import CallToolResult, TextContent @tool(description="Custom result") def custom() -> CallToolResult: return CallToolResult( content=[TextContent(type="text", text="Custom message")], isError=False, ) ``` ## Context access Logging and progress via `get_context()`: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import anyio from dedalus_mcp import tool, get_context @tool(description="Process files with progress reporting") async def process_files(paths: list[str]) -> dict: ctx = get_context() await ctx.info("Starting", data={"count": len(paths)}) processed = 0 try: async with ctx.progress(total=len(paths)) as tracker: for path in paths: # Simulate work; cancellation is delivered as task cancellation await anyio.sleep(0.01) processed += 1 await tracker.advance(1) except anyio.get_cancelled_exc_class(): await ctx.warning("Cancelled", data={"processed": processed}) raise return {"processed": processed} ``` ## Allow-lists Restrict visible tools: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import MCPServer, tool @tool(description="Add") def add(a: int, b: int) -> 
int: return a + b @tool(description="Multiply") def multiply(a: int, b: int) -> int: return a * b server = MCPServer("gated") server.collect(add, multiply) server.allow_tools({"add"}) ``` Calling a hidden tool returns an error `CallToolResult` indicating the tool is not available. ## Error handling Raise exceptions normally: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_mcp import tool @tool(description="Divide") def divide(a: float, b: float) -> float: if b == 0: raise ValueError("Cannot divide by zero") return a / b ``` ## Testing Test tools as normal functions: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} def test_add(): assert add(2, 3) == 5 ``` For tools using context, test the core logic separately (or use an integration-style harness). # Testing Source: https://docs.dedaluslabs.ai/dmcp/testing Test MCP servers ## Unit test tools directly Tools are just functions. Test them without a server: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} def test_add(): assert add(2, 3) == 5 async def test_async_tool(): result = await fetch_user("123") assert result["id"] == "123" ``` ## Tools using context Tools that call `get_context()` require an active request. 
Test them via integration tests with a real server, or structure your tool to make the context-dependent part mockable: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} @tool(description="Process with logging") async def process(data: str) -> dict: ctx = get_context() await ctx.info("Processing") return do_work(data) # Test do_work separately def test_do_work(): assert do_work("input") == {"result": "output"} ``` ## Integration test with MCPClient Test the full server: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import pytest from dedalus_mcp import MCPServer, tool from dedalus_mcp.client import MCPClient @tool(description="Add") def add(a: int, b: int) -> int: return a + b @pytest.fixture async def server(): server = MCPServer("test") server.collect(add) # Start server in background task = asyncio.create_task(server.serve()) await asyncio.sleep(0.1) # Let it start yield server task.cancel() @pytest.fixture async def client(server): client = await MCPClient.connect("http://127.0.0.1:8000/mcp") yield client await client.close() async def test_call_tool(client): result = await client.call_tool("add", {"a": 2, "b": 3}) assert result.content[0].text == "5" ``` ## Test registration Verify tools register correctly: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} def test_registration(): server = MCPServer("test") server.collect(add, multiply) names = list(server.tool_names) assert "add" in names assert "multiply" in names ``` ## Isolation Since decorators don't bind to servers at import time, each test gets clean state: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} def test_a(): server = MCPServer("test-a") server.collect(tool_a) # No teardown needed def test_b(): server = MCPServer("test-b") server.collect(tool_b) # Completely independent ``` # Use docs programmatically Source: https://docs.dedaluslabs.ai/dmcp/use-these-docs Connect Dedalus documentation to your AI tools
and workflows We want to make our documentation as accessible as possible. We've included several ways for you to use these docs programmatically through AI assistants, code editors, and direct integrations, such as Model Context Protocol (MCP). ## Quick access options On any page in our documentation, you'll find a contextual menu dropdown in the top right corner with quick access options, including our `llms.txt`, MCP server connection, and other integrations such as ChatGPT and Claude. *Quick access menu showing Copy page, View as Markdown, Open in ChatGPT, Open in Claude, and Copy MCP Server options.* ## Use our MCP server Our documentation includes a built-in **Model Context Protocol (MCP) server** that lets AI applications query the latest docs in real time. The Dedalus docs MCP server is available at: ```txt theme={"theme":{"light":"github-light","dark":"github-dark"}} https://docs.dedaluslabs.ai/mcp ``` Once connected, you can ask your AI assistant questions about the Dedalus SDK, MCP servers, and our platform, and it will search our documentation to provide accurate, current answers. ### Connect with Claude Code If you're using Claude Code, run this command in your terminal to add the server to your current project: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus https://docs.dedaluslabs.ai/mcp ``` **Project (local) scoped** The command above adds the MCP server only to your current project/working directory. To add the MCP server globally and access it in all projects, use the `--scope user` flag: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} claude mcp add --transport http docs-dedalus --scope user https://docs.dedaluslabs.ai/mcp ``` ### Connect with Claude Desktop 1. Open Claude Desktop 2. Go to **Settings** → **Developer** → **Connectors** 3. Click **Add MCP Server** 4.
Add our MCP server URL: `https://docs.dedaluslabs.ai/mcp` ### Connect with Codex CLI If you're using OpenAI Codex CLI, run this command in your terminal to add the server globally: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} codex mcp add dedalus-docs --url https://docs.dedaluslabs.ai/mcp ``` ### Connect with Cursor Install in one click from the hosted docs page, or add this configuration to `.cursor/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with VS Code Install in one click from the hosted docs page, or add this configuration to `.vscode/mcp.json`: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "servers": { "docs-dedalus": { "type": "http", "url": "https://docs.dedaluslabs.ai/mcp" } } } ``` ### Connect with Antigravity Add the following to your MCP settings configuration file: ```json theme={"theme":{"light":"github-light","dark":"github-dark"}} { "mcpServers": { "docs-dedalus": { "serverUrl": "https://docs.dedaluslabs.ai/mcp" } } } ``` ## Learn more Have questions or feedback? Join our [Discord community](https://discord.gg/RuDhZKnq5R) or [email us](mailto:support@dedaluslabs.ai). # Bring Your Own Key (BYOK) Source: https://docs.dedaluslabs.ai/guides/byok Use your own API keys to call providers directly through Dedalus BYOK lets you send requests through Dedalus using your own provider API key. The request still flows through our unified API (routing, tool calling, streaming, format normalization), but the LLM call is billed to your account with the provider. ## When to use BYOK * You have negotiated pricing or credits with a provider. * You want to use a model tier or region not available on our shared keys. * Your compliance policy requires that API keys stay under your control.
## Quick start Pass three headers (or SDK options) alongside your normal Dedalus API key: | Header | SDK option | Description | | ------------------ | ---------------- | ----------------------------------------------------- | | `X-Provider` | `provider` | Provider name (`openai`, `anthropic`, `google`, etc.) | | `X-Provider-Key` | `provider_key` | Your API key for that provider | | `X-Provider-Model` | `provider_model` | Model identifier at the provider (optional) | Only `X-Provider-Key` is strictly required. If you omit `X-Provider`, it is inferred from the model name. If you omit `X-Provider-Model`, the model from the request body is used. ## Examples ### curl ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} curl https://api.dedaluslabs.ai/v1/chat/completions \ -H "Authorization: Bearer $DEDALUS_API_KEY" \ -H "X-Provider: openai" \ -H "X-Provider-Key: $OPENAI_API_KEY" \ -H "X-Provider-Model: gpt-4o" \ -H "Content-Type: application/json" \ -d '{ "model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hello"}] }' ``` ### Python SDK ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import AsyncDedalus client = AsyncDedalus( provider="openai", provider_key="sk-your-openai-key", provider_model="gpt-4o", ) response = await client.chat.completions.create( model="openai/gpt-4o", messages=[{"role": "user", "content": "Hello"}], ) ``` ### TypeScript SDK ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; const client = new Dedalus({ provider: "openai", providerKey: "sk-your-openai-key", providerModel: "gpt-4o", }); const response = await client.chat.completions.create({ model: "openai/gpt-4o", messages: [{ role: "user", content: "Hello" }], }); ``` ### Environment variables You can also set BYOK options via environment variables instead of passing them in code: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} export 
DEDALUS_PROVIDER="anthropic" export DEDALUS_PROVIDER_KEY="sk-ant-your-key" export DEDALUS_PROVIDER_MODEL="claude-sonnet-4-5-20250929" ``` The SDK picks these up automatically. No code changes needed. ## Per-request overrides The SDK options set defaults for every request. You can also override per-request by setting the headers directly: ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} response = await client.chat.completions.create( model="google/gemini-2.5-pro", messages=[{"role": "user", "content": "Hello"}], extra_headers={ "X-Provider": "google", "X-Provider-Key": "your-google-key", }, ) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} const response = await client.chat.completions.create( { model: "google/gemini-2.5-pro", messages: [{ role: "user", content: "Hello" }], }, { headers: { "X-Provider": "google", "X-Provider-Key": "your-google-key", }, }, ); ``` ## Supported providers Any provider in our [model list](/sdk/guides/providers) works with BYOK: openai anthropic google xai mistral deepseek groq cohere perplexity cerebras together\_ai fireworks\_ai moonshot ## How it works Your request still goes through Dedalus. We handle routing, format normalization, streaming, and tool calling. The only difference is which API key is used for the upstream LLM call. ``` You → Dedalus API (your Dedalus key) → Provider (your provider key) → Response → You ``` BYOK keys are sent over HTTPS and are never stored. They are used for the duration of the request and discarded. If you need Dedalus to manage keys on your behalf, contact us at [support@dedaluslabs.ai](mailto:support@dedaluslabs.ai). 
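The fallback rules from the Quick start table (provider inferred from the model-name prefix, model taken from the request body) can be sketched as a tiny resolver. This is illustrative only, not the server's actual code:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
def resolve_byok(headers: dict[str, str], body_model: str) -> tuple[str, str]:
    # X-Provider: when omitted, inferred from the model-name prefix ("openai/gpt-4o" -> "openai").
    provider = headers.get("X-Provider") or body_model.split("/", 1)[0]
    # X-Provider-Model: when omitted, the model from the request body is used.
    model = headers.get("X-Provider-Model") or body_model
    return provider, model

print(resolve_byok({"X-Provider-Key": "sk-example"}, "openai/gpt-4o"))  # ('openai', 'openai/gpt-4o')
```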
## Error handling | Scenario | What happens | | ------------------------------- | -------------------------------------------------- | | Invalid provider name | HTTP 400 with supported provider list | | Missing or invalid provider key | Provider returns its own auth error (usually 401) | | Model not available on provider | Provider returns its own model error (usually 404) | The error response always includes the upstream provider's error message so you can debug directly. # OAuth Account Management Source: https://docs.dedaluslabs.ai/guides/oauth-accounts Best practices for managing OAuth connections across accounts ## One API Key Per OAuth Account OAuth connections are scoped to your **API key** (not organization). This enables multiple OAuth accounts within the same org—just use different API keys. ``` API Key → api_key_id → connection → OAuth tokens ``` **Example**: To use Gmail MCP with two different email addresses: 1. Create two API keys in your dashboard 2. Use API key A with Gmail account A 3. Use API key B with Gmail account B 4. Each key sees only its own OAuth connections ## Why This Design? Connections are keyed by `(api_key_id, deployment_id, name)`. Different API keys within the same org get separate OAuth connections. This provides: * **Multi-account support**: Easy to use multiple OAuth identities * **Isolation**: Each API key's connections are independent * **Simplicity**: No account switching needed—just use different keys ## Switching Accounts To "switch" OAuth accounts, simply use a different API key. If you need to re-authenticate the same key: 1. Revoke access in the provider's settings (e.g., [Google Account](https://myaccount.google.com/permissions)) 2. Next MCP call triggers fresh OAuth flow Deleting an API key will delete all OAuth connections associated with it (CASCADE DELETE). Create new connections with your new key. 
## Data Model Reference | Entity | Key | Notes | | ----------------- | ----------------------------------- | -------------------------------- | | API Key | `id` (uuid) | Belongs to an organization | | Connection | `(api_key_id, deployment_id, name)` | Unique per key + server + name | | OAuth Credentials | `connection_id` (FK) | Encrypted tokens, auto-refreshed | # Health Check Source: https://docs.dedaluslabs.ai/health/health-check /openapi.json GET /health Simple health check. # Dedalus Docs Source: https://docs.dedaluslabs.ai/index Build model-agnostic agents powered by MCP
1. **Any Model**: OpenAI, Anthropic, Google, xAI, DeepSeek, Mistral. Switch models with one line. No vendor lock-in.
2. **MCP Native**: Build and deploy MCP servers. Connect to any hosted server. Full OAuth and multi-tenant auth built in.
3. **Production Ready**: Streaming, structured outputs, tool calling, handoffs, and runtime policies. Battle-tested at scale.
# Chat Source: https://docs.dedaluslabs.ai/sdk/chat Send messages and get responses from any model The core of the Dedalus SDK: send a message, get a response. Works with any model from any provider. ## Start with chat ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) response = await runner.run( input=( "I want to find the nearest basketball games in January in San Francisco.\n\n" "For now, do NOT make up events. Instead:\n" "1) Ask any clarifying questions you need.\n" "2) Propose a short plan for how you would find events.\n" "3) List the fields you'd extract for each event (for a table later)." ), model="anthropic/claude-opus-4-5", ) print(response.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client); async function main() { const response = await runner.run({ input: "I want to find the nearest basketball games in January in San Francisco.\n\n" + "For now, do NOT make up events. 
Instead:\n" + "1) Ask any clarifying questions you need.\n" + "2) Propose a short plan for how you would find events.\n" + "3) List the fields you'd extract for each event (for a table later).", model: "anthropic/claude-opus-4-5", }); console.log(response.finalOutput); } main(); ``` ## Next steps * **Add actions**: [Tools](/sdk/tools) — Let the model call your functions * **Connect external tools**: [MCP Servers](/sdk/mcp) — Use hosted MCP servers * **Stream the workflow**: [Streaming](/sdk/streaming) — Show progress in real time [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # Chat Server with UI Source: https://docs.dedaluslabs.ai/sdk/cookbook/chat-server Full-stack chat application with FastAPI, WebSockets, and model selection Build a real-time chat interface with streaming responses, model switching, and MCP server integration. This example creates a complete web application using FastAPI and WebSockets. ## How It Works The server combines several patterns: 1. **WebSocket connection** for real-time bidirectional communication 2. **In-memory sessions** to maintain conversation history per client 3. **Streaming responses** that display tokens as they arrive 4. 
**Dynamic configuration** for model and MCP server selection ## Key Concepts ### WebSocket Chat Flow ```mermaid theme={"theme":{"light":"github-light","dark":"github-dark"}} sequenceDiagram participant C as Client participant S as Server C->>S: Connect (WebSocket) C->>S: Message (includes model + MCP config) S-->>C: Start loop Streaming tokens S-->>C: Chunk end S-->>C: Done ``` ### Session Management Each WebSocket connection uses a session ID to maintain separate conversation histories: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} sessions: dict[str, list[dict]] = {} # In the WebSocket handler if session_id not in sessions: sessions[session_id] = [] # Append user message sessions[session_id].append({"role": "user", "content": message}) # After response, save assistant message sessions[session_id].append({"role": "assistant", "content": full_response}) ``` ### Streaming to WebSocket The runner's streaming response is forwarded chunk-by-chunk to the client: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} response_stream = runner.run(messages=history, model=model, stream=True) async for chunk in response_stream: if hasattr(chunk, "choices") and chunk.choices: delta = chunk.choices[0].delta if hasattr(delta, "content") and delta.content: await websocket.send_json({ "type": "chunk", "content": delta.content }) ``` ## Complete Example A full-stack chat application with a minimal UI: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} """ FastAPI Chat Server with UI =========================== Full-stack chat application with model and MCP server selection. 
Run: uv run --python 3.13 cookbook/02_chat_server.py Then open: http://localhost:8000 """ import asyncio import json from contextlib import asynccontextmanager from dotenv import load_dotenv from fastapi import FastAPI, WebSocket, WebSocketDisconnect from fastapi.responses import HTMLResponse import uvicorn from dedalus_labs import AsyncDedalus, DedalusRunner load_dotenv() # In-memory session storage (use Redis/DB in production) sessions: dict[str, list[dict]] = {} @asynccontextmanager async def lifespan(app: FastAPI): print("\n" + "=" * 50) print(" Dedalus Chat Server") print(" Open http://localhost:8000") print("=" * 50 + "\n") yield app = FastAPI(lifespan=lifespan) HTML_PAGE = """ Dedalus Chat

<!-- Chat UI markup elided in this listing: a header, a model selector, an MCP server input, the message list, and a WebSocket client script that appends streamed "chunk" events and re-enables input on "done". -->

""" @app.get("/") async def get_ui(): return HTMLResponse(HTML_PAGE) @app.websocket("/ws/{session_id}") async def websocket_chat(websocket: WebSocket, session_id: str): await websocket.accept() if session_id not in sessions: sessions[session_id] = [] client = AsyncDedalus() runner = DedalusRunner(client) try: while True: data = await websocket.receive_json() message = data.get("message", "") model = data.get("model", "openai/gpt-5.1") mcp_servers = data.get("mcp_servers", []) await websocket.send_json({"type": "start"}) try: # Append user message to history first sessions[session_id].append({"role": "user", "content": message}) history = sessions[session_id] kwargs = { "messages": history, "model": model, "stream": True, } if mcp_servers: kwargs["mcp_servers"] = mcp_servers response_stream = runner.run(**kwargs) full_response = "" async for chunk in response_stream: if hasattr(chunk, "choices") and chunk.choices: delta = chunk.choices[0].delta if hasattr(delta, "content") and delta.content: full_response += delta.content await websocket.send_json({ "type": "chunk", "content": delta.content }) # Save assistant response to session sessions[session_id].append({"role": "assistant", "content": full_response}) await websocket.send_json({"type": "done"}) except Exception as e: await websocket.send_json({"type": "error", "message": str(e)}) except WebSocketDisconnect: pass if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) ``` ## Running the Server ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # Install dependencies pip install fastapi uvicorn websockets python-dotenv dedalus-labs # Run the server python chat_server.py ``` Then open [http://localhost:8000](http://localhost:8000) in your browser. 
## Production Considerations | Concern | Solution | | --------------- | ------------------------------------------------ | | Session storage | Replace in-memory dict with Redis or PostgreSQL | | Authentication | Add JWT/OAuth middleware to WebSocket handshake | | Rate limiting | Implement per-user request throttling | | Error handling | Add retry logic and graceful degradation | | Scaling | Use Redis pub/sub for multi-instance deployments | [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # Multi-turn Chat Source: https://docs.dedaluslabs.ai/sdk/cookbook/multi-turn-chat Build conversational agents with persistent context Build conversational agents that remember context across messages. This pattern maintains conversation history in memory, enabling chatbots, assistants, and any multi-turn interaction. ## How It Works The Dedalus SDK's `runner.run()` accepts a `messages` array. By appending user messages and updating with `result.to_input_list()` after each turn, you get persistent conversations: 1. **Append** the new user message to history 2. **Run** the model with the full history 3. 
**Update** history using `result.to_input_list()` ## Multi-turn Chat ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner async def main(): client = AsyncDedalus() runner = DedalusRunner(client) messages: list[dict] = [] while True: user_input = input("You: ").strip() if not user_input: break messages.append({"role": "user", "content": user_input}) result = await runner.run( model="openai/gpt-4o", messages=messages, ) messages = result.to_input_list() print(f"Assistant: {result.final_output}\n") asyncio.run(main()) ``` ## Key Concepts ### Message Format The Dedalus SDK uses the OpenAI message format: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi! How can I help?"}, {"role": "user", "content": "What did I just say?"}, ] ``` ### Persistence with `to_input_list()` After each `runner.run()`, call `result.to_input_list()` to get the complete conversation history including tool calls and assistant responses. This preserves the full context for the next turn. 
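History grows without bound in this loop, and long conversations will eventually exceed the model's context window. A common mitigation is a sliding window over recent turns — a sketch (the helper is illustrative, not part of the SDK):

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
def trim_history(messages: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep any system messages plus only the most recent turns."""
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-max_messages:]

history = [{"role": "system", "content": "Be terse."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(30)]
trimmed = trim_history(history, max_messages=10)
print(len(trimmed))  # 11: the system message plus the 10 newest turns
```

Call it on `messages` before each `runner.run(...)`. Note that naive truncation can separate an assistant tool call from its tool result, so in production trim on turn boundaries.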
## Persisting to Disk For conversations that survive restarts, save to JSON: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import json from pathlib import Path from dedalus_labs import AsyncDedalus, DedalusRunner HISTORY_FILE = Path("chat_history.json") def load_messages() -> list[dict]: if HISTORY_FILE.exists(): return json.loads(HISTORY_FILE.read_text()) return [] def save_messages(messages: list[dict]): HISTORY_FILE.write_text(json.dumps(messages, indent=2)) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) messages = load_messages() while True: user_input = input("You: ").strip() if not user_input: break messages.append({"role": "user", "content": user_input}) result = await runner.run( model="openai/gpt-4o", messages=messages, ) messages = result.to_input_list() save_messages(messages) print(f"Assistant: {result.final_output}\n") asyncio.run(main()) ``` ## Storage Options | Storage | Use Case | | ---------- | ------------------------------------- | | In-memory | Single session, no persistence needed | | JSON file | Local development, single user | | SQLite | Local apps, moderate scale | | Redis | High-performance, distributed | | PostgreSQL | Production, with JSONB columns | [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # useChat React Hook Source: https://docs.dedaluslabs.ai/sdk/cookbook/react-frontend Use the dedalus-react hook with a Python backend for streaming and client-side tool execution Use the [`dedalus-react`](https://www.npmjs.com/package/dedalus-react) `useChat` hook with a Python backend. This pattern enables real-time streaming, client-side tool execution, and model selection. The `dedalus-react` package was created by [Colby Gilbert](https://www.npmjs.com/~colbygilbert95). See the [npm package](https://www.npmjs.com/package/dedalus-react) for full documentation. 
## Architecture ```mermaid theme={"theme":{"light":"github-light","dark":"github-dark"}} sequenceDiagram participant UI as React UI (useChat) participant API as FastAPI (/api/chat) participant R as DedalusRunner UI->>API: POST /api/chat (messages + model) API->>R: runner.run(stream=True) loop SSE stream R-->>API: StreamChunk delta API-->>UI: data: StreamChunk JSON end API-->>UI: data: [DONE] ``` The Dedalus Python SDK streams OpenAI-compatible chunks. The React hook consumes them via Server-Sent Events (SSE). ## Setup ### Install Dependencies ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # Backend pip install fastapi uvicorn dedalus-labs python-dotenv # Frontend pnpm add dedalus-react dedalus-labs react ``` ## Python Backend (FastAPI) Create a streaming endpoint that wraps `DedalusRunner` output as SSE: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} # server.py import json from fastapi import FastAPI, Request from fastapi.responses import StreamingResponse from fastapi.middleware.cors import CORSMiddleware from dotenv import load_dotenv from dedalus_labs import AsyncDedalus from dedalus_labs.lib.runner import DedalusRunner load_dotenv() app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=["http://localhost:3000"], allow_methods=["POST"], allow_headers=["*"], ) client = AsyncDedalus() runner = DedalusRunner(client) @app.post("/api/chat") async def chat(request: Request): body = await request.json() messages = body.get("messages", []) model = body.get("model", "openai/gpt-5.2") stream = runner.run( messages=messages, model=model, stream=True, ) async def generate(): async for chunk in stream: yield f"data: {chunk.model_dump_json()}\n\n" yield "data: [DONE]\n\n" return StreamingResponse( generate(), media_type="text/event-stream", headers={ "Cache-Control": "no-cache", "Connection": "keep-alive", }, ) if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) ``` ## React 
Frontend Use the `useChat` hook to manage messages and streaming: ```tsx theme={"theme":{"light":"github-light","dark":"github-dark"}} // App.tsx import { useChat } from "dedalus-react"; import { useState } from "react"; function Chat() { const [input, setInput] = useState(""); const { messages, sendMessage, status, stop } = useChat({ transport: { api: "http://localhost:8000/api/chat" }, }); const handleSubmit = (e: React.FormEvent) => { e.preventDefault(); if (!input.trim()) return; sendMessage(input); setInput(""); }; return (
    <div>
      <div>
        {messages.map((msg, i) => (
          <div key={i}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={status === "streaming"}
        />
        <button type="submit">Send</button>
        {status === "streaming" && (
          <button type="button" onClick={stop}>
            Stop
          </button>
        )}
      </form>
    </div>
); } export default Chat; ``` ## Client-Side Tool Execution The `useChat` hook supports executing tools on the client via `onToolCall` and `addToolResult`: ```tsx theme={"theme":{"light":"github-light","dark":"github-dark"}} import { useChat } from "dedalus-react"; function ChatWithTools() { const { messages, sendMessage, addToolResult } = useChat({ transport: { api: "/api/chat" }, // Called when model requests a tool onToolCall: async ({ toolCall }) => { if (toolCall.function.name === "get_user_location") { // Execute client-side (e.g., browser geolocation) const position = await new Promise((resolve) => navigator.geolocation.getCurrentPosition(resolve), ); addToolResult({ toolCallId: toolCall.id, result: { lat: position.coords.latitude, lng: position.coords.longitude, }, }); } }, // Auto-continue after tool results sendAutomaticallyWhen: ({ messages }) => { const last = messages[messages.length - 1]; return last?.role === "assistant" && last.tool_calls?.length > 0; }, }); // ... rest of component } ``` ### How It Works 1. **Model requests tool** - Backend streams `tool_calls` in the response 2. **Hook invokes callback** - `onToolCall` fires for each tool call 3. **Client executes** - Your code runs the tool (API call, browser API, user prompt, etc.) 4. **Result sent back** - `addToolResult` adds a `tool` message to history 5. **Auto-continue** - If `sendAutomaticallyWhen` returns true, another request is made with the tool result The Python backend doesn't need any special handling—it just receives messages including `role: "tool"` entries and continues the conversation. 
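Concretely, after one client-side tool round-trip, the history the backend receives follows the standard OpenAI tool-calling message shape (ids and values below are illustrative):

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
history = [
    {"role": "user", "content": "Where am I?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",  # illustrative id
                "type": "function",
                "function": {"name": "get_user_location", "arguments": "{}"},
            }
        ],
    },
    # Added by addToolResult on the client:
    {"role": "tool", "tool_call_id": "call_1", "content": '{"lat": 37.77, "lng": -122.42}'},
]
print([m["role"] for m in history])  # ['user', 'assistant', 'tool']
```

The `tool` message's `tool_call_id` must match the `id` of the assistant's tool call, which is what lets the model pair results with requests on the next turn.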
## Model Selection Pass additional data via the transport body: ```tsx theme={"theme":{"light":"github-light","dark":"github-dark"}} const [model, setModel] = useState("openai/gpt-5.2"); const { messages, sendMessage } = useChat({ transport: { api: "/api/chat", body: { model }, // Merged into every request }, }); ``` Update the backend to read it: ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} @app.post("/api/chat") async def chat(request: Request): body = await request.json() messages = body.get("messages", []) model = body.get("model", "openai/gpt-5.2") # Read from body stream = runner.run(messages=messages, model=model, stream=True) # ... ``` ## Running the Example ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} # Terminal 1: Start backend python server.py # Terminal 2: Start frontend cd frontend && pnpm dev ``` ## Production Considerations | Concern | Solution | | -------------- | ------------------------------------------------- | | CORS | Configure allowed origins for your domain | | Authentication | Add JWT/session middleware, pass token in headers | | Rate limiting | Implement per-user throttling | | Error handling | Wrap stream in try/catch, surface errors to UI | [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # FAQ Source: https://docs.dedaluslabs.ai/sdk/guides/faq Frequently Asked Questions * We make it easy to build complex AI agents with just 5 (or so) lines of code. * Agents built with our Dedalus SDK can connect to any MCP server on our marketplace, switch between any model provider, and even execute locally-defined tools. * Don’t yet see an MCP you want to use on our marketplace? Upload any MCP server and we’ll host it for free. Log into your [dashboard](https://www.dedaluslabs.ai/dashboard/api-keys) to get your API key. Yes! However, you don't need to. 
With a `DEDALUS_API_KEY` in your environment, we take care of routing to any provider or model for you, including handoffs between models from different providers. For an example, see our [handoffs](/sdk/handoffs) page. Our SDK is currently available for Python and TypeScript. We recommend writing your MCPs in Python with the Dedalus framework for speed and security. We also accept MCP servers written in other Python and TypeScript frameworks with HTTP transport. For best practices in writing MCP servers, see our [server guidelines](/sdk/guides/server-guidelines). Yes. On the Dedalus marketplace/runner, authentication is handled via **DAuth** (our managed OAuth 2.1 flow) so agents can securely connect to protected MCP servers and external APIs without hardcoding secrets. See [authorization](/dmcp/authorization) and the client auth guides ([bearer auth](/dmcp/client/bearer-auth), [OAuth](/dmcp/client/oauth)). Send us an email at [support@dedaluslabs.ai](mailto:support@dedaluslabs.ai) or send a message in our [Discord](https://discord.gg/RuDhZKnq5R). [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # Model Providers Source: https://docs.dedaluslabs.ai/sdk/guides/providers Mix and match models from supported providers. `OPENAI_API_KEY` `ANTHROPIC_API_KEY` `GOOGLE_API_KEY` `FIREWORKS_API_KEY` `XAI_API_KEY` `PERPLEXITY_API_KEY` `DEEPSEEK_API_KEY` `GROQ_API_KEY` `COHERE_API_KEY` `CEREBRAS_API_KEY` `MISTRAL_API_KEY` `MOONSHOT_API_KEY` ## Model Recommendations by Use Case Choosing the right model depends on your specific requirements.
Here's a guide to help you select the best provider and model for your needs: ### Tool Calling & Function Use **Best for:** Building agents and applications that need to call external tools or functions * `anthropic/claude-opus-4-5` - Excellent tool calling reliability with structured outputs * `anthropic/claude-sonnet-4-5-20250929` - Strong tool use with fast performance * `openai/gpt-5` - Native function calling support with structured responses * `openai/gpt-4o` - Reliable tool calling for production applications * `deepseek/deepseek-chat` - Advanced tool use with multi-step reasoning ### Coding & Development **Best for:** Code generation, debugging, and technical implementations * `deepseek/deepseek-coder` - Purpose-built for coding tasks * `openai/gpt-5-codex` - Specialized for code generation and completion * `anthropic/claude-opus-4-5` - Strong code understanding and generation * `anthropic/claude-sonnet-4-5-20250929` - Excellent coding with faster responses * `xai/grok-code-fast-1` - Fast code-focused model ### Reasoning & Complex Problem Solving **Best for:** Mathematical reasoning, logical analysis, and complex decision-making * `anthropic/claude-opus-4-5` - Advanced reasoning capabilities * `openai/o3` - Deep reasoning for complex problems * `openai/o1` - Strong multi-step reasoning * `deepseek/deepseek-reasoner` - Specialized reasoning model * `xai/grok-4-fast-reasoning` - Optimized for reasoning tasks ### Speed & Efficiency **Best for:** High-throughput applications requiring fast responses * `anthropic/claude-haiku-4-5-20251001` - Fast performance at lower cost * `google/gemini-2.5-flash` - Optimized for throughput and low latency * `openai/gpt-5-mini` - Lightweight, fast model * `openai/gpt-5-nano` - Ultra-fast for simple tasks * `xai/grok-4-fast-non-reasoning` - Quick responses without extended reasoning ### Long Context Tasks **Best for:** Processing large documents, codebases, or extended conversations * `google/gemini-2.5-pro` - Up to 1M+ token 
context window * `google/gemini-2.0-flash` - Large context with fast performance * `anthropic/claude-opus-4-5` - Extended context for complex analysis * `anthropic/claude-sonnet-4-5-20250929` - Strong long-context capabilities * `openai/gpt-4-32k` - Extended 32K context window ### Vision & Multimodal **Best for:** Image understanding, document analysis, and visual tasks * `openai/gpt-4o` - Strong vision capabilities with chat * `anthropic/claude-opus-4-5` - Advanced multimodal understanding * `anthropic/claude-sonnet-4-5-20250929` - Multimodal with fast performance * `google/gemini-2.5-pro` - Advanced vision and multimodal processing * `xai/grok-2-vision-1212` - Multimodal understanding Many providers offer multiple model tiers (e.g., mini, standard, pro, opus) that balance cost, speed, and capability. Start with smaller models for testing and scale up based on your performance requirements. ## Supported Models **Programmatic Discovery:** Use [`GET /v1/models`](/api/list-models) to list hundreds of models with capabilities (vision, tools, thinking, streaming) and routing metadata. Perfect for building model selectors or auto-populating dropdowns in tools like n8n.
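To use that endpoint from Python, filter on each model's `id`. The live request is sketched in the comments; the response is assumed here to follow the OpenAI-style `{"data": [{"id": ...}, ...]}` shape — check the [API reference](/api/list-models) for the exact schema:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
def ids_by_provider(payload: dict, provider: str) -> list[str]:
    """Filter model ids like 'openai/gpt-5.2' down to a single provider."""
    return [
        m["id"]
        for m in payload.get("data", [])
        if m.get("id", "").startswith(provider + "/")
    ]

# Live call (requires an API key), e.g. with httpx:
#   resp = httpx.get(
#       "https://api.dedaluslabs.ai/v1/models",
#       headers={"Authorization": f"Bearer {api_key}"},
#   )
#   print(ids_by_provider(resp.json(), "anthropic"))

sample = {"data": [{"id": "openai/gpt-5.2"}, {"id": "anthropic/claude-opus-4-5"}]}
print(ids_by_provider(sample, "anthropic"))  # ['anthropic/claude-opus-4-5']
```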
### OpenAI #### Chat Models * `openai/gpt-5.4` * `openai/gpt-5.2` * `openai/gpt-5.1` * `openai/gpt-5` * `openai/gpt-5-mini` * `openai/gpt-5-nano` * `openai/gpt-5-chat-latest` * `openai/gpt-4.1` * `openai/gpt-4.1-mini` * `openai/gpt-4.1-nano` * `openai/gpt-4o` * `openai/gpt-4o-2024-05-13` * `openai/gpt-4o-search-preview` * `openai/gpt-4o-mini-search-preview` * `openai/chatgpt-4o-latest` * `openai/gpt-4-turbo` * `openai/gpt-4-turbo-2024-04-09` * `openai/gpt-4` * `openai/gpt-4-0125-preview` * `openai/gpt-4-1106-preview` * `openai/gpt-4-0613` * `openai/gpt-3.5-turbo` * `openai/gpt-3.5-turbo-0125` * `openai/gpt-3.5-turbo-1106` #### Reasoning Models * `openai/o1` * `openai/o3` * `openai/o3-mini` * `openai/o4-mini` #### Image Generation * `openai/dall-e-3` #### Audio Transcription * `openai/whisper-1` #### Embedding Models | Model | Price | | ------------------------------- | ------------------ | | `openai/text-embedding-3-large` | \$0.13 / 1M tokens | | `openai/text-embedding-3-small` | \$0.02 / 1M tokens | | `openai/text-embedding-ada-002` | \$0.10 / 1M tokens | ### Anthropic (Claude) #### Claude 4.6 Series * `anthropic/claude-opus-4-6` #### Claude 4.5 Series * `anthropic/claude-opus-4-5` * `anthropic/claude-haiku-4-5-20251001` * `anthropic/claude-sonnet-4-5-20250929` #### Claude 4 Series * `anthropic/claude-opus-4-1-20250805` * `anthropic/claude-opus-4-20250514` #### Claude 3.7 Series * `anthropic/claude-3-7-sonnet-20250219` #### Claude 3.5 Series * `anthropic/claude-3-5-haiku-20241022` #### Claude 3 Series * `anthropic/claude-3-haiku-20240307` ### Google (Gemini) #### Gemini 3 Series * `google/gemini-3.1-pro-preview` * `google/gemini-3-flash-preview` #### Gemini 2.5 Series * `google/gemini-2.5-pro` * `google/gemini-2.5-flash` * `google/gemini-2.5-flash-lite` #### Gemini 2.0 Series * `google/gemini-2.0-flash` * `google/gemini-2.0-flash-exp` * `google/gemini-2.0-flash-001` * `google/gemini-2.0-flash-lite` #### Embedding
Models * `google/text-embedding-004` ### xAI (Grok) #### Grok 4 Series * `xai/grok-4-1-fast-reasoning` * `xai/grok-4-1-fast-non-reasoning` * `xai/grok-4-fast-reasoning` * `xai/grok-4-fast-non-reasoning` * `xai/grok-code-fast-1` * `xai/grok-4-0709` #### Grok 3 Series * `xai/grok-3` * `xai/grok-3-mini` #### Grok 2 Series * `xai/grok-2-vision-1212` ### DeepSeek * `deepseek/deepseek-chat` * `deepseek/deepseek-reasoner` * `deepseek/deepseek-coder` ### Mistral * `mistral/mistral-large-latest` * `mistral/mistral-medium-latest` * `mistral/mistral-small-latest` * `mistral/codestral-2508` * `mistral/open-mistral-nemo-2407` * `mistral/pixtral-12b` ### Groq Lightning-fast inference for open source models. * `groq/llama-3.1-8b-instant` * `groq/llama-3.3-70b-versatile` * `groq/openai/gpt-oss-120b` * `groq/openai/gpt-oss-20b` * `groq/whisper-large-v3` * `groq/whisper-large-v3-turbo` ### Cerebras Ultra-fast inference on custom silicon. #### Production Models * `cerebras/llama3.1-8b` * `cerebras/llama-3.3-70b` * `cerebras/gpt-oss-120b` * `cerebras/qwen-3-32b` #### Preview Models * `cerebras/qwen-3-235b-a22b-instruct-2507` * `cerebras/zai-glm-4.7` ### Moonshot (Kimi) Advanced reasoning and extended context from Moonshot AI. * `moonshot/kimi-k2.5` * `moonshot/kimi-k2-0905-preview` * `moonshot/kimi-k2-0711-preview` * `moonshot/kimi-k2-turbo-preview` * `moonshot/kimi-k2-thinking` * `moonshot/kimi-k2-thinking-turbo` # MCP server guidelines Source: https://docs.dedaluslabs.ai/sdk/guides/server-guidelines Best practices for building MCP servers that work well with the Dedalus SDK This guide covers practical best practices for building MCP servers that work reliably with the Dedalus SDK. 
If you’re new to MCP, start with the MCP docs first: * **Build a server**: [MCP server overview](/dmcp/server/overview) * **Deploy**: [Deploy an MCP server](/dmcp/deploy) * **Test/debug**: [Testing](/dmcp/testing), [Debugging](/dmcp/debugging) ## Guidelines (high-level) * **Keep tools small and deterministic**: One tool should do one job, with clear input/output. * **Use strict schemas**: Prefer explicit parameter types and avoid “any”-shaped payloads. * **Return stable data**: Avoid embedding large prose in tool responses—return structured fields whenever possible. * **Be stateless when possible**: It simplifies scaling and avoids surprising cross-user behavior. * **Handle errors explicitly**: Return actionable error messages; avoid silent failures. ## Next steps * **Connect from the Dedalus SDK**: [MCP Servers](/sdk/mcp) * **Combine with local tools**: [Tools](/sdk/tools) # Handoffs Source: https://docs.dedaluslabs.ai/sdk/handoffs Route tasks to different models based on their strengths Different models excel at different tasks. GPT handles reasoning and tool use well. Claude writes better prose. Specialized models exist for code, math, and domain-specific work. Handoffs let agents route subtasks to the right model. If you’ve already built an MCP + tools workflow, handoffs let you keep a fast “coordinator” model most of the time and route to stronger models only when needed. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input=( "Find me the nearest basketball games in January in San Francisco, then write a concise plan for attending." 
), model=["openai/gpt-5.2", "anthropic/claude-opus-4-5"], mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { DedalusRunner } from 'dedalus-labs'; const client = new Dedalus(); const runner = new DedalusRunner(client); async function main() { const result = await runner.run({ input: 'Find me the nearest basketball games in January in San Francisco, then write a concise plan for attending.', model: ['openai/gpt-5.2', 'anthropic/claude-opus-4-5'], mcpServers: ['windsor/ticketmaster-mcp'], // Discover events via Ticketmaster }); console.log(result.finalOutput); } main(); ``` ## When to Use Handoffs Handoffs shine when a task has distinct phases requiring different capabilities: * **Research → Writing**: GPT gathers information, Claude writes the final piece * **Analysis → Code**: A reasoning model plans the approach, a code model implements it * **Triage → Specialist**: A general model routes to domain-specific models For simple tasks where one model handles everything, stick to a single model. ## Model Strengths A rough guide to model selection: | Task | Good Models | | ----------------------- | ------------------------------------------------- | | Tool calling, reasoning | `openai/gpt-5.2`, `xai/grok-4-1-fast-reasoning` | | Writing, creative work | `anthropic/claude-opus-4-5` | | Code generation | `anthropic/claude-opus-4-5`, `openai/gpt-5-codex` | | Fast, cheap responses | `openai/gpt-5-mini` | ## Next steps * **Add multimodality**: [Images & Vision](/sdk/images) — Add image generation/vision to your workflow * **See workflows**: [Use Cases](/sdk/use-cases/data-analyst) — Multi-capability patterns [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers.
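One way to operationalize this table is a small routing helper that always starts with a cheap coordinator and appends the task's specialist. The mapping below is a sketch based on the table above, not an SDK feature:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
SPECIALIST_BY_TASK = {
    "writing": "anthropic/claude-opus-4-5",
    "code": "openai/gpt-5-codex",
    "reasoning": "openai/gpt-5.2",
}

def handoff_chain(task: str) -> list[str]:
    """Cheap coordinator first; a specialist appended only when the task needs one."""
    chain = ["openai/gpt-5-mini"]
    specialist = SPECIALIST_BY_TASK.get(task)
    if specialist and specialist not in chain:
        chain.append(specialist)
    return chain

print(handoff_chain("writing"))  # ['openai/gpt-5-mini', 'anthropic/claude-opus-4-5']
```

Pass the result as `model=handoff_chain("writing")` to `runner.run(...)`; unknown tasks fall back to the coordinator alone.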
# Images & Vision Source: https://docs.dedaluslabs.ai/sdk/images Generate, edit, and analyze images Generate images with DALL-E, create variations, apply edits, and analyze images with vision models. All through the same unified client. For image generation, use `openai/dall-e-3` for best quality. For vision tasks, `openai/gpt-5.2` provides excellent performance. ## Progressive example: add images to your workflow If you’ve already built a text-based agent (Chat → Tools → MCP → Streaming), images are usually the next capability you add: 1. **Generate** an image from a prompt 2. **Edit / vary** an existing image 3. **Analyze** an image with a vision model The sections below start with the simplest call (generation), then layer on editing and vision. ## Image Generation Generate images from text prompts using DALL-E models. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv load_dotenv() async def generate_image(): """Generate image from text.""" client = AsyncDedalus() response = await client.images.generate( prompt="Dedalus flying through clouds", model="openai/dall-e-3", ) print(response.data[0].url) if __name__ == "__main__": asyncio.run(generate_image()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import * as dotenv from "dotenv"; dotenv.config(); async function generateImage() { const client = new Dedalus(); const response = await client.images.generate({ prompt: "Dedalus flying through clouds", model: "openai/dall-e-3", }); console.log(response.data[0].url); } generateImage(); ``` ## Image Editing Edit existing images by providing a source image, mask, and prompt describing desired changes. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import httpx from dedalus_labs import AsyncDedalus from dotenv import load_dotenv load_dotenv() async def edit_image(): """Edit image (using generated image as both source and mask).""" client = AsyncDedalus() # Generate a test image (DALL·E output is valid RGBA PNG) gen_response = await client.images.generate( prompt="A white cat on a cushion", model="openai/dall-e-2", size="512x512", ) # Download generated image async with httpx.AsyncClient() as http: img_data = await http.get(gen_response.data[0].url) img_bytes = img_data.content # Use same image as both source and mask (just testing endpoint works) response = await client.images.edit( image=img_bytes, mask=img_bytes, prompt="A white cat with sunglasses", model="openai/dall-e-2", ) print(response.data[0].url) if __name__ == "__main__": asyncio.run(edit_image()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus, { toFile } from "dedalus-labs"; import * as dotenv from "dotenv"; dotenv.config(); async function editImage() { const client = new Dedalus(); // Generate a test image (DALL·E output is valid RGBA PNG) const genResponse = await client.images.generate({ prompt: "A white cat on a cushion", model: "openai/dall-e-2", size: "512x512", }); // Download generated image const imageUrl = genResponse.data[0].url; if (!imageUrl) throw new Error("No image URL returned"); const imageResponse = await fetch(imageUrl); const imgBytes = Buffer.from(await imageResponse.arrayBuffer()); // Use same image as both source and mask (just testing endpoint works) const response = await client.images.edit({ image: await toFile(imgBytes, "source.png"), mask: await toFile(imgBytes, "mask.png"), prompt: "A white cat with sunglasses", model: "openai/dall-e-2", }); console.log(response.data[0].url); } editImage(); ``` ## Image Variations Create variations of an existing image. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from pathlib import Path from dedalus_labs import AsyncDedalus from dotenv import load_dotenv load_dotenv() async def create_variations(): """Create image variations.""" client = AsyncDedalus() image_path = Path("image.png") if not image_path.exists(): print("Skipped: image.png not found") return response = await client.images.create_variation( image=image_path.read_bytes(), model="openai/dall-e-2", n=2, ) for img in response.data: print(img.url) if __name__ == "__main__": asyncio.run(create_variations()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus, { toFile } from "dedalus-labs"; import * as fs from "fs"; import * as path from "path"; import * as dotenv from "dotenv"; dotenv.config(); async function createVariations() { const client = new Dedalus(); const imagePath = path.join(process.cwd(), "image.png"); if (!fs.existsSync(imagePath)) { console.log("Skipped: image.png not found"); return; } const response = await client.images.createVariation({ image: await toFile(fs.readFileSync(imagePath), "image.png"), model: "openai/dall-e-2", n: 2, }); for (const img of response.data) { console.log(img.url); } } createVariations(); ``` ## Vision: Analyze Images from URL Use vision models to analyze and describe images from URLs. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv load_dotenv() async def vision_url(): """Analyze image from URL.""" client = AsyncDedalus() completion = await client.chat.completions.create( model="openai/gpt-5.2", messages=[ { "role": "user", "content": [ {"type": "text", "text": "What's in this image?"}, { "type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"}, }, ], } ], ) print(completion.choices[0].message.content) if __name__ == "__main__": asyncio.run(vision_url()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import * as dotenv from "dotenv"; dotenv.config(); async function visionUrl() { const client = new Dedalus(); const completion = await client.chat.completions.create({ model: "openai/gpt-5.2", messages: [ { role: "user", content: [ { type: "text", text: "What's in this image?" }, { type: "image_url", image_url: { url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", }, }, ], }, ], }); console.log(completion.choices[0].message.content); } visionUrl(); ``` ## Vision: Analyze Local Images with Base64 Analyze local images by encoding them as base64. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import base64 from pathlib import Path from dedalus_labs import AsyncDedalus from dotenv import load_dotenv load_dotenv() async def vision_base64(): """Analyze local image via base64.""" client = AsyncDedalus() image_path = Path("image.png") if not image_path.exists(): print("Skipped: image.png not found") return b64 = base64.b64encode(image_path.read_bytes()).decode() completion = await client.chat.completions.create( model="openai/gpt-5.2", messages=[ { "role": "user", "content": [ {"type": "text", "text": "Describe this image."}, {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}, ], } ], ) print(completion.choices[0].message.content) if __name__ == "__main__": asyncio.run(vision_base64()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import * as fs from "fs"; import * as path from "path"; import * as dotenv from "dotenv"; dotenv.config(); async function visionBase64() { const client = new Dedalus(); const imagePath = path.join(process.cwd(), "image.png"); if (!fs.existsSync(imagePath)) { console.log("Skipped: image.png not found"); return; } const b64 = fs.readFileSync(imagePath).toString("base64"); const completion = await client.chat.completions.create({ model: "openai/gpt-5.2", messages: [ { role: "user", content: [ { type: "text", text: "Describe this image." }, { type: "image_url", image_url: { url: `data:image/jpeg;base64,${b64}` } }, ], }, ], }); console.log(completion.choices[0].message.content); } visionBase64(); ``` ## Advanced: Image Orchestration with DedalusRunner Create complex image workflows by combining generation, editing, and vision capabilities using DedalusRunner. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio import httpx from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() class ImageToolSuite: """Helper that exposes image endpoints as DedalusRunner tools.""" def __init__(self, client: AsyncDedalus): self._client = client async def generate_concept_art( self, prompt: str, model: str = "openai/dall-e-3", size: str = "1024x1024", ) -> str: """Create concept art and return the hosted image URL.""" response = await self._client.images.generate( prompt=prompt, model=model, size=size, ) return response.data[0].url async def edit_concept_art( self, prompt: str, reference_url: str, mask_url: str | None = None, model: str = "openai/dall-e-2", ) -> str: """Apply edits to the referenced image URL and return a new URL.""" if not reference_url: raise ValueError("reference_url must be provided when editing an image.") async with httpx.AsyncClient() as http: base_image = await http.get(reference_url) mask_bytes = await http.get(mask_url) if mask_url else None edit_kwargs = { "image": base_image.content, "prompt": prompt, "model": model, } if mask_bytes: edit_kwargs["mask"] = mask_bytes.content response = await self._client.images.edit(**edit_kwargs) return response.data[0].url async def describe_image( self, image_url: str, question: str = "Describe this image.", model: str = "openai/gpt-5.2", ) -> str: """Run a lightweight vision pass against an existing image URL.""" completion = await self._client.chat.completions.create( model=model, messages=[ { "role": "user", "content": [ {"type": "text", "text": question}, {"type": "image_url", "image_url": {"url": image_url}}, ], } ], ) return completion.choices[0].message.content async def runner_storyboard(): """Demonstrate DedalusRunner + agent-as-tool pattern for image workflows.""" client = AsyncDedalus() runner = DedalusRunner(client, verbose=True) image_tools = ImageToolSuite(client) instructions = 
( "You are a creative director. Use the provided tools to generate concept art, " "optionally refine it, and then describe the final render. Always keep the " "main conversation on a text model and rely on the tools for image work." ) result = await runner.run( instructions=instructions, input="Create a retro Dedalus mission patch, refine it with a neon palette, and describe it.", model="openai/gpt-5.2", tools=[ image_tools.generate_concept_art, image_tools.edit_concept_art, image_tools.describe_image, ], max_steps=4, verbose=True, debug=False, ) print("Runner final output:", result.final_output) print("Tools invoked:", result.tools_called) if __name__ == "__main__": asyncio.run(runner_storyboard()) ``` ## Next steps * **See end-to-end agents**: [Use Cases](/sdk/use-cases/data-analyst) — Multimodal patterns * **Deploy your own MCP server**: [MCP quickstart](/dmcp/quickstart) — Host your own tools for your agent * **Build a chat server**: [Cookbook: Chat server](/sdk/cookbook/chat-server) — Serve your agent in production # MCP Servers Source: https://docs.dedaluslabs.ai/sdk/mcp Connect any model to any MCP server The Dedalus SDK is a full MCP client. Connect your agents to any server that implements the [Model Context Protocol](https://modelcontextprotocol.io), hosted by you, us, or anyone else. Local tools handle your custom logic; MCP servers add hosted capabilities (search, databases, SaaS APIs, etc.). 
## Connect MCP server in one line ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input="What's the weather forecast for San Francisco this week?", model="anthropic/claude-opus-4-5", mcp_servers=["windsornguyen/open-meteo-mcp"], # Weather forecasts via Open-Meteo ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client); async function main() { const result = await runner.run({ input: "What's the weather forecast for San Francisco this week?", model: "anthropic/claude-opus-4-5", mcpServers: ["windsornguyen/open-meteo-mcp"], // Weather forecasts via Open-Meteo }); console.log(result.finalOutput); } main(); ``` The agent discovers the server's tools and uses them when relevant. ## Combine with local tools MCP servers and local tools work together. Pass both to `runner.run()`. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() def as_bullets(items: list[str]) -> str: """Format items as a bulleted list.""" return "\n".join(f"• {item}" for item in items) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input=( "Get the 7-day weather forecast for San Francisco " "and format the daily conditions as bullets using as_bullets." 
), model="anthropic/claude-opus-4-5", mcp_servers=["windsornguyen/open-meteo-mcp"], tools=[as_bullets], ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client); function asBullets(items: string[]): string { return items.map((item) => `• ${item}`).join("\n"); } async function main() { const result = await runner.run({ input: "Get the 7-day weather forecast for San Francisco and format the daily conditions as bullets using asBullets.", model: "anthropic/claude-opus-4-5", mcpServers: ["windsornguyen/open-meteo-mcp"], tools: [asBullets], }); console.log((result as any).finalOutput); } main(); ``` ## External MCP URL You can connect directly to any external MCP server URL (Streamable HTTP). This is useful when: * You’re testing a server without registering it * You’re connecting to a self-hosted MCP deployment * You’re using an MCP server that isn’t in the marketplace ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input="Use your tools to summarize the Dedalus Python SDK repo in 5 bullet points.", model="openai/gpt-5.2", # External MCP URL! 
mcp_servers=["https://mcp.deepwiki.com/mcp"], ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client); async function main() { const result = await runner.run({ input: "Use your tools to summarize the Dedalus Python SDK repo in 5 bullet points.", model: "openai/gpt-5.2", // External MCP URL! mcpServers: ["https://mcp.deepwiki.com/mcp"], }); console.log((result as any).finalOutput); } main(); ``` ## Next steps * **Return typed data**: [Structured Outputs](/sdk/structured-outputs) — Validate and parse JSON into schemas * **Stream the workflow**: [Streaming](/sdk/streaming) — Watch tool use + output in real time * **See examples**: [Use Cases](/sdk/use-cases/web-search-agent) — End-to-end MCP agent patterns # Quickstart Source: https://docs.dedaluslabs.ai/sdk/quickstart Learn how to build, run, and deploy agents with the Dedalus SDK in minutes Dedalus helps you ship agent workflows that are: * **Provider-agnostic**: Use OpenAI, Anthropic, Google, xAI, DeepSeek, and more with one API. * **Tool- and MCP-native**: Let models call local functions and hosted MCP servers. * **Production-ready**: Streaming, structured outputs, routing/handoffs, and runtime policies. ## What are you trying to build? Send a prompt and get a response from any provider/model. Let the model call typed Python/TS functions that you implement. Print responses as they're generated (great for UIs/CLIs). Connect to hosted MCP servers with one line. Validate model output against schemas (Pydantic/Zod). Provide multiple models; the agent can route/handoff by phase. 
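One capability above — letting the model call typed functions — works because a function's type hints and docstring carry enough metadata to derive a tool schema. Here is a stdlib-only sketch of that idea (illustrative only; the SDK's actual extractor is more thorough):

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema type names (small subset).
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a minimal tool schema from a function's hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": {k: {"type": _JSON_TYPES.get(t, "string")} for k, t in hints.items()},
            "required": list(hints),
        },
    }

def get_weather(city: str, days: int) -> str:
    """Get a multi-day forecast for a city."""
    ...

print(tool_schema(get_weather)["parameters"]["properties"])
# {'city': {'type': 'string'}, 'days': {'type': 'integer'}}
```

This is why the quickstart below only asks you to write a typed function with a docstring: the rest is derived.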
## Installation ```bash Python theme={"theme":{"light":"github-light","dark":"github-dark"}} uv pip install dedalus-labs ``` ```bash npm theme={"theme":{"light":"github-light","dark":"github-dark"}} npm install dedalus-labs ``` ```bash yarn theme={"theme":{"light":"github-light","dark":"github-dark"}} yarn add dedalus-labs ``` ```bash pnpm theme={"theme":{"light":"github-light","dark":"github-dark"}} pnpm add dedalus-labs ``` ```bash bun theme={"theme":{"light":"github-light","dark":"github-dark"}} bun add dedalus-labs ``` ## Set Your API Key Get your API key from the [dashboard](https://www.dedaluslabs.ai/dashboard/api-keys) and set it as an environment variable: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} export DEDALUS_API_KEY="your-api-key" ``` Or use a `.env` file: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} DEDALUS_API_KEY=your-api-key ``` ## Your First Request Let's build this incrementally. ### 1) Chat with a model ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) response = await runner.run( input="What are the key factors that influence weather patterns?", model="anthropic/claude-opus-4-6", ) print(response.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ### 2) Add an MCP server Here we connect a well-known MCP server and let the model use it. 
```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) response = await runner.run( input="What's the weather forecast for San Francisco this week?", model="anthropic/claude-opus-4-6", mcp_servers=["windsornguyen/open-meteo-mcp"], # Weather forecasts via Open-Meteo ) print(response.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ### 3) Add a local tool Define a function with type hints and a docstring. Pass it to `runner.run()`. The SDK extracts the schema automatically and handles execution when the model decides to use it. ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() def as_bullets(items: list[str]) -> str: """Format items as a bulleted list.""" return "\n".join(f"• {item}" for item in items) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) response = await runner.run( input=( "Get the 7-day weather forecast for San Francisco " "and format the daily conditions as bullets using as_bullets." 
), model="anthropic/claude-opus-4-6", mcp_servers=["windsornguyen/open-meteo-mcp"], tools=[as_bullets], ) print(response.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ### 4) Stream output ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dedalus_labs.utils.stream import stream_async from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) stream = runner.run( input="Explain how weather forecasting works in one paragraph, streaming as you write.", model="anthropic/claude-opus-4-6", stream=True, ) await stream_async(stream) if __name__ == "__main__": asyncio.run(main()) ``` ## Next steps Start from common agent patterns and templates. End-to-end implementations and working recipes. **Go deeper**: [Tools](/sdk/tools) · [MCP Servers](/sdk/mcp) · [Structured Outputs](/sdk/structured-outputs) · [Streaming](/sdk/streaming) ## Get the latest SDKs dedalus-labs/dedalus-sdk-python dedalus-labs/dedalus-sdk-typescript # Runner Reference Source: https://docs.dedaluslabs.ai/sdk/runner Complete reference for DedalusRunner.run() parameters `DedalusRunner` is the core of the Dedalus SDK. It orchestrates local tools, hosted MCP servers, streaming, and any model from any provider into a single agentic loop. Five lines of code, any agent you want. 
## Quick Example ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import AsyncDedalus, DedalusRunner client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input="What's the weather in Tokyo?", model="anthropic/claude-sonnet-4-20250514", mcp_servers=["windsornguyen/open-meteo-mcp"], max_steps=5, ) print(result.final_output) ``` *** ## Parameters The user's prompt or a list of messages. Use a string for single-turn requests; use a message list for multi-turn conversations. Model(s) to use. Format: `provider/model-name` (e.g., `openai/gpt-4o`, `anthropic/claude-sonnet-4-20250514`). Pass a list for routing or fallback behavior. System prompt that defines the agent's behavior and personality. Existing conversation history. Use with `result.to_input_list()` for multi-turn conversations. Local Python/TS functions the model can call. Schema extracted automatically from type hints and docstrings. See [Tools](/sdk/tools). Hosted MCP servers to connect. Format: `["owner/server-name"]`. See [MCP](/sdk/mcp). Credentials for MCP server authentication. Control tool usage: * `"auto"` — Model decides (default) * `"none"` — Disable tools * `"required"` — Force tool use * `{"type": "function", "function": {"name": "..."}}` — Force specific tool Sampling temperature (0–2). Higher values increase randomness. Default varies by model. Maximum tokens in the response. Nucleus sampling threshold (0–1). Alternative to temperature. Penalize repeated tokens based on frequency (-2.0 to 2.0). Penalize tokens that have appeared at all (-2.0 to 2.0). Adjust likelihood of specific tokens. Maps token IDs to bias values (-100 to 100). Enforce structured output. Pass a Pydantic model or JSON schema. See [Structured Outputs](/sdk/structured-outputs). Return an async iterator for streaming responses. See [Streaming](/sdk/streaming). Include model's intent analysis in result. Maximum agentic loop iterations. 
The loop runs until the model stops calling tools or hits this limit. Transport protocol: `"http"` or `"realtime"`. Runtime policies for dynamic model selection or behavior modification. Configuration for agent-to-agent handoffs. See [Handoffs](/sdk/handoffs). Attributes for agent routing and selection. Maps attribute names to float values. Per-model attribute overrides. Maps model names to attribute dictionaries. Restrict which models the agent can use. Enforce strict model validation. Input/output guardrail configurations. Enable verbose logging. Enable debug mode with detailed traces and conversation snapshots. Callback fired when tools are called. Receives tool call details as a dictionary. *** ## Return Value Response object returned by `runner.run()`. The final text response from the agent. Results from local tool executions. Each contains `name`, `result`, `step`, and optionally `error`. Results from MCP server tool calls. Names of tools that were invoked during the run. Number of agentic loop iterations used. Full conversation history including tool calls. Useful for debugging or continuing conversations. Model's intent analysis (only present if `return_intent=true`). Alias for `final_output`. Alias for `final_output`. Returns a copy of the conversation history for use in follow-up runs. Enables multi-turn conversations. 
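The agentic-loop behavior described above — keep looping while the model calls tools, stop on a final answer or at the step limit — reduces to a simple control structure. An illustrative sketch (not the SDK internals):

```python
from typing import Callable

def agent_loop(model_step: Callable[[list], dict], max_steps: int) -> tuple[str, int]:
    """Run a simplified agentic loop.

    `model_step` takes the message history and returns either
    {"tool_call": name, "result": ...} or {"final": text}.
    """
    messages: list = []
    for step in range(1, max_steps + 1):
        reply = model_step(messages)
        if "final" in reply:
            return reply["final"], step  # model stopped calling tools
        # Record the tool call and its result, then loop again.
        messages.append({"tool": reply["tool_call"], "result": reply["result"]})
    return "max_steps reached", max_steps

# A scripted stand-in for the model: two tool calls, then a final answer.
script = [
    {"tool_call": "search", "result": "forecast data"},
    {"tool_call": "format", "result": "• sunny"},
    {"final": "It will be sunny."},
]
print(agent_loop(lambda msgs: script[len(msgs)], max_steps=5))
# ('It will be sunny.', 3)
```

This is why `steps_used` in the return value can be smaller than `max_steps`: the loop exits as soon as the model answers without requesting a tool.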
```python Multi-turn Chat theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner async def main(): client = AsyncDedalus() runner = DedalusRunner(client) messages: list[dict] = [] while True: user_input = input("You: ").strip() if not user_input: break messages.append({"role": "user", "content": user_input}) result = await runner.run( model="openai/gpt-4o", messages=messages, ) messages = result.to_input_list() print(f"Assistant: {result.final_output}\n") asyncio.run(main()) ``` ```json Example Response theme={"theme":{"light":"github-light","dark":"github-dark"}} { "final_output": "The weather in Tokyo is currently 18°C with clear skies.", "tool_results": [], "mcp_results": [ { "name": "get_current_weather", "result": {"temperature": 18, "conditions": "clear"}, "server": "windsornguyen/open-meteo-mcp" } ], "tools_called": ["get_current_weather"], "steps_used": 2, "messages": [...] } ``` *** ## Next Steps Define local functions the model can call. Connect to hosted MCP servers. Validate responses against schemas. Stream responses as they generate. # Streaming Source: https://docs.dedaluslabs.ai/sdk/streaming Display responses as they're generated Streaming shows output token-by-token instead of waiting for the complete response. Users see progress immediately, which matters for longer outputs or interactive applications. ## Stream in one line Set `stream=True` so users see progress as the agent works. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dedalus_labs.utils.stream import stream_async from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) stream = runner.run( input="Find me the nearest basketball games in January in San Francisco (stream your work).", model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster stream=True, ) await stream_async(stream) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client, true); async function main() { const result = await runner.run({ input: "Find me the nearest basketball games in January in San Francisco (stream your work).", model: "anthropic/claude-opus-4-5", mcpServers: ["windsor/ticketmaster-mcp"], // Discover events via Ticketmaster stream: true, }); if (Symbol.asyncIterator in result) { for await (const chunk of result) { if (chunk.choices?.[0]?.delta?.content) { process.stdout.write(chunk.choices[0].delta.content); } } } } main(); ``` ## Streaming with Tools Streaming works with tool-calling workflows. You can stream while the agent calls **local tools**, **MCPs**, or both. 
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dedalus_labs.utils.stream import stream_async from dotenv import load_dotenv load_dotenv() def summarize_headlines(headlines: list[str]) -> str: """Format headlines as a short bullet list.""" return "\n".join(f"• {h}" for h in headlines[:3]) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) stream = runner.run( input=( "Search for AI news. Extract 3 headlines. " "Then call summarize_headlines(headlines) and stream your final answer." ), model="openai/gpt-5.2", mcp_servers=["windsor/brave-search-mcp"], # Web search via Brave Search MCP tools=[summarize_headlines], stream=True, ) await stream_async(stream) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; function summarizeHeadlines(headlines: string[]): string { return headlines .slice(0, 3) .map((h) => `• ${h}`) .join("\n"); } const client = new Dedalus(); const runner = new DedalusRunner(client, true); async function main() { const result = await runner.run({ input: "Search for AI news. Extract 3 headlines. Then call summarizeHeadlines(headlines) and stream your final answer.", model: "openai/gpt-5.2", mcpServers: ["windsor/brave-search-mcp"], // Web search via Brave Search MCP tools: [summarizeHeadlines], stream: true, }); if (Symbol.asyncIterator in result) { for await (const chunk of result) { if (chunk.choices?.[0]?.delta?.content) { process.stdout.write(chunk.choices[0].delta.content); } } } } main(); ``` ## Compare: non-streaming vs streaming (same scenario) The scenario below is the same in both snippets. The only difference is whether you set `stream=True` **and iterate over the stream**. In Python, **non-streaming** refers to `stream=False`, not “sync”. 
If you use `AsyncDedalus`, you’ll still write async code and use `asyncio.run(...)`. If you prefer fully synchronous code, use the `Dedalus` client (example below). ### Python ```python Non-streaming (AsyncDedalus) theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input="Find me the nearest basketball games in January in San Francisco.", model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster ) # You only see output after the full run completes. print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```python Streaming (AsyncDedalus) theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dedalus_labs.utils.stream import stream_async from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) stream = runner.run( input="Find me the nearest basketball games in January in San Francisco.", model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster stream=True, ) # You see output as the model generates it. 
await stream_async(stream) if __name__ == "__main__": asyncio.run(main()) ``` ### Python (sync client) ```python Non-streaming (Dedalus) theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import Dedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() def main(): client = Dedalus() runner = DedalusRunner(client) result = runner.run( input="Find me the nearest basketball games in January in San Francisco.", model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster ) print(result.final_output) if __name__ == "__main__": main() ``` ```python Streaming (Dedalus) theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import Dedalus, DedalusRunner from dedalus_labs.utils.stream import stream_sync from dotenv import load_dotenv load_dotenv() def main(): client = Dedalus() runner = DedalusRunner(client) stream = runner.run( input="Find me the nearest basketball games in January in San Francisco.", model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster stream=True, ) stream_sync(stream) if __name__ == "__main__": main() ``` ### TypeScript ```typescript Non-streaming theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new DedalusRunner(client, true); async function main() { const result = await runner.run({ input: "Find me the nearest basketball games in January in San Francisco.", model: "anthropic/claude-opus-4-5", mcpServers: ["windsor/ticketmaster-mcp"], // Discover events via Ticketmaster }); console.log((result as any).finalOutput); } main(); ``` ```typescript Streaming theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { DedalusRunner } from "dedalus-labs"; const client = new Dedalus(); const runner = new 
DedalusRunner(client, true); async function main() { const result = await runner.run({ input: "Find me the nearest basketball games in January in San Francisco.", model: "anthropic/claude-opus-4-5", mcpServers: ["windsor/ticketmaster-mcp"], // Discover events via Ticketmaster stream: true, }); if (Symbol.asyncIterator in result) { for await (const chunk of result) { if (chunk.choices?.[0]?.delta?.content) { process.stdout.write(chunk.choices[0].delta.content); } } } } main(); ``` ## How the user experience differs * **Progressive rendering**: you can display text as it arrives (“typing”), instead of waiting for a complete response. * **Visible work**: in tool/MCP workflows, you can show status updates (e.g., “Searching Ticketmaster…”) while the agent is calling tools. * **Interruptibility**: you can stop early (client-side) if the user already has what they need, instead of paying for a full completion. ## When to Stream Stream when: * Building chat interfaces where perceived latency matters * Generating long-form content (articles, code, analysis) * Running in terminals or logs where progress feedback helps Don’t stream when: * You need to parse the complete response before displaying * You’re using structured outputs with `.parse()` * Response time is already fast enough ## Next steps * **Route across models**: [Handoffs](/sdk/handoffs) — Use fast/strong models by phase * **Add images last**: [Images & Vision](/sdk/images) — Add multimodality when your text workflow is solid * **See patterns**: [Use Cases](/sdk/use-cases/web-search-agent) — More streaming agent examples # Structured Outputs Source: https://docs.dedaluslabs.ai/sdk/structured-outputs Type-safe JSON responses with Pydantic, Zod, or Effect schemas LLMs generate text. Applications need data structures. 
Structured outputs bridge this gap—define a schema (Pydantic in Python, Zod or Effect Schema in TypeScript), and the Dedalus SDK ensures responses conform with full type safety. This is essential for building reliable applications. Instead of parsing free-form text and hoping for the best, you get validated objects that your code can trust. ## Extract typed data Define a schema. Call `.parse()`. Get validated objects. ```python Python (Pydantic) theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": "Return 3 upcoming basketball events near San Francisco as JSON.", }], response_format=EventsResponse, ) parsed: EventsResponse = completion.choices[0].message.parsed print(parsed) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript (Zod) theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { zodResponseFormat } from "dedalus-labs/helpers/zod"; import { z } from "zod"; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const completion = await client.chat.completions.parse({ model: "openai/gpt-5.2", messages: [ { role: "user", content: "Return 3 upcoming basketball events near San Francisco as JSON.", }, ], response_format: zodResponseFormat(EventsResponse, "events_response"), }); console.log(completion.choices[0]?.message.parsed); } main(); ``` ## Advanced This section is a reference you can skim and come back to. 
It’s organized as a progression: 1. **Client `.parse()`** (non-streaming, typed output) 2. **Client `.stream()`** (streaming, typed output) 3. **Runner `response_format`** (typed output inside an agent/tool loop) 4. **Schemas & patterns** (optional fields, nested models, enums/unions) 5. **Structured tool calls** (when you need deterministic tool calling) ## Client API (reference) The client provides three methods for structured outputs: * **`.parse()`** - Non-streaming with type-safe schemas * **`.stream()`** - Streaming with type-safe schemas (context manager) * **`.create()`** - Dict-based schemas only ### TypeScript setup TypeScript schema helpers are optional peer dependencies. Install the validator you want to use: ```bash theme={"theme":{"light":"github-light","dark":"github-dark"}} bun install zod # or bun install effect ``` ### `.parse()` (non-streaming) This is the same pattern as the progressive example above, shown again in a more “API-reference” style. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[ { "role": "user", "content": ( "Return 3 upcoming basketball events near San Francisco as JSON. " "Use ISO dates (YYYY-MM-DD)." 
), } ], response_format=EventsResponse, mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster ) parsed = completion.choices[0].message.parsed print(parsed) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { zodResponseFormat } from 'dedalus-labs/helpers/zod'; import { z } from 'zod'; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const completion = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [ { role: 'user', content: 'Return 3 upcoming basketball events near San Francisco as JSON. Use ISO dates (YYYY-MM-DD).', }, ], response_format: zodResponseFormat(EventsResponse, 'events_response'), mcpServers: ['windsor/ticketmaster-mcp'], // Discover events via Ticketmaster }); console.log(completion.choices[0]?.message.parsed); } main(); ``` ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { effectResponseFormat } from 'dedalus-labs/helpers/effect'; import * as Schema from 'effect/Schema'; const client = new Dedalus(); const Event = Schema.Struct({ name: Schema.String, city: Schema.String, date: Schema.String, }); async function main() { const completion = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [ { role: 'user', content: 'Return 3 upcoming basketball events near San Francisco as JSON. Use ISO dates (YYYY-MM-DD).', }, ], response_format: effectResponseFormat( Schema.Struct({ query: Schema.String, events: Schema.Array(Event) }), 'events_response', ), mcpServers: ['windsor/ticketmaster-mcp'], }); console.log(completion.choices[0]?.message.parsed); } main(); ``` ### `.stream()` (streaming) Use this when you want **streaming UX** and a **typed final result**. Streaming helpers differ by language: * **Python**: use `.stream(...)` as a context manager and read typed stream events. * **TypeScript**: stream tokens with `create({ stream: true, ... })`, then validate the final JSON with Zod/Effect. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() # Use context manager for streaming async with client.chat.completions.stream( model="openai/gpt-5.2", messages=[{ "role": "user", "content": ( "Return 3 upcoming basketball events near San Francisco as JSON. " "Use ISO dates (YYYY-MM-DD)."
), }], response_format=EventsResponse, mcp_servers=["windsor/ticketmaster-mcp"], ) as stream: # Process events as they arrive async for event in stream: if event.type == "content.delta": print(event.delta, end="", flush=True) elif event.type == "content.done": # Snapshot available at content.done (typed) print(f"\nParsed events: {len(event.parsed.events)}") # Get final parsed result final = await stream.get_final_completion() parsed = final.choices[0].message.parsed print(f"\nFinal events: {len(parsed.events)}") if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { zodResponseFormat } from "dedalus-labs/helpers/zod"; import { z } from "zod"; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const stream = await client.chat.completions.create({ model: "openai/gpt-5.2", messages: [ { role: "user", content: "Return 3 upcoming basketball events near San Francisco as JSON. Use ISO dates (YYYY-MM-DD).", }, ], response_format: zodResponseFormat(EventsResponse, "events_response"), mcpServers: ["windsor/ticketmaster-mcp"], // Discover events via Ticketmaster stream: true, }); // Stream output to the user while collecting it for parsing. let text = ""; for await (const chunk of stream) { const delta = chunk.choices?.[0]?.delta?.content; if (delta) { process.stdout.write(delta); text += delta; } } // If you need a typed object, parse the final JSON text. // (In production, use robust JSON extraction if your model outputs any extra text.) 
const parsed = EventsResponse.parse(JSON.parse(text)); console.log(`\nParsed events: ${parsed.events.length}`); } main(); ``` ### Optional Fields Use `Optional[T]` in Python, `.nullable()` in Zod, or `Schema.NullOr(...)` in Effect for nullable fields: With OpenAI strict mode, every field must be required. Model “optional” values as nullable. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str price_usd: int | None = None # model unknown as null class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": ( "Return 3 upcoming basketball events near San Francisco as JSON. " "Include price_usd if known; otherwise null. Use ISO dates (YYYY-MM-DD)." ), }], response_format=EventsResponse, mcp_servers=["windsor/ticketmaster-mcp"], ) parsed = completion.choices[0].message.parsed for e in parsed.events: print(e.name, e.price_usd) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { zodResponseFormat } from 'dedalus-labs/helpers/zod'; import { z } from 'zod'; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), price_usd: z.number().nullable(), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const completion = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [ { role: 'user', content: 'Return 3 upcoming basketball events near San Francisco as JSON. Include price_usd if known; otherwise null. Use ISO dates (YYYY-MM-DD).', }, ], response_format: zodResponseFormat(EventsResponse, 'events_response'), mcpServers: ['windsor/ticketmaster-mcp'], }); const parsed = completion.choices[0]?.message.parsed; console.log(parsed?.events.map((e) => [e.name, e.price_usd])); } main(); ``` ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import * as Schema from 'effect/Schema'; const Event = Schema.Struct({ name: Schema.String, city: Schema.String, date: Schema.String, price_usd: Schema.NullOr(Schema.Number), }); const EventsResponse = Schema.Struct({ query: Schema.String, events: Schema.Array(Event), }); ``` Avoid `Schema.optional(...)` for structured outputs—use `Schema.NullOr(...)` instead. ## Schemas & patterns ## Nested Models ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Venue(BaseModel): name: str address: str | None = None city: str class Event(BaseModel): name: str date: str venue: Venue class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": ( "Return 3 upcoming basketball events near San Francisco as JSON. " "Each event must include a nested venue object with name, city, and address (null if unknown). " "Use ISO dates (YYYY-MM-DD)."
) }], response_format=EventsResponse, mcp_servers=["windsor/ticketmaster-mcp"], ) parsed = completion.choices[0].message.parsed for e in parsed.events: print(e.name, "→", e.venue.name) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { zodResponseFormat } from "dedalus-labs/helpers/zod"; import { z } from "zod"; const client = new Dedalus(); const Venue = z.object({ name: z.string(), city: z.string(), address: z.string().nullable(), }); const Event = z.object({ name: z.string(), date: z.string(), venue: Venue, }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const completion = await client.chat.completions.parse({ model: "openai/gpt-5.2", messages: [ { role: "user", content: "Return 3 upcoming basketball events near San Francisco as JSON. Each event must include a nested venue object with name, city, and address (null if unknown). Use ISO dates (YYYY-MM-DD).", }, ], response_format: zodResponseFormat(EventsResponse, "events_response"), mcpServers: ["windsor/ticketmaster-mcp"], }); const parsed = completion.choices[0]?.message.parsed; console.log(parsed?.events.map((e) => [e.name, e.venue.name])); } main(); ``` ## Structured Tool Calls (advanced) Define type-safe tools with automatic argument parsing: ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class SearchEventsArgs(BaseModel): city: str month: str max_results: int = 5 async def main(): client = AsyncDedalus() tools = [ { "type": "function", "function": { "name": "search_events", "description": "Search for events in a city during a month.", "parameters": { "type": "object", "properties": { "city": {"type": "string"}, "month": {"type": "string", "description": "YYYY-MM"}, "max_results": {"type": "integer", 
"default": 5}, }, "required": ["city", "month"], "additionalProperties": False, }, "strict": True, } } ] completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": "Call search_events for San Francisco in 2026-01.", }], tools=tools, tool_choice={"type": "tool", "name": "search_events"}, ) message = completion.choices[0].message if message.tool_calls: tool_call = message.tool_calls[0] print(f"Tool called: {tool_call.function.name}") print(f"Parsed args: {tool_call.function.parsed_arguments}") if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { zodFunction } from 'dedalus-labs/helpers/zod'; import { z } from 'zod'; const client = new Dedalus(); const SearchEventsTool = zodFunction({ name: 'search_events', parameters: z.object({ city: z.string(), month: z.string(), // YYYY-MM max_results: z.number().optional(), }), description: 'Search for events in a city during a month.', function: (args) => { // Your tool implementation would go here. // For docs, we return a placeholder JSON string. return JSON.stringify({ events: [], query: `${args.city} ${args.month}`, }); }, }); async function main() { const completion = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [{ role: 'user', content: 'Call search_events for San Francisco in 2026-01.' }], tools: [SearchEventsTool], // Force a deterministic tool call (useful for examples/tests).
tool_choice: { type: 'tool', name: 'search_events' }, }); const toolCall = completion.choices[0]?.message.tool_calls?.[0]; if (toolCall) { console.log(`Tool called: ${toolCall.function.name}`); console.log(`Arguments: ${JSON.stringify(toolCall.function.parsed_arguments)}`); } } main(); ``` If you need deterministic tool calling, set `tool_choice` to one of the object variants: `{ type: 'auto' }` (model decides), `{ type: 'any' }` (require a tool call), `{ type: 'tool', name: 'search_events' }` (require a specific tool), `{ type: 'none' }` (disable tools). Passing the OpenAI string form (e.g. `tool_choice: 'required'`) will fail schema validation with a 422. ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import { effectFunction } from 'dedalus-labs/helpers/effect'; import * as Schema from 'effect/Schema'; const SearchEventsTool = effectFunction({ name: 'search_events', parameters: Schema.Struct({ city: Schema.String, month: Schema.String, // YYYY-MM max_results: Schema.NullOr(Schema.Number), }), description: 'Search for events in a city during a month.', }); ``` Tool parameters must be an object schema (use `Schema.Struct({ ... })`). ## Enums and Unions ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from typing import Literal from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str category: Literal["sports", "music", "theater", "other"] ticket_status: Literal["available", "sold_out", "unknown"] class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": ( "Return 3 upcoming events near San Francisco as JSON. " "Each event must include category (sports/music/theater/other) and ticket_status (available/sold_out/unknown). " "Use ISO dates (YYYY-MM-DD)." ) }], response_format=EventsResponse, mcp_servers=["windsor/ticketmaster-mcp"], ) parsed = completion.choices[0].message.parsed for e in parsed.events: print(e.name, e.category, e.ticket_status) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from "dedalus-labs"; import { zodResponseFormat } from "dedalus-labs/helpers/zod"; import { z } from "zod"; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), category: z.enum(["sports", "music", "theater", "other"]), ticket_status: z.union([z.literal("available"), z.literal("sold_out"), z.literal("unknown")]), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { const completion = await client.chat.completions.parse({ model: "openai/gpt-5.2", messages: [ { role: "user", content: "Return 3 upcoming events near San Francisco as JSON. Each event must include category (sports/music/theater/other) and ticket_status (available/sold_out/unknown). Use ISO dates (YYYY-MM-DD).", }, ], response_format: zodResponseFormat(EventsResponse, "events_response"), mcpServers: ["windsor/ticketmaster-mcp"], }); const parsed = completion.choices[0]?.message.parsed; console.log(parsed?.events.map((e) => [e.name, e.category, e.ticket_status])); } main(); ``` ## DedalusRunner API The Runner supports `response_format` with automatic schema conversion: ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str class EventsResponse(BaseModel): query: str events: list[Event] def as_bullets(items: list[str]) -> str: """Format items as a bulleted list.""" return "\n".join(f"• {item}" for item in items) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input=( "Find me the nearest basketball games in January in San Francisco using Ticketmaster. " "Then call as_bullets with a list of items (one per event: name, city, date)." ), model="anthropic/claude-opus-4-5", mcp_servers=["windsor/ticketmaster-mcp"], # Discover events via Ticketmaster tools=[as_bullets], response_format=EventsResponse, max_steps=5, ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { DedalusRunner } from 'dedalus-labs'; const client = new Dedalus(); function asBullets(items: string[]): string { return items.map((item) => `• ${item}`).join('\n'); } async function main() { const runner = new DedalusRunner(client, true); const result = await runner.run({ model: 'anthropic/claude-opus-4-5', input: 'Find me the nearest basketball games in January in San Francisco using Ticketmaster. Then call asBullets with a list of items (one per event: name, city, date).', mcpServers: ['windsor/ticketmaster-mcp'], // Discover events via Ticketmaster tools: [asBullets], maxSteps: 5, }); console.log((result as any).finalOutput); } main(); ``` ## .create() vs .parse() vs .stream()

| Method | Schema Support | Streaming | Use Case |
| ----------- | ------------------- | --------- | ----------------------- |
| `.create()` | Dict only | ✓ | Manual JSON schemas |
| `.parse()` | Pydantic/Zod/Effect | ❌ | Type-safe non-streaming |
| `.stream()` | Pydantic/Zod/Effect | ✓ | Type-safe streaming |

`.create()` expects a plain JSON Schema object. Don’t pass a Pydantic model, Zod schema, or Effect schema directly. “Streaming + typed output” is language-dependent:

- **Python**: `.stream(...)` yields typed events and a typed final snapshot.
- **TypeScript**: stream tokens and validate the final JSON with Zod/Effect.

## Error Handling ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from typing import Any from dedalus_labs import AsyncDedalus from dotenv import load_dotenv from pydantic import BaseModel load_dotenv() class Event(BaseModel): name: str city: str date: str class EventsResponse(BaseModel): query: str events: list[Event] async def main(): client = AsyncDedalus() try: completion = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[{ "role": "user", "content": ( "Return 3 upcoming basketball events near San Francisco as JSON. " "Use ISO dates (YYYY-MM-DD)."
), }], response_format=EventsResponse, ) parsed = completion.choices[0].message.parsed print(f"Parsed events: {len(parsed.events)}") except Exception as e: print("Parse failed:", e) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { zodResponseFormat } from 'dedalus-labs/helpers/zod'; import { z } from 'zod'; const client = new Dedalus(); const Event = z.object({ name: z.string(), city: z.string(), date: z.string(), }); const EventsResponse = z.object({ query: z.string(), events: z.array(Event), }); async function main() { try { const completion = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [ { role: 'user', content: 'Return 3 upcoming basketball events near San Francisco as JSON. Use ISO dates (YYYY-MM-DD).', }, ], response_format: zodResponseFormat(EventsResponse, 'events_response'), }); const parsed = completion.choices[0]?.message.parsed; console.log(`Parsed events: ${parsed?.events.length ?? 0}`); } catch (error) { console.error('Request failed:', error); } } main(); ``` ## Supported Models The Dedalus SDK's `.parse()` and `.stream()` methods work across all providers. Schema enforcement varies: **Strict Enforcement** (CFG-based, schema guarantees): * ✓ `openai/*` - Context-free grammar compilation * ✓ `xai/*` - Native schema validation * ✓ `fireworks_ai/*` - Native schema validation (select models) * ✓ `deepseek/*` - Native schema validation (select models) **Best-Effort** (schema sent for guidance, no guarantees): * 🟡 `google/*` - Schema forwarded to `generationConfig.responseSchema` * 🟡 `anthropic/*` - Prompt-based JSON generation (\~85-90% success rate) For `google/*` and `anthropic/*` models, always validate parsed output and implement retry logic. ## Provider Examples You can use `.parse()` and `.stream()` with models from any provider.
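For best-effort providers, the validate-and-retry advice above can be sketched as a small wrapper. This is an illustrative sketch only: `parse_with_retry` and the `fake_request` stand-in are hypothetical names, and a real `request` callable would wrap the Dedalus client call and JSON-decode the message content.

```python
import asyncio
from pydantic import BaseModel, ValidationError

class Event(BaseModel):
    name: str
    city: str
    date: str

async def parse_with_retry(request, schema, max_attempts: int = 3):
    """Call `request()` until its payload validates against `schema`.

    `request` is any async callable returning the raw dict to validate,
    e.g. a wrapper around the client call for anthropic/* models.
    """
    last_error: ValidationError | None = None
    for _ in range(max_attempts):
        payload = await request()
        try:
            return schema.model_validate(payload)
        except ValidationError as exc:
            last_error = exc  # malformed output from a best-effort provider; try again
    raise last_error

# Stand-in request for illustration; a real one would call the Dedalus client.
async def fake_request() -> dict:
    return {"name": "Warriors vs Lakers", "city": "San Francisco", "date": "2026-01-18"}

event = asyncio.run(parse_with_retry(fake_request, Event))
print(event.name)
```

The retry count and backoff policy are up to you; two or three attempts is usually enough for prompt-based JSON generation.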
In practice, you only change `model`—everything else stays the same. For a full list of model IDs, see the [providers guide](/sdk/guides/providers). ## Quick Reference ### Python (Pydantic) ```python theme={"theme":{"light":"github-light","dark":"github-dark"}} from dedalus_labs import AsyncDedalus from pydantic import BaseModel class MyModel(BaseModel): field: str client = AsyncDedalus() result = await client.chat.completions.parse( model="openai/gpt-5.2", messages=[...], response_format=MyModel, ) parsed = result.choices[0].message.parsed ``` ### TypeScript (Zod) ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { zodResponseFormat } from 'dedalus-labs/helpers/zod'; import { z } from 'zod'; const MySchema = z.object({ field: z.string() }); const client = new Dedalus(); const result = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [...], response_format: zodResponseFormat(MySchema, 'my_schema'), }); const parsed = result.choices[0]?.message.parsed; ``` ### TypeScript (Effect Schema) ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { effectResponseFormat } from 'dedalus-labs/helpers/effect'; import * as Schema from 'effect/Schema'; const MySchema = Schema.Struct({ field: Schema.String }); const client = new Dedalus(); const result = await client.chat.completions.parse({ model: 'openai/gpt-5.2', messages: [...], response_format: effectResponseFormat(MySchema, 'my_schema'), }); const parsed = result.choices[0]?.message.parsed; ``` ### Zod Helpers ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import { zodResponseFormat, zodFunction } from 'dedalus-labs/helpers/zod'; // For response schemas zodResponseFormat(MyZodSchema, 'schema_name') // For tool definitions zodFunction({ name: 'tool_name', description: 'What the tool does', parameters: z.object({ ... 
}), function: (args) => { ... }, }) ``` ### Effect Helpers ```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}} import { effectResponseFormat, effectFunction } from 'dedalus-labs/helpers/effect'; // For response schemas effectResponseFormat(MyEffectSchema, 'schema_name') // For tool definitions effectFunction({ name: 'tool_name', description: 'What the tool does', parameters: MyEffectParametersSchema, function: (args) => { ... }, }) ``` If you still use `@effect/schema`, schemas from `@effect/schema/Schema` also work with `helpers/effect`. You still need to install `effect` (the Dedalus SDK uses `effect/JSONSchema` and `effect/Schema` for conversion + validation). Prefer `effect/Schema` for new code. ## Next steps * **Stream output**: [Streaming](/sdk/streaming) — Improve UX for long tool/MCP runs * **Route across models**: [Handoffs](/sdk/handoffs) — Use fast/strong models by phase * **See patterns**: [Use Cases](/sdk/use-cases/data-analyst) — Structured extraction workflows [Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers. # Tools Source: https://docs.dedaluslabs.ai/sdk/tools Give agents the ability to take actions Agents become useful when they can do things beyond generating text. Tools let them call functions, query databases, make API requests—anything you can express in code. ## How It Works Define a function with type hints and a docstring. Pass it to `runner.run()`. The Dedalus SDK extracts the schema automatically and handles execution when the model decides to use it. 
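That schema extraction can be sketched with standard Python introspection. The sketch below is a hypothetical simplification for intuition, not the SDK's actual implementation; `tool_schema` and `PY_TO_JSON` are made-up names.

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema types (illustrative).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style function schema from type hints and the docstring."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {}

schema = tool_schema(get_weather)
```

This is why type hints and docstrings matter: they are the only information the model sees about your tool.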
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() def as_bullets(items: list[str]) -> str: """Format items as a bulleted list.""" return "\n".join(f"• {item}" for item in items) async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input=( "Take the following events and call as_bullets with a list of items (one per event).\n\n" "Events:\n" "- Warriors vs Lakers — San Francisco — 2026-01-18\n" "- Warriors vs Suns — San Francisco — 2026-01-22\n" "- Warriors vs Celtics — San Francisco — 2026-01-29\n\n" "Return only the list." ), model="openai/gpt-5.2", tools=[as_bullets], ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { DedalusRunner } from 'dedalus-labs'; const client = new Dedalus(); const runner = new DedalusRunner(client, true); function formatTable(rows: Record<string, unknown>[]): string { if (!rows.length) return 'No results.'; const cols = Object.keys(rows[0]); const header = `| ${cols.join(' | ')} |`; const sep = `| ${cols.map(() => '---').join(' | ')} |`; const body = rows.map((r) => `| ${cols.map((c) => String(r?.[c] ??
'')).join(' | ')} |`); return [header, sep, ...body].join('\n'); } async function main() { const result = await runner.run({ input: 'Take the following events and call formatTable with a list of rows (one row per event).\n\n' + 'Events:\n' + '- {"name":"Warriors vs Lakers","city":"San Francisco","date":"2026-01-18"}\n' + '- {"name":"Warriors vs Suns","city":"San Francisco","date":"2026-01-22"}\n' + '- {"name":"Warriors vs Celtics","city":"San Francisco","date":"2026-01-29"}\n\n' + 'Return only the table.', model: 'openai/gpt-5.2', tools: [formatTable], }); console.log((result as any).finalOutput); } main(); ``` The model sees the tool schemas, decides which to call, and the Runner executes them. Multi-step reasoning happens automatically—the Runner keeps calling the model until it can complete the task. ## Tool best practices Good tools typically have: * **Type hints** on all parameters and return values * **Docstrings** that explain what the tool does (the model reads these) * **Clear names** that indicate purpose ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} # Good: typed, documented, clear name def get_weather(city: str, units: str = "celsius") -> dict: """Get current weather for a city. Returns temperature and conditions.""" return {"temp": 22, "conditions": "sunny"} # Bad: no types, no docs, unclear name def do_thing(x): return some_api_call(x) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} // Good: typed, documented, clear name function getWeather(city: string, units: string = 'celsius'): object { // Get current weather for a city return { temp: 22, conditions: 'sunny' }; } // Bad: no types, unclear name function doThing(x: any) { return someApiCall(x); } ``` ## Async Tools Tools can be async. 
The Runner awaits them automatically: ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} async def fetch_user(user_id: int) -> dict: """Fetch user profile from database.""" async with db.connection() as conn: return await conn.fetchone("SELECT * FROM users WHERE id = $1", user_id) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} async function fetchUser(userId: number): Promise<object> { // Fetch user profile from database const result = await db.query("SELECT * FROM users WHERE id = $1", [userId]); return result.rows[0]; } ``` ## Agent as Tool Wrap a specialized agent as a tool. The coordinator delegates specific tasks to specialists without giving up conversation control. This differs from [handoffs](/sdk/handoffs): * **Handoffs**: New agent takes over the conversation with full history * **Agent as tool**: Specialist receives specific input, returns output, coordinator continues ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner async def main(): client = AsyncDedalus() runner = DedalusRunner(client) # Specialist: wrap another runner call as a tool async def research_specialist(query: str) -> str: """Deep research on a topic. Use for questions requiring thorough analysis.""" result = await runner.run( input=query, model="openai/gpt-5.2", # Stronger model for research instructions="You are a research analyst. Be thorough and cite sources.", mcp_servers=["windsor/brave-search-mcp"] # Web search via Brave Search MCP ) return result.final_output async def code_specialist(spec: str) -> str: """Generate production code from specifications.""" result = await runner.run( input=spec, model="anthropic/claude-opus-4-5", # Strong at code instructions="Write clean, tested, production-ready code."
) return result.final_output # Coordinator: cheap model that delegates to specialists result = await runner.run( input="Research quantum computing breakthroughs in 2025, then write a Python simulator for a basic quantum gate", model="openai/gpt-4o-mini", tools=[research_specialist, code_specialist] ) print(result.final_output) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus from 'dedalus-labs'; import { DedalusRunner } from 'dedalus-labs'; const client = new Dedalus(); const runner = new DedalusRunner(client); // Specialist functions async function researchSpecialist(query: string): Promise<string> { const result = await runner.run({ input: query, model: 'openai/gpt-4o', instructions: 'You are a research analyst. Be thorough.', mcpServers: ['windsor/brave-search-mcp'], // Web search via Brave Search MCP }); return result.finalOutput; } async function codeSpecialist(spec: string): Promise<string> { const result = await runner.run({ input: spec, model: 'anthropic/claude-opus-4-5', instructions: 'Write clean, production-ready code.', }); return result.finalOutput; } // Coordinator delegates to specialists const result = await runner.run({ input: 'Research AI trends, then write a TypeScript example', model: 'openai/gpt-5.2', tools: [researchSpecialist, codeSpecialist], }); ``` **When to use this pattern:**

| Scenario | Why Agent-as-Tool |
| ------------------ | --------------------------------------------------------- |
| Vision/OCR tasks | Text-only coordinator delegates images to vision model |
| Code generation | Fast model triages, strong model writes code |
| Domain specialists | Generic router → specialized instructions/model |
| Cost optimization | Cheap coordinator, expensive specialists only when needed |

## Model Selection Tool calling quality varies by model. For reliable multi-step tool use: `openai/gpt-5.2` and `openai/gpt-4.1` handle complex tool chains well.
Older or smaller models may struggle with multi-step reasoning.

## Next steps

* **Combine with MCP servers**: [MCP Servers](/sdk/mcp) — Use local tools for custom logic + hosted tools for external capabilities
* **Return typed data**: [Structured Outputs](/sdk/structured-outputs) — Validate and parse JSON into schemas
* **Control execution**: [Policies](/sdk/policies) — Dynamically modify behavior at runtime
* **See full examples**: [Use Cases](/sdk/use-cases/web-search-agent) — End-to-end agent patterns

[Connect these docs programmatically](/contextual/use-these-docs) to Claude, VSCode, and more via MCP for real-time answers.

# Concert Planner

Source: https://docs.dedaluslabs.ai/sdk/use-cases/concert-planner

Find concerts and venue information

Finding concert tickets involves checking dates, venues, seating options, and accessibility—information scattered across multiple pages on ticketing sites. An agent with access to Ticketmaster's API can consolidate this search.

```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="""I want to see Taylor Swift in New York City. Help me find:
        1. Upcoming concert dates
        2. Venue details
        3. Ticket price ranges
        4. Accessibility information
        5. Best seating options for the budget""",
        model="openai/gpt-4.1",
        mcp_servers=["windsor/ticketmaster-mcp"]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus, { DedalusRunner } from 'dedalus-labs';
import * as dotenv from 'dotenv';

dotenv.config();

async function main() {
  const client = new Dedalus({ apiKey: process.env.DEDALUS_API_KEY });
  const runner = new DedalusRunner(client);

  const result = await runner.run({
    input: `I want to see Taylor Swift in New York City. Help me find:
    1. Upcoming concert dates
    2. Venue details
    3. Ticket price ranges
    4. Accessibility information
    5. Best seating options for the budget`,
    model: 'openai/gpt-4.1',
    mcpServers: ['windsor/ticketmaster-mcp']
  });

  console.log(result.finalOutput);
}

main();
```

## Ticketmaster MCP

The `windsor/ticketmaster-mcp` server provides access to:

* Event search by artist, venue, or location
* Venue information and seating charts
* Ticket availability and pricing
* Event details and timing

## When to Use

This pattern works for any ticketed event: sports games, theater, festivals. The agent handles the search and comparison work, presenting options that match your criteria instead of making you browse through pages of results.

# Data Analyst

Source: https://docs.dedaluslabs.ai/sdk/use-cases/data-analyst

Create a data analyst agent that can search for real-time data, write and execute Python code to analyze it, and generate insights.

```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dedalus_labs.utils.stream import stream_async
from dotenv import load_dotenv

load_dotenv()

def execute_python_code(code: str) -> str:
    """
    Execute Python code and return the result.

    Runs code in a separate namespace; this is not a security sandbox.
""" try: namespace = {} exec(code, {"__builtins__": __builtins__}, namespace) if 'result' in namespace: return str(namespace['result']) results = {k: v for k, v in namespace.items() if not k.startswith('_')} return str(results) if results else "Code executed successfully" except Exception as e: return f"Error executing code: {str(e)}" async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = runner.run( input="""Research the current stock price of Tesla (TSLA) and Apple (AAPL). Then write and execute Python code to: 1. Compare their current prices 2. Calculate the percentage difference 3. Determine which stock has grown more in the past year based on the data you find 4. Provide investment insights based on your analysis Use web search to get the latest stock information.""", model="openai/gpt-5", tools=[execute_python_code], mcp_servers=["windsor/brave-search-mcp"], stream=True ) await stream_async(result) if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus, { DedalusRunner } from "dedalus-labs"; import * as dotenv from "dotenv"; dotenv.config(); function executePythonCode(code: string): string { // Note: In TypeScript, you would typically use a sandboxed // execution environment or call out to a Python service return `Code execution requested: ${code.substring(0, 100)}...`; } async function main() { const client = new Dedalus({ apiKey: process.env.DEDALUS_API_KEY, }); const runner = new DedalusRunner(client); const result = await runner.run({ input: `Research the current stock price of Tesla (TSLA) and Apple (AAPL). Then write and execute Python code to: 1. Compare their current prices 2. Calculate the percentage difference 3. Determine which stock has grown more in the past year based on the data you find 4. 
Provide investment insights based on your analysis Use web search to get the latest stock information.`, model: "openai/gpt-5", tools: [executePythonCode], mcpServers: ["windsor/brave-search-mcp"], }); console.log(result.finalOutput); } main(); ``` This data analyst example combines real-time web search with code execution capabilities: * **Brave Search MCP** (`windsor/brave-search-mcp`): Fetches real-time data from the web * **execute\_python\_code** tool: Allows the agent to write and run Python code for analysis The agent can search for current information, extract relevant data, then dynamically write code to analyze it and generate insights. **Note**: In production environments, consider using sandboxed code execution for security. # Travel Agent Source: https://docs.dedaluslabs.ai/sdk/use-cases/travel-agent Creating a travel planning agent that can search for flights, hotels, and provide travel recommendations. ```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}} import asyncio from dedalus_labs import AsyncDedalus, DedalusRunner from dotenv import load_dotenv load_dotenv() async def main(): client = AsyncDedalus() runner = DedalusRunner(client) result = await runner.run( input="""I'm planning a trip to Paris, France from San Francisco, CA for 3 days for Christmas in 2025. Can you help me find: 1. Flight options and prices, give me the best option for the cheapest flight 2. Hotel recommendations in central Paris 3. Weather forecast for my travel dates 4. Popular events during the Christmas season in Paris 5. Give a quick summary of the trip and the results My budget is around $3000 total and I prefer mid-range accommodations. 
keep it succint in 300 words or less""", model="anthropic/claude-opus-4-5", mcp_servers=[ "windsor/brave-search-mcp", # For travel information search "windsor/open-meteo-mcp", # For weather at destination "windsor/ticketmaster-mcp" # For events lookup ] ) print(f"Travel Planning Results:\n{result.final_output}") if __name__ == "__main__": asyncio.run(main()) ``` ```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}} import Dedalus, { DedalusRunner } from "dedalus-labs"; import * as dotenv from "dotenv"; dotenv.config(); async function main() { const client = new Dedalus({ apiKey: process.env.DEDALUS_API_KEY, }); const runner = new DedalusRunner(client); const result = await runner.run({ input: `I'm planning a trip to Paris, France from San Francisco, CA for 3 days for Christmas in 2025. Can you help me find: 1. Flight options and prices, give me the best option for the cheapest flight 2. Hotel recommendations in central Paris 3. Weather forecast for my travel dates 4. Popular events during the Christmas season in Paris 5. Give a quick summary of the trip and the results My budget is around $3000 total and I prefer mid-range accommodations. keep it succint in 300 words or less`, model: "anthropic/claude-opus-4-5", mcpServers: [ "windsor/brave-search-mcp", // For travel information search "windsor/open-meteo-mcp", // For weather at destination "windsor/ticketmaster-mcp", // For events lookup ], }); console.log(`Travel Planning Results:\n${result.finalOutput}`); } main(); ``` This travel agent example uses multiple MCP servers: * **Brave Search MCP** (`windsor/brave-search-mcp`): For finding current travel information, flight options, hotel reviews, and booking options * **Open Meteo MCP** (`windsor/open-meteo-mcp`): For weather forecasts at your destination * **Ticketmaster MCP** (`windsor/ticketmaster-mcp`): For finding concerts and events during your trip Try these servers out in your projects! 
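Hosted MCP servers can be mixed with plain local Python functions in the same agent: anything passed via `tools` is callable alongside the servers above. As a rough sketch, the `split_budget` helper below is hypothetical (it is not part of the SDK or any MCP server); a deterministic tool like this keeps budget arithmetic out of the model's hands:

```python
def split_budget(total: float, nights: int, flight_share: float = 0.4) -> dict:
    """Split a trip budget into a flight allowance and a per-night lodging rate.

    flight_share is the fraction of the budget reserved for flights
    (the 0.4 default is an illustrative assumption).
    """
    flights = round(total * flight_share, 2)
    lodging_per_night = round((total - flights) / nights, 2)
    return {"flights": flights, "lodging_per_night": lodging_per_night}

# For the $3000, 3-night Paris trip above:
print(split_budget(3000, 3))  # {'flights': 1200.0, 'lodging_per_night': 600.0}
```

Passing `tools=[split_budget]` next to the `mcp_servers` list would let the agent call it like any other tool.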
# Weather Forecaster

Source: https://docs.dedaluslabs.ai/sdk/use-cases/weather-forecaster

Detailed weather analysis with recommendations

Weather APIs return data. Users want recommendations. An agent with access to weather data can translate forecasts into actionable advice for specific situations.

```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="""I'm planning an outdoor wedding in San Francisco next weekend.
        Please provide:
        1. Current weather conditions
        2. 7-day forecast with daily details
        3. Precipitation probability
        4. Temperature highs and lows
        5. Wind and UV conditions
        6. Specific recommendations for outdoor event planning""",
        model="openai/gpt-4.1",
        mcp_servers=["windsor/open-meteo-mcp"]
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```

```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus, { DedalusRunner } from "dedalus-labs";
import * as dotenv from "dotenv";

dotenv.config();

async function main() {
  const client = new Dedalus({
    apiKey: process.env.DEDALUS_API_KEY,
  });
  const runner = new DedalusRunner(client);

  const result = await runner.run({
    input: `I'm planning an outdoor wedding in San Francisco next weekend.
    Please provide:
    1. Current weather conditions
    2. 7-day forecast with daily details
    3. Precipitation probability
    4. Temperature highs and lows
    5. Wind and UV conditions
    6. Specific recommendations for outdoor event planning`,
    model: "openai/gpt-4.1",
    mcpServers: ["windsor/open-meteo-mcp"],
  });

  console.log(result.finalOutput);
}

main();
```

## Open Meteo Capabilities

The `windsor/open-meteo-mcp` server provides:

* Current conditions
* Multi-day forecasts (hourly and daily)
* Historical weather data
* Weather alerts
* Global coverage (no API key required)

## Beyond Raw Data

Any API can fetch weather. The agent interprets it: wind affecting outdoor events, rain probability suggesting backup plans, UV levels for guest safety, temperature changes through the day.

Same pattern applies to any data-to-advice task. Health metrics become fitness recommendations. Market data becomes investment suggestions. Sensor readings become maintenance alerts.

# Web Search Agent

Source: https://docs.dedaluslabs.ai/sdk/use-cases/web-search-agent

Create a web search agent using multiple search MCPs to find and analyze information from the web.

```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv

load_dotenv()

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="""I need to research the latest developments in AI agents for 2024.
        Please help me:
        1. Find recent news articles about AI agent breakthroughs
        2. Search for academic papers on multi-agent systems
        3. Look up startup companies working on AI agents
        4. Find GitHub repositories with popular agent frameworks
        5. Summarize the key trends and provide relevant links
        Focus on developments from the past 6 months.""",
        model="openai/gpt-4.1",
        mcp_servers=[
            "tsion/exa",                 # Semantic search engine
            "windsor/brave-search-mcp"   # Privacy-focused web search
        ]
    )

    print(f"Web Search Results:\n{result.final_output}")

if __name__ == "__main__":
    asyncio.run(main())
```

```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus, { DedalusRunner } from "dedalus-labs";
import * as dotenv from "dotenv";

dotenv.config();

async function main() {
  const client = new Dedalus({
    apiKey: process.env.DEDALUS_API_KEY,
  });
  const runner = new DedalusRunner(client);

  const result = await runner.run({
    input: `I need to research the latest developments in AI agents for 2024.
    Please help me:
    1. Find recent news articles about AI agent breakthroughs
    2. Search for academic papers on multi-agent systems
    3. Look up startup companies working on AI agents
    4. Find GitHub repositories with popular agent frameworks
    5. Summarize the key trends and provide relevant links
    Focus on developments from the past 6 months.`,
    model: "openai/gpt-4.1",
    mcpServers: [
      "tsion/exa",                  // Semantic search engine
      "windsor/brave-search-mcp",   // Privacy-focused web search
    ],
  });

  console.log(`Web Search Results:\n${result.finalOutput}`);
}

main();
```

This example uses multiple search MCP servers:

* **Exa MCP** (`tsion/exa`): Semantic search, great for finding conceptually related content
* **Brave Search MCP** (`windsor/brave-search-mcp`): Privacy-focused web search for current events and specific queries

Together, they cover more ground than either alone—Exa finds related ideas while Brave handles current events.

# Use docs programmatically

Source: https://docs.dedaluslabs.ai/sdk/use-these-docs

Connect Dedalus documentation to your AI tools and workflows

We want to make our documentation as accessible as possible.
We've included several ways for you to use these docs programmatically through AI assistants, code editors, and direct integrations such as the Model Context Protocol (MCP).

## Quick access options

On any page in our documentation, you'll find a contextual menu dropdown in the top right corner with quick access options including our `llms.txt`, MCP server connection, and other integrations such as ChatGPT and Claude.

*Quick access menu showing Copy page, View as Markdown, Open in ChatGPT, Open in Claude, and Copy MCP Server options*

## Use our MCP server

Our documentation includes a built-in **Model Context Protocol (MCP) server** that lets AI applications query the latest docs in real time. The Dedalus docs MCP server is available at:

```txt theme={"theme":{"light":"github-light","dark":"github-dark"}}
https://docs.dedaluslabs.ai/mcp
```

Once connected, you can ask your AI assistant questions about the Dedalus SDK, MCP servers, and our platform, and it will search our documentation to provide accurate, current answers.

### Connect with Claude Code

If you're using Claude Code, run this command in your terminal to add the server to your current project:

```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
claude mcp add --transport http docs-dedalus https://docs.dedaluslabs.ai/mcp
```

**Project (local) scoped**

The command above adds the MCP server only to your current project/working directory. To make the server available in all projects, add the user scope with `--scope user`:

```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
claude mcp add --transport http docs-dedalus --scope user https://docs.dedaluslabs.ai/mcp
```

### Connect with Claude Desktop

1. Open Claude Desktop
2. Go to **Settings** → **Developer** → **Connectors**
3. Click **Add MCP Server**
4. Add our MCP server URL: `https://docs.dedaluslabs.ai/mcp`

### Connect with Codex CLI

If you're using OpenAI Codex CLI, run this command in your terminal to add the server globally:

```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
codex mcp add dedalus-docs --url https://docs.dedaluslabs.ai/mcp
```

### Connect with Cursor

Install in one click: Add to Cursor

Or add this configuration to `.cursor/mcp.json`:

```json theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "mcpServers": {
    "docs-dedalus": {
      "url": "https://docs.dedaluslabs.ai/mcp"
    }
  }
}
```

### Connect with VS Code

Install in one click: Install in VS Code

Or add this configuration to `.vscode/mcp.json`:

```json theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "servers": {
    "docs-dedalus": {
      "type": "http",
      "url": "https://docs.dedaluslabs.ai/mcp"
    }
  }
}
```

### Connect with Antigravity

Add the following to your MCP settings configuration file:

```json theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "mcpServers": {
    "docs-dedalus": {
      "serverUrl": "https://docs.dedaluslabs.ai/mcp"
    }
  }
}
```

## Learn more

Have questions or feedback? Join our [Discord community](https://discord.gg/RuDhZKnq5R) or [email us](mailto:support@dedaluslabs.ai).
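For tools that don't speak MCP, the `llms.txt` index mentioned above is plain Markdown, so its links can be pulled out with a few lines of code. A minimal sketch; the sample content here is illustrative, not the live file:

```python
import re

def extract_links(llms_txt: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from llms.txt-style Markdown link lists."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", llms_txt)

sample = """# Dedalus Docs
- [Quickstart](https://docs.dedaluslabs.ai/api/index)
- [Use docs programmatically](https://docs.dedaluslabs.ai/sdk/use-these-docs)
"""

for title, url in extract_links(sample):
    print(f"{title}: {url}")
```

In practice you would fetch `https://docs.dedaluslabs.ai/llms.txt` over HTTP and feed the response body to a parser like this.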