DedalusRunner is the core of the Dedalus SDK. It orchestrates local tools, hosted MCP servers, streaming, and any model from any provider into a single agentic loop. Five lines of code, any agent you want.
Quick Example
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="What's the weather in Tokyo?",
        model="anthropic/claude-sonnet-4-20250514",
        mcp_servers=["windsornguyen/open-meteo-mcp"],
        max_steps=5,
    )
    print(result.final_output)

asyncio.run(main())
```
Parameters
Show Core
The user’s prompt or a list of messages. Use a string for single-turn requests; use a message list for multi-turn conversations.
Model(s) to use. Format: provider/model-name (e.g., openai/gpt-4o, anthropic/claude-sonnet-4-20250514). Pass a list for routing or fallback behavior.

System prompt that defines the agent’s behavior and personality.
Existing conversation history. Use with result.to_input_list() for multi-turn conversations.

Show Tools & MCP
Local Python/TS functions the model can call. Schema extracted automatically from type hints and docstrings. See Tools.
Credentials for MCP server authentication.
Control tool usage:
"auto"— Model decides (default)"none"— Disable tools"required"— Force tool use{"type": "function", "function": {"name": "..."}}— Force specific tool
Show Model Parameters
Sampling temperature (0–2). Higher values increase randomness. Default varies by model.
Maximum tokens in the response.
Nucleus sampling threshold (0–1). Alternative to temperature.
Penalize repeated tokens based on frequency (-2.0 to 2.0).
Penalize tokens that have appeared at all (-2.0 to 2.0).
Adjust likelihood of specific tokens. Maps token IDs to bias values (-100 to 100).
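The sampling parameters above follow standard chat-completion conventions. Example values (illustrative, not recommendations; the token ID in logit_bias is made up):

```python
# Example values for the sampling parameters described above.
sampling = {
    "temperature": 0.7,          # 0-2: higher = more random
    "top_p": 0.9,                # nucleus sampling threshold, 0-1
    "max_tokens": 512,           # cap on response length
    "frequency_penalty": 0.5,    # -2.0 to 2.0: penalize frequent repeats
    "presence_penalty": 0.0,     # -2.0 to 2.0: penalize any reappearance
    "logit_bias": {1734: -100},  # token ID -> bias in [-100, 100]
}
```

Assuming the runner forwards these as keyword arguments, they could be passed as `runner.run(..., **sampling)`.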
Show Output Control
Enforce structured output. Pass a Pydantic model or JSON schema. See Structured Outputs.
Include model’s intent analysis in result.
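When structured output is enforced, the model's reply matches the schema and can be parsed directly. A minimal sketch with a hand-written JSON schema; the field names and sample reply are illustrative:

```python
import json

# A JSON schema like this (or an equivalent Pydantic model) can be passed
# to enforce structured output; field names here are illustrative.
weather_schema = {
    "type": "object",
    "properties": {
        "temperature": {"type": "number"},
        "conditions": {"type": "string"},
    },
    "required": ["temperature", "conditions"],
}

# With the schema enforced, the final output parses cleanly.
raw = '{"temperature": 18, "conditions": "clear"}'  # stand-in for result.final_output
data = json.loads(raw)
print(data["conditions"])  # clear
```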
Show Execution
Show Advanced
Runtime policies for dynamic model selection or behavior modification.
Attributes for agent routing and selection. Maps attribute names to float values.
Per-model attribute overrides. Maps model names to attribute dictionaries.
Restrict which models the agent can use.
Enforce strict model validation.
Input/output guardrail configurations.
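The routing-related options above take plain dictionaries. The shapes below are inferred from the descriptions ("attribute names to float values", "model names to attribute dictionaries"); the variable names are illustrative, not confirmed SDK parameter names:

```python
# Attribute weights for agent routing: attribute name -> float.
agent_attributes = {"accuracy": 0.9, "speed": 0.4}

# Per-model overrides: model name -> attribute dict.
model_attributes = {
    "openai/gpt-4o": {"speed": 0.8},
    "anthropic/claude-sonnet-4-20250514": {"accuracy": 0.95},
}
```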
Show Debugging
Return Value
Response object returned by runner.run().

Show Properties
The final text response from the agent.
Results from local tool executions. Each contains name, result, step, and optionally error.

Results from MCP server tool calls.
Names of tools that were invoked during the run.
Number of agentic loop iterations used.
Full conversation history including tool calls. Useful for debugging or continuing conversations.
Model’s intent analysis (only present if return_intent=true).

Alias for final_output.

Alias for final_output.

Show Methods
Returns a copy of the conversation history for use in follow-up runs. Enables multi-turn conversations.
Multi-turn Chat
```python
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)
    messages: list[dict] = []

    while True:
        user_input = input("You: ").strip()
        if not user_input:
            break

        messages.append({"role": "user", "content": user_input})
        result = await runner.run(
            model="openai/gpt-4o",
            messages=messages,
        )
        messages = result.to_input_list()
        print(f"Assistant: {result.final_output}\n")

asyncio.run(main())
```
Example Response
```json
{
  "final_output": "The weather in Tokyo is currently 18°C with clear skies.",
  "tool_results": [],
  "mcp_results": [
    {
      "name": "get_current_weather",
      "result": {"temperature": 18, "conditions": "clear"},
      "server": "windsornguyen/open-meteo-mcp"
    }
  ],
  "tools_called": ["get_current_weather"],
  "steps_used": 2,
  "messages": [...]
}
```
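The fields above can be consumed programmatically. A sketch using a plain dict mirroring the response shape (the real return value is a response object exposing these fields as attributes, e.g. result.mcp_results):

```python
# Plain-dict stand-in for the example response shown above.
response = {
    "final_output": "The weather in Tokyo is currently 18°C with clear skies.",
    "tool_results": [],
    "mcp_results": [
        {
            "name": "get_current_weather",
            "result": {"temperature": 18, "conditions": "clear"},
            "server": "windsornguyen/open-meteo-mcp",
        }
    ],
    "tools_called": ["get_current_weather"],
    "steps_used": 2,
}

# Inspect which MCP tools ran and what they returned.
for call in response["mcp_results"]:
    print(f"{call['server']} -> {call['name']}: {call['result']}")
```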