Policies let you inject logic at each step of agent execution. Add instructions, modify behavior, enforce constraints—all based on runtime context like step count, previous outputs, or external state.

Basic Policy

A policy is a function that receives context and returns modifications:
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner
from dedalus_labs.utils.stream import stream_async
from dotenv import load_dotenv

load_dotenv()

def policy(ctx: dict) -> dict:
    step = ctx.get("step", 1)
    
    if step >= 3:
        # From step 3 onward, tell the model to wrap up and cap the run
        return {
            "message_prepend": [
                {"role": "system", "content": "Provide your final answer now."}
            ],
            "max_steps": 4
        }
    
    return {}

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = runner.run(
        input="Research the history of the internet and summarize key milestones",
        model="openai/gpt-4o-mini",
        mcp_servers=["tsion/brave-search-mcp"],
        stream=True,
        policy=policy
    )

    await stream_async(result)

if __name__ == "__main__":
    asyncio.run(main())

Policy Context

The ctx dict contains:
Field          Type   Description
step           int    Current execution step (1-indexed)
messages       list   Conversation history so far
tools_called   list   Tools invoked in previous steps
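
For example, a read-only policy can inspect these fields for logging without changing behavior. A minimal sketch, using only the field names listed above:

def logging_policy(ctx: dict) -> dict:
    # Inspect runtime context without modifying execution
    step = ctx.get("step", 1)
    tools_called = ctx.get("tools_called", [])
    print(f"Step {step}: {len(tools_called)} tool call(s) so far")
    return {}  # no modifications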

Policy Returns

Policies can return:
Field                              Effect
message_prepend / messagePrepend   Messages added before the next model call
message_append / messageAppend     Messages added after the conversation
max_steps / maxSteps               Override the maximum step count
stop                               Boolean to halt execution early
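
These fields can be combined. The sketch below appends a wrap-up instruction and halts the run once three tools have been invoked; the exact shape of tools_called entries is not specified here, so they are only counted:

def wrap_up_policy(ctx: dict) -> dict:
    # Stop early once three tool calls have accumulated across steps
    if len(ctx.get("tools_called", [])) >= 3:
        return {
            "message_append": [
                {"role": "system", "content": "Summarize your findings and finish."}
            ],
            "stop": True,  # halt execution early
        }
    return {}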

Use Cases

  • Rate limiting: Track API calls across steps and pause when limits are approached.
  • Guardrails: Check outputs for policy violations and inject correction prompts (see the sketch below).
  • Dynamic instructions: Change behavior based on intermediate results.
  • Cost control: Stop execution after a certain number of expensive operations.
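
As an illustration of the guardrail pattern, the policy below scans prior messages for banned terms and injects a correction prompt before the next model call. The banned terms are illustrative, and messages are assumed to be role/content dicts as in the examples above:

BANNED_TERMS = ["internal use only", "do not distribute"]  # illustrative

def guardrail_policy(ctx: dict) -> dict:
    # Look for banned phrases in the conversation so far
    for message in ctx.get("messages", []):
        content = str(message.get("content", "")) if isinstance(message, dict) else str(message)
        if any(term in content.lower() for term in BANNED_TERMS):
            return {
                "message_prepend": [
                    {"role": "system", "content": "Remove any restricted material and rephrase your previous answer."}
                ]
            }
    return {}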

Tool Event Callbacks

Monitor tool execution with on_tool_event:
import json

def on_tool(evt: dict) -> None:
    print(f"Tool called: {json.dumps(evt)}")

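# Assumes calculate_shipping is a user-defined tool function (see Tools) and policy is the policy defined earlier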
result = runner.run(
    input="Calculate shipping costs for a 5kg package to London",
    model="openai/gpt-4o-mini",
    tools=[calculate_shipping],
    on_tool_event=on_tool,
    policy=policy
)
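
Callbacks can also keep state across a run, for example to audit tool usage. The event payload is only documented here as a dict, so this sketch stores events verbatim rather than assuming specific fields:

tool_events: list[dict] = []

def record_tool_event(evt: dict) -> None:
    # Append each event for later inspection; makes no assumptions about its contents
    tool_events.append(evt)
    print(f"Recorded {len(tool_events)} tool event(s)")

Pass record_tool_event via on_tool_event just as on_tool is passed above.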

Next Steps

  • Tools — Define the tools policies can control
  • Examples — See policies in action