A tool is a function you expose to a language model. The model can’t execute code itself — it can only request that you call a function by returning a structured tool call with a name and arguments. Your code executes the function and sends the result back. The model then uses that result to form its final response.

The manual way

Under the hood, tool calling is a multi-turn conversation:
  1. You describe available tools as JSON schemas
  2. The model returns a tool call instead of text
  3. You execute the function and send the result back
  4. The model responds with text (or more tool calls)
import json
from dedalus_labs import Dedalus

client = Dedalus()

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

# 1. Describe the tool as a JSON schema
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}]

# 2. Send the prompt with tool definitions
response = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
)

# 3. The model returns a tool call, not text
msg = response.choices[0].message
tool_call = msg.tool_calls[0]
args = json.loads(tool_call.function.arguments or "{}")
result = get_weather(
    city=args.get("city", "Paris"),
    units=args.get("units", "celsius"),
)

# 4. Send the result back for a final answer
final = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[
        {"role": "user", "content": "Weather in Paris?"},
        msg,
        {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)},
    ],
    tools=tools,
)

print(final.choices[0].message.content)
# "The current weather in Paris is 22°C and sunny."
That’s a lot of plumbing for one function. You’re writing JSON schemas by hand, parsing arguments, dispatching to the right function, serializing results, and managing the conversation loop. For a handful of tools it’s fine. For ten, it’s a maintenance tax.

Automating schema generation

Both SDKs include utilities that derive JSON schemas from your function signatures, eliminating hand-written JSON:
# inspect.signature + Pydantic → full JSON schema
from dedalus_labs.lib.utils._schemas import to_schema

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

schema = to_schema(get_weather)
# {
#   "type": "function",
#   "function": {
#     "name": "get_weather",
#     "description": "Get current weather for a city.",
#     "parameters": { "properties": { "city": { "type": "string" }, ... } }
#   }
# }
This saves the schema boilerplate, but you still have to manage the dispatch loop, argument parsing, conversation history, and multi-step execution yourself.
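The routing step alone, matching a tool call's name to a function and parsing its JSON arguments, looks something like this (a minimal sketch, assuming tool calls arrive in the OpenAI-style dict shape shown above; `dispatch` and `registry` are illustrative names, not SDK API):

```python
import json

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

def dispatch(tool_call: dict, registry: dict):
    """Route a tool call to the registered function and return its result."""
    fn = registry[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"] or "{}")
    return fn(**args)

registry = {"get_weather": get_weather}
call = {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
print(dispatch(call, registry))
# {'temp': 22, 'conditions': 'sunny'}
```

Multiply that by error handling, conversation history, and multi-step loops, and the plumbing adds up.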

DedalusRunner: tools without the plumbing

DedalusRunner handles everything above automatically. Pass raw functions — it extracts schemas, dispatches tool calls, manages the conversation loop, and keeps calling the model until the task is done.
import asyncio
from dedalus_labs import AsyncDedalus, DedalusRunner

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input="What's the weather in Paris?",
        model="openai/gpt-4.1",
        tools=[get_weather],
    )
    print(result.final_output)

asyncio.run(main())
Five lines instead of thirty. The Runner introspects each function’s signature, builds JSON schemas, sends them to the API, executes tool calls when the model requests them, and loops until the model produces a final text response. See the Runner reference for the full parameter set.

Best practices

  • Type your parameters and returns — the SDK uses them to generate accurate schemas. In Python, Pydantic extracts rich types. In TypeScript, parameter names are extracted at runtime (types are erased), so use Zod for richer schemas.
  • Write docstrings — the model reads them to decide when and how to call the tool.
  • Use clear names: get_weather tells the model more than do_thing.
  • Keep the count low — models perform better with fewer than 20 tools. Combine functions that are always called in sequence.
# Good: typed, documented, clear name
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city. Returns temperature and conditions."""
    return {"temp": 22, "conditions": "sunny"}

# Bad: no types, no docs, unclear name
def do_thing(x):
    return some_api_call(x)
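The "typed" half of that advice matters because schemas are derived mechanically from the signature; an untyped parameter gives the model nothing to go on. A rough illustration of the kind of derivation involved (not the SDK's actual implementation, which uses Pydantic for richer types):

```python
import inspect

def params_schema(fn):
    """Build a minimal JSON-schema 'parameters' object from a signature (illustrative)."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        # An unannotated parameter falls back to a vague "string" type.
        props[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

print(params_schema(get_weather))
# {'type': 'object', 'properties': {'city': {'type': 'string'}, 'units': {'type': 'string'}}, 'required': ['city']}
```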

Async tools

Tools can be async. The Runner awaits them automatically:
async def fetch_user(user_id: int) -> dict:
    """Fetch user profile from database."""
    # `db` is an application-provided async connection pool
    async with db.connection() as conn:
        return await conn.fetchone("SELECT * FROM users WHERE id = $1", user_id)
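Supporting both sync and async tools comes down to inspecting the tool's return value and awaiting it when needed. A runner can handle both with a pattern like this (illustrative only, not DedalusRunner's actual source):

```python
import asyncio
import inspect

async def call_tool(fn, **kwargs):
    """Invoke a tool, awaiting the result if the tool is async."""
    result = fn(**kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result

def double(x: int) -> int:
    """A sync tool."""
    return x * 2

async def fetch_greeting(name: str) -> str:
    """An async tool."""
    await asyncio.sleep(0)
    return f"hello, {name}"

async def main():
    print(await call_tool(double, x=3))               # 6
    print(await call_tool(fetch_greeting, name="Ada")) # hello, Ada

asyncio.run(main())
```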

Model selection

openai/gpt-5.2 and openai/gpt-4.1 handle complex tool chains well. Older or smaller models may struggle with multi-step reasoning.

Next steps

Runner Reference

Full DedalusRunner.run() parameter reference

MCP Servers

Hosted tools for external capabilities

Structured Outputs

Validate and parse JSON into schemas

Use Cases

End-to-end agent patterns
Last modified on April 17, 2026