A tool is a function you expose to a language model. Concretely, you describe the function with a schema: what inputs it takes and what it outputs. Tool calling is useful because language models cannot execute code themselves. They can only specify which function should be invoked and with what arguments.

The manual way

Under the hood, tool calling is a three-step process.
  1. Describe the structure of the tool as a JSON schema and pass it to the model
  2. The model fills in the JSON schema and outputs it for the application to parse
  3. The application executes the tool call and sends the result back to the model

1. Describe the tool schema

Python
import json
from dedalus_labs import Dedalus

client = Dedalus()

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}  # Toy example

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }
]

2. Give the tool schema to the model

Python
response = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,  # From step 1
)

# Process what the model responds with!
msg = response.choices[0].message
tool_call = msg.tool_calls[0]
args = json.loads(tool_call.function.arguments)
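Note that the model is not obliged to request a tool: when it answers in plain text, `msg.tool_calls` is empty, so production code should check before indexing. A defensive sketch (the `parse_tool_call` helper is our own, not part of the SDK):

```python
import json

def parse_tool_call(msg):
    """Return (name, args) for the first tool call the model requested,
    or None when it answered in plain text. (A hypothetical helper.)"""
    if not getattr(msg, "tool_calls", None):
        return None
    call = msg.tool_calls[0]
    return call.function.name, json.loads(call.function.arguments)
```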

3. Execute the tool call

Python
result = get_weather(args["city"], args.get("units", "celsius"))

final = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[
        {"role": "user", "content": "Weather in Paris?"},
        msg,
        {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)},
    ],
    tools=tools,
)

print(final.choices[0].message.content)
Putting it all together, a simple tool call looks like this:
Python
import json
from dedalus_labs import Dedalus

client = Dedalus()

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
)

msg = response.choices[0].message
tool_call = msg.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_weather(args["city"], args.get("units", "celsius"))

final = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[
        {"role": "user", "content": "Weather in Paris?"},
        msg,
        {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)},
    ],
    tools=tools,
)

print(final.choices[0].message.content)
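The same three steps generalize to a loop: keep calling the model and executing the tools it requests until it answers in plain text. A minimal sketch of that loop (the `run_tool_loop` helper and its `registry` argument are our own illustration, not part of the SDK):

```python
import json

def run_tool_loop(client, model, messages, tools, registry, max_turns=8):
    """Drive the request/tool/response loop until the model stops
    requesting tools. `registry` maps tool names to Python callables."""
    for _ in range(max_turns):
        msg = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        ).choices[0].message
        if not msg.tool_calls:          # plain-text answer: we're done
            return msg.content
        messages.append(msg)            # keep the assistant turn in history
        for call in msg.tool_calls:     # a single turn may request several tools
            args = json.loads(call.function.arguments)
            result = registry[call.function.name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("model kept requesting tools; giving up")
```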
That’s a lot of work! You are hand-writing schemas, parsing arguments, dispatching tool calls, and maintaining the request/response loop yourself.

The Dedalus Way

The DedalusRunner automates tool calling: it handles schema extraction, tool dispatch, conversation looping, and final-response handling. All you have to do is pass your function to the tools parameter!
Python
from dedalus_labs import Dedalus, DedalusRunner

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

client = Dedalus()
runner = DedalusRunner(client)

result = runner.run(
    input="What's the weather in Paris?",
    model="openai/gpt-4.1",
    tools=[get_weather],
)

print(result.final_output)
See Response Schemas for the full ChatCompletion and RunResult shapes, including tool_calls fields.

Writing good tools

Type your functions. Type hints become the JSON schema the model sees: city: str becomes {"type": "string"}. The more specific the hints, the better the model fills them in.

Write docstrings. The Runner uses your docstring as the tool’s description. The model reads it to decide when to call the function.

Use descriptive names. The model picks which tool to call by name. get_weather beats do_stuff.

Keep tool counts low. Tool schemas take up space in the context window. Minimize the tools you pass for a given task.
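The first two tips are exactly what lets a runner build schemas for you. Below is a much-simplified sketch of how type hints and docstrings can be turned into the JSON schema from the manual example; real extractors handle far more types, and this is illustrative, not the Runner's actual code:

```python
import inspect

# Minimal mapping from Python annotations to JSON schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def extract_schema(fn):
    """Build a tool schema from a function's signature and docstring."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the model must supply it
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {
                "type": "object",
                "properties": props,
                "required": required,
            },
        },
    }

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "conditions": "sunny"}

schema = extract_schema(get_weather)
```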

Read more

Dedalus Runner

Learn more about the DedalusRunner

MCP Servers

Learn how to connect MCP servers to your Dedalus models

Structured Outputs

Guarantee that your model outputs the desired schema every time

Use Cases

End-to-end examples for inspiration
Last modified on April 9, 2026