This page documents all response schemas returned by the Dedalus API. All responses follow OpenAI-compatible formats.

Dedalus Runner

RunResult
object
Response object returned by the DedalusRunner for non-streaming tool execution runs.
Example
from dedalus_labs import Dedalus, DedalusRunner

client = Dedalus(api_key="YOUR_API_KEY")
runner = DedalusRunner(client)

def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

result = runner.run(
    input="What's the weather like in San Francisco?",
    tools=[get_weather],
    model="openai/gpt-5-nano",
    max_steps=5
)

# Access result properties
print(result.final_output)   # "The weather in San Francisco is sunny and 72°F"
print(result.steps_used)     # e.g., 2
print(result.tools_called)   # ["get_weather"]
print(result.tool_results)   # [{"name": "get_weather", "result": "The weather...", "step": 1}]
Accessing Message History
import json

# Print the full conversation history
for msg in result.messages:
    role = msg.get("role")
    content = msg.get("content") or ""  # assistant tool-call messages may carry null content
    
    if role == "user":
        print(f"User: {content}")
    elif role == "assistant":
        if msg.get("tool_calls"):
            tools = [tc["function"]["name"] for tc in msg["tool_calls"]]
            print(f"Assistant: [calling {', '.join(tools)}]")
        else:
            print(f"Assistant: {content}")
    elif role == "tool":
        print(f"Tool Result: {content[:100]}...")

# Store message history to JSON for logging/debugging
with open("conversation_log.json", "w") as f:
    json.dump(result.messages, f, indent=2)

# Continue the conversation with message history
follow_up = runner.run(
    messages=result.to_input_list(),  # Pass previous conversation
    input="What about New York?",      # Add new user message
    tools=[get_weather],
    model="openai/gpt-5-nano"
)
Example Response
{
  "final_output": "The weather in San Francisco is sunny and 72°F",
  "tool_results": [
    {
      "name": "get_weather",
      "result": "The weather in San Francisco is sunny and 72°F",
      "step": 1
    }
  ],
  "steps_used": 2,
  "tools_called": ["get_weather"],
  "messages": [
    {"role": "user", "content": "What's the weather like in San Francisco?"},
    {"role": "assistant", "tool_calls": [{"id": "call_abc123", "type": "function", "function": {"name": "get_weather", "arguments": "{\"location\": \"San Francisco\"}"}}]},
    {"role": "tool", "tool_call_id": "call_abc123", "content": "The weather in San Francisco is sunny and 72°F"},
    {"role": "assistant", "content": "The weather in San Francisco is sunny and 72°F"}
  ],
  "intents": null
}

Chat Completions

ChatCompletion
object
The complete response object for non-streaming chat completions.
Example
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "openai/gpt-5-nano",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 12,
    "total_tokens": 25
  }
}
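The JSON above mirrors what the SDK returns as attributes. Below is a minimal request sketch, assuming the Dedalus client exposes the OpenAI-compatible chat.completions.create method (the method name is an assumption based on the compatibility note at the top of this page; check your installed SDK):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

# Assumed OpenAI-compatible surface
completion = client.chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)

# choices is a list; non-streaming requests return the full assistant message at once
print(completion.choices[0].message.content)
print(completion.choices[0].finish_reason)   # e.g., "stop"
print(completion.usage.total_tokens)         # prompt_tokens + completion_tokens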

ChatCompletionChunk
object
Incremental response chunks returned when stream=true is set on a chat completion request.
Example
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion.chunk",
  "created": 1677652288,
  "model": "openai/gpt-5-nano",
  "choices": [
    {
      "index": 0,
      "delta": {
        "content": "Hello"
      },
      "finish_reason": null
    }
  ]
}
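Each chunk carries only the newly generated tokens in delta; the final chunk sets finish_reason. A minimal consumption sketch, again assuming an OpenAI-compatible chat.completions.create method with stream=True (assumed):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

for chunk in stream:
    # Some chunks carry no content (role-only first chunk, final finish_reason chunk)
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()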

Embeddings

CreateEmbeddingResponse
object
Response object for embedding creation requests.
Example
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        -0.0028842222
      ],
      "index": 0
    }
  ],
  "model": "openai/text-embedding-3-small",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
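data contains one embedding per input, in request order; index ties each vector back to its input. A minimal sketch, assuming an OpenAI-compatible embeddings.create method (assumed):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

response = client.embeddings.create(
    model="openai/text-embedding-3-small",
    input=["Hello world"],
)

vector = response.data[0].embedding  # list of floats
print(len(vector))   # embedding dimensionality
print(vector[:3])    # first few components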

Models

ListModelsResponse
object
Response object for listing available models.
Example
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-5-nano",
      "object": "model",
      "created": 1687882411,
      "owned_by": "openai"
    },
    {
      "id": "anthropic/claude-3-5-sonnet",
      "object": "model",
      "created": 1686935002,
      "owned_by": "anthropic"
    }
  ]
}
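Model IDs are namespaced by provider (openai/..., anthropic/...), and the id value is what you pass as model in other requests. A minimal listing sketch, assuming an OpenAI-compatible models.list method (assumed):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

models = client.models.list()
for model in models.data:
    print(f"{model.id} (owned by {model.owned_by})")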

Images

ImagesResponse
object
Response object for image generation requests.
Example
{
  "created": 1677652288,
  "data": [
    {
      "url": "https://images.example.com/abc123.png",
      "revised_prompt": "A cute baby sea otter floating on its back in calm blue water"
    }
  ]
}
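Each entry in data describes one generated image; revised_prompt shows any rewriting applied to your prompt before generation. A minimal sketch, assuming an OpenAI-compatible images.generate method (the method name and model ID below are assumptions, shown for illustration only):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

result = client.images.generate(
    model="openai/dall-e-3",  # assumed model ID, for illustration only
    prompt="A cute baby sea otter floating on its back",
)

image = result.data[0]
print(image.url)             # hosted image URL
print(image.revised_prompt)  # prompt as rewritten by the model, if any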

Audio

TranscriptionResponse
object
Response object for audio transcription requests.
Example
{
  "text": "Hello, this is a test of audio transcription."
}
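A minimal sketch, assuming an OpenAI-compatible audio.transcriptions.create method that accepts a binary file handle (the method name and model ID are assumptions):
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

with open("meeting.mp3", "rb") as audio_file:  # hypothetical local file
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-1",  # assumed model ID, for illustration only
        file=audio_file,
    )

print(transcription.text)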

TranslationResponse
object
Response object for audio translation requests (always translates to English).
Example
{
  "text": "Hello, this is a test of audio translation."
}
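Usage mirrors transcription, swapping in an (assumed) audio.translations.create method; the returned text is always English, regardless of the source language:
Example Request
from dedalus_labs import Dedalus

client = Dedalus(api_key="YOUR_API_KEY")

with open("interview_fr.mp3", "rb") as audio_file:  # hypothetical local file
    translation = client.audio.translations.create(
        model="openai/whisper-1",  # assumed model ID, for illustration only
        file=audio_file,
    )

print(translation.text)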

Errors

ErrorResponse
object
All endpoints may return errors with this structure.
Example
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
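How the SDK surfaces these errors (exception classes, status codes) is not covered on this page, so the sketch below only shows how to read the documented error structure from a raw JSON body:
Handling Errors
import json

# Hypothetical raw error body, matching the structure documented above
body = '{"error": {"message": "Invalid API key provided", "type": "authentication_error", "code": "invalid_api_key"}}'

error = json.loads(body)["error"]
if error["type"] == "authentication_error":
    print(f"Authentication failed ({error['code']}): {error['message']}")
else:
    print(f"{error['type']}: {error['message']}")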