LLMs generate text. Applications need data structures. Structured outputs bridge this gap—define a schema (Pydantic in Python, Zod or Effect Schema in TypeScript), and the Dedalus SDK ensures responses conform with full type safety. This is essential for building reliable applications. Instead of parsing free-form text and hoping for the best, you get validated objects that your code can trust.
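The difference can be sketched with plain Python. Hand-rolled parsing needs a defensive check for every field, while a schema declares the shape once and validates in a single step (the dataclass here is a stdlib stand-in for a real Pydantic model, not part of the SDK):

```python
import json
from dataclasses import dataclass

# Free-form text from a model: you hope it is valid JSON in the right shape.
raw = '{"name": "Warriors vs Lakers", "city": "San Francisco", "date": "2026-01-15"}'

# Hand-rolled parsing: every field needs its own defensive check.
data = json.loads(raw)
for key in ("name", "city", "date"):
    if not isinstance(data.get(key), str):
        raise ValueError(f"missing or invalid field: {key}")

# Schema-style parsing: one class declares the shape, one call validates it.
@dataclass
class Event:
    name: str
    city: str
    date: str

event = Event(**data)  # raises TypeError on missing or extra keys
print(event.name)      # attribute access, not dict lookups
```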

Extract typed data

Define a schema. Call .parse(). Get validated objects.
import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": "Return 3 upcoming basketball events near San Francisco as JSON.",
        }],
        response_format=EventsResponse,
    )

    parsed: EventsResponse = completion.choices[0].message.parsed
    print(parsed)

if __name__ == "__main__":
    asyncio.run(main())

Advanced

This section is a reference you can skim and come back to. It’s organized as a progression:
  1. Client .parse() (non-streaming, typed output)
  2. Client .stream() (streaming, typed output)
  3. Runner response_format (typed output inside an agent/tool loop)
  4. Schemas & patterns (optional fields, nested models, enums/unions)
  5. Structured tool calls (when you need deterministic tool calling)

Client API (reference)

The client provides three methods for structured outputs:
  • .parse() - Non-streaming with type-safe schemas
  • .stream() - Streaming with type-safe schemas (context manager)
  • .create() - Dict-based schemas only

TypeScript setup

TypeScript schema helpers are optional peer dependencies. Install the validator you want to use:
bun install zod
# or
bun install effect

.parse() (non-streaming)

This is the same pattern as the progressive example above, shown again in a more “API-reference” style.
import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[
            {
                "role": "user",
                "content": (
                    "Return 3 upcoming basketball events near San Francisco as JSON. "
                    "Use ISO dates (YYYY-MM-DD)."
                ),
            }
        ],
        response_format=EventsResponse,
        mcp_servers=["windsor/ticketmaster-mcp"],  # Discover events via Ticketmaster
    )

    parsed = completion.choices[0].message.parsed
    print(parsed)

if __name__ == "__main__":
    asyncio.run(main())

import Dedalus from 'dedalus-labs';
import { effectResponseFormat } from 'dedalus-labs/helpers/effect';
import * as Schema from 'effect/Schema';

const client = new Dedalus();

const Event = Schema.Struct({
  name: Schema.String,
  city: Schema.String,
  date: Schema.String,
});

async function main() {
  const completion = await client.chat.completions.parse({
    model: 'openai/gpt-5.2',
    messages: [
      {
        role: 'user',
        content:
          'Return 3 upcoming basketball events near San Francisco as JSON. Use ISO dates (YYYY-MM-DD).',
      },
    ],
    response_format: effectResponseFormat(
      Schema.Struct({ query: Schema.String, events: Schema.Array(Event) }),
      'events_response',
    ),
    mcpServers: ['windsor/ticketmaster-mcp'],
  });

  console.log(completion.choices[0]?.message.parsed);
}

main();

.stream() (streaming)

Use this when you want streaming UX and a typed final result.
Streaming helpers differ by language:
  • Python: use .stream(...) as a context manager and read typed stream events.
  • TypeScript: stream tokens with create({ stream: true, ... }), then validate the final JSON with Zod/Effect.
import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    # Use context manager for streaming
    async with client.chat.completions.stream(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": (
                "Return 3 upcoming basketball events near San Francisco as JSON. "
                "Use ISO dates (YYYY-MM-DD)."
            ),
        }],
        response_format=EventsResponse,
        mcp_servers=["windsor/ticketmaster-mcp"],
    ) as stream:
        # Process events as they arrive
        async for event in stream:
            if event.type == "content.delta":
                print(event.delta, end="", flush=True)
            elif event.type == "content.done":
                # Snapshot available at content.done (typed)
                print(f"\nParsed events: {len(event.parsed.events)}")

        # Get final parsed result
        final = await stream.get_final_completion()
        parsed = final.choices[0].message.parsed
        print(f"\nFinal events: {len(parsed.events)}")

if __name__ == "__main__":
    asyncio.run(main())
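The TypeScript path described above (stream tokens, then validate the final JSON) reduces to accumulating deltas and parsing once at the end. A minimal stdlib sketch of that pattern, with hypothetical chunk data standing in for a real stream:

```python
import json

# Hypothetical content deltas, as they might arrive from a streamed completion.
chunks = [
    '{"query": "basketball sf", ',
    '"events": [{"name": "Warriors vs Lakers", ',
    '"city": "San Francisco", "date": "2026-01-15"}]}',
]

buffer = ""
for delta in chunks:
    buffer += delta  # render each delta to the UI as it arrives
    # (the partial buffer is usually not valid JSON yet; don't parse mid-stream)

# Validate only the final accumulated text.
parsed = json.loads(buffer)
assert isinstance(parsed["events"], list)
print(parsed["events"][0]["name"])
```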

Optional Fields

Use Optional[T] in Python, .nullable() in Zod, or Schema.NullOr(...) in Effect for nullable fields:
With OpenAI strict mode, every field must be required. Model “optional” values as nullable.
import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str
    price_usd: int | None = None  # model unknown values as null

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": (
                "Return 3 upcoming basketball events near San Francisco as JSON. "
                "Include price_usd if known; otherwise null. Use ISO dates (YYYY-MM-DD)."
            ),
        }],
        response_format=EventsResponse,
        mcp_servers=["windsor/ticketmaster-mcp"],
    )

    parsed = completion.choices[0].message.parsed
    for e in parsed.events:
        print(e.name, e.price_usd)

if __name__ == "__main__":
    asyncio.run(main())

import * as Schema from 'effect/Schema';

const Event = Schema.Struct({
  name: Schema.String,
  city: Schema.String,
  date: Schema.String,
  price_usd: Schema.NullOr(Schema.Number),
});

const EventsResponse = Schema.Struct({
  query: Schema.String,
  events: Schema.Array(Event),
});

Avoid Schema.optional(...) for structured outputs—use Schema.NullOr(...) instead.
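Under strict mode, a "nullable" field like price_usd above ends up required in the generated JSON Schema, with null allowed in its type. A hand-written sketch of roughly what the Event model converts to (the exact output depends on the schema converter):

```python
# Sketch of the strict-mode JSON Schema for the Event model above.
event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "city": {"type": "string"},
        "date": {"type": "string"},
        # Nullable, not omitted: strict mode forbids truly optional keys.
        "price_usd": {"type": ["integer", "null"]},
    },
    # Every property is listed as required, including the nullable one.
    "required": ["name", "city", "date", "price_usd"],
    "additionalProperties": False,
}

# The defining invariant of strict mode: required covers all properties.
assert set(event_schema["required"]) == set(event_schema["properties"])
```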

Schemas & patterns

Nested Models

import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Venue(BaseModel):
    name: str
    address: str | None = None
    city: str

class Event(BaseModel):
    name: str
    date: str
    venue: Venue

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": (
                "Return 3 upcoming basketball events near San Francisco as JSON. "
                "Each event must include a nested venue object with name, city, and address (null if unknown). "
                "Use ISO dates (YYYY-MM-DD)."
            )
        }],
        response_format=EventsResponse,
        mcp_servers=["windsor/ticketmaster-mcp"],
    )

    parsed = completion.choices[0].message.parsed
    for e in parsed.events:
        print(e.name, "→", e.venue.name)

if __name__ == "__main__":
    asyncio.run(main())

Structured Tool Calls (advanced)

Define type-safe tools with automatic argument parsing:
import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class SearchEventsArgs(BaseModel):
    city: str
    month: str
    max_results: int = 5

async def main():
    client = AsyncDedalus()

    tools = [
        {
            "type": "function",
            "function": {
                "name": "search_events",
                "description": "Search for events in a city during a month.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "month": {"type": "string", "description": "YYYY-MM"},
                        "max_results": {"type": "integer", "default": 5},
                    },
                    "required": ["city", "month"],
                    "additionalProperties": False,
                },
                "strict": True,
            }
        }
    ]

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": "Call search_events for San Francisco in 2026-01.",
        }],
        tools=tools,
        tool_choice={"type": "tool", "name": "search_events"},
    )

    message = completion.choices[0].message
    if message.tool_calls:
        tool_call = message.tool_calls[0]
        print(f"Tool called: {tool_call.function.name}")
        print(f"Parsed args: {tool_call.function.parsed_arguments}")

if __name__ == "__main__":
    asyncio.run(main())

If you need deterministic tool calling, set tool_choice to one of the object variants:
  • { type: 'auto' } (model decides)
  • { type: 'any' } (require a tool call)
  • { type: 'tool', name: 'search_events' } (require a specific tool)
  • { type: 'none' } (disable tools)
Passing the OpenAI string form (e.g. tool_choice: 'required') will fail schema validation with a 422.
import { effectFunction } from 'dedalus-labs/helpers/effect';
import * as Schema from 'effect/Schema';

const SearchEventsTool = effectFunction({
  name: 'search_events',
  parameters: Schema.Struct({
    city: Schema.String,
    month: Schema.String, // YYYY-MM
    max_results: Schema.NullOr(Schema.Number),
  }),
  description: 'Search for events in a city during a month.',
});

Tool parameters must be an object schema (use Schema.Struct({ ... })).

Enums and Unions

import asyncio
from typing import Literal
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str
    category: Literal["sports", "music", "theater", "other"]
    ticket_status: Literal["available", "sold_out", "unknown"]

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    completion = await client.chat.completions.parse(
        model="openai/gpt-5.2",
        messages=[{
            "role": "user",
            "content": (
                "Return 3 upcoming events near San Francisco as JSON. "
                "Each event must include category (sports/music/theater/other) and ticket_status (available/sold_out/unknown). "
                "Use ISO dates (YYYY-MM-DD)."
            )
        }],
        response_format=EventsResponse,
        mcp_servers=["windsor/ticketmaster-mcp"],
    )

    parsed = completion.choices[0].message.parsed
    for e in parsed.events:
        print(e.name, e.category, e.ticket_status)

if __name__ == "__main__":
    asyncio.run(main())
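A Literal field maps to a JSON Schema enum, and typing.get_args recovers the allowed values at runtime, which is handy when you want to double-check model output by hand. A stdlib sketch:

```python
from typing import Literal, get_args

Category = Literal["sports", "music", "theater", "other"]

# Recover the closed set of allowed values from the type itself.
allowed = get_args(Category)
print(allowed)  # ('sports', 'music', 'theater', 'other')

def check_category(value: str) -> str:
    # Reject anything outside the closed set, as a strict schema would.
    if value not in allowed:
        raise ValueError(f"{value!r} not in {allowed}")
    return value

print(check_category("sports"))
```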

DedalusRunner API

The Runner supports response_format with automatic schema conversion:
import asyncio

from dedalus_labs import AsyncDedalus, DedalusRunner
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

def as_bullets(items: list[str]) -> str:
    """Format items as a bulleted list."""
    return "\n".join(f"• {item}" for item in items)

async def main():
    client = AsyncDedalus()
    runner = DedalusRunner(client)

    result = await runner.run(
        input=(
            "Find me the nearest basketball games in January in San Francisco using Ticketmaster. "
            "Then call as_bullets with a list of items (one per event: name, city, date)."
        ),
        model="anthropic/claude-opus-4-5",
        mcp_servers=["windsor/ticketmaster-mcp"],  # Discover events via Ticketmaster
        tools=[as_bullets],
        response_format=EventsResponse,
        max_steps=5,
    )

    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

.create() vs .parse() vs .stream()

Method     Schema Support        Streaming  Use Case
.create()  Dict only             No         Manual JSON schemas
.parse()   Pydantic/Zod/Effect   No         Type-safe non-streaming
.stream()  Pydantic/Zod/Effect   Yes        Type-safe streaming
.create() expects a plain JSON Schema object. Don’t pass a Pydantic model, Zod schema, or Effect schema directly.
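What .create() expects is the raw JSON Schema wrapper rather than a model class. A sketch of that shape, following the OpenAI-style response_format convention (verify the exact wrapper against the SDK reference):

```python
import json

# Plain-dict response_format for .create(): no Pydantic/Zod/Effect objects inside.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "events_response",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "events": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "city": {"type": "string"},
                            "date": {"type": "string"},
                        },
                        "required": ["name", "city", "date"],
                        "additionalProperties": False,
                    },
                },
            },
            "required": ["query", "events"],
            "additionalProperties": False,
        },
    },
}

# The whole structure must be JSON-serializable, unlike a model class.
json.dumps(response_format)
```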
“Streaming + typed output” is language-dependent:
  • Python: .stream(...) yields typed events and a typed final snapshot.
  • TypeScript: stream tokens and validate the final JSON with Zod/Effect.

Error Handling

import asyncio
from dedalus_labs import AsyncDedalus
from dotenv import load_dotenv
from pydantic import BaseModel

load_dotenv()

class Event(BaseModel):
    name: str
    city: str
    date: str

class EventsResponse(BaseModel):
    query: str
    events: list[Event]

async def main():
    client = AsyncDedalus()

    try:
        completion = await client.chat.completions.parse(
            model="openai/gpt-5.2",
            messages=[{
                "role": "user",
                "content": (
                    "Return 3 upcoming basketball events near San Francisco as JSON. "
                    "Use ISO dates (YYYY-MM-DD)."
                ),
            }],
            response_format=EventsResponse,
        )
        parsed = completion.choices[0].message.parsed
        print(f"Parsed events: {len(parsed.events)}")
    except Exception as e:
        print("Parse failed:", e)

if __name__ == "__main__":
    asyncio.run(main())

Supported Models

The Dedalus SDK’s .parse() and .stream() methods work across all providers. Schema enforcement varies.
Strict Enforcement (CFG-based, schema guarantees):
  • openai/* - Context-free grammar compilation
  • xai/* - Native schema validation
  • fireworks_ai/* - Native schema validation (select models)
  • deepseek/* - Native schema validation (select models)
Best-Effort (schema sent for guidance, no guarantees):
  • 🟡 google/* - Schema forwarded to generationConfig.responseSchema
  • 🟡 anthropic/* - Prompt-based JSON generation (~85-90% success rate)
For google/* and anthropic/* models, always validate parsed output and implement retry logic.
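For best-effort providers, wrap the call in a validate-and-retry loop. A generic stdlib sketch: parse_once is a hypothetical stand-in for your .parse() call plus validation, and the exception types are whatever your schema raises on bad data:

```python
import json

def parse_with_retry(parse_once, max_attempts=3):
    """Call parse_once() until its output validates, or give up."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return parse_once()
        except (json.JSONDecodeError, ValueError) as e:
            last_error = e  # log and retry; optionally tighten the prompt here
    raise RuntimeError(f"validation failed after {max_attempts} attempts") from last_error

# Demo with a stub that fails once, then returns valid data.
attempts = {"n": 0}

def flaky_parse():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ValueError("model returned prose instead of JSON")
    return {"query": "basketball sf", "events": []}

print(parse_with_retry(flaky_parse))
```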

Provider Examples

You can use .parse() and .stream() with models from any provider. In practice, you only change model—everything else stays the same. For a full list of model IDs, see the providers guide.

Quick Reference

Python (Pydantic)

from dedalus_labs import AsyncDedalus
from pydantic import BaseModel

class MyModel(BaseModel):
    field: str

client = AsyncDedalus()
result = await client.chat.completions.parse(
    model="openai/gpt-5.2",
    messages=[...],
    response_format=MyModel,
)
parsed = result.choices[0].message.parsed

TypeScript (Zod)

import Dedalus from 'dedalus-labs';
import { zodResponseFormat } from 'dedalus-labs/helpers/zod';
import { z } from 'zod';

const MySchema = z.object({ field: z.string() });

const client = new Dedalus();
const result = await client.chat.completions.parse({
  model: 'openai/gpt-5.2',
  messages: [...],
  response_format: zodResponseFormat(MySchema, 'my_schema'),
});
const parsed = result.choices[0]?.message.parsed;

TypeScript (Effect Schema)

import Dedalus from 'dedalus-labs';
import { effectResponseFormat } from 'dedalus-labs/helpers/effect';
import * as Schema from 'effect/Schema';

const MySchema = Schema.Struct({ field: Schema.String });

const client = new Dedalus();
const result = await client.chat.completions.parse({
  model: 'openai/gpt-5.2',
  messages: [...],
  response_format: effectResponseFormat(MySchema, 'my_schema'),
});
const parsed = result.choices[0]?.message.parsed;

Zod Helpers

import { zodResponseFormat, zodFunction } from 'dedalus-labs/helpers/zod';

// For response schemas
zodResponseFormat(MyZodSchema, 'schema_name')

// For tool definitions
zodFunction({
  name: 'tool_name',
  description: 'What the tool does',
  parameters: z.object({ ... }),
  function: (args) => { ... },
})

Effect Helpers

import { effectResponseFormat, effectFunction } from 'dedalus-labs/helpers/effect';

// For response schemas
effectResponseFormat(MyEffectSchema, 'schema_name')

// For tool definitions
effectFunction({
  name: 'tool_name',
  description: 'What the tool does',
  parameters: MyEffectParametersSchema,
  function: (args) => { ... },
})
If you still use @effect/schema, schemas from @effect/schema/Schema also work with helpers/effect. You still need to install effect (the Dedalus SDK uses effect/JSONSchema and effect/Schema for conversion + validation). Prefer effect/Schema for new code.

Next steps

  • Stream output: Streaming — Improve UX for long tool/MCP runs
  • Route across models: Handoffs — Use fast/strong models by phase
  • See patterns: Use Cases — Structured extraction workflows
Last modified on February 28, 2026