# API
Source: https://docs.dedaluslabs.ai/api-reference/api
Unified API for chat completions, embeddings, audio, and image generation across multiple AI providers
## Getting Started
Sign up at the [Dedalus Dashboard](https://www.dedaluslabs.ai/dashboard/api-keys) and create an API key.
Install the SDK for your preferred language.
Send a chat completion request using any supported model.
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import DedalusLabs from "dedalus-labs";
const client = new DedalusLabs();
const completion = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import DedalusLabs
client = DedalusLabs()
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
curl -X POST https://api.dedaluslabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $DEDALUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```
## Authentication
All endpoints require a Bearer token or `X-API-Key` header.
Get your key from the [Dedalus Dashboard](https://www.dedaluslabs.ai/dashboard/api-keys).
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
Authorization: Bearer YOUR_API_KEY
```
## Chat & Embeddings
| Method | Path | Description |
| ------ | ---------------------- | -------------------------------------------------------- |
| `POST` | `/v1/chat/completions` | [Create chat completion](/api/v1/create-chat-completion) |
| `POST` | `/v1/embeddings` | [Create embeddings](/api/v1/create-embeddings) |
## Audio
| Method | Path | Description |
| ------ | -------------------------- | ---------------------------------------------------- |
| `POST` | `/v1/audio/speech` | [Create speech](/api/v1/create-speech) |
| `POST` | `/v1/audio/transcriptions` | [Create transcription](/api/v1/create-transcription) |
| `POST` | `/v1/audio/translations` | [Create translation](/api/v1/create-translation) |
## Images & Documents
| Method | Path | Description |
| ------ | ------------------------ | ------------------------------------ |
| `POST` | `/v1/images/generations` | [Create image](/api/v1/create-image) |
| `POST` | `/v1/ocr` | [OCR](/api-reference/ocr) |
## Models
| Method | Path | Description |
| ------ | ------------ | ---------------------------------- |
| `GET` | `/v1/models` | [List models](/api/v1/list-models) |
# DCS Machines API
Source: https://docs.dedaluslabs.ai/api-reference/dcs
Programmatic access to secure, on-demand cloud workspaces for AI agents and developers
## Getting Started
Sign up at the [Dedalus Dashboard](https://www.dedaluslabs.ai/dashboard/api-keys) and create an API key.
Install the SDK for your preferred language.
Create your first cloud workspace with 2 vCPUs, 4 GB RAM, and 20 GB storage.
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from "dedalus";
const client = new Dedalus();
const machine = await client.machines.create({
  vcpu: 2,
  memoryMib: 4096,
  storageGib: 20,
});
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
client = Dedalus()
machine = client.machines.create(
    vcpu=2,
    memory_mib=4096,
    storage_gib=20,
)
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
curl -X POST https://dcs.dedaluslabs.ai/v1/machines \
  -H "Authorization: Bearer $DEDALUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"vcpu": 2, "memory_mib": 4096, "storage_gib": 20}'
```
## Authentication
All endpoints require a Bearer token or `X-API-Key` header, plus an `X-Dedalus-Org-Id` header.
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
Authorization: Bearer YOUR_API_KEY
X-Dedalus-Org-Id: YOUR_ORG_ID
```
## Machines
| Method | Path | Description |
| -------- | ------------------------ | ------------------------------------------------------------------- |
| `POST` | `/v1/machines` | [Create machine](/dcs/api/machine-lifecycle/create-machine) |
| `GET` | `/v1/machines` | [List machines](/dcs/api/machine-lifecycle/list-machines) |
| `GET` | `/v1/machines/:id` | [Get machine](/dcs/api/machine-lifecycle/get-machine) |
| `PATCH` | `/v1/machines/:id` | [Update machine](/dcs/api/machine-lifecycle/update-machine) |
| `DELETE` | `/v1/machines/:id` | [Destroy machine](/dcs/api/machine-lifecycle/destroy-machine) |
| `POST` | `/v1/machines/:id/wake` | [Wake machine](/dcs/api/machine-lifecycle/wake-a-sleeping-machine) |
| `POST` | `/v1/machines/:id/sleep` | [Sleep machine](/dcs/api/machine-lifecycle/sleep-a-running-machine) |
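A machine can be paused and resumed via the `wake` and `sleep` endpoints above. A minimal sketch of assembling those requests, using the base URL and auth headers documented on this page (the `lifecycle_request` helper and the machine id are illustrative, not part of any SDK):

```python
import os

BASE_URL = "https://dcs.dedaluslabs.ai"

def lifecycle_request(machine_id: str, action: str):
    """Build (method, url, headers) for a wake or sleep call.

    The paths and required headers come from the tables above; the
    credentials fall back to placeholders when env vars are unset.
    """
    if action not in ("wake", "sleep"):
        raise ValueError(f"unsupported action: {action}")
    headers = {
        "Authorization": f"Bearer {os.environ.get('DEDALUS_API_KEY', 'YOUR_API_KEY')}",
        "X-Dedalus-Org-Id": os.environ.get("DEDALUS_ORG_ID", "YOUR_ORG_ID"),
    }
    return "POST", f"{BASE_URL}/v1/machines/{machine_id}/{action}", headers

method, url, headers = lifecycle_request("m-123", "wake")
print(method, url)
```

Send the resulting request with any HTTP client; the same header set applies to every endpoint in the tables that follow.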
## SSH
| Method | Path | Description |
| -------- | --------------------------- | ------------------------------------------------------------------- |
| `POST` | `/v1/machines/:id/ssh` | [Create SSH session](/dcs/api/machine-lifecycle/create-ssh-session) |
| `GET` | `/v1/machines/:id/ssh` | [List SSH sessions](/dcs/api/machine-lifecycle/list-ssh-sessions) |
| `DELETE` | `/v1/machines/:id/ssh/:sid` | [Delete SSH session](/dcs/api/machine-lifecycle/delete-ssh-session) |
## Executions
| Method | Path | Description |
| -------- | ---------------------------------- | --------------------------------------------------------------- |
| `POST` | `/v1/machines/:id/executions` | [Create execution](/dcs/api/machine-lifecycle/create-execution) |
| `GET` | `/v1/machines/:id/executions` | [List executions](/dcs/api/machine-lifecycle/list-executions) |
| `DELETE` | `/v1/machines/:id/executions/:eid` | [Delete execution](/dcs/api/machine-lifecycle/delete-execution) |
## Terminals
| Method | Path | Description |
| ------ | ---------------------------------------- | ------------------------------------------------------------------------------------ |
| `POST` | `/v1/machines/:id/terminals` | [Create terminal](/dcs/api/machine-lifecycle/create-terminal) |
| `GET` | `/v1/machines/:id/terminals/:tid/stream` | [Connect WebSocket](/dcs/api/machine-lifecycle/connect-to-terminal-websocket-stream) |
## Previews & Artifacts
| Method | Path | Description |
| ------ | ---------------------------- | ----------------------------------------------------------------- |
| `POST` | `/v1/machines/:id/previews` | [Create preview](/dcs/api/machine-lifecycle/create-preview) |
| `GET` | `/v1/machines/:id/artifacts` | [List artifacts](/dcs/api/machine-lifecycle/list-artifacts) |
| `GET` | `/v1/orgs/:org_id/usage` | [Get org usage](/dcs/api/machine-lifecycle/get-org-machine-usage) |
# OCR
Source: https://docs.dedaluslabs.ai/api-reference/ocr
POST /v1/ocr
Extract text from PDFs and images
## Overview
The OCR endpoint extracts text from documents and images, returning clean markdown. Powered by Mistral's OCR model.
**Supported formats:** PDF, PNG, JPEG, WebP
## Quick Start
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
curl -X POST https://api.dedaluslabs.ai/v1/ocr \
  -H "Authorization: Bearer $DEDALUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-ocr-latest",
    "document": {
      "type": "document_url",
      "document_url": "https://arxiv.org/pdf/1706.03762"
    }
  }'
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
import os
response = httpx.post(
    "https://api.dedaluslabs.ai/v1/ocr",
    headers={"Authorization": f"Bearer {os.environ['DEDALUS_API_KEY']}"},
    json={
        "model": "mistral-ocr-latest",
        "document": {
            "type": "document_url",
            "document_url": "https://arxiv.org/pdf/1706.03762"
        }
    },
    timeout=120.0,
)

for page in response.json()["pages"]:
    print(f"Page {page['index']}:\n{page['markdown'][:200]}...")
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
const response = await fetch("https://api.dedaluslabs.ai/v1/ocr", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.DEDALUS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "mistral-ocr-latest",
    document: {
      type: "document_url",
      document_url: "https://arxiv.org/pdf/1706.03762",
    },
  }),
});

const data = await response.json();
for (const page of data.pages) {
  console.log(`Page ${page.index}:\n${page.markdown.slice(0, 200)}...`);
}
```
For local files, encode the content as a base64 data URI: `data:application/pdf;base64,{base64_data}`
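For example, a local PDF can be turned into such a data URI as follows (a minimal sketch; the `to_data_uri` helper and file path are illustrative):

```python
import base64
from pathlib import Path

def to_data_uri(path: str, mime: str = "application/pdf") -> str:
    """Encode a local file as a base64 data URI for the OCR endpoint."""
    data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{data}"

# The resulting string goes into the request body's document_url field:
# {"document": {"type": "document_url", "document_url": to_data_uri("invoice.pdf")}}
```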
## Response
```json theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "pages": [
    {
      "index": 0,
      "markdown": "# Attention Is All You Need\n\nAshish Vaswani, Noam Shazeer...\n\n# Abstract\n\nThe dominant sequence transduction models..."
    },
    {
      "index": 1,
      "markdown": "## 1 Introduction\n\nRecurrent neural networks..."
    }
  ],
  "model": "mistral-ocr-latest"
}
```
## Use Cases
### Invoice Processing
Extract line items, totals, and dates from invoices for automated bookkeeping.
### Receipt Scanning
Parse receipts for expense tracking—amounts, vendors, dates extracted as structured text.
### Document Digitization
Convert scanned documents to searchable, editable markdown while preserving tables and formatting.
## Parameters
| Parameter | Type | Required | Description |
| ----------------------- | ------ | -------- | ---------------------------------------- |
| `model` | string | No | OCR model. Default: `mistral-ocr-latest` |
| `document.type` | string | Yes | Always `document_url` |
| `document.document_url` | string | Yes | HTTPS URL or data URI |
## Limits
* **Max file size:** 50 MB
* **Max pages:** 1,000 per document
* **Timeout:** 120 seconds
# Response Schemas
Source: https://docs.dedaluslabs.ai/api-reference/schemas
Reference for all API response objects and their structure
This page documents all response schemas returned by the Dedalus API. All responses follow OpenAI-compatible formats.
***
## Dedalus Runner
Response object returned by the `DedalusRunner` for non-streaming tool execution runs.
| Field | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `final_output` | Final text output from the conversation after all tool executions complete |
| `tool_results` | List of all tool execution results from the run |
| `tool_results[].name` | Name of the tool that was executed |
| `tool_results[].result` | The result returned by the tool execution |
| `tool_results[].step` | The step number when this tool was executed |
| `tool_results[].error` | Error message if the tool execution failed |
| `steps_used` | Total number of steps (LLM calls) used during the run |
| `tools_called` | List of tool names that were called during the run |
| `messages` | Full conversation history including system prompts, user messages, assistant responses, and tool calls/results. Useful for debugging, logging, or continuing conversations |
| `intents` | Optional list of detected intents (when `return_intent=true`) |

Two legacy properties alias `final_output` for backward compatibility.

`to_input_list()` returns a copy of the full conversation history (`messages`) for use in follow-up runs. It enables multi-turn conversations by passing the result to subsequent `runner.run()` calls.
```python Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus, DedalusRunner
client = Dedalus(api_key="YOUR_API_KEY")
runner = DedalusRunner(client)
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

result = runner.run(
    input="What's the weather like in San Francisco?",
    tools=[get_weather],
    model="openai/gpt-5-nano",
    max_steps=5,
)
# Access result properties
print(result.final_output) # "The weather in San Francisco is sunny and 72°F"
print(result.steps_used) # e.g., 2
print(result.tools_called) # ["get_weather"]
print(result.tool_results) # [{"name": "get_weather", "result": "The weather...", "step": 1}]
```
```python Accessing Message History theme={"theme":{"light":"github-light","dark":"github-dark"}}
import json
# Print the full conversation history
for msg in result.messages:
    role = msg.get("role")
    content = msg.get("content", "")
    if role == "user":
        print(f"User: {content}")
    elif role == "assistant":
        if msg.get("tool_calls"):
            tools = [tc["function"]["name"] for tc in msg["tool_calls"]]
            print(f"Assistant: [calling {', '.join(tools)}]")
        else:
            print(f"Assistant: {content}")
    elif role == "tool":
        print(f"Tool Result: {content[:100]}...")

# Store message history to JSON for logging/debugging
with open("conversation_log.json", "w") as f:
    json.dump(result.messages, f, indent=2)

# Continue the conversation with message history
follow_up = runner.run(
    messages=result.to_input_list(),  # Pass previous conversation
    input="What about New York?",     # Add new user message
    tools=[get_weather],
    model="openai/gpt-5-nano",
)
```
```json Example Response theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "final_output": "The weather in San Francisco is sunny and 72°F",
  "tool_results": [
    {
      "name": "get_weather",
      "result": "The weather in San Francisco is sunny and 72°F",
      "step": 1
    }
  ],
  "steps_used": 2,
  "tools_called": ["get_weather"],
  "messages": [
    { "role": "user", "content": "What's the weather like in San Francisco?" },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": {
            "name": "get_weather",
            "arguments": "{\"location\": \"San Francisco\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": "The weather in San Francisco is sunny and 72°F"
    },
    {
      "role": "assistant",
      "content": "The weather in San Francisco is sunny and 72°F"
    }
  ],
  "intents": null
}
```
***
## Chat Completions
The complete response object for non-streaming chat completions.
| Field | Description |
| --------------------------------- | ------------------------------------------------------------------------------------------ |
| `id` | Unique identifier for the chat completion |
| `object` | Object type, always `chat.completion` |
| `created` | Unix timestamp (seconds) when the completion was created |
| `model` | The model used for completion (e.g., `openai/gpt-5-nano`) |
| `choices` | List of completion choices |
| `choices[].index` | Index of this choice |
| `choices[].message` | The generated message |
| `choices[].message.role` | Role of the message author (`assistant`, `tool`, etc.) |
| `choices[].message.content` | The content of the message |
| `choices[].message.tool_calls` | Tool calls requested by the model (present when `finish_reason` is `tool_calls`) |
| `tool_calls[].id` | Unique identifier for this tool call, referenced when returning results via `tool_call_id` |
| `tool_calls[].type` | Always `function` |
| `tool_calls[].function.name` | Name of the function the model wants to invoke |
| `tool_calls[].function.arguments` | JSON-encoded string of the arguments; parse with `JSON.parse` / `json.loads` before use |
| `choices[].finish_reason` | Why the generation stopped: `stop`, `length`, `tool_calls`, `content_filter` |
| `choices[].logprobs` | Log probability information for tokens |
| `usage` | Token usage statistics |
| `usage.prompt_tokens` | Number of tokens in the prompt |
| `usage.completion_tokens` | Number of tokens in the completion |
| `usage.total_tokens` | Total tokens used (prompt + completion) |
| `system_fingerprint` | System fingerprint for reproducibility |
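Because the function arguments arrive as a JSON-encoded string rather than an object, they must be decoded before dispatching the tool. A minimal sketch (the tool call below mirrors the example response elsewhere on this page):

```python
import json

tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": "{\"location\": \"San Francisco\"}",
    },
}

# Decode the argument string into a dict before invoking the tool.
args = json.loads(tool_call["function"]["arguments"])
print(tool_call["function"]["name"], args["location"])
```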
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "openai/gpt-5-nano",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 12,
    "total_tokens": 25
  }
}
```
***
Streamed response chunks for streaming completions (`stream=true`).
| Field | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `id` | Unique identifier for the chat completion |
| `object` | Object type, always `chat.completion.chunk` |
| `created` | Unix timestamp when the chunk was created |
| `model` | The model being used |
| `choices` | List of chunk choices |
| `choices[].index` | Index of this choice |
| `choices[].delta` | Incremental content delta |
| `delta.role` | Role (only in the first chunk) |
| `delta.content` | Incremental content string |
| `delta.tool_calls` | Incremental tool call updates |
| `choices[].finish_reason` | Reason for completion (only in the final chunk): `stop`, `length`, `tool_calls`, `content_filter`, or `null` |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion.chunk",
  "created": 1677652288,
  "model": "openai/gpt-5-nano",
  "choices": [
    {
      "index": 0,
      "delta": {
        "content": "Hello"
      },
      "finish_reason": null
    }
  ]
}
```
***
## Embeddings
Response object for embedding creation requests.
| Field | Description |
| -------------------- | --------------------------------------- |
| `object` | Object type, always `list` |
| `data` | List of embedding objects |
| `data[].object` | Object type, always `embedding` |
| `data[].embedding` | The embedding vector (array of floats) |
| `data[].index` | Index of this embedding |
| `model` | The model used to generate embeddings |
| `usage` | Token usage information |
| `usage.prompt_tokens` | Number of tokens in the input |
| `usage.total_tokens` | Total tokens processed |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [0.0023064255, -0.009327292, -0.0028842222],
      "index": 0
    }
  ],
  "model": "openai/text-embedding-3-small",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
```
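The `embedding` field is a plain float array, so downstream similarity math needs no special types. A sketch of cosine similarity between two returned vectors (pure Python, no external dependencies):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```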
***
## Models
Response object for listing available models. Includes rich metadata about capabilities and routing.
| Field | Description |
| -------------------------------- | -------------------------------------------------------------------------------------------- |
| `object` | Object type, always `list` |
| `data` | List of model objects |
| `data[].id` | Model identifier with provider prefix (e.g., `openai/gpt-4o`, `anthropic/claude-opus-4-5`) |
| `data[].provider` | Provider name: `openai`, `anthropic`, `google`, `xai`, `deepseek`, `mistral`, etc. |
| `data[].created_at` | ISO 8601 timestamp when the model was created |
| `data[].display_name` | Human-readable display name (optional) |
| `data[].description` | Model description (optional) |
| `data[].capabilities` | Model capabilities |
| `capabilities.text` | Supports text generation via chat completions |
| `capabilities.vision` | Supports image input / multimodal |
| `capabilities.image_generation` | Can generate images |
| `capabilities.audio` | Supports audio input/output |
| `capabilities.tools` | Supports tool/function calling |
| `capabilities.structured_output` | Supports structured JSON output |
| `capabilities.streaming` | Supports streaming responses |
| `capabilities.thinking` | Supports extended reasoning (e.g., o1, o3, Claude thinking) |
| `capabilities.input_token_limit` | Maximum input context window in tokens |
| `capabilities.output_token_limit` | Maximum output tokens |
| `data[].provider_info` | Provider-specific metadata |
| `provider_info.status` | Model status: `enabled`, `disabled`, `preview`, `deprecated` |
| `provider_info.upstream_api` | Which upstream API this model uses (e.g., `openai/chat/completions`, `anthropic/messages`) |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-4o",
      "provider": "openai",
      "created_at": "1970-01-01T00:00:00Z",
      "display_name": null,
      "description": null,
      "capabilities": {
        "text": true,
        "vision": null,
        "image_generation": null,
        "audio": null,
        "tools": null,
        "structured_output": null,
        "streaming": null,
        "thinking": null,
        "input_token_limit": null,
        "output_token_limit": null
      },
      "provider_info": {
        "status": "enabled",
        "upstream_api": "openai/chat/completions"
      }
    },
    {
      "id": "openai/o1",
      "provider": "openai",
      "created_at": "1970-01-01T00:00:00Z",
      "capabilities": {
        "text": true,
        "thinking": true
      },
      "provider_info": {
        "status": "enabled",
        "upstream_api": "openai/chat/completions"
      }
    },
    {
      "id": "anthropic/claude-opus-4-5",
      "provider": "anthropic",
      "created_at": "1970-01-01T00:00:00Z",
      "capabilities": {
        "text": true,
        "vision": true,
        "tools": true
      },
      "provider_info": {
        "status": "enabled",
        "upstream_api": "anthropic/messages"
      }
    }
  ]
}
```
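As the example shows, capability flags may be `true` or `null` (unknown), so a client-side filter should treat only an explicit `true` as confirmed support. A minimal sketch over the payload shape above (the `models_with` helper is illustrative):

```python
def models_with(models, capability):
    """Return ids of models whose capabilities explicitly confirm `capability`."""
    return [
        m["id"]
        for m in models
        if m.get("capabilities", {}).get(capability) is True
    ]

data = [
    {"id": "openai/gpt-4o", "capabilities": {"text": True, "vision": None}},
    {"id": "openai/o1", "capabilities": {"text": True, "thinking": True}},
]
print(models_with(data, "thinking"))  # ['openai/o1']
```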
***
## Images
Response object for image generation requests.
| Field | Description |
| ----------------------- | ------------------------------------------------------------------------------------ |
| `created` | Unix timestamp when the images were generated |
| `data` | List of generated image objects |
| `data[].url` | URL of the generated image (when `response_format="url"`) |
| `data[].b64_json` | Base64-encoded image data (when `response_format="b64_json"`) |
| `data[].revised_prompt` | The revised prompt used to generate the image (may differ from the input for safety) |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "created": 1677652288,
  "data": [
    {
      "url": "https://images.example.com/abc123.png",
      "revised_prompt": "A cute baby sea otter floating on its back in calm blue water"
    }
  ]
}
```
***
## Audio
Response object for audio transcription requests.
| Field | Description |
| ------ | ---------------------------------------- |
| `text` | The transcribed text from the audio file |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "text": "Hello, this is a test of audio transcription."
}
```
***
Response object for audio translation requests (always translates to English).
| Field | Description |
| ------ | ------------------------------------------------------ |
| `text` | The translated text from the audio file (in English) |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "text": "Hello, this is a test of audio translation."
}
```
***
## Errors
All endpoints may return errors with this structure.
| Field | Description |
| --------------- | ------------------------------------------------------------------------------------------------ |
| `error` | Error information object |
| `error.message` | Human-readable error message |
| `error.type` | Error type: `invalid_request_error`, `authentication_error`, `rate_limit_error`, `server_error` |
| `error.code` | Specific error code for programmatic handling |
| `error.param` | Parameter that caused the error (if applicable) |
```json Example theme={"theme":{"light":"github-light","dark":"github-dark"}}
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```
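Clients typically branch on `error.type` to decide whether a request is worth retrying. A minimal sketch under the error types listed above (the retry policy itself is illustrative, not prescribed by the API):

```python
# Transient failures worth retrying; auth and validation errors are not.
RETRYABLE_TYPES = {"rate_limit_error", "server_error"}

def should_retry(error_body):
    """Return True when the error type suggests a retry may succeed."""
    return error_body.get("error", {}).get("type") in RETRYABLE_TYPES

print(should_retry({"error": {"type": "rate_limit_error", "message": "slow down"}}))  # True
print(should_retry({"error": {"type": "authentication_error", "message": "bad key"}}))  # False
```

Pair a `True` result with exponential backoff rather than immediate retries, especially for `rate_limit_error`.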
# Pricing
Source: https://docs.dedaluslabs.ai/api/pricing
Pay per token, any model, any provider
The Dedalus API is usage-based. You pay per token at each provider's rate, plus a small routing fee. No markup on model costs.
Pricing details coming soon. Contact us at [support@dedaluslabs.ai](mailto:support@dedaluslabs.ai) for current rates.
## BYOK
Already have provider API keys? Use [Bring Your Own Key](/guides/byok) to route through Dedalus at zero markup. You get the unified API, tool calling, streaming, and format normalization — billed directly to your provider account.
# Create Chat Completion
Source: https://docs.dedaluslabs.ai/api/v1/create-chat-completion
/openapi.json post /v1/chat/completions
Create a chat completion.
Generates a model response for the given conversation and configuration.
Supports OpenAI-compatible parameters and provider-specific extensions.
Headers:
- Authorization: bearer key for the calling account.
- X-Provider / X-Provider-Key: optional headers for using your own provider API key.
Behavior:
- If multiple models are supplied, the first one is used, and the agent may hand off to another model.
- Tools may be invoked on the server or signaled for the client to run.
- Streaming responses emit incremental deltas; non-streaming returns a single object.
- Usage metrics are computed when available and returned in the response.
Responses:
- 200 OK: JSON completion object with choices, message content, and usage.
- 400 Bad Request: validation error.
- 401 Unauthorized: authentication failed.
- 402 Payment Required or 429 Too Many Requests: quota, balance, or rate limit issue.
- 500 Internal Server Error: unexpected failure.
Billing:
- Token usage metered by the selected model(s).
- Tool calls and MCP sessions may be billed separately.
- Streaming is settled after the stream ends via an async task.
Example (non-streaming HTTP):

```http
POST /v1/chat/completions
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY

{
  "model": "provider/model-name",
  "messages": [{"role": "user", "content": "Hello"}]
}
```

```http
200 OK

{
  "id": "cmpl_123",
  "object": "chat.completion",
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Hi there!"}, "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 3, "completion_tokens": 4, "total_tokens": 7}
}
```

Example (streaming over SSE):

```http
POST /v1/chat/completions
Accept: text/event-stream

data: {"id":"cmpl_123","choices":[{"index":0,"delta":{"content":"Hi"}}]}
data: {"id":"cmpl_123","choices":[{"index":0,"delta":{"content":" there!"}}]}
data: [DONE]
```
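Each streamed line carries a `data: `-prefixed JSON chunk until the literal `[DONE]` sentinel. A sketch of accumulating content deltas from such lines (parsing only; the HTTP transport layer is omitted):

```python
import json

def accumulate_sse(lines):
    """Join content deltas from `data:`-prefixed SSE lines into one string."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comment lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

stream = [
    'data: {"id":"cmpl_123","choices":[{"index":0,"delta":{"content":"Hi"}}]}',
    'data: {"id":"cmpl_123","choices":[{"index":0,"delta":{"content":" there!"}}]}',
    "data: [DONE]",
]
print(accumulate_sse(stream))  # Hi there!
```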
# Create Embeddings
Source: https://docs.dedaluslabs.ai/api/v1/create-embeddings
/openapi.json post /v1/embeddings
Create embeddings using the configured provider.
# Create Image
Source: https://docs.dedaluslabs.ai/api/v1/create-image
/openapi.json post /v1/images/generations
Generate images from text prompts.
Pure image generation models only (DALL-E, GPT Image).
For multimodal models like gemini-2.5-flash-image, use /v1/chat/completions.
# Create Speech
Source: https://docs.dedaluslabs.ai/api/v1/create-speech
/openapi.json post /v1/audio/speech
Generate speech audio from text.
Generates audio from the input text using text-to-speech models. Supports multiple
voices and output formats including mp3, opus, aac, flac, wav, and pcm.
Returns streaming audio data that can be saved to a file or streamed directly to users.
# Create Transcription
Source: https://docs.dedaluslabs.ai/api/v1/create-transcription
/openapi.json post /v1/audio/transcriptions
Transcribe audio into text.
Transcribes audio files using OpenAI's Whisper model. Supports multiple audio formats
including mp3, mp4, mpeg, mpga, m4a, wav, and webm. Maximum file size is 25 MB.
| Parameter | Type | Required | Description |
| ----------------- | ------ | -------- | -------------------------------------------------------------- |
| `file` | file | Yes | Audio file to transcribe |
| `model` | string | No | Model ID to use (e.g., `openai/whisper-1`) |
| `language` | string | No | ISO-639-1 language code (e.g., `en`, `es`); improves accuracy |
| `prompt` | string | No | Optional text to guide the model's style |
| `response_format` | string | No | Output format: `json`, `text`, `srt`, `verbose_json`, or `vtt` |
| `temperature` | number | No | Sampling temperature between 0 and 1 |

Returns a Transcription object with the transcribed text.
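With an HTTP client that supports multipart uploads, the request reduces to a file part plus the form fields above. A sketch of assembling and validating the non-file fields (the `transcription_fields` helper and its validation rules are illustrative, not part of any SDK):

```python
from typing import Optional

def transcription_fields(model: str = "openai/whisper-1",
                         language: Optional[str] = None,
                         response_format: str = "json",
                         temperature: float = 0.0) -> dict:
    """Assemble non-file form fields for /v1/audio/transcriptions."""
    if response_format not in {"json", "text", "srt", "verbose_json", "vtt"}:
        raise ValueError(f"unsupported response_format: {response_format}")
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    fields = {
        "model": model,
        "response_format": response_format,
        "temperature": str(temperature),  # form fields are sent as strings
    }
    if language:
        fields["language"] = language
    return fields

print(transcription_fields(language="en"))
```

Pass the result as the form data alongside the audio file part when posting to the endpoint.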
# Create Translation
Source: https://docs.dedaluslabs.ai/api/v1/create-translation
/openapi.json post /v1/audio/translations
Translate audio into English.
Translates audio files in any supported language to English text using OpenAI's
Whisper model. Supports the same audio formats as transcription. Maximum file size
is 25 MB.
| Parameter | Type | Required | Description |
| ----------------- | ------ | -------- | -------------------------------------------------------------- |
| `file` | file | Yes | Audio file to translate |
| `model` | string | No | Model ID to use (e.g., `openai/whisper-1`) |
| `prompt` | string | No | Optional text to guide the model's style |
| `response_format` | string | No | Output format: `json`, `text`, `srt`, `verbose_json`, or `vtt` |
| `temperature` | number | No | Sampling temperature between 0 and 1 |

Returns a Translation object with the English translation.
# Changelog
Source: https://docs.dedaluslabs.ai/changelog/index
Latest updates and releases for Dedalus Labs
### Dedalus API
**Features**
* Support for Opus 4.5 streaming
* Support for tool calling in Gemini model generation
* Expanded support for Grok models
* Expanded support for DeepSeek models
### Dedalus Cloud
**Non-compliant Servers Deprecation Notice**
Starting January 12, 2026, some MCP servers will be deprecated, and affected users will be notified by email.
To comply with the latest MCP protocol, servers must support authentication so that credentials can be stored securely. The servers being deprecated do not meet this requirement. See our [MCP server guide](/sdk/guides/server-guidelines) for migration instructions.
## Structured Outputs and TypeScript SDK
This release introduces structured outputs for the Python SDK and the launch of our TypeScript SDK.
### dedalus-labs-sdk (Python)
**Features**
* Structured Outputs: Added support for Pydantic-powered structured outputs. This includes a new `chat.completions.parse()` method to automatically deserialize the response content into a Pydantic model, ensuring strict adherence to JSON schema with OpenAI models.
### dedalus-labs-sdk (TypeScript)
**Features**
* TypeScript SDK Launch: The `dedalus-labs` and `dedalus-labs-mcp` packages are now available on npm, featuring type-safe JSON responses with Zod schemas.
## Streaming Structured Outputs
### dedalus-labs-sdk (Python)
**Features**
* Streaming support for structured outputs: Stream partial results while parsing into Pydantic models
* Pydantic stream helper: New `stream_helper` for incremental structured data
* Response format standardization across all providers
**Compatibility**
* Python 3.14 support
* Improved Pydantic v1 compatibility for `model_dump` and `model_dump_json` signatures
* Dropped Python 3.8 support (minimum version is now 3.9+)
## Structured Outputs for Tools
### dedalus-labs-sdk (Python)
**Features**
* Structured outputs for tool definitions: Define tool parameters using Pydantic models
* Flexible `.parse()` input: Accept various input formats for the parse method
* Nullable messages parameter for simpler API calls
## Image Support and Auto-Executing Tools
### dedalus-labs-sdk (Python)
**Features**
* Image editing and variation support via the images API
* Vision format helper: Simplified image content formatting for multimodal models
* Auto-executing tools: Tools can now be configured to execute automatically based on model responses
* File upload support for multimodal requests
## Runner Improvements
### dedalus-labs-sdk (Python)
**Features**
* Conversation history access: Access the full conversation history from runner instances
* Instructions parameter: Pass custom system instructions to runners at runtime
* Pydantic v3 forward compatibility
## DedalusModel
### dedalus-labs-sdk (Python)
**Features**
* `DedalusModel` type: A unified model identifier that works across all supported providers
* Model parameter extraction: Automatically extracts provider-specific parameters with warnings for unsupported options
* Decoupled `Model` and `DedalusModel` types for cleaner API boundaries
## API Standardization
### dedalus-labs-sdk (Python)
**Improvements**
* Standardized parameter naming: `messages=` for completions, `input=` for runner
## Chat Completions and Schema Generation
### dedalus-labs-sdk (Python)
**Features**
* Chat completions API: Full support for the chat completions endpoint
* `to_schema()` method: Generate JSON schemas from Pydantic models for structured outputs
* `ModelConfig`: Configure model-specific parameters programmatically
* Streaming support with configurable options
## Streaming Schemas
### dedalus-labs-sdk (Python)
**Features**
* Streaming response schemas: Type-safe streaming with proper schema definitions
* File upload requests: Initial support for multipart file uploads
## SDK Publication
First public release of the Dedalus SDK on package registries.
### dedalus-labs-sdk
**Features**
* Published `dedalus-labs` package on PyPI
* Published `dedalus-labs` package on npm
* Published `dedalus-labs-mcp` package on npm
[//]: # "AUTO-SDK-UPDATES:START"
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.2.0`
* Published: 2026-01-09
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.2.0)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.8`
* Published: 2025-11-26
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.8)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.6`
* Published: 2025-11-25
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.6)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.1`
* Published: 2025-11-12
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.1)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0`
* Published: 2025-11-09
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0)
* Repository: `dedalus-labs/dedalus-sdk-go`
* Version: `v0.1.0-alpha.3`
* Published: 2025-11-08
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-go/releases/tag/v0.1.0-alpha.3)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.5`
* Published: 2025-11-08
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.5)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.0.1`
* Published: 2025-11-08
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.0.1)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.10`
* Published: 2025-11-08
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.10)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.9`
* Published: 2025-09-20
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.9)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.8`
* Published: 2025-08-21
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.8)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.7`
* Published: 2025-08-21
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.7)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.6`
* Published: 2025-08-21
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.6)
* Repository: `dedalus-labs/dedalus-sdk-python`
* Version: `v0.1.0-alpha.5`
* Published: 2025-08-18
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-python/releases/tag/v0.1.0-alpha.5)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.4`
* Published: 2025-08-07
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.4)
* Repository: `dedalus-labs/dedalus-sdk-go`
* Version: `v0.1.0-alpha.2`
* Published: 2025-08-05
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-go/releases/tag/v0.1.0-alpha.2)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.3`
* Published: 2025-08-05
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.3)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.2`
* Published: 2025-07-31
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.2)
* Repository: `dedalus-labs/dedalus-sdk-typescript`
* Version: `v0.1.0-alpha.1`
* Published: 2025-07-30
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-typescript/releases/tag/v0.1.0-alpha.1)
* Repository: `dedalus-labs/dedalus-sdk-go`
* Version: `v0.1.0-alpha.1`
* Published: 2025-07-30
* Notes: [View release notes](https://github.com/dedalus-labs/dedalus-sdk-go/releases/tag/v0.1.0-alpha.1)
[//]: # "AUTO-SDK-UPDATES:END"
# Community
Source: https://docs.dedaluslabs.ai/community
Connect with the Dedalus community
Join the Dedalus community to get help, share what you're building, and stay up to date.
Chat with the team and other developers.
Follow us for product updates and announcements.
Browse our open-source SDKs and report issues.
Reach us directly for account or billing questions.
# Cookbook
Source: https://docs.dedaluslabs.ai/cookbook/coming-soon
Recipes, patterns, and real-world examples
Coming soon.
# Connect to terminal WebSocket stream
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/connect-to-terminal-websocket-stream
/dcs-openapi.json get /v1/machines/{machine_id}/terminals/{terminal_id}/stream
Upgrades to a WebSocket connection for interactive terminal I/O. Clients send JSON `TerminalClientEvent` messages and receive JSON `TerminalServerEvent` messages. Terminal byte streams are base64-encoded inside `input` and `output` events; `resize` events use integer `width` and `height` fields.
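As a sketch of the wire framing (the exact event shape and the `type` discriminator field are assumptions for illustration; the `TerminalClientEvent`/`TerminalServerEvent` schemas are authoritative), encoding and decoding those messages might look like:

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import base64
import json

def encode_input(data: bytes) -> str:
    # Hypothetical client-side helper: terminal bytes are base64-encoded
    # inside an "input" event, per the endpoint description.
    return json.dumps({"type": "input", "input": base64.b64encode(data).decode()})

def encode_resize(width: int, height: int) -> str:
    # "resize" events carry plain integer width/height fields, no base64.
    return json.dumps({"type": "resize", "width": width, "height": height})

def decode_output(message: str) -> bytes:
    # Hypothetical server-event helper: decode the base64 payload of an
    # "output" event back to raw terminal bytes.
    event = json.loads(message)
    if event.get("type") == "output":
        return base64.b64decode(event["output"])
    return b""
```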
# Create execution
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/create-execution
/dcs-openapi.json post /v1/machines/{machine_id}/executions
# Create machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/create-machine
/dcs-openapi.json post /v1/machines
# Create preview
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/create-preview
/dcs-openapi.json post /v1/machines/{machine_id}/previews
# Create SSH session
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/create-ssh-session
/dcs-openapi.json post /v1/machines/{machine_id}/ssh
# Create terminal
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/create-terminal
/dcs-openapi.json post /v1/machines/{machine_id}/terminals
# Delete artifact
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/delete-artifact
/dcs-openapi.json delete /v1/machines/{machine_id}/artifacts/{artifact_id}
# Delete execution
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/delete-execution
/dcs-openapi.json delete /v1/machines/{machine_id}/executions/{execution_id}
# Delete preview
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/delete-preview
/dcs-openapi.json delete /v1/machines/{machine_id}/previews/{preview_id}
# Delete SSH session
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/delete-ssh-session
/dcs-openapi.json delete /v1/machines/{machine_id}/ssh/{session_id}
# Delete terminal
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/delete-terminal
/dcs-openapi.json delete /v1/machines/{machine_id}/terminals/{terminal_id}
# Destroy machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/destroy-machine
/dcs-openapi.json delete /v1/machines/{machine_id}
# Get artifact
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-artifact
/dcs-openapi.json get /v1/machines/{machine_id}/artifacts/{artifact_id}
# Get execution
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-execution
/dcs-openapi.json get /v1/machines/{machine_id}/executions/{execution_id}
# Get execution output
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-execution-output
/dcs-openapi.json get /v1/machines/{machine_id}/executions/{execution_id}/output
# Get machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-machine
/dcs-openapi.json get /v1/machines/{machine_id}
# Get org machine usage
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-org-machine-usage
/dcs-openapi.json get /v1/orgs/{org_id}/usage
# Get preview
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-preview
/dcs-openapi.json get /v1/machines/{machine_id}/previews/{preview_id}
# Get SSH session
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-ssh-session
/dcs-openapi.json get /v1/machines/{machine_id}/ssh/{session_id}
# Get terminal
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/get-terminal
/dcs-openapi.json get /v1/machines/{machine_id}/terminals/{terminal_id}
# List artifacts
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-artifacts
/dcs-openapi.json get /v1/machines/{machine_id}/artifacts
# List execution events
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-execution-events
/dcs-openapi.json get /v1/machines/{machine_id}/executions/{execution_id}/events
# List executions
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-executions
/dcs-openapi.json get /v1/machines/{machine_id}/executions
# List machines
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-machines
/dcs-openapi.json get /v1/machines
# List previews
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-previews
/dcs-openapi.json get /v1/machines/{machine_id}/previews
# List SSH sessions
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-ssh-sessions
/dcs-openapi.json get /v1/machines/{machine_id}/ssh
# List terminals
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/list-terminals
/dcs-openapi.json get /v1/machines/{machine_id}/terminals
# Sleep a running machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/sleep-a-running-machine
/dcs-openapi.json post /v1/machines/{machine_id}/sleep
# Update machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/update-machine
/dcs-openapi.json patch /v1/machines/{machine_id}
# Wake a sleeping machine
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/wake-a-sleeping-machine
/dcs-openapi.json post /v1/machines/{machine_id}/wake
# Watch machine lifecycle status
Source: https://docs.dedaluslabs.ai/dcs/api/machine-lifecycle/watch-machine-lifecycle-status
/dcs-openapi.json get /v1/machines/{machine_id}/status/stream
Streams machine lifecycle updates over Server-Sent Events. Each `status` event contains a full `LifecycleResponse` payload. The stream closes after the machine reaches its current desired state.
# What are Dedalus Machines?
Source: https://docs.dedaluslabs.ai/dcs/dedalus-machines
Full Linux VMs. KVM isolation. S3 storage. GPU passthrough. No timeout.
A full Linux VM. Dedicated kernel. KVM isolation. Your own root.
Boots in 411ms. Sleeps for free. Wakes without moving data. Resizes live. Storage survives everything.
Think of it as EC2, except the instance starts in under a second, sleeps to zero cost, wakes without reprovisioning, and resizes CPU and memory while your process is running.
## Compute
Each machine runs on a dedicated [Cloud Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) microVM with its own Linux kernel. Not a container. Not a shared-kernel sandbox. Hardware-level KVM isolation.
| | |
| ------------------- | -------------------------------------------------------------- |
| **Cold start** | 411ms from a pre-warmed host pool |
| **Wake from sleep** | Sub-second. userfaultfd demand-pages 2 GB in 18ms |
| **Live resize** | ACPI hotplug. Add vCPU and memory to a running VM. No restart. |
| **Live migration** | fd-passing between hosts. Sub-millisecond for local migration. |
| **GPU** | VFIO passthrough. Full device, not emulated, not time-sliced. |
| **Timeout** | None. Machines run until you stop them. |
## Storage
`/home/machine` is backed by S3 via virtio-fs. It's not a local disk. It's a disaggregated filesystem that exists independently of the VM.
| | |
| ------------------- | ------------------------------------------------------ |
| **Persistence** | Survives sleep, wake, host failure, live migration |
| **Capacity** | Configurable. Not bounded by local disk. |
| **Durability** | S3 (99.999999999% durability) |
| **On sleep** | Compute released. Storage untouched. No data movement. |
| **On destroy** | Retained 30 days per retention policy, then deleted. |
| **Root filesystem** | Ephemeral. Rebuilt from snapshot on each wake. |
## Billing
Per-second while awake. Monthly for storage. Sleeping machines pay storage only.
| State | Compute cost | Storage cost |
| --------- | ------------ | ---------------------- |
| Running | Per-second | Monthly |
| Sleeping | Zero | Monthly |
| Destroyed | Zero | Zero (after retention) |
No idle tax. No minimum runtime. No reserved instances. Sleep a machine at 3 AM, wake it at 9 AM, pay for zero compute in between.
## Access
Four ways to interact with a running machine.
| Method | Use case |
| ----------------- | --------------------------------------------------------------- |
| **Execution API** | Run a command, get stdout/stderr. Stateless RPC. |
| **SSH** | Interactive shell. Port forwarding. SCP. |
| **Terminal API** | WebSocket-based PTY. For browser-based terminals. |
| **Preview URLs** | Expose a port to the internet. For web servers, notebooks, UIs. |
All four require the machine to be in `running` state. If it's sleeping, wake it first (or let the SDK handle it).
## Lifecycle
Four states. Predictable transitions. No hidden states.
```mermaid theme={"theme":{"light":"github-light","dark":"github-dark"}}
stateDiagram-v2
    [*] --> starting: create
    starting --> running: VM ready
    running --> sleeping: sleep
    sleeping --> running: wake (sub-second)
    running --> destroyed: destroy
    sleeping --> destroyed: destroy
    destroyed --> [*]
    running --> running: resize (live hotplug)
```
| Transition | What happens |
| ----------- | ---------------------------------------------------------------- |
| **Create** | Control plane admits request. Host agent boots VM from snapshot. |
| **Sleep** | VM stops. CPU and memory released. virtio-fs state preserved. |
| **Wake** | Fresh VM boots. virtio-fs restored via userfaultfd. Sub-second. |
| **Destroy** | VM removed. Storage enters 30-day retention. |
| **Resize** | Hotplug applies immediately. No state change. |
Create a machine and run code. CLI, Python, TypeScript, Go.
# Executions
Source: https://docs.dedaluslabs.ai/dcs/executions
Run commands on Dedalus Machines with persistent storage
Machines are stateful Linux VMs with persistent `/home/machine` storage. Files written in one execution are visible in the next. The machine stays running until you sleep or delete it.
## Run a command
Commands follow [`execve(2)`](https://man7.org/linux/man-pages/man2/execve.2.html) conventions. No shell. Wrap in `["/bin/bash", "-c", "..."]` if you need one.
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- whoami
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import time

exc = client.machines.executions.create(
    machine_id="dm-",
    command=["/bin/bash", "-c", "whoami && uname -a"],
)
while exc.status not in ("succeeded", "failed"):
    time.sleep(0.5)
    exc = client.machines.executions.retrieve(
        machine_id="dm-", execution_id=exc.execution_id,
    )

output = client.machines.executions.output(
    machine_id="dm-", execution_id=exc.execution_id,
)
print(output.stdout)
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
const exec = await client.machines.executions.create({
  machine_id: "dm-",
  command: ["/bin/bash", "-c", "whoami && uname -a"],
});

// poll for completion...

const output = await client.machines.executions.output({
  machine_id: "dm-",
  execution_id: exec.execution_id,
});
console.log(output.stdout);
```
## Patterns
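The Python snippets below use an `exec_cmd` helper that is not part of the SDK; a minimal sketch, built from the create/poll/output calls shown above:

```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import time

def exec_cmd(client, machine_id, command, poll_interval=0.5):
    """Hypothetical helper (not an SDK method): run a command to
    completion on a machine and return its stdout."""
    exc = client.machines.executions.create(
        machine_id=machine_id, command=command,
    )
    # Poll until the execution reaches a terminal state.
    while exc.status not in ("succeeded", "failed"):
        time.sleep(poll_interval)
        exc = client.machines.executions.retrieve(
            machine_id=machine_id, execution_id=exc.execution_id,
        )
    output = client.machines.executions.output(
        machine_id=machine_id, execution_id=exc.execution_id,
    )
    return output.stdout
```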
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- apt-get install -y python3-pip
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
exec_cmd(client, "dm-", ["apt-get", "install", "-y", "python3-pip"])
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- bash -c "cd /home/machine && git clone https://github.com/org/repo"
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
exec_cmd(client, "dm-", ["/bin/bash", "-c",
"cd /home/machine && git clone https://github.com/org/repo"])
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- bash -c "cd /home/machine/repo && python -m pytest -v"
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
exec_cmd(client, "dm-", ["/bin/bash", "-c",
"cd /home/machine/repo && python -m pytest -v 2>&1"])
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
exec_cmd(client, "dm-", ["/bin/bash", "-c",
"nohup python server.py > /tmp/server.log 2>&1 &"])
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- cat /home/machine/repo/output.json
dedalus machines exec --machine-id dm- -- bash -c "echo 'hello' > /home/machine/test.txt"
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
exec_cmd(client, "dm-", ["cat", "/home/machine/repo/output.json"])
exec_cmd(client, "dm-", ["/bin/bash", "-c", "echo 'hello' > /home/machine/test.txt"])
```
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- bash -c "free -h && df -h /home/machine"
```
All paths under `/home/machine` persist across executions and across
sleep/wake cycles. The root filesystem is ephemeral.
# Introduction
Source: https://docs.dedaluslabs.ai/dcs/index
Dedalus Cloud Services: the infra layer for AI agents
Dedalus Cloud Services (DCS) is the compute substrate for modern AI workloads.
Our flagship cloud offering is the Dedalus Machine: a full Linux system where everything just works — `apt install`, background services, nested virtualization, the lot. Boots in 411ms. Storage persists forever.
Cloud VMs that boot in 411ms, sleep for free, and resize without downtime.
# Lifecycle
Source: https://docs.dedaluslabs.ai/dcs/lifecycle
Sleep, wake, resize, and delete Dedalus Machines
Machines have four states: **running**, **sleeping**, **starting**, and **destroyed**. You control transitions between them.
## List machines
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines list
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
machines = client.machines.list()
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
const machines = await client.machines.list();
```
## Get machine details
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines retrieve --machine-id dm-
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
dm = client.machines.retrieve(machine_id="dm-")
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
const dm = await client.machines.retrieve({ machine_id: "dm-" });
```
## Sleep
Zero compute cost. Storage persists. Wake is sub-second.
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines update --machine-id dm- --desired-state sleeping
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.machines.sleep(machine_id="dm-")
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.machines.sleep({ machine_id: "dm-" });
```
## Wake
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines update --machine-id dm- --desired-state running
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.machines.wake(machine_id="dm-")
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.machines.wake({ machine_id: "dm-" });
```
## Resize
Live resize, no restart required.
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines update --machine-id dm- --vcpu 2 --memory-mib 2048
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.machines.update(machine_id="dm-", vcpu=2, memory_mib=2048)
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.machines.update({ machine_id: "dm-", vcpu: 2, memory_mib: 2048 });
```
Resize applies immediately to the running VM via hotplug. No downtime, no reboot.
## Delete
Deletes require an `If-Match` value set to the machine's current `revision` (from the retrieve output). This prevents accidental deletes from stale state.
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines delete --machine-id dm- --if-match
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.machines.delete(machine_id="dm-", if_match="")
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.machines.delete({ machine_id: "dm-", "If-Match": "" });
```
Deletion is permanent. All storage is wiped. There is no undo.
## Examples
Host OpenClaw on a Dedalus Machine.
# Dedalus Machines Pricing
Source: https://docs.dedaluslabs.ai/dcs/pricing
Per-second compute, persistent storage, no idle tax
Compute is billed per second, only while your machine is awake. Storage persists across sleep/wake and is billed monthly. Sleeping machines cost nothing for compute.
## Plans
**\$0/mo** — no credit card required.
| | |
| -------- | ---------------------------- |
| Credits | \$20 one-time sign-up credit |
| vCPU | Up to 2 |
| RAM | Up to 4 GiB |
| Storage | 10 GB |
| Machines | 5 |
| Compute | 50 hrs/mo ceiling |
| Support | Community |
**\$20/mo** — your subscription is your usage credit.
| | |
| -------- | ------------------------------------- |
| Credits | Sign-up credit + \$20/mo usage credit |
| vCPU | Up to 4 |
| RAM | Up to 8 GiB |
| Storage | 20 GB (expandable) |
| Machines | 25 |
| Compute | Unlimited |
| Support | Priority |
**\$25/mo per user** — everything in Pro, plus team features.
| | |
| -------- | --------------------------------- |
| Credits | Sign-up credit + \$20/mo per user |
| vCPU | Up to 8 |
| RAM | Up to 32 GiB |
| Storage | 50 GB (expandable) |
| Machines | 25 per user |
| Compute | Unlimited |
| SSO | Yes |
| Support | Priority |
**Custom** — dedicated fleet, SLA, RBAC, audit logs.
| | |
| -------- | ---------- |
| Credits | Custom |
| vCPU | Custom |
| RAM | Custom |
| Storage | Custom |
| Machines | Unlimited |
| Compute | Unlimited |
| Support | SLA-backed |
[Contact us](mailto:support@dedaluslabs.ai) for pricing.
## Compute rates
Billed only when awake.
| Resource | Per second | Per hour | \~Monthly (always-on) | vs Daytona |
| -------- | ------------ | --------- | --------------------- | ----------- |
| vCPU | \$0.0000126 | \$0.04536 | \~\$33.11 | 10% cheaper |
| GiB RAM | \$0.00000405 | \$0.01458 | \~\$10.64 | 10% cheaper |
## Storage
Included with your plan. Additional storage at \$0.08/mo per GB.
## How credits work
Your Pro subscription **is** your usage credit. Not an additional charge on top.
| Your usage | You pay |
| ---------- | ---------------------------- |
| Under \$20 | \$20 (just the subscription) |
| Over \$20 | \$20 + the overage |
Recurring plan credits reset each billing cycle. Hobby's \$20 sign-up credit is one-time and does not reset.
| Workspace | Compute/hr | Hours covered |
| ---------------------------- | ---------- | ------------- |
| 1 vCPU / 2 GiB | \$0.07452 | \~268 hrs |
| 2 vCPU / 4 GiB | \$0.14904 | \~134 hrs |
| 2 vCPU / 8 GiB (Pro default) | \$0.20736 | \~96 hrs |
| 4 vCPU / 8 GiB (Pro max) | \$0.29808 | \~67 hrs |
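The hourly figures above follow directly from the per-second rates; a quick sanity check in Python (illustrative arithmetic only, not an SDK call):

```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
VCPU_PER_HOUR = 0.04536  # $ per vCPU-hour, from the compute rates table
RAM_PER_HOUR = 0.01458   # $ per GiB-hour

def hourly_cost(vcpu: int, ram_gib: int) -> float:
    """Compute-only hourly cost while awake; storage is billed separately."""
    return vcpu * VCPU_PER_HOUR + ram_gib * RAM_PER_HOUR

print(round(hourly_cost(2, 8), 5))    # 0.20736 (Pro default workspace)
print(round(20 / hourly_cost(2, 8)))  # 96 hours covered by the $20/mo credit
```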
## Free tier
No credit card. \$20 one-time credit. Ship something and see if it fits.
* Up to 2 vCPU / 4 GiB RAM per machine
* Up to 10 GB persistent storage
* Up to 5 machines, 50 hours/month compute ceiling
* When credit runs out, machines sleep automatically (no bill shock)
## Startup program
\$10,000 in compute credits, valid 12 months. At the default Pro config (2 vCPU / 8 GiB), that covers \~48,225 hours of active compute.
Email us with your company name and use case.
# Quickstart
Source: https://docs.dedaluslabs.ai/dcs/quickstart
Create a machine, run a command, get output. Under 2 minutes.
Create a machine, run a command, get output.
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
brew install dedalus-labs/tap/dedalus
```
On macOS, if you see a quarantine warning, run:
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
xattr -d com.apple.quarantine $(which dedalus)
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
go install github.com/dedalus-labs/dedalus-cli/cmd/dedalus@latest
```
Get a key from the [Dashboard](https://www.dedaluslabs.ai/dashboard/api-keys).
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
export DEDALUS_API_KEY=
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines create --vcpu 1 --memory-mib 1024 --storage-gib 10
```
Note the `machine_id` (starts with `dm-`).
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
dedalus machines exec --machine-id dm- -- whoami
```
## SDK examples
Same thing, in code.
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
import os, time

client = Dedalus()

dm = client.machines.create(vcpu=1, memory_mib=1024, storage_gib=10)
while dm.status.phase != "running":
    time.sleep(1)
    dm = client.machines.retrieve(machine_id=dm.machine_id)

exc = client.machines.executions.create(
    machine_id=dm.machine_id,
    command=["/bin/bash", "-c", "whoami && uname -a"],
)
while exc.status not in ("succeeded", "failed"):
    time.sleep(0.5)
    exc = client.machines.executions.retrieve(
        machine_id=dm.machine_id, execution_id=exc.execution_id,
    )

output = client.machines.executions.output(
    machine_id=dm.machine_id, execution_id=exc.execution_id,
)
print(output.stdout)
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from "dedalus";

const client = new Dedalus();

let dm = await client.machines.create({
  vcpu: 1, memory_mib: 1024, storage_gib: 10,
});
while (dm.status.phase !== "running") {
  await new Promise((r) => setTimeout(r, 1000));
  dm = await client.machines.retrieve({ machine_id: dm.machine_id });
}

const exec = await client.machines.executions.create({
  machine_id: dm.machine_id,
  command: ["/bin/bash", "-c", "whoami && uname -a"],
});
let result = exec;
while (result.status !== "succeeded" && result.status !== "failed") {
  await new Promise((r) => setTimeout(r, 500));
  result = await client.machines.executions.retrieve({
    machine_id: dm.machine_id, execution_id: exec.execution_id,
  });
}

const output = await client.machines.executions.output({
  machine_id: dm.machine_id, execution_id: exec.execution_id,
});
console.log(output.stdout);
```
```go Go theme={"theme":{"light":"github-light","dark":"github-dark"}}
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/dedalus-labs/dedalus-go"
)

func main() {
	client := dedalus.NewClient()
	ctx := context.Background()

	dm, _ := client.Machines.New(ctx, dedalus.MachineNewParams{
		CreateParams: dedalus.CreateParams{
			VCPU: 1, MemoryMiB: 1024, StorageGiB: 10,
		},
	})
	for dm.Status.Phase != "running" {
		time.Sleep(time.Second)
		dm, _ = client.Machines.Get(ctx, dedalus.MachineGetParams{
			MachineID: dm.MachineID,
		})
	}

	exc, _ := client.Machines.Executions.New(ctx, dedalus.MachineExecutionNewParams{
		MachineID: dm.MachineID,
		ExecutionCreateParams: dedalus.ExecutionCreateParams{
			Command: []string{"/bin/bash", "-c", "whoami && uname -a"},
		},
	})
	for exc.Status != "succeeded" && exc.Status != "failed" {
		time.Sleep(500 * time.Millisecond)
		exc, _ = client.Machines.Executions.Get(ctx, dedalus.MachineExecutionGetParams{
			MachineID: dm.MachineID, ExecutionID: exc.ExecutionID,
		})
	}

	output, _ := client.Machines.Executions.Output(ctx, dedalus.MachineExecutionOutputParams{
		MachineID: dm.MachineID, ExecutionID: exc.ExecutionID,
	})
	fmt.Println(output.Stdout)
}
```
## What's next
Watch machine status changes in real time via SSE.
Run commands, install packages, clone repos.
Sleep, wake, resize, and delete machines.
# Streaming
Source: https://docs.dedaluslabs.ai/dcs/streaming
Watch machine lifecycle changes in real time with Server-Sent Events
Instead of polling, stream machine status changes via Server-Sent Events (SSE). The stream stays open until the machine is destroyed or you disconnect.
```bash CLI theme={"theme":{"light":"github-light","dark":"github-dark"}}
curl -N https://dcs.dedaluslabs.ai/v1/machines/dm-/status/stream \
-H "Authorization: Bearer $DEDALUS_API_KEY" \
-H "Accept: text/event-stream"
```
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os, json, urllib.request

machine_id = "dm-..."
url = f"https://dcs.dedaluslabs.ai/v1/machines/{machine_id}/status/stream"

req = urllib.request.Request(url, headers={
    "Authorization": f"Bearer {os.environ['DEDALUS_API_KEY']}",
    "Accept": "text/event-stream",
})
with urllib.request.urlopen(req) as resp:
    for line in resp:
        line = line.decode().strip()
        if line.startswith("data: "):
            status = json.loads(line[6:])
            print(status["status"]["phase"])
            if status["status"]["phase"] == "running":
                break
```
Each event is a JSON payload with the full machine state:
```text theme={"theme":{"light":"github-light","dark":"github-dark"}}
event: status
data: {"machine_id":"dm-...","status":{"phase":"running",...}}
```
The TypeScript and Go SDKs also support streaming via `client.machines.watch()`.
See the [SDK reference](/sdk/dcs/typescript) for details.
# Bring Your Own Key (BYOK)
Source: https://docs.dedaluslabs.ai/guides/byok
Use your own API keys to call providers directly through Dedalus
BYOK lets you send requests through Dedalus using your own provider API key. The request still flows through our unified API (routing, tool calling, streaming, format normalization), but the LLM call is billed to your account with the provider.
## When to use BYOK
* You have negotiated pricing or credits with a provider.
* You want to use a model tier or region not available on our shared keys.
* Your compliance policy requires that API keys stay under your control.
## Quick start
Pass three headers (or SDK options) alongside your normal Dedalus API key:
| Header | SDK option | Description |
| ------------------ | ---------------- | ----------------------------------------------------- |
| `X-Provider` | `provider` | Provider name (`openai`, `anthropic`, `google`, etc.) |
| `X-Provider-Key` | `provider_key` | Your API key for that provider |
| `X-Provider-Model` | `provider_model` | Model identifier at the provider (optional) |
Only `X-Provider-Key` is strictly required. If you omit `X-Provider`, it is inferred from the model name. If you omit `X-Provider-Model`, the model from the request body is used.
## Examples
### curl
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
curl https://api.dedaluslabs.ai/v1/chat/completions \
-H "Authorization: Bearer $DEDALUS_API_KEY" \
-H "X-Provider: openai" \
-H "X-Provider-Key: $OPENAI_API_KEY" \
-H "X-Provider-Model: gpt-4o" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4o",
"messages": [{"role": "user", "content": "Hello"}]
}'
```
### Python SDK
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import AsyncDedalus

client = AsyncDedalus(
    provider="openai",
    provider_key="sk-your-openai-key",
    provider_model="gpt-4o",
)

response = await client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```
### TypeScript SDK
```typescript theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from "dedalus-labs";

const client = new Dedalus({
  provider: "openai",
  providerKey: "sk-your-openai-key",
  providerModel: "gpt-4o",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```
### Environment variables
You can also set BYOK options via environment variables instead of passing them in code:
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
export DEDALUS_PROVIDER="anthropic"
export DEDALUS_PROVIDER_KEY="sk-ant-your-key"
export DEDALUS_PROVIDER_MODEL="claude-sonnet-4-5-20250929"
```
The SDK picks these up automatically. No code changes needed.
## Per-request overrides
The SDK options set defaults for every request. You can also override per-request by setting the headers directly:
```python Python theme={"theme":{"light":"github-light","dark":"github-dark"}}
response = await client.chat.completions.create(
    model="google/gemini-2.5-pro",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "X-Provider": "google",
        "X-Provider-Key": "your-google-key",
    },
)
```
```typescript TypeScript theme={"theme":{"light":"github-light","dark":"github-dark"}}
const response = await client.chat.completions.create(
  {
    model: "google/gemini-2.5-pro",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "X-Provider": "google",
      "X-Provider-Key": "your-google-key",
    },
  },
);
```
## Supported providers
Any provider in our [model list](/sdk/guides/providers) works with BYOK:
openai
anthropic
google
xai
mistral
deepseek
groq
cohere
perplexity
cerebras
together\_ai
fireworks\_ai
moonshot
## How it works
Your request still goes through Dedalus. We handle routing, format normalization, streaming, and tool calling. The only difference is which API key is used for the upstream LLM call.
```
You → Dedalus API (your Dedalus key) → Provider (your provider key) → Response → You
```
BYOK keys are sent over HTTPS and are never stored. They are used for the duration of the request
and discarded. If you need Dedalus to manage keys on your behalf, contact us at
[support@dedaluslabs.ai](mailto:support@dedaluslabs.ai).
## Error handling
| Scenario | What happens |
| ------------------------------- | -------------------------------------------------- |
| Invalid provider name | HTTP 400 with supported provider list |
| Missing or invalid provider key | Provider returns its own auth error (usually 401) |
| Model not available on provider | Provider returns its own model error (usually 404) |
The error response always includes the upstream provider's error message so you can debug directly.
# Quickstart
Source: https://docs.dedaluslabs.ai/index
Learn how to use the Dedalus platform
Dedalus offers the **easiest** way to deploy remote MCP servers for your AI agents.
MCP servers are like "extensions" for your agents. Standing one up used to take hours (if not days) of DevOps work.
Until now.
## Docs for your coding agents
* [llms.txt](https://docs.dedaluslabs.ai/llms.txt)
* [llms-full.txt](https://docs.dedaluslabs.ai/llms-full.txt)
## Unleash your agents
Get started with the official Dedalus SDKs
A new way to build and deploy MCP servers with authentication.
Use **any model** from **any provider**, all under a clean OpenAI-compatible interface.
# Go SDK
Source: https://docs.dedaluslabs.ai/sdk/api/go
Platform API Go SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-sdk-go) | [Changelog](https://github.com/dedalus-labs/dedalus-sdk-go/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
go get github.com/dedalus-labs/dedalus-sdk-go
```
This library requires Go 1.22+.
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
package main

import (
    "context"
    "fmt"

    // The module path is aliased to `dedalus` for readability throughout these docs.
    dedalus "github.com/dedalus-labs/dedalus-sdk-go"
    "github.com/dedalus-labs/dedalus-sdk-go/option"
)

func main() {
    client := dedalus.NewClient(
        option.WithAPIKey("My API Key"),     // defaults to os.LookupEnv("DEDALUS_API_KEY")
        option.WithEnvironmentDevelopment(), // defaults to option.WithEnvironmentProduction()
    )
    completion, err := client.Chat.Completions.New(context.TODO(), dedalus.ChatCompletionNewParams{
        Messages: dedalus.ChatCompletionNewParamsMessagesUnion{
            OfMapOfAnyMap: []map[string]any{{
                "role":    "user",
                "content": "Hello, how are you today?",
            }},
        },
        Model: dedalus.ChatCompletionNewParamsModelUnion{
            OfModelID: dedalus.String("openai/gpt-5"),
        },
    })
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("%+v\n", completion.ID)
}
```
### Request fields
This library uses the [`omitzero`](https://tip.golang.org/doc/go1.24#encodingjsonpkgencodingjson)
semantics from the Go 1.24+ `encoding/json` release for request fields.
Required primitive fields (`int64`, `string`, etc.) carry the tag `json:"...,required"`. These
fields are always serialized, even when set to their zero values.
Optional primitive types are wrapped in a `param.Opt[T]`. These fields can be set with the provided constructors, `dedalus.String(string)`, `dedalus.Int(int64)`, etc.
Any `param.Opt[T]`, map, slice, struct, or string enum uses the tag `json:"...,omitzero"`. Its zero value is considered omitted.
The `param.IsOmitted(any)` function can confirm the presence of any `omitzero` field.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
p := dedalus.ExampleParams{
    ID:   "id_xxx",              // required property
    Name: dedalus.String("..."), // optional property
    Point: dedalus.Point{
        X: 0,              // required field will serialize as 0
        Y: dedalus.Int(1), // optional field will serialize as 1
        // ... omitted non-required fields will not be serialized
    },
    Origin: dedalus.Origin{}, // the zero value of [Origin] is considered omitted
}
```
To send `null` instead of a `param.Opt[T]`, use `param.Null[T]()`.
To send `null` instead of a struct `T`, use `param.NullStruct[T]()`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
p.Name = param.Null[string]() // 'null' instead of string
p.Point = param.NullStruct[Point]() // 'null' instead of struct
param.IsNull(p.Name) // true
param.IsNull(p.Point) // true
```
Request structs contain a `.SetExtraFields(map[string]any)` method which can send non-conforming
fields in the request body. Extra fields overwrite any struct fields with a matching
key. For security reasons, only use `SetExtraFields` with trusted data.
To send a custom value instead of a struct, use `param.Override[T](value)`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// In cases where the API specifies a given type,
// but you want to send something else, use [SetExtraFields]:
p.SetExtraFields(map[string]any{
    "x": 0.01, // send "x" as a float instead of int
})

// Send a number instead of an object
custom := param.Override[dedalus.FooParams](12)
```
### Request unions
Unions are represented as a struct with a field prefixed by `Of` for each of its variants;
only one field may be non-zero at a time, and that non-zero field is serialized.
Sub-properties of the union can be accessed via methods on the union struct.
These methods return a mutable pointer to the underlying data, if present.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Only one field can be non-zero; use param.IsOmitted() to check if a field is set
type AnimalUnionParam struct {
    OfCat *Cat `json:",omitzero,inline"`
    OfDog *Dog `json:",omitzero,inline"`
}

animal := AnimalUnionParam{
    OfCat: &Cat{
        Name: "Whiskers",
        Owner: PersonParam{
            Address: AddressParam{Street: "3333 Coyote Hill Rd", Zip: 0},
        },
    },
}

// Mutating a field
if address := animal.GetOwner().GetAddress(); address != nil {
    address.Zip = 94304
}
```
### Response objects
All fields in response structs are ordinary value types (not pointers or wrappers).
Response structs also include a special `JSON` field containing metadata about
each property.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
type Animal struct {
    Name   string `json:"name,nullable"`
    Owners int    `json:"owners"`
    Age    int    `json:"age"`
    JSON   struct {
        Name        respjson.Field
        Owners      respjson.Field
        Age         respjson.Field
        ExtraFields map[string]respjson.Field
    } `json:"-"`
}
```
To handle optional data, use the `.Valid()` method on the JSON field.
`.Valid()` returns false if a field is `null`, not present, or couldn't be unmarshaled; otherwise it returns true.
If `.Valid()` is false, the corresponding field will simply be its zero value.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
raw := `{"owners": 1, "name": null}`
var res Animal
json.Unmarshal([]byte(raw), &res)
// Accessing regular fields
res.Owners // 1
res.Name // ""
res.Age // 0
// Optional field checks
res.JSON.Owners.Valid() // true
res.JSON.Name.Valid() // false
res.JSON.Age.Valid() // false
// Raw JSON values
res.JSON.Owners.Raw() // "1"
res.JSON.Name.Raw() == "null" // true
res.JSON.Name.Raw() == respjson.Null // true
res.JSON.Age.Raw() == "" // true
res.JSON.Age.Raw() == respjson.Omitted // true
```
These `.JSON` structs also include an `ExtraFields` map containing
any properties in the json response that were not specified
in the struct. This can be useful for API features not yet
present in the SDK.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
body := res.JSON.ExtraFields["my_unexpected_field"].Raw()
```
### Response Unions
In responses, unions are represented by a flattened struct containing all possible fields from each of the
object variants.
To convert it to a variant use the `.AsFooVariant()` method or the `.AsAny()` method if present.
If a response union contains primitive values, those primitive fields appear alongside
the object properties, prefixed with `Of` and carrying the tag `json:"...,inline"`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
type AnimalUnion struct {
    // From variants [Dog], [Cat]
    Owner Person `json:"owner"`
    // From variant [Dog]
    DogBreed string `json:"dog_breed"`
    // From variant [Cat]
    CatBreed string `json:"cat_breed"`
    // ...
    JSON struct {
        Owner respjson.Field
        // ...
    } `json:"-"`
}

// If animal variant
if animal.Owner.Address.ZipCode == "" {
    panic("missing zip code")
}

// Switch on the variant
switch variant := animal.AsAny().(type) {
case Dog:
case Cat:
default:
    panic("unexpected type")
}
```
### RequestOptions
This library uses the functional options pattern. Functions defined in the
`option` package return a `RequestOption`, which is a closure that mutates a
`RequestConfig`. These options can be supplied to the client or at individual
requests. For example:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
client := dedalus.NewClient(
    // Adds a header to every request made by the client
    option.WithHeader("X-Some-Header", "custom_header_info"),
)

client.Health.Check(context.TODO(), ...,
    // Override the header
    option.WithHeader("X-Some-Header", "some_other_custom_header_info"),
    // Add an undocumented field to the request body, using sjson syntax
    option.WithJSONSet("some.json.path", map[string]string{"my": "object"}),
)
```
The request option `option.WithDebugLog(nil)` may be helpful while debugging.
See the [full list of request options](https://pkg.go.dev/github.com/dedalus-labs/dedalus-sdk-go/option).
### Pagination
This library provides some conveniences for working with paginated list endpoints.
You can use `.ListAutoPaging()` methods to iterate through items across all pages.
Alternatively, the simple `.List()` methods fetch a single page and return a standard response object
with additional helper methods like `.GetNextPage()`.
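As a self-contained sketch of the iterator pattern behind `.ListAutoPaging()` (in-memory pages stand in for API calls here; `Page`, `fetchPage`, and `AutoPager` are illustrative names, not the SDK's actual types):

```go
package main

import "fmt"

// Page mimics one page of a list response; NextCursor == "" means last page.
type Page struct {
	Items      []string
	NextCursor string
}

// fetchPage stands in for a single .List() call against the API.
func fetchPage(cursor string) Page {
	data := map[string]Page{
		"":   {Items: []string{"a", "b"}, NextCursor: "p2"},
		"p2": {Items: []string{"c", "d"}, NextCursor: "p3"},
		"p3": {Items: []string{"e"}, NextCursor: ""},
	}
	return data[cursor]
}

// AutoPager walks pages transparently, like an auto-paging iterator.
type AutoPager struct {
	page Page
	idx  int
}

func NewAutoPager() *AutoPager { return &AutoPager{page: fetchPage("")} }

// Next reports whether another item is available, fetching the next page when needed.
func (p *AutoPager) Next() bool {
	if p.idx < len(p.page.Items) {
		return true
	}
	if p.page.NextCursor == "" {
		return false
	}
	p.page = fetchPage(p.page.NextCursor)
	p.idx = 0
	return len(p.page.Items) > 0
}

// Current returns the next item and advances the iterator.
func (p *AutoPager) Current() string {
	item := p.page.Items[p.idx]
	p.idx++
	return item
}

func main() {
	pager := NewAutoPager()
	for pager.Next() {
		fmt.Println(pager.Current()) // prints a, b, c, d, e across three pages
	}
}
```

The SDK's real iterators follow the same `Next()`/`Current()` shape, plus an `Err()` method you should check after the loop.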
### Errors
When the API returns a non-success status code, we return an error with type
`*dedalus.Error`. This contains the `StatusCode`, `*http.Request`, and
`*http.Response` values of the request, as well as the JSON of the error body
(much like other response objects in the SDK).
To handle errors, we recommend that you use the `errors.As` pattern:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
_, err := client.Health.Check(context.TODO())
if err != nil {
    var apierr *dedalus.Error
    if errors.As(err, &apierr) {
        println(string(apierr.DumpRequest(true)))  // Prints the serialized HTTP request
        println(string(apierr.DumpResponse(true))) // Prints the serialized HTTP response
    }
    panic(err.Error()) // GET "/health": 400 Bad Request { ... }
}
```
When other errors occur, they are returned unwrapped; for example,
if HTTP transport fails, you might receive `*url.Error` wrapping `*net.OpError`.
### Timeouts
Requests do not time out by default; use context to configure a timeout for a request lifecycle.
Note that if a request is [retried](#retries), the context timeout does not start over.
To set a per-retry timeout, use `option.WithRequestTimeout()`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// This sets the timeout for the request, including all the retries.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
client.Health.Check(
    ctx,
    // This sets the per-retry timeout
    option.WithRequestTimeout(20*time.Second),
)
```
### File uploads
Request parameters that correspond to file uploads in multipart requests are typed as
`io.Reader`. The contents of the `io.Reader` will by default be sent as a multipart form
part with the file name of "anonymous\_file" and content-type of "application/octet-stream".
The file name and content-type can be customized by implementing `Name() string` or `ContentType()
string` on the run-time type of `io.Reader`. Note that `os.File` implements `Name() string`, so a
file returned by `os.Open` will be sent with the file name on disk.
We also provide a helper `dedalus.File(reader io.Reader, filename string, contentType string)`
which can be used to wrap any `io.Reader` with the appropriate file name and content type.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// A file from the file system
file, err := os.Open("/path/to/file")
dedalus.AudioTranscriptionNewParams{
    File:  file,
    Model: "model",
}

// A file from a string
dedalus.AudioTranscriptionNewParams{
    File:  strings.NewReader("my file contents"),
    Model: "model",
}

// With a custom filename and contentType
dedalus.AudioTranscriptionNewParams{
    File:  dedalus.File(strings.NewReader(`{"hello": "foo"}`), "file.json", "application/json"),
    Model: "model",
}
```
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
Connection errors, 408 Request Timeout, 409 Conflict, 429 Rate Limit,
and >=500 Internal errors are all retried by default.
You can use the `WithMaxRetries` option to configure or disable this:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
client := dedalus.NewClient(
    option.WithMaxRetries(0), // default is 2
)

// Override per-request:
client.Health.Check(context.TODO(), option.WithMaxRetries(5))
```
### Accessing raw response data (e.g. response headers)
You can access the raw HTTP response data by using the `option.WithResponseInto()` request option. This is useful when
you need to examine response headers, status codes, or other details.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Create a variable to store the HTTP response
var response *http.Response
response, err := client.Health.Check(context.TODO(), option.WithResponseInto(&response))
if err != nil {
    // handle error
}
fmt.Printf("%+v\n", response)
fmt.Printf("Status Code: %d\n", response.StatusCode)
fmt.Printf("Headers: %+#v\n", response.Header)
```
### Making custom/undocumented requests
This library is typed for convenient access to the documented API. If you need to access undocumented
endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.Get`, `client.Post`, and other HTTP verbs.
`RequestOptions` on the client, such as retries, will be respected when making these requests.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
var (
    // params can be an io.Reader, a []byte, an encoding/json serializable object,
    // or a "…Params" struct defined in this library.
    params map[string]any
    // result can be an []byte, *http.Response, an encoding/json deserializable object,
    // or a model defined in this library.
    result *http.Response
)
err := client.Post(context.Background(), "/unspecified", params, &result)
if err != nil {
    …
}
```
#### Undocumented request params
To make requests using undocumented parameters, you may use either the `option.WithQuerySet()`
or the `option.WithJSONSet()` methods.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
params := FooNewParams{
    ID: "id_xxxx",
    Data: FooNewParamsData{
        FirstName: dedalus.String("John"),
    },
}
client.Foo.New(context.Background(), params, option.WithJSONSet("data.last_name", "Doe"))
```
#### Undocumented response properties
To access undocumented response properties, you may either access the raw JSON of the response as a string
with `result.JSON.RawJSON()`, or get the raw JSON of a particular field on the result with
`result.JSON.Foo.Raw()`.
Any fields that are not present on the response struct are saved and can be accessed via `result.JSON.ExtraFields`, a `map[string]respjson.Field`.
### Middleware
We provide `option.WithMiddleware` which applies the given
middleware to requests.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
func Logger(req *http.Request, next option.MiddlewareNext) (res *http.Response, err error) {
    // Before the request
    start := time.Now()
    LogReq(req)

    // Forward the request to the next handler
    res, err = next(req)

    // Handle stuff after the request
    end := time.Now()
    LogRes(res, err, end.Sub(start))

    return res, err
}

client := dedalus.NewClient(
    option.WithMiddleware(Logger),
)
```
When multiple middlewares are provided as variadic arguments, they are applied left to right.
If `option.WithMiddleware` is given multiple times (for example, first on the client and then
on a method call), the middleware given on the client runs first and the middleware given on
the method runs next.
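The ordering above can be illustrated with plain function composition (a standalone sketch; `Next`, `Middleware`, `chain`, and `tag` are illustrative names, not the SDK's types):

```go
package main

import "fmt"

// Next forwards a request to the rest of the chain.
type Next func(req string) string

// Middleware wraps a Next, mirroring the shape of an HTTP middleware.
type Middleware func(req string, next Next) string

// chain applies middlewares left to right: the first middleware sees the request first.
func chain(handler Next, mws ...Middleware) Next {
	for i := len(mws) - 1; i >= 0; i-- {
		mw, rest := mws[i], handler
		handler = func(req string) string { return mw(req, rest) }
	}
	return handler
}

// tag returns a middleware that records its name on the request before forwarding it.
func tag(name string) Middleware {
	return func(req string, next Next) string {
		return next(req + "->" + name)
	}
}

func main() {
	h := chain(
		func(req string) string { return req + "->handler" },
		tag("client"), tag("method"),
	)
	fmt.Println(h("req")) // prints "req->client->method->handler"
}
```

The "client" middleware wraps the "method" middleware, so it runs first on the way in, matching the client-then-method ordering described above.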
You may also replace the default `http.Client` with
`option.WithHTTPClient(client)`. Only one http client is
accepted (this overwrites any previous client) and receives requests after any
middleware has been applied.
## Semantic versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
2. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-sdk-go/issues) with questions, bugs, or suggestions.
# Python SDK
Source: https://docs.dedaluslabs.ai/sdk/api/python
Platform API Python SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-sdk-python) | [Changelog](https://github.com/dedalus-labs/dedalus-sdk-python/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
pip install dedalus-labs
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
uv add dedalus-labs
```
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
from dedalus_labs import Dedalus

client = Dedalus(
    api_key=os.environ.get("DEDALUS_API_KEY"),  # This is the default and can be omitted
    environment="development",  # defaults to "production"
)

chat_completion = client.chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
)
print(chat_completion.id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `DEDALUS_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async Usage
Simply import `AsyncDedalus` instead of `Dedalus` and use `await` with each API call:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
import asyncio
from dedalus_labs import AsyncDedalus

client = AsyncDedalus(
    api_key=os.environ.get("DEDALUS_API_KEY"),  # This is the default and can be omitted
    environment="development",  # defaults to "production"
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        model="openai/gpt-5-nano",
        messages=[
            {
                "role": "system",
                "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
            },
            {
                "role": "user",
                "content": "Hello, how are you today?",
            },
        ],
    )
    print(chat_completion.id)


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh theme={"theme":{"light":"github-light","dark":"github-dark"}}
# install from PyPI
pip install "dedalus-labs[aiohttp]"
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
import asyncio
from dedalus_labs import AsyncDedalus, DefaultAioHttpClient


async def main() -> None:
    async with AsyncDedalus(
        api_key=os.environ.get("DEDALUS_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        chat_completion = await client.chat.completions.create(
            model="openai/gpt-5-nano",
            messages=[
                {
                    "role": "system",
                    "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
                },
                {
                    "role": "user",
                    "content": "Hello, how are you today?",
                },
            ],
        )
        print(chat_completion.id)


asyncio.run(main())
```
## Streaming
We provide support for streaming responses using Server-Sent Events (SSE).
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

client = Dedalus()

stream = client.chat.completions.create(
    model="openai/gpt-5-nano",
    stream=True,
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "What do you think of artificial intelligence?",
        },
    ],
)
for chat_completion in stream:
    print(chat_completion.id)
```
The async client uses the exact same interface.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import AsyncDedalus

client = AsyncDedalus()

stream = await client.chat.completions.create(
    model="openai/gpt-5-nano",
    stream=True,
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "What do you think of artificial intelligence?",
        },
    ],
)
async for chat_completion in stream:
    print(chat_completion.id)
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
* Serializing back into JSON, `model.to_json()`
* Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
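To make the benefit concrete, here is what "typed as TypedDicts" buys you, sketched with a hypothetical message shape (the SDK ships its own definitions; `MessageParam` here is illustrative only):

```python
from typing import TypedDict


# Hypothetical shape of a chat message param, for illustration only;
# import the real definitions from the SDK rather than redefining them.
class MessageParam(TypedDict):
    role: str
    content: str


msg: MessageParam = {"role": "user", "content": "Hello!"}

# A type checker flags missing or misspelled keys at edit time, e.g.:
# bad: MessageParam = {"roll": "user"}  # error: unknown key "roll"
```

At runtime a `TypedDict` is just a plain `dict`; the checking happens entirely in your editor or type checker.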
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

client = Dedalus()

chat_completion = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[{"role": "user", "content": "Hello!"}],
    audio={
        "format": "wav",
        "voice": "string",
    },
)
print(chat_completion.audio)
```
## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from pathlib import Path
from dedalus_labs import Dedalus

client = Dedalus()

client.audio.transcriptions.create(
    file=Path("/path/to/file"),
    model="model",
)
```
The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
## Error Handling
Wrap API calls in `try`/`except`; the SDK raises typed exceptions for HTTP failures.
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `dedalus_labs.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `dedalus_labs.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `dedalus_labs.APIError`.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import dedalus_labs
from dedalus_labs import Dedalus

client = Dedalus()

try:
    client.chat.completions.create(
        model="openai/gpt-5-nano",
        messages=[
            {
                "role": "system",
                "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
            },
            {
                "role": "user",
                "content": "Hello, how are you today?",
            },
        ],
    )
except dedalus_labs.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except dedalus_labs.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except dedalus_labs.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

# Configure the default for all requests:
client = Dedalus(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
from dedalus_labs import Dedalus

# Configure the default for all requests:
client = Dedalus(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Dedalus(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](#retries).
## Default Headers
We automatically send the following headers with all requests.
| Header | Value |
| --------------- | ------------- |
| `User-Agent` | `Dedalus-SDK` |
| `X-SDK-Version` | `1.0.0` |
If you need to, you can override these headers by setting default headers per-request or on the client object.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

client = Dedalus(
    default_headers={"User-Agent": "My-Custom-Value"},
)
```
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `DEDALUS_LOG` to `info`.
```shell theme={"theme":{"light":"github-light","dark":"github-dark"}}
$ export DEDALUS_LOG=info
```
Or to `debug` for more verbose logging.
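If you prefer configuring verbosity from code rather than the environment, the standard `logging` module works too. This sketch assumes the SDK logs under the `dedalus_labs` logger name, which is the usual convention for Python packages; verify against your installed version:

```python
import logging

# Roughly the in-code equivalent of `export DEDALUS_LOG=debug`.
# Assumes the SDK's loggers live under the "dedalus_labs" name (the usual
# convention for Python packages); adjust if your version differs.
logging.basicConfig(level=logging.WARNING)  # your app's default verbosity
logging.getLogger("dedalus_labs").setLevel(logging.DEBUG)  # verbose SDK logs only
```

This keeps your application's own logging quiet while surfacing the SDK's request and response details.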
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
if response.my_field is None:
    if "my_field" not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

client = Dedalus()

response = client.chat.completions.with_raw_response.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
)
print(response.headers.get("X-My-Header"))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion.id)
```
These methods return an [`APIResponse`](https://github.com/dedalus-labs/dedalus-sdk-python/tree/main/src/dedalus_labs/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/dedalus-labs/dedalus-sdk-python/tree/main/src/dedalus_labs/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
with client.chat.completions.with_streaming_response.create(
    model="openai/gpt-5-nano",
    messages=[
        {
            "role": "system",
            "content": "You are Stephen Dedalus. Respond in morose Joycean malaise.",
        },
        {
            "role": "user",
            "content": "Hello, how are you today?",
        },
    ],
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
HTTP verbs. Client options (such as retries) will be respected when making these requests.
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
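Conceptually, these options merge your extra values over the typed request parts. A rough standalone sketch of that behavior (not the SDK's internals; `my_undocumented_param` is a made-up name for illustration):

```python
from typing import Optional


def apply_extras(
    body: dict,
    query: dict,
    headers: dict,
    extra_body: Optional[dict] = None,
    extra_query: Optional[dict] = None,
    extra_headers: Optional[dict] = None,
) -> tuple:
    """Merge extra_* values over the typed request parts, mirroring what the
    extra_body/extra_query/extra_headers options do conceptually."""
    return (
        {**body, **(extra_body or {})},
        {**query, **(extra_query or {})},
        {**headers, **(extra_headers or {})},
    )


body, query, headers = apply_extras(
    {"model": "openai/gpt-5-nano"},
    {},
    {"Authorization": "Bearer YOUR_API_KEY"},
    extra_body={"my_undocumented_param": True},
    extra_headers={"X-Debug": "1"},
)
```

With the real client, you pass these options directly to a method call, e.g. `client.chat.completions.create(..., extra_body={"my_undocumented_param": True})`.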
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
* Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
* Custom [transports](https://www.python-httpx.org/advanced/transports/)
* Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
from dedalus_labs import Dedalus, DefaultHttpxClient

client = Dedalus(
    # Or use the `DEDALUS_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_labs import Dedalus

with Dedalus() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-sdk-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
import dedalus_labs
print(dedalus_labs.__version__)
```
## Requirements
Python 3.9 or higher is required.
# TypeScript SDK
Source: https://docs.dedaluslabs.ai/sdk/api/typescript
Platform API TypeScript SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-sdk-typescript) | [Changelog](https://github.com/dedalus-labs/dedalus-sdk-typescript/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
npm install dedalus-labs
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
pnpm add dedalus-labs
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
yarn add dedalus-labs
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
bun add dedalus-labs
```
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```js theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus({
apiKey: process.env['DEDALUS_API_KEY'], // This is the default and can be omitted
environment: 'development', // defaults to 'production'
});
const completion = await client.chat.completions.create({
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
});
console.log(completion.id);
```
## Streaming
We provide support for streaming responses using Server-Sent Events (SSE).
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus();
const stream = await client.chat.completions.create({
model: 'openai/gpt-5-nano',
stream: true,
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'What do you think of artificial intelligence?' },
],
});
for await (const streamChunk of stream) {
console.log(streamChunk.id);
}
```
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
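Breaking out of the loop closes the stream because `for await` (and `for…of`) loops call the iterator's `return()` method, which runs the stream's cleanup logic. A minimal sketch with a plain generator (a hypothetical stand-in, not the SDK's stream class) shows the mechanics:

```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
let cleanedUp = false;

// Hypothetical stand-in for an SDK stream: a generator with cleanup logic.
function* mockStream() {
  try {
    yield 'chunk-1';
    yield 'chunk-2';
    yield 'chunk-3';
  } finally {
    cleanedUp = true; // runs when the consumer breaks early
  }
}

const received: string[] = [];
for (const chunk of mockStream()) {
  received.push(chunk);
  if (received.length === 2) break; // break triggers the finally block
}
```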
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus({
apiKey: process.env['DEDALUS_API_KEY'], // This is the default and can be omitted
environment: 'development', // defaults to 'production'
});
const params: Dedalus.Chat.CompletionCreateParams = {
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
};
const completion: Dedalus.Chat.Completion = await client.chat.completions.create(params);
```
Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
## File uploads
Request parameters that correspond to file uploads can be passed in many different forms:
* `File` (or an object with the same structure)
* a `fetch` `Response` (or an object with the same structure)
* an `fs.ReadStream`
* the return value of our `toFile` helper
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import fs from 'fs';
import Dedalus, { toFile } from 'dedalus-labs';
const client = new Dedalus();
// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
await client.audio.transcriptions.create({ file: fs.createReadStream('/path/to/file'), model: 'model' });
// Or if you have the web `File` API you can pass a `File` instance:
await client.audio.transcriptions.create({ file: new File(['my bytes'], 'file'), model: 'model' });
// You can also pass a `fetch` `Response`:
await client.audio.transcriptions.create({ file: await fetch('https://somesite/file'), model: 'model' });
// Finally, if none of the above are convenient, you can use our `toFile` helper:
await client.audio.transcriptions.create({
file: await toFile(Buffer.from('my bytes'), 'file'),
model: 'model',
});
await client.audio.transcriptions.create({
file: await toFile(new Uint8Array([0, 1, 2]), 'file'),
model: 'model',
});
```
## Error Handling
Wrap API calls in `try/catch` (or a `.catch` handler); the SDK throws typed errors for HTTP failures.
When the library is unable to connect to the API,
or if the API returns a non-success status code (i.e., 4xx or 5xx response),
a subclass of `APIError` will be thrown:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
const completion = await client.chat.completions
.create({
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
})
.catch(async (err) => {
if (err instanceof Dedalus.APIError) {
console.log(err.status); // 400
console.log(err.name); // BadRequestError
console.log(err.headers); // {server: 'nginx', ...}
} else {
throw err;
}
});
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the `maxRetries` option to configure or disable this:
```js theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
const client = new Dedalus({
maxRetries: 0, // default is 2
});
// Or, configure per-request:
await client.chat.completions.create({ model: 'openai/gpt-5-nano', messages: [{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' }, { role: 'user', content: 'Hello, how are you today?' }] }, {
maxRetries: 5,
});
```
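The backoff schedule itself isn't configurable. As a mental model only (the constants below are illustrative, not the SDK's internal values), an exponential schedule with a cap looks like:

```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Illustrative exponential backoff schedule with a cap; the SDK's
// actual constants and jitter are internal details.
function backoffDelays(maxRetries: number, baseMs = 500, capMs = 8000): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) =>
    Math.min(baseMs * 2 ** attempt, capMs),
  );
}

console.log(backoffDelays(4)); // delays double each attempt: 500, 1000, 2000, 4000 ms
```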
### Timeouts
Requests time out after 1 minute by default. You can configure this with a `timeout` option:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
const client = new Dedalus({
timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});
// Override per-request:
await client.chat.completions.create({ model: 'openai/gpt-5-nano', messages: [{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' }, { role: 'user', content: 'Hello, how are you today?' }] }, {
timeout: 5 * 1000,
});
```
On timeout, an `APIConnectionTimeoutError` is thrown.
Note that requests which time out will be [retried twice by default](#retries).
## Default Headers
We automatically send the following headers with all requests.
| Header | Value |
| --------------- | ------------- |
| `User-Agent` | `Dedalus-SDK` |
| `X-SDK-Version` | `1.0.0` |
If needed, you can override these headers on a per-request basis.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus();
const completion = await client.chat.completions.create(
{
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
},
{ headers: { 'User-Agent': 'My-Custom-Value' } },
);
```
### Accessing raw Response data (e.g., headers)
The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.
This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.
You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.
Unlike `.asResponse()` this method consumes the body, returning once it is parsed.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
const client = new Dedalus();
const response = await client.chat.completions
.create({
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
})
.asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object
const { data: completion, response: raw } = await client.chat.completions
.create({
model: 'openai/gpt-5-nano',
messages: [
{ role: 'system', content: 'You are Stephen Dedalus. Respond in morose Joycean malaise.' },
{ role: 'user', content: 'Hello, how are you today?' },
],
})
.withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(completion.id);
```
### Logging
> [!IMPORTANT]
> All log messages are intended for debugging only. The format and content of log messages
> may change between releases.
#### Log levels
The log level can be configured in two ways:
1. Via the `DEDALUS_LOG` environment variable
2. Using the `logLevel` client option (overrides the environment variable if set)
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus({
logLevel: 'debug', // Show all log messages
});
```
Available log levels, from most to least verbose:
* `'debug'` - Show debug messages, info, warnings, and errors
* `'info'` - Show info messages, warnings, and errors
* `'warn'` - Show warnings and errors (default)
* `'error'` - Show only errors
* `'off'` - Disable all logging
At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies.
Some authentication-related headers are redacted, but sensitive data in request and response bodies
may still be visible.
#### Custom logger
By default, this library logs to `globalThis.console`. You can also provide a custom logger.
Most logging libraries are supported, including [pino](https://www.npmjs.com/package/pino), [winston](https://www.npmjs.com/package/winston), [bunyan](https://www.npmjs.com/package/bunyan), [consola](https://www.npmjs.com/package/consola), [signale](https://www.npmjs.com/package/signale), and [@std/log](https://jsr.io/@std/log). If your logger doesn't work, please open an issue.
When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages
below the configured level will not be sent to your logger.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
import pino from 'pino';
const logger = pino();
const client = new Dedalus({
logger: logger.child({ name: 'Dedalus' }),
logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
```
### Making custom/undocumented requests
This library is typed for convenient access to the documented API. If you need to access undocumented
endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs.
Options on the client, such as retries, will be respected when making these requests.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.post('/some/path', {
body: { some_prop: 'foo' },
query: { some_query_arg: 'bar' },
});
```
#### Undocumented request params
To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented
parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you
send will be sent as-is.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.chat.completions.create({
// ...
// @ts-expect-error baz is not yet public
baz: 'undocumented option',
});
```
For requests with the `GET` verb, any extra params will be sent in the query string; all other requests will send the
extra params in the body.
If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you may read them with `// @ts-expect-error` on the property
access, or cast the response object to the requisite type. As with the request params, we do not
validate or strip extra properties from the API response.
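For example, a type cast can widen the response type to include a property the SDK doesn't declare yet (the `Completion` interface and `baz` property here are hypothetical):

```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Hypothetical: the SDK's response type doesn't declare `baz` yet.
interface Completion {
  id: string;
}

// Pretend this came back from the API with an extra property.
const completion = { id: 'cmpl_1', baz: 'undocumented' } as Completion;

// Cast to a widened type to read the undeclared property:
const baz = (completion as Completion & { baz?: string }).baz;
console.log(baz);
```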
### Customizing the fetch client
By default, this library expects that a global `fetch` function is defined.
If you want to use a different `fetch` function, you can either polyfill the global:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import fetch from 'my-fetch';
globalThis.fetch = fetch;
```
Or pass it to the client:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
import fetch from 'my-fetch';
const client = new Dedalus({ fetch });
```
### Fetch options
If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus({
fetchOptions: {
// `RequestInit` options
},
});
```
#### Configuring proxies
To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy
options to requests:
**Node** \[[docs](https://github.com/nodejs/undici/blob/main/docs/docs/api/ProxyAgent.md#example---proxyagent-with-fetch)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
import * as undici from 'undici';
const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
const client = new Dedalus({
fetchOptions: {
dispatcher: proxyAgent,
},
});
```
**Bun** \[[docs](https://bun.sh/guides/http/proxy)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus-labs';
const client = new Dedalus({
fetchOptions: {
proxy: 'http://localhost:8888',
},
});
```
**Deno** \[[docs](https://docs.deno.com/api/deno/~/Deno.createHttpClient)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'npm:dedalus-labs';
const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
const client = new Dedalus({
fetchOptions: {
client: httpClient,
},
});
```
## Semantic versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-sdk-typescript/issues) with questions, bugs, or suggestions.
## Requirements
TypeScript >= 4.9 is supported.
The following runtimes are supported:
* Web browsers (Up-to-date Chrome, Firefox, Safari, Edge, and more)
* Node.js 20 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
* Deno v1.28.0 or higher.
* Bun 1.0 or later.
* Cloudflare Workers.
* Vercel Edge Runtime.
* Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time).
* Nitro v2.6 or greater.
Note that React Native is not supported at this time.
If you are interested in other runtime environments, please open or upvote an issue on GitHub.
# Go SDK
Source: https://docs.dedaluslabs.ai/sdk/dcs/go
DCS Machines Go SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-go) | [Changelog](https://github.com/dedalus-labs/dedalus-go/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
go get github.com/dedalus-labs/dedalus-go
```
This library requires Go 1.22+.
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
package main
import (
"context"
"fmt"
"github.com/dedalus-labs/dedalus-go"
"github.com/dedalus-labs/dedalus-go/option"
)
func main() {
client := dedalus.NewClient(
option.WithAPIKey("My API Key"), // defaults to os.LookupEnv("DEDALUS_API_KEY")
)
machine, err := client.Machines.New(context.TODO(), dedalus.MachineNewParams{
CreateParams: dedalus.CreateParams{
MemoryMiB: 0,
StorageGiB: 0,
VCPU: 0,
},
})
if err != nil {
panic(err.Error())
}
fmt.Printf("%+v\n", machine.MachineID)
}
```
### Request fields
The dedalus library uses the [`omitzero`](https://tip.golang.org/doc/go1.24#encodingjsonpkgencodingjson)
semantics from the Go 1.24+ `encoding/json` release for request fields.
Required primitive fields (`int64`, `string`, etc.) feature the tag `api:"required"`. These
fields are always serialized, even when set to their zero values.
Optional primitive types are wrapped in a `param.Opt[T]`. These fields can be set with the provided constructors, `dedalus.String(string)`, `dedalus.Int(int64)`, etc.
Any `param.Opt[T]`, map, slice, struct, or string enum uses the
tag `json:"...,omitzero"`. Its zero value is considered omitted.
The `param.IsOmitted(any)` function can confirm the presence of any `omitzero` field.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
p := dedalus.ExampleParams{
ID: "id_xxx", // required property
Name: dedalus.String("..."), // optional property
Point: dedalus.Point{
X: 0, // required field will serialize as 0
Y: dedalus.Int(1), // optional field will serialize as 1
// ... omitted non-required fields will not be serialized
},
Origin: dedalus.Origin{}, // the zero value of [Origin] is considered omitted
}
```
To send `null` instead of a `param.Opt[T]`, use `param.Null[T]()`.
To send `null` instead of a struct `T`, use `param.NullStruct[T]()`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
p.Name = param.Null[string]() // 'null' instead of string
p.Point = param.NullStruct[Point]() // 'null' instead of struct
param.IsNull(p.Name) // true
param.IsNull(p.Point) // true
```
Request structs contain a `.SetExtraFields(map[string]any)` method which can send non-conforming
fields in the request body. Extra fields overwrite any struct fields with a matching
key. For security reasons, only use `SetExtraFields` with trusted data.
To send a custom value instead of a struct, use `param.Override[T](value)`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// In cases where the API specifies a given type,
// but you want to send something else, use [SetExtraFields]:
p.SetExtraFields(map[string]any{
"x": 0.01, // send "x" as a float instead of int
})
// Send a number instead of an object
custom := param.Override[dedalus.FooParams](12)
```
### Request unions
Unions are represented as a struct with fields prefixed by "Of" for each of its variants; only one field can be non-zero. The non-zero field will be serialized.
Sub-properties of the union can be accessed via methods on the union struct.
These methods return a mutable pointer to the underlying data, if present.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Only one field can be non-zero, use param.IsOmitted() to check if a field is set
type AnimalUnionParam struct {
OfCat *Cat `json:",omitzero,inline"`
OfDog *Dog `json:",omitzero,inline"`
}
animal := AnimalUnionParam{
OfCat: &Cat{
Name: "Whiskers",
Owner: PersonParam{
Address: AddressParam{Street: "3333 Coyote Hill Rd", Zip: 0},
},
},
}
// Mutating a field
if address := animal.GetOwner().GetAddress(); address != nil {
address.Zip = 94304
}
```
### Response objects
All fields in response structs are ordinary value types (not pointers or wrappers).
Response structs also include a special `JSON` field containing metadata about
each property.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
type Animal struct {
Name string `json:"name,nullable"`
Owners int `json:"owners"`
Age int `json:"age"`
JSON struct {
Name respjson.Field
Owners respjson.Field
Age respjson.Field
ExtraFields map[string]respjson.Field
} `json:"-"`
}
```
To handle optional data, use the `.Valid()` method on the JSON field.
`.Valid()` returns false if a field is `null`, not present, or couldn't be unmarshaled.
If `.Valid()` is false, the corresponding field will simply be its zero value.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
raw := `{"owners": 1, "name": null}`
var res Animal
json.Unmarshal([]byte(raw), &res)
// Accessing regular fields
res.Owners // 1
res.Name // ""
res.Age // 0
// Optional field checks
res.JSON.Owners.Valid() // true
res.JSON.Name.Valid() // false
res.JSON.Age.Valid() // false
// Raw JSON values
res.JSON.Owners.Raw() // "1"
res.JSON.Name.Raw() == "null" // true
res.JSON.Name.Raw() == respjson.Null // true
res.JSON.Age.Raw() == "" // true
res.JSON.Age.Raw() == respjson.Omitted // true
```
These `.JSON` structs also include an `ExtraFields` map containing
any properties in the json response that were not specified
in the struct. This can be useful for API features not yet
present in the SDK.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
body := res.JSON.ExtraFields["my_unexpected_field"].Raw()
```
### Response Unions
In responses, unions are represented by a flattened struct containing all possible fields from each of the
object variants.
To convert it to a variant use the `.AsFooVariant()` method or the `.AsAny()` method if present.
If a response value union contains primitive values, the primitive fields will appear alongside
the object properties, prefixed with `Of` and carrying the tag `json:"...,inline"`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
type AnimalUnion struct {
// From variants [Dog], [Cat]
Owner Person `json:"owner"`
// From variant [Dog]
DogBreed string `json:"dog_breed"`
// From variant [Cat]
CatBreed string `json:"cat_breed"`
// ...
JSON struct {
Owner respjson.Field
// ...
} `json:"-"`
}
// If animal variant
if animal.Owner.Address.ZipCode == "" {
panic("missing zip code")
}
// Switch on the variant
switch variant := animal.AsAny().(type) {
case Dog:
case Cat:
default:
panic("unexpected type")
}
```
### RequestOptions
This library uses the functional options pattern. Functions defined in the
`option` package return a `RequestOption`, which is a closure that mutates a
`RequestConfig`. These options can be supplied to the client or to individual
requests. For example:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
client := dedalus.NewClient(
// Adds a header to every request made by the client
option.WithHeader("X-Some-Header", "custom_header_info"),
)
client.Machines.New(context.TODO(), ...,
// Override the header
option.WithHeader("X-Some-Header", "some_other_custom_header_info"),
// Add an undocumented field to the request body, using sjson syntax
option.WithJSONSet("some.json.path", map[string]string{"my": "object"}),
)
```
The request option `option.WithDebugLog(nil)` may be helpful while debugging.
See the [full list of request options](https://pkg.go.dev/github.com/dedalus-labs/dedalus-go/option).
### Pagination
This library provides some conveniences for working with paginated list endpoints.
You can use `.ListAutoPaging()` methods to iterate through items across all pages:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
iter := client.Machines.ListAutoPaging(context.TODO(), dedalus.MachineListParams{})
// Automatically fetches more pages as needed.
for iter.Next() {
machineListItem := iter.Current()
fmt.Printf("%+v\n", machineListItem)
}
if err := iter.Err(); err != nil {
panic(err.Error())
}
```
Or you can use simple `.List()` methods to fetch a single page and receive a standard response object
with additional helper methods like `.GetNextPage()`, e.g.:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
page, err := client.Machines.List(context.TODO(), dedalus.MachineListParams{})
for page != nil {
for _, machine := range page.Items {
fmt.Printf("%+v\n", machine)
}
page, err = page.GetNextPage()
}
if err != nil {
panic(err.Error())
}
```
### Errors
When the API returns a non-success status code, we return an error with type
`*dedalus.Error`. This contains the `StatusCode`, `*http.Request`, and
`*http.Response` values of the request, as well as the JSON of the error body
(much like other response objects in the SDK).
To handle errors, we recommend that you use the `errors.As` pattern:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
_, err := client.Machines.New(context.TODO(), dedalus.MachineNewParams{
CreateParams: dedalus.CreateParams{
MemoryMiB: 0,
StorageGiB: 0,
VCPU: 0,
},
})
if err != nil {
var apierr *dedalus.Error
if errors.As(err, &apierr) {
println(string(apierr.DumpRequest(true))) // Prints the serialized HTTP request
println(string(apierr.DumpResponse(true))) // Prints the serialized HTTP response
println(apierr.ErrorCode) // IDEMPOTENCY_KEY_REUSED
println(apierr.Message) // idempotency key reused with different request parameters
println(apierr.Retryable) // false
}
panic(err.Error()) // POST "/v1/machines": 400 Bad Request { ... }
}
```
When other errors occur, they are returned unwrapped; for example,
if HTTP transport fails, you might receive `*url.Error` wrapping `*net.OpError`.
### Timeouts
Requests do not time out by default; use context to configure a timeout for a request lifecycle.
Note that if a request is [retried](#retries), the context timeout does not start over.
To set a per-retry timeout, use `option.WithRequestTimeout()`.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// This sets the timeout for the request, including all the retries.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
client.Machines.New(
ctx,
dedalus.MachineNewParams{
CreateParams: dedalus.CreateParams{
MemoryMiB: 0,
StorageGiB: 0,
VCPU: 0,
},
},
// This sets the per-retry timeout
option.WithRequestTimeout(20*time.Second),
)
```
### File uploads
Request parameters that correspond to file uploads in multipart requests are typed as
`io.Reader`. The contents of the `io.Reader` will by default be sent as a multipart form
part with the file name `anonymous_file` and content-type `application/octet-stream`.
The file name and content-type can be customized by implementing `Name() string` or
`ContentType() string` on the run-time type of the `io.Reader`. Note that `os.File`
implements `Name() string`, so a file returned by `os.Open` will be sent with the file name on disk.
We also provide a helper `dedalus.File(reader io.Reader, filename string, contentType string)`
which can be used to wrap any `io.Reader` with the appropriate file name and content type.
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
We retry by default all connection errors, 408 Request Timeout, 409 Conflict, 429 Rate Limit,
and >=500 Internal errors.
You can use the `WithMaxRetries` option to configure or disable this:
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
client := dedalus.NewClient(
option.WithMaxRetries(0), // default is 2
)
// Override per-request:
client.Machines.New(
context.TODO(),
dedalus.MachineNewParams{
CreateParams: dedalus.CreateParams{
MemoryMiB: 0,
StorageGiB: 0,
VCPU: 0,
},
},
option.WithMaxRetries(5),
)
```
### Accessing raw response data (e.g. response headers)
You can access the raw HTTP response data by using the `option.WithResponseInto()` request option. This is useful when
you need to examine response headers, status codes, or other details.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Create a variable to store the HTTP response
var response *http.Response
machine, err := client.Machines.New(
context.TODO(),
dedalus.MachineNewParams{
CreateParams: dedalus.CreateParams{
MemoryMiB: 0,
StorageGiB: 0,
VCPU: 0,
},
},
option.WithResponseInto(&response),
)
if err != nil {
// handle error
}
fmt.Printf("%+v\n", machine)
fmt.Printf("Status Code: %d\n", response.StatusCode)
fmt.Printf("Headers: %+#v\n", response.Header)
```
### Making custom/undocumented requests
This library is typed for convenient access to the documented API. If you need to access undocumented
endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.Get`, `client.Post`, and other HTTP verbs.
`RequestOptions` on the client, such as retries, will be respected when making these requests.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
var (
// params can be an io.Reader, a []byte, an encoding/json serializable object,
// or a "…Params" struct defined in this library.
params map[string]any
// result can be an []byte, *http.Response, a encoding/json deserializable object,
// or a model defined in this library.
result *http.Response
)
err := client.Post(context.Background(), "/unspecified", params, &result)
if err != nil {
…
}
```
#### Undocumented request params
To make requests using undocumented parameters, you may use either the `option.WithQuerySet()`
or the `option.WithJSONSet()` methods.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
params := FooNewParams{
ID: "id_xxxx",
Data: FooNewParamsData{
FirstName: dedalus.String("John"),
},
}
client.Foo.New(context.Background(), params, option.WithJSONSet("data.last_name", "Doe"))
```
#### Undocumented response properties
To access undocumented response properties, you may either access the raw JSON of the response as a string
with `result.JSON.RawJSON()`, or get the raw JSON of a particular field on the result with
`result.JSON.Foo.Raw()`.
Any fields that are not present on the response struct will be saved and can be accessed by `result.JSON.ExtraFields()` which returns the extra fields as a `map[string]Field`.
### Middleware
We provide `option.WithMiddleware` which applies the given
middleware to requests.
```go theme={"theme":{"light":"github-light","dark":"github-dark"}}
func Logger(req *http.Request, next option.MiddlewareNext) (res *http.Response, err error) {
// Before the request
start := time.Now()
LogReq(req)
// Forward the request to the next handler
res, err = next(req)
// Handle stuff after the request
end := time.Now()
LogRes(res, err, end.Sub(start))
return res, err
}
client := dedalus.NewClient(
option.WithMiddleware(Logger),
)
```
When multiple middlewares are provided as variadic arguments, they are
applied left to right. If `option.WithMiddleware` is given multiple times
(for example, first on the client and then on a method call), the client
middleware runs first and the method middleware runs next.
You may also replace the default `http.Client` with
`option.WithHTTPClient(client)`. Only one HTTP client is
accepted (this overwrites any previous client), and it receives requests after any
middleware has been applied.
## Semantic versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
2. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-go/issues) with questions, bugs, or suggestions.
# Python SDK
Source: https://docs.dedaluslabs.ai/sdk/dcs/python
DCS Machines Python SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-python) | [Changelog](https://github.com/dedalus-labs/dedalus-python/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
pip install dedalus-sdk
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
uv add dedalus-sdk
```
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
from dedalus_sdk import Dedalus
client = Dedalus(
api_key=os.environ.get("DEDALUS_API_KEY"), # This is the default and can be omitted
)
machine = client.machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
print(machine.machine_id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `DEDALUS_API_KEY="My API Key"` to your `.env` file
so that your API key is not stored in source control.
## Async Usage
Simply import `AsyncDedalus` instead of `Dedalus` and use `await` with each API call:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
import asyncio
from dedalus_sdk import AsyncDedalus
client = AsyncDedalus(
api_key=os.environ.get("DEDALUS_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
machine = await client.machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
print(machine.machine_id)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh theme={"theme":{"light":"github-light","dark":"github-dark"}}
# install from PyPI
pip install dedalus-sdk[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import os
import asyncio
from dedalus_sdk import DefaultAioHttpClient
from dedalus_sdk import AsyncDedalus
async def main() -> None:
async with AsyncDedalus(
api_key=os.environ.get("DEDALUS_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
machine = await client.machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
print(machine.machine_id)
asyncio.run(main())
```
## Streaming
We provide support for streaming responses using Server-Sent Events (SSE).
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
client = Dedalus()
stream = client.machines.watch(
machine_id="machine_id",
)
for machine in stream:
print(machine.machine_id)
```
The async client uses the exact same interface.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import AsyncDedalus
client = AsyncDedalus()
stream = await client.machines.watch(
machine_id="machine_id",
)
async for machine in stream:
print(machine.machine_id)
```
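Independent of the SDK, an SSE stream is just a series of `data:` frames separated by blank lines, each carrying a JSON payload. A minimal standalone parser sketch (illustrative only, not the SDK's internals):

```python
import json

def parse_sse(raw: str):
    """Yield the JSON payload of each `data:` frame in an SSE stream."""
    for frame in raw.split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):])

# Two frames, as they would arrive over the wire.
raw = 'data: {"machine_id": "m_1"}\n\ndata: {"machine_id": "m_2"}\n\n'
events = list(parse_sse(raw))
print([e["machine_id"] for e in events])  # → ['m_1', 'm_2']
```

The SDK's stream objects wrap this kind of decoding for you, which is why iterating the stream yields typed `machine` objects rather than raw frames.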
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
* Serializing back into JSON, `model.to_json()`
* Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
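To illustrate the TypedDict pattern, here is a standalone sketch whose fields mirror the `machines.create()` parameters used above (the class itself is hypothetical, not exported by the SDK):

```python
from typing import TypedDict

class MachineCreateParams(TypedDict):
    # Mirrors the keyword arguments accepted by client.machines.create().
    memory_mib: int
    storage_gib: int
    vcpu: int

# A TypedDict is a plain dict at runtime; the annotations only guide
# static type checkers and editor autocomplete.
params: MachineCreateParams = {"memory_mib": 2048, "storage_gib": 10, "vcpu": 1}
print(params["vcpu"])  # → 1
```

Because the value is an ordinary dict, it can be unpacked straight into a method call with `**params`.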
## Pagination
List methods in the Dedalus API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
client = Dedalus()
all_machines = []
# Automatically fetches more pages as needed.
for machine in client.machines.list():
# Do something with machine here
all_machines.append(machine)
print(all_machines)
```
Or, asynchronously:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import asyncio
from dedalus_sdk import AsyncDedalus
client = AsyncDedalus()
async def main() -> None:
all_machines = []
# Iterate through items across all pages, issuing requests as needed.
async for machine in client.machines.list():
all_machines.append(machine)
print(all_machines)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
first_page = await client.machines.list()
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.items)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
first_page = await client.machines.list()
print(f"next page cursor: {first_page.next_cursor}") # => "next page cursor: ..."
for machine in first_page.items:
print(machine.machine_id)
# Remove `await` for non-async usage.
```
## Error Handling
Always wrap API calls in try/except. The SDK raises typed errors for HTTP failures.
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `dedalus_sdk.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `dedalus_sdk.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `dedalus_sdk.APIError`.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import dedalus_sdk
from dedalus_sdk import Dedalus
client = Dedalus()
try:
client.machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
except dedalus_sdk.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except dedalus_sdk.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except dedalus_sdk.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
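The table above corresponds to an exception hierarchy rooted at `APIError`. A standalone sketch of how a status code might select an error class (the class names match the table, but the dispatch logic is illustrative, not the SDK's actual implementation):

```python
class APIError(Exception): ...
class APIStatusError(APIError): ...
class BadRequestError(APIStatusError): ...
class AuthenticationError(APIStatusError): ...
class PermissionDeniedError(APIStatusError): ...
class NotFoundError(APIStatusError): ...
class UnprocessableEntityError(APIStatusError): ...
class RateLimitError(APIStatusError): ...
class InternalServerError(APIStatusError): ...

_STATUS_TO_ERROR = {
    400: BadRequestError,
    401: AuthenticationError,
    403: PermissionDeniedError,
    404: NotFoundError,
    422: UnprocessableEntityError,
    429: RateLimitError,
}

def error_for_status(status_code: int) -> type:
    # All 5xx codes map to InternalServerError; any other unmapped
    # status falls back to the generic APIStatusError.
    if status_code >= 500:
        return InternalServerError
    return _STATUS_TO_ERROR.get(status_code, APIStatusError)

print(error_for_status(429).__name__)  # → RateLimitError
```

Because every class inherits from `APIError`, catching the base class is enough when you do not care which specific failure occurred.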
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
# Configure the default for all requests:
client = Dedalus(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
```
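The "short exponential backoff" mentioned above can be sketched independently of the SDK; the base delay and cap below are illustrative values, not the SDK's actual schedule:

```python
import random

def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0):
    """Illustrative exponential backoff: base * 2**attempt, capped, with jitter."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        # Full jitter spreads retries out to avoid thundering herds.
        yield random.uniform(0, delay)

# With max_retries=2 (the default), a failed request is retried twice,
# with delays drawn from [0, 0.5] and [0, 1.0] seconds respectively.
print(len(list(backoff_delays(2))))  # → 2
```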
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
from dedalus_sdk import Dedalus
# Configure the default for all requests:
client = Dedalus(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Dedalus(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).machines.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
```
On timeout, an `APITimeoutError` is raised.
Note that requests that time out are [retried twice by default](#retries).
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `DEDALUS_LOG` to `info`.
```shell theme={"theme":{"light":"github-light","dark":"github-dark"}}
$ export DEDALUS_LOG=info
```
Or set it to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
client = Dedalus()
response = client.machines.with_raw_response.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
)
print(response.headers.get('X-My-Header'))
machine = response.parse() # get the object that `machines.create()` would have returned
print(machine.machine_id)
```
These methods return an [`APIResponse`](https://github.com/dedalus-labs/dedalus-python/tree/main/src/dedalus_sdk/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/dedalus-labs/dedalus-python/tree/main/src/dedalus_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
with client.machines.with_streaming_response.create(
memory_mib=2048,
storage_gib=10,
vcpu=1,
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other
HTTP verbs. Options on the client (such as retries) will be respected when making these requests.
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
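Conceptually, these options layer on top of the typed parameters before the request is sent. A standalone sketch of the merge (the `build_request` helper is hypothetical, not part of the SDK):

```python
def build_request(body, extra_body=None, query=None, extra_query=None):
    """Illustrate how `extra_body`/`extra_query` layer on top of typed params."""
    return {
        "json": {**body, **(extra_body or {})},
        "params": {**(query or {}), **(extra_query or {})},
    }

# An undocumented field rides along with the documented ones.
req = build_request(
    body={"memory_mib": 2048, "storage_gib": 10, "vcpu": 1},
    extra_body={"baz": "undocumented option"},
)
print(req["json"]["baz"])  # → undocumented option
```

Extra keys win on conflict here, mirroring the intuition that an explicitly passed extra overrides a typed default.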
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
* Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
* Custom [transports](https://www.python-httpx.org/advanced/transports/)
* Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
import httpx
from dedalus_sdk import Dedalus, DefaultHttpxClient
client = Dedalus(
# Or use the `DEDALUS_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
from dedalus_sdk import Dedalus
with Dedalus() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py theme={"theme":{"light":"github-light","dark":"github-dark"}}
import dedalus_sdk
print(dedalus_sdk.__version__)
```
Python 3.9 or higher is required.
# Typescript SDK
Source: https://docs.dedaluslabs.ai/sdk/dcs/typescript
DCS Machines Typescript SDK v0.1.0
**v0.1.0** | [GitHub](https://github.com/dedalus-labs/dedalus-typescript) | [Changelog](https://github.com/dedalus-labs/dedalus-typescript/blob/main/CHANGELOG.md)
## Installation
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
npm install dedalus
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
pnpm add dedalus
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
yarn add dedalus
```
```bash theme={"theme":{"light":"github-light","dark":"github-dark"}}
bun add dedalus
```
## Usage
See the full method reference in the [API Reference](/api-reference/dcs) tab.
```js theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus({
apiKey: process.env['DEDALUS_API_KEY'], // This is the default and can be omitted
});
const machine = await client.machines.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
});
console.log(machine.machine_id);
```
## Streaming
We provide support for streaming responses using Server-Sent Events (SSE).
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus();
const stream = await client.machines.watch({ machine_id: 'machine_id' });
for await (const machine of stream) {
console.log(machine.machine_id);
}
```
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus({
apiKey: process.env['DEDALUS_API_KEY'], // This is the default and can be omitted
});
const params: Dedalus.MachineCreateParams = {
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
};
const machine: Dedalus.Machine = await client.machines.create(params);
```
Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors.
## Error Handling
Always wrap API calls in try/catch. The SDK throws typed errors for HTTP failures.
When the library is unable to connect to the API,
or if the API returns a non-success status code (i.e., 4xx or 5xx response),
a subclass of `APIError` will be thrown:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
const machine = await client.machines
.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
})
.catch(async (err) => {
if (err instanceof Dedalus.APIError) {
console.log(err.status); // 400
console.log(err.name); // BadRequestError
console.log(err.error?.error_code); // IDEMPOTENCY_KEY_REUSED
console.log(err.error?.message); // idempotency key reused with different request parameters
console.log(err.error?.retryable); // false
console.log(err.headers); // {server: 'nginx', ...}
} else {
throw err;
}
});
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the `maxRetries` option to configure or disable this:
```js theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
const client = new Dedalus({
maxRetries: 0, // default is 2
});
// Or, configure per-request:
await client.machines.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
}, {
maxRetries: 5,
});
```
### Timeouts
Requests time out after 1 minute by default. You can configure this with a `timeout` option:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
// Configure the default for all requests:
const client = new Dedalus({
timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});
// Override per-request:
await client.machines.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
}, {
timeout: 5 * 1000,
});
```
On timeout, an `APIConnectionTimeoutError` is thrown.
Note that requests which time out will be [retried twice by default](#retries).
## Pagination
List methods in the Dedalus API are paginated.
You can use the `for await … of` syntax to iterate through items across all pages:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
async function fetchAllMachineListItems(params) {
const allMachineListItems = [];
// Automatically fetches more pages as needed.
for await (const machineListItem of client.machines.list()) {
allMachineListItems.push(machineListItem);
}
return allMachineListItems;
}
```
Alternatively, you can request a single page at a time:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
let page = await client.machines.list();
for (const machineListItem of page.items) {
console.log(machineListItem);
}
// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
page = await page.getNextPage();
// ...
}
```
### Accessing raw Response data (e.g., headers)
The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.
This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.
You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.
Unlike `.asResponse()` this method consumes the body, returning once it is parsed.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
const client = new Dedalus();
const response = await client.machines
.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
})
.asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object
const { data: machine, response: raw } = await client.machines
.create({
memory_mib: 2048,
storage_gib: 10,
vcpu: 1,
})
.withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(machine.machine_id);
```
### Logging
> [!IMPORTANT]
> All log messages are intended for debugging only. The format and content of log messages
> may change between releases.
#### Log levels
The log level can be configured in two ways:
1. Via the `DEDALUS_LOG` environment variable
2. Using the `logLevel` client option (overrides the environment variable if set)
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus({
logLevel: 'debug', // Show all log messages
});
```
Available log levels, from most to least verbose:
* `'debug'` - Show debug messages, info, warnings, and errors
* `'info'` - Show info messages, warnings, and errors
* `'warn'` - Show warnings and errors (default)
* `'error'` - Show only errors
* `'off'` - Disable all logging
At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies.
Some authentication-related headers are redacted, but sensitive data in request and response bodies
may still be visible.
#### Custom logger
By default, this library logs to `globalThis.console`. You can also provide a custom logger.
Most logging libraries are supported, including [pino](https://www.npmjs.com/package/pino), [winston](https://www.npmjs.com/package/winston), [bunyan](https://www.npmjs.com/package/bunyan), [consola](https://www.npmjs.com/package/consola), [signale](https://www.npmjs.com/package/signale), and [@std/log](https://jsr.io/@std/log). If your logger doesn't work, please open an issue.
When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages
below the configured level will not be sent to your logger.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
import pino from 'pino';
const logger = pino();
const client = new Dedalus({
logger: logger.child({ name: 'Dedalus' }),
logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
```
### Making custom/undocumented requests
This library is typed for convenient access to the documented API. If you need to access undocumented
endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs.
Options on the client, such as retries, will be respected when making these requests.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
await client.post('/some/path', {
body: { some_prop: 'foo' },
query: { some_query_arg: 'bar' },
});
```
#### Undocumented request params
To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented
parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you
send will be sent as-is.
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
client.machines.create({
// ...
// @ts-expect-error baz is not yet public
baz: 'undocumented option',
});
```
For requests with the `GET` verb, any extra params will be sent in the query string; all other requests send
extra params in the body.
If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you may use `// @ts-expect-error` when accessing them on
the response object, or cast the response object to the requisite type. As with the request params, we do not
validate or strip extra properties from the response from the API.
### Customizing the fetch client
By default, this library expects a global `fetch` function to be defined.
If you want to use a different `fetch` function, you can either polyfill the global:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import fetch from 'my-fetch';
globalThis.fetch = fetch;
```
Or pass it to the client:
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
import fetch from 'my-fetch';
const client = new Dedalus({ fetch });
```
### Fetch options
If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus({
fetchOptions: {
// `RequestInit` options
},
});
```
#### Configuring proxies
To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy
options to requests:
**Node** \[[docs](https://github.com/nodejs/undici/blob/main/docs/docs/api/ProxyAgent.md#example---proxyagent-with-fetch)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
import * as undici from 'undici';
const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
const client = new Dedalus({
fetchOptions: {
dispatcher: proxyAgent,
},
});
```
**Bun** \[[docs](https://bun.sh/guides/http/proxy)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'dedalus';
const client = new Dedalus({
fetchOptions: {
proxy: 'http://localhost:8888',
},
});
```
**Deno** \[[docs](https://docs.deno.com/api/deno/~/Deno.createHttpClient)]
```ts theme={"theme":{"light":"github-light","dark":"github-dark"}}
import Dedalus from 'npm:dedalus';
const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
const client = new Dedalus({
fetchOptions: {
client: httpClient,
},
});
```
## Semantic versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. *(Please open a GitHub issue to let us know if you are relying on such internals.)*
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/dedalus-labs/dedalus-typescript/issues) with questions, bugs, or suggestions.
TypeScript >= 4.9 is supported.
The following runtimes are supported:
* Web browsers (Up-to-date Chrome, Firefox, Safari, Edge, and more)
* Node.js 20 LTS or later ([non-EOL](https://endoflife.date/nodejs) versions).
* Deno v1.28.0 or higher.
* Bun 1.0 or later.
* Cloudflare Workers.
* Vercel Edge Runtime.
* Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time).
* Nitro v2.6 or greater.
Note that React Native is not supported at this time.
If you are interested in other runtime environments, please open or upvote an issue on GitHub.
# Quickstart
Source: https://docs.dedaluslabs.ai/sdk/quickstart
Install an official Dedalus SDK and make your first API call
Use the official SDKs below. For complete generated references, use the API tab and switch languages from the sidebar selector.
For orchestration patterns beyond raw SDK calls, see the [Dedalus Runner](/sdk/runner) - it is built to mix and chain tool calls across local and remote MCP servers in one agent loop.
# Cloud Deployment
Source: https://docs.dedaluslabs.ai/why-dedalus/3-click-deploy
Go from code to prod at the speed of thought.
# Use Any Model
Source: https://docs.dedaluslabs.ai/why-dedalus/any-model
Any model, any tool.
* **Any model**: OpenAI, Anthropic, Google, xAI, DeepSeek, and Mistral behind one interface.
* **MCP native**: first-class support for MCP tools, resources, prompts, and deployment flows.
* **Production ready**: streaming, structured outputs, handoffs, and policies for real workloads.
# MCP Marketplace
Source: https://docs.dedaluslabs.ai/why-dedalus/marketplace
A true MCP marketplace
Most platforms out there give you an easy way to deploy, an easy way to authenticate, or an easy way to integrate MCPs into your tech stack, but no single platform delivers all three.
# Security First
Source: https://docs.dedaluslabs.ai/why-dedalus/security
Authentication and multi-tenancy as first-class concerns.
Security is a first-class concern in our SDKs.
Elsewhere, you're left on your own when it comes to table-stakes features for your MCP server, such as authentication and multi-tenancy.
With Dedalus Auth (DAuth), we guarantee that third-party MCP servers hosted on our marketplace are **never** able to see your credentials during authentication.