BYOK lets you send requests through Dedalus using your own provider API key. The request still flows through our unified API (routing, tool calling, streaming, format normalization), but the LLM call is billed to your account with the provider.
When to use BYOK
- You have negotiated pricing or credits with a provider.
- You want to use a model tier or region not available on our shared keys.
- Your compliance policy requires that API keys stay under your control.
Quick start
Pass your provider credentials to the Dedalus constructor alongside your normal API key:
```typescript
import Dedalus from "dedalus-labs";

const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.choices[0]?.message?.content);
```
You can also set `DEDALUS_PROVIDER`, `DEDALUS_PROVIDER_KEY`, and `DEDALUS_PROVIDER_MODEL` as environment variables. The SDK reads them automatically; constructor params take precedence.
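That precedence can be sketched as a small resolution helper. `resolveByok` is illustrative only, not part of the SDK; it just shows the documented order: constructor params first, then `DEDALUS_*` environment variables.

```typescript
// Illustrative sketch of the documented precedence:
// constructor params win over DEDALUS_* environment variables.
interface ByokOptions {
  provider?: string;
  providerKey?: string;
  providerModel?: string;
}

function resolveByok(
  opts: ByokOptions,
  env: Record<string, string | undefined>,
): ByokOptions {
  return {
    provider: opts.provider ?? env["DEDALUS_PROVIDER"],
    providerKey: opts.providerKey ?? env["DEDALUS_PROVIDER_KEY"],
    providerModel: opts.providerModel ?? env["DEDALUS_PROVIDER_MODEL"],
  };
}
```

For example, `resolveByok({ provider: "anthropic" }, process.env)` keeps `"anthropic"` even if `DEDALUS_PROVIDER` is set, while the missing key and model fall back to the environment.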
The SDK attaches three headers to every outgoing request:
| Header | SDK option | Description |
|---|---|---|
| `X-Provider` | `provider` | Provider name (`openai`, `anthropic`, `google`, etc.) |
| `X-Provider-Key` | `providerKey` | Your API key for that provider |
| `X-Provider-Model` | `providerModel` | Model identifier at the provider (optional) |
Only `providerKey` is strictly required. If you omit `provider`, it is inferred from the model name. If you omit `providerModel`, the model from the request body is used.
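Those fallback rules can be sketched as follows. `buildByokHeaders` is a hypothetical helper, not the SDK's actual implementation, and stripping the `provider/` prefix off the request model for `X-Provider-Model` is an assumption here.

```typescript
// Hypothetical sketch of the documented fallback rules.
function buildByokHeaders(
  opts: { provider?: string; providerKey: string; providerModel?: string },
  requestModel: string, // e.g. "openai/gpt-4o-mini"
): Record<string, string> {
  // If provider is omitted, infer it from the "provider/model" prefix.
  const provider = opts.provider ?? requestModel.split("/")[0];
  // If providerModel is omitted, fall back to the model in the request
  // body (prefix stripping is an assumption, not documented behavior).
  const model =
    opts.providerModel ?? requestModel.split("/").slice(1).join("/");
  return {
    "X-Provider": provider,
    "X-Provider-Key": opts.providerKey,
    "X-Provider-Model": model,
  };
}
```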
Provider examples
```typescript
const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
```
Streaming
```typescript
const stream = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  stream: true,
  messages: [{ role: "user", content: "Tell me a short story." }],
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```
Switching providers at runtime
Use withOptions() to create a new client with different BYOK credentials. Both clients are independent — the original is not mutated.
```typescript
const openaiClient = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const anthropicClient = openaiClient.withOptions({
  provider: "anthropic",
  providerKey: process.env["ANTHROPIC_API_KEY"],
  providerModel: "claude-sonnet-4-20250514",
});

const [a, b] = await Promise.all([
  openaiClient.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Hello from OpenAI" }],
  }),
  anthropicClient.chat.completions.create({
    model: "anthropic/claude-sonnet-4-20250514",
    messages: [{ role: "user", content: "Hello from Anthropic" }],
    max_tokens: 50,
  }),
]);
```
Per-request overrides
Override BYOK headers for a single request without changing the client:
```typescript
const response = await client.chat.completions.create(
  {
    model: "google/gemini-2.5-pro",
    messages: [{ role: "user", content: "Hello" }],
  },
  {
    headers: {
      "X-Provider": "google",
      "X-Provider-Key": "your-google-key",
    },
  },
);
```
DedalusRunner with BYOK
DedalusRunner inherits BYOK from the client you pass in; no extra configuration is needed. Its core job is to run multi-step agent loops that can mix and chain tool calls across local tools and both remote and local MCP servers in a single run.
Basic runner
```typescript
import Dedalus, { DedalusRunner } from "dedalus-labs";

const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const runner = new DedalusRunner(client);
const result = await runner.run({
  model: "openai/gpt-4o-mini",
  instructions: "You are a helpful assistant.",
  input: "What are the three laws of thermodynamics?",
  maxSteps: 5,
});

console.log(result.output);
console.log(result.stepsUsed);
```
Runner with local tools
The Dedalus API currently strips `tool_calls[].function.arguments` from provider responses, so local tool execution does not work yet with BYOK. This affects both SDKs.
```typescript
import Dedalus, { DedalusRunner } from "dedalus-labs";
import { zodFunction } from "dedalus-labs/helpers/zod";
import { z } from "zod";

const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const calculator = zodFunction({
  name: "calculator",
  description: "Perform basic math operations",
  parameters: z.object({
    a: z.number(),
    b: z.number(),
    operation: z.enum(["add", "subtract", "multiply", "divide"]),
  }),
  function: (args) => {
    switch (args.operation) {
      case "add":
        return args.a + args.b;
      case "subtract":
        return args.a - args.b;
      case "multiply":
        return args.a * args.b;
      case "divide":
        return args.a / args.b;
    }
  },
});

const runner = new DedalusRunner(client);
const result = await runner.run({
  model: "openai/gpt-4o-mini",
  input: "What is 42 * 17?",
  tools: [calculator],
  maxSteps: 5,
});
```
Runner with MCP servers
```typescript
import Dedalus, { DedalusRunner } from "dedalus-labs";

const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
});

const runner = new DedalusRunner(client);
const result = await runner.run({
  model: "openai/gpt-4o-mini",
  input: "What's the weather forecast for San Francisco this week?",
  mcpServers: ["windsor/open-meteo-mcp"],
  maxSteps: 10,
});

console.log(result.output);
```
Runner with streaming
```typescript
import Dedalus, { DedalusRunner } from "dedalus-labs";

const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const runner = new DedalusRunner(client);
const result = await runner.run({
  model: "openai/gpt-4o-mini",
  input: "Write a short story about a robot learning to cook.",
  stream: true,
  maxSteps: 5,
});

if (Symbol.asyncIterator in result) {
  for await (const chunk of result) {
    if (chunk.choices?.[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}
```
Switching providers between runner calls
```typescript
import Dedalus, { DedalusRunner } from "dedalus-labs";

const openaiClient = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  provider: "openai",
  providerKey: process.env["OPENAI_API_KEY"],
  providerModel: "gpt-4o-mini",
});

const anthropicClient = openaiClient.withOptions({
  provider: "anthropic",
  providerKey: process.env["ANTHROPIC_API_KEY"],
  providerModel: "claude-sonnet-4-20250514",
});

const openaiRunner = new DedalusRunner(openaiClient);
const anthropicRunner = new DedalusRunner(anthropicClient);

const codeResult = await openaiRunner.run({
  model: "openai/gpt-4o-mini",
  input: "Write a fizzbuzz function in TypeScript",
  maxSteps: 3,
});

const reviewResult = await anthropicRunner.run({
  model: "anthropic/claude-sonnet-4-20250514",
  input: `Review this code:\n${codeResult.output}`,
  maxSteps: 3,
});
```
Partial BYOK
You don’t need to set all three parameters. If you only provide providerKey, the provider is inferred from the model name and the model from the request body.
```typescript
const client = new Dedalus({
  apiKey: process.env["DEDALUS_API_KEY"],
  providerKey: process.env["OPENAI_API_KEY"],
});
```
Supported providers
Any provider in our model list works with BYOK.
How it works
Your request still goes through Dedalus. We handle routing, format normalization, streaming, and tool calling. The only difference is which API key is used for the upstream LLM call.
```text
You → Dedalus API (your Dedalus key) → Provider (your provider key) → Response → You
```
BYOK keys are sent over HTTPS and are never stored. They are used for the duration of the request
and discarded. If you need Dedalus to manage keys on your behalf, contact us at
support@dedaluslabs.ai.
Error handling
| Scenario | What happens |
|---|---|
| Invalid provider name | HTTP 400 with supported provider list |
| Missing or invalid provider key | Provider returns its own auth error (usually 401) |
| Model not available on provider | Provider returns its own model error (usually 404) |
The error response always includes the upstream provider’s error message so you can debug directly.
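One way to branch on these cases client-side is to map the HTTP status to the likely cause from the table above. `classifyByokError` is illustrative only; the status codes for key and model errors come from the provider and are "usually" these values, not guaranteed.

```typescript
// Illustrative: map an HTTP status from a BYOK request to the likely
// cause, per the error-handling table. Provider-originated statuses
// (401, 404) are typical values, not guarantees.
function classifyByokError(status: number): string {
  switch (status) {
    case 400:
      return "Invalid provider name; check the supported provider list in the response.";
    case 401:
      return "Provider rejected the key; verify your provider API key.";
    case 404:
      return "Model not available on this provider; check the model identifier.";
    default:
      return "Unexpected error; inspect the upstream provider message in the response body.";
  }
}
```

In a `catch` block you would feed this the response status and surface the message alongside the upstream provider error included in the response.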