For chat products that want their AI agent to actually do work: install packages, write files, run scripts. One Dedalus Machine per user, persistent home directory, sleeps between conversations. The user’s ~/.bash_history, project files, npm caches, and shell context all survive across sessions. The browser agent cookbook shows the agent loop. This page is the multi-tenant wrapper: provision lazily, sleep aggressively, route the right agent to the right machine.
npm install dedalus-labs

1. Provision lazily — one VM per user, on first message

```typescript
import Dedalus from "dedalus-labs";

const dedalus = new Dedalus({ apiKey: process.env.DEDALUS_API_KEY! });

// `db` is your application's datastore; `runAndWait` executes a command on
// the machine and blocks until it exits (see the browser agent cookbook).
async function machineForUser(userId: string): Promise<string> {
  const existing = await db.machineIdFor(userId);
  if (existing) return existing;

  // First message from this user: create the VM and install a base toolchain.
  const m = await dedalus.machines.create({ vcpu: 1, memory_mib: 2048, storage_gib: 10 });
  await runAndWait(m.machine_id, ["/bin/bash", "-c",
    "apt-get update && apt-get install -y curl git python3-pip nodejs",
  ]);
  await db.setMachineIdFor(userId, m.machine_id);
  return m.machine_id;
}
```

2. On every chat turn: wake, drive, sleep

```typescript
async function handleTurn(userId: string, userPrompt: string) {
  const id = await machineForUser(userId);
  const m = await dedalus.machines.retrieve(id);
  // Conditional wake: If-Match ties the request to the revision we just
  // read, so a concurrent lifecycle change fails fast instead of racing.
  await dedalus.machines.wake(id, { "If-Match": m.revision });

  // Open or reuse the agent's terminal — see /cookbook/browser-agent
  // for the full WebSocket-driven agent loop.
  const term = await dedalus.machines.terminals.create(id, { width: 100, height: 30 });
  const reply = await runAgentLoop(term.stream_url, userPrompt);

  // Don't sleep eagerly here — auto-sleep handles idle. But you *can*
  // call sleep explicitly if you know the conversation just ended.
  return reply;
}
```
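If a user sends a second message before the first turn finishes, both turns would drive the same terminal at once. A small per-user queue serializes turns against one machine; this is a sketch under that assumption, not an SDK feature:

```typescript
// Chain each user's turns so a new turn starts only after the previous
// one settles. Errors still propagate to the caller via the returned
// promise, but are swallowed in the stored tail so one failed turn
// doesn't poison the queue.
function perUserQueue() {
  const tails = new Map<string, Promise<unknown>>();
  return function run<T>(userId: string, work: () => Promise<T>): Promise<T> {
    const prev = tails.get(userId) ?? Promise.resolve();
    const next = prev.then(work, work); // run regardless of prior outcome
    tails.set(userId, next.catch(() => {}));
    return next;
  };
}
```

Usage: `const run = perUserQueue();` once at startup, then `run(userId, () => handleTurn(userId, prompt))` per incoming message.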
That’s the whole multi-tenant pattern. Per-user state is isolated by VM boundary; the agent has root inside its own box and zero access to anyone else’s.

Why a per-user VM beats a per-session container

  • State persists across turns and sessions. cd /tmp/project && ls followed three days later by git status Just Works. Containers reset; the agent re-installs Node every conversation.
  • The filesystem is the agent’s long-term memory. Markdown notes the agent took, scripts it wrote, scraped data — all survive in /root indefinitely. No vector store required for “remember this project.”
  • Sleep-to-zero between conversations. A user who chats for 10 minutes a day costs 10 minutes of compute, not 24 hours.
  • Real apt-get, real Python venvs, real Cargo. No Lambda layer hacks.

Cost shape

  • ~$X / vCPU-hour while awake (see pricing).
  • ~$Y / GiB-month for sleeping storage.
  • A user who actively chats 30 min/day on a 1-vCPU machine accrues about 15 vCPU-hours of compute per month (0.5 h × 30 days) plus 10 GiB of storage. Sleeping users pay only for storage.
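The arithmetic above fits in one helper. Rates are parameters here because the real $/vCPU-hour and $/GiB-month figures live on the pricing page; the numbers in the usage note below are placeholders, not actual prices:

```typescript
interface Rates {
  vcpuHourUsd: number; // awake compute, per vCPU-hour
  gibMonthUsd: number; // sleeping storage, per GiB-month
}

// Monthly bill for one user: awake compute plus always-billed storage,
// assuming a 30-day month.
function monthlyCostUsd(awakeHoursPerDay: number, vcpu: number, storageGib: number, r: Rates): number {
  return awakeHoursPerDay * 30 * vcpu * r.vcpuHourUsd + storageGib * r.gibMonthUsd;
}
```

With placeholder rates of $0.05/vCPU-hour and $0.10/GiB-month, the 30 min/day user above costs 15 × 0.05 + 10 × 0.10 = $1.75/month, and a fully sleeping user pays only the $1.00 storage term.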

Security notes

  • Per-user isolation is at the VM boundary. Each machine runs its own Linux kernel. There’s no shared-kernel container risk.
  • Outbound egress is on by default. If the agent is allowed to run untrusted code on the user’s behalf, treat the VM as compromisable and don’t store cross-tenant secrets there. Issue scoped credentials per machine.
  • Inbound is closed by default. A user’s machine isn’t reachable from the internet unless you explicitly create a port for it.

See also

  • Browser Agent — the agent loop driving the terminal.
  • Terminals — the WebSocket protocol your agent speaks.
  • Lifecycle — sleep / wake / destroy semantics.
Last modified on May 2, 2026