Sampling lets your server ask the client to run an LLM and return a completion while a tool is executing. This lets tools use AI for analysis and generation without the client having to orchestrate multiple tool calls.

Basic usage

from dedalus_mcp import get_context, tool, types

@tool(description="Analyze data with Gaussian assumptions")
async def analyze(data: str) -> str:
    ctx = get_context()

    params = types.CreateMessageRequestParams(
        messages=[
            types.SamplingMessage(
                role="user",
                content=types.TextContent(type="text", text=f"Analyze this data with Gaussian assumptions and expose the estimators: {data}"),
            )
        ],
        maxTokens=400,
    )

    result = await ctx.server.request_sampling(params)
    return result.content.text

Parameters

Sampling requests are expressed as CreateMessageRequestParams (field names match the Model Context Protocol (MCP) schema, e.g. maxTokens, systemPrompt).
params = types.CreateMessageRequestParams(
    messages=[
        types.SamplingMessage(
            role="user",
            content=types.TextContent(type="text", text="Analyze this data"),
        )
    ],
    systemPrompt="You are an expert analyst",
    temperature=0.7,     # 0.0 = deterministic, 1.0 = creative
    maxTokens=1024,      # maximum output tokens
)
result = await ctx.server.request_sampling(params)
| Parameter | Type | Description |
| --- | --- | --- |
| messages | list[SamplingMessage] | prompt or conversation messages |
| systemPrompt | str \| None | instructions for the LLM |
| temperature | float \| None | randomness/creativity |
| maxTokens | int | maximum output tokens (required) |
| model | str \| None | optional model hint |
| stopSequences | list[str] \| None | stop strings |
| includeContext | "none" \| "thisServer" \| "allServers" \| None | whether the client should include additional context |
| modelPreferences | ModelPreferences \| None | model selection preferences (the client may ignore them) |
| metadata | dict[str, object] \| None | opaque metadata; Dedalus adds a requestId if absent |
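The modelPreferences priorities are advisory: the client makes the final model choice and may ignore them entirely. Purely as an illustration (not part of dedalus_mcp, and not how any particular client actually behaves), a client could weigh the MCP schema's costPriority/speedPriority/intelligencePriority fields with a hypothetical scorer like this:

```python
def pick_model(candidates: list[dict], *, costPriority: float = 0.0,
               speedPriority: float = 0.0, intelligencePriority: float = 0.0) -> str:
    """Pick the candidate with the highest weighted score.

    `candidates` entries are hypothetical: {"name", "cost", "speed",
    "intelligence"}, each attribute pre-scaled to 0..1 where higher is
    better (so "cost" here means cheapness).
    """
    def score(c: dict) -> float:
        return (costPriority * c["cost"]
                + speedPriority * c["speed"]
                + intelligencePriority * c["intelligence"])

    return max(candidates, key=score)["name"]
```

A server that cares more about quality than latency would set intelligencePriority high and leave the rest low, while still accepting whatever model the client ultimately picks.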

Response

request_sampling(...) returns a CreateMessageResult. Most clients return TextContent:
result = await ctx.server.request_sampling(params)
print(result.content.text)
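Since most — but not necessarily all — clients return TextContent, a tool that must never crash on an unexpected content variant can go through a defensive accessor. A duck-typed sketch (extract_text is a hypothetical helper keyed on the MCP-style type discriminator, not part of dedalus_mcp):

```python
def extract_text(content: object) -> str:
    """Return the text of a sampling result's content, or a labeled placeholder.

    Duck-typed: reads the MCP-style `type` discriminator ("text" for
    TextContent) instead of importing concrete content classes.
    """
    if getattr(content, "type", None) == "text":
        return getattr(content, "text", "")
    return f"[non-text content: {getattr(content, 'type', 'unknown')}]"
```

Use extract_text(result.content) where the examples above read result.content.text directly.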

Multi-turn conversations

Pass a list of messages to provide multi-turn conversation context:
from dedalus_mcp import types

params = types.CreateMessageRequestParams(
    messages=[
        types.SamplingMessage(role="user", content=types.TextContent(type="text", text="What is Python?")),
        types.SamplingMessage(role="assistant", content=types.TextContent(type="text", text="A programming language.")),
        types.SamplingMessage(role="user", content=types.TextContent(type="text", text="What are its main features?")),
    ],
    maxTokens=200,
)

result = await ctx.server.request_sampling(params)
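Every message in the list counts against the client's context window, so long-running conversations usually need trimming before each request. A minimal sketch (trim_history is a hypothetical helper, not part of dedalus_mcp):

```python
def trim_history(messages: list, max_messages: int) -> list:
    """Keep the `max_messages` most recent messages, dropping the oldest first.

    Assumes messages are ordered oldest-to-newest, as in the example above.
    """
    if max_messages <= 0:
        return []
    return messages[-max_messages:]
```

You would then pass trim_history(history, 20) as messages when building CreateMessageRequestParams.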

Example: code review

from dedalus_mcp import get_context, tool, types

@tool(description="Review code for issues in the repo")
async def review_code(code: str, language: str) -> str:
    ctx = get_context()

    params = types.CreateMessageRequestParams(
        messages=[
            types.SamplingMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text=f"Review this {language} code:\n\n```{language}\n{code}\n```",
                ),
            )
        ],
        systemPrompt="You are an expert code reviewer. Be concise and actionable.",
        temperature=0.2,
        maxTokens=500,
    )

    result = await ctx.server.request_sampling(params)
    return result.content.text

Error handling

Sampling requires the client to declare the sampling capability. If the client does not support it, request_sampling(...) raises an McpError (typically METHOD_NOT_FOUND):
from mcp.shared.exceptions import McpError
from dedalus_mcp import get_context, tool, types

@tool(description="AI analysis with Gaussian assumptions")
async def analyze_with_fallback(data: str) -> str:
    ctx = get_context()

    params = types.CreateMessageRequestParams(
        messages=[types.SamplingMessage(role="user", content=types.TextContent(type="text", text=f"Analyze: {data}"))],
        maxTokens=256,
    )

    try:
        result = await ctx.server.request_sampling(params)
        return result.content.text
    except McpError as e:
        return f"Sampling unavailable: {e}"