When building MCP servers, tools can request sampling using the context parameter:
```python
from fastmcp import Context, FastMCP

mcp = FastMCP(name="MyServer")

@mcp.tool
async def analyze_sentiment(text: str, ctx: Context) -> str:
    """Analyze the sentiment of text using the client's LLM."""
    prompt = f"""Analyze the sentiment of the following text as positive, negative, or neutral.
    Just output a single word - 'positive', 'negative', or 'neutral'.

    Text to analyze: {text}"""

    # Request LLM analysis through sampling
    response = await ctx.sample(prompt)
    return response.text.strip()
```
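For the tool's `ctx.sample` call to succeed, the connected client must supply a sampling callback that forwards the request to an LLM and returns the result. The sketch below is a minimal illustration, assuming the standard MCP Python SDK types (`CreateMessageRequestParams`, `CreateMessageResult`, `TextContent`); the response text is a placeholder where your own LLM call would go:

```python
from mcp.shared.context import RequestContext
from mcp.types import CreateMessageRequestParams, CreateMessageResult, TextContent

async def sampling_callback(
    context: RequestContext,
    params: CreateMessageRequestParams,
) -> CreateMessageResult:
    """Handle sampling requests from server tools such as analyze_sentiment."""
    # The server's prompt arrives as sampling messages; take the first text message
    prompt = params.messages[0].content.text

    # Placeholder: in practice, forward `prompt` to the LLM of your choice here
    response_text = "neutral"

    return CreateMessageResult(
        role="assistant",
        content=TextContent(type="text", text=response_text),
        model="placeholder-model",  # report whichever model actually handled the request
        stopReason="endTurn",
    )
```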
If a tool requests sampling but no sampling callback was provided, the tool call returns an error:
```python
# Without a sampling callback
client = MCPClient(config="config.json")  # No sampling_callback

# A tool that requires sampling will return an error
result = await session.call_tool("analyze_sentiment", {"text": "Hello"})
# result.isError will be True
```
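By contrast, registering the callback when constructing the client lets the tool's sampling request complete. A brief sketch, assuming `MCPClient` accepts the `sampling_callback` argument referenced above and that `session` is obtained from this client as in the surrounding examples:

```python
# With a sampling callback registered, the same tool call succeeds
client = MCPClient(config="config.json", sampling_callback=sampling_callback)

result = await session.call_tool("analyze_sentiment", {"text": "Hello"})
# result.isError will be False and result.content contains the sentiment text
```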