docs/servers/prompts.mdx
import { VersionBadge } from "/snippets/version-badge.mdx"
Prompts are reusable message templates that help LLMs generate structured, purposeful responses. FastMCP simplifies defining these templates, primarily using the @mcp.prompt decorator.
Prompts provide parameterized message templates for LLMs. When a client requests a prompt, the server renders the template with the client-supplied arguments and returns the resulting messages.
This allows you to define consistent, reusable templates that LLMs can use across different clients and contexts.
## The `@prompt` Decorator

The most common way to define a prompt is by decorating a Python function. The decorator uses the function name as the prompt's identifier.
```python
from fastmcp import FastMCP
from fastmcp.prompts import Message

mcp = FastMCP(name="PromptServer")

# Basic prompt returning a string (converted to a user message automatically)
@mcp.prompt
def ask_about_topic(topic: str) -> str:
    """Generates a user message asking for an explanation of a topic."""
    return f"Can you please explain the concept of '{topic}'?"

# Prompt returning multiple messages
@mcp.prompt
def generate_code_request(language: str, task_description: str) -> list[Message]:
    """Generates a conversation for code generation."""
    return [
        Message(f"Write a {language} function that performs the following task: {task_description}"),
        Message("I'll help you write that function.", role="assistant"),
    ]
```
Key Concepts:

- **Name:** By default, the prompt name is taken from the function name (`ask_about_topic`).
- **Parameters:** The function's parameters define the prompt's arguments.
- **Metadata:** The function's docstring is used as the prompt's description.

<Tip>
Functions with `*args` or `**kwargs` are not supported as prompts. This restriction exists because FastMCP needs to generate a complete parameter schema for the MCP protocol, which isn't possible with variable argument lists.
</Tip>
While FastMCP infers the name and description from your function, you can override these and add additional metadata using arguments to the @mcp.prompt decorator:
```python
from pydantic import Field

@mcp.prompt(
    name="analyze_data_request",  # Custom prompt name
    description="Creates a request to analyze data with specific parameters",  # Custom description
    tags={"analysis", "data"},  # Optional categorization tags
    meta={"version": "1.1", "author": "data-team"}  # Custom metadata
)
def data_analysis_prompt(
    data_uri: str = Field(description="The URI of the resource containing the data."),
    analysis_type: str = Field(default="summary", description="Type of analysis.")
) -> str:
    """This docstring is ignored when description is provided."""
    return f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
```
<Card title="@prompt decorator arguments">
<ParamField body="icons">
Optional list of icon representations for this prompt. See Icons for detailed examples.
</ParamField>
<ParamField body="meta" type="dict[str, Any] | None">
<VersionBadge version="2.11.0" />
Optional meta information about the prompt. This data is passed through to the MCP client as the `meta` field of the client-side prompt object and can be used for custom metadata, versioning, or other application-specific purposes.
</ParamField>
<ParamField body="version">
Optional version identifier for this prompt. See Versioning for details.
</ParamField>
</Card>
For decorating instance or class methods, use the standalone @prompt decorator and register the bound method. See Tools: Using with Methods for the pattern.
The MCP specification requires that all prompt arguments be passed as strings, but FastMCP allows you to use typed annotations for a better developer experience. When you use complex types like `list[int]` or `dict[str, str]`, FastMCP automatically converts the string arguments it receives into the annotated types and documents the expected string format in each argument's description.
Since the MCP specification only allows string arguments, clients need to know what string format to use for complex types. FastMCP solves this by automatically enhancing the argument descriptions with JSON schema information, making it clear to both humans and LLMs how to format their arguments.
<CodeGroup>
```python Server Code
@mcp.prompt
def analyze_data(
    numbers: list[int],
    metadata: dict[str, str],
    threshold: float
) -> str:
    """Analyze numerical data."""
    avg = sum(numbers) / len(numbers)
    return f"Average: {avg}, above threshold: {avg > threshold}"
```

```json Exposed MCP Prompt
{
  "name": "analyze_data",
  "description": "Analyze numerical data.",
  "arguments": [
    {
      "name": "numbers",
      "description": "Provide as a JSON string matching the following schema: {\"items\":{\"type\":\"integer\"},\"type\":\"array\"}",
      "required": true
    },
    {
      "name": "metadata",
      "description": "Provide as a JSON string matching the following schema: {\"additionalProperties\":{\"type\":\"string\"},\"type\":\"object\"}",
      "required": true
    },
    {
      "name": "threshold",
      "description": "Provide as a JSON string matching the following schema: {\"type\":\"number\"}",
      "required": true
    }
  ]
}
```
</CodeGroup>
MCP clients will call this prompt with string arguments:
```json
{
  "numbers": "[1, 2, 3, 4, 5]",
  "metadata": "{\"source\": \"api\", \"version\": \"1.0\"}",
  "threshold": "2.5"
}
```
But you can still call it directly with proper types:
```python
# This also works for direct calls
result = await prompt.render({
    "numbers": [1, 2, 3, 4, 5],
    "metadata": {"source": "api", "version": "1.0"},
    "threshold": 2.5
})
```
<Warning>
Keep prompt argument types simple.

- Good choices: `list[int]`, `dict[str, str]`, `float`, `bool`
- Avoid: complex Pydantic models, deeply nested structures, custom classes
</Warning>
FastMCP parses your function's docstring to extract the prompt description and per-argument descriptions. Google, NumPy, and Sphinx styles are all supported:
```python
@mcp.prompt
def analyze_data(dataset: str, method: str = "summary") -> str:
    """Generate an analysis prompt for a dataset.

    Args:
        dataset: URI or identifier of the dataset to analyze.
        method: Type of analysis to perform (summary, detailed, etc).
    """
    return f"Please perform a '{method}' analysis on {dataset}."
```
The free-form text above the Args section (whether a single line or multiple paragraphs) becomes the prompt description, and each argument's docstring entry becomes the description on the corresponding PromptArgument in the MCP protocol. Sections like Returns, Raises, and Example are recognized but excluded from the description.
If an argument already has an explicit description — via Annotated[x, "..."] or Field(description=...) — that description takes precedence over the docstring. This makes it safe to adopt docstring-based descriptions incrementally: existing annotations keep working, and docstrings fill in the gaps.
Prompt functions must return one of these types:

- `str`: Sent as a single user message.
- `list[Message | str]`: A sequence of messages (a conversation). Strings are auto-converted to user messages.
- `PromptResult`: Full control over messages, description, and metadata. See PromptResult below.

```python
from fastmcp.prompts import Message

@mcp.prompt
def roleplay_scenario(character: str, situation: str) -> list[Message]:
    """Sets up a roleplaying scenario with initial messages."""
    return [
        Message(f"Let's roleplay. You are {character}. The situation is: {situation}"),
        Message("Okay, I understand. I am ready. What happens next?", role="assistant")
    ]
```
Message provides a user-friendly wrapper for prompt messages with automatic serialization.
```python
from fastmcp.prompts import Message

# String content (user role by default)
Message("Hello, world!")

# Explicit role
Message("I can help with that.", role="assistant")

# Auto-serialized to JSON text
Message({"key": "value"})
Message(["item1", "item2"])
```
Message accepts two fields:

- `content`: The message content. Strings pass through directly; other types (dict, list, BaseModel) are automatically JSON-serialized to text.
- `role`: The message role, either `"user"` (default) or `"assistant"`.
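The auto-serialization for non-string content produces plain JSON text. Shown here with the standard `json` module as an illustration of the resulting message text:

```python
import json

payload = {"key": "value"}
text = json.dumps(payload)  # the text the client receives as message content
```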
PromptResult gives you explicit control over prompt responses: multiple messages, roles, and metadata at both the message and result level.
```python
from fastmcp import FastMCP
from fastmcp.prompts import PromptResult, Message

mcp = FastMCP(name="PromptServer")

@mcp.prompt
def code_review(code: str) -> PromptResult:
    """Returns a code review prompt with metadata."""
    return PromptResult(
        messages=[
            Message(f"Please review this code:\n\n```\n{code}\n```"),
            Message("I'll analyze this code for issues.", role="assistant"),
        ],
        description="Code review prompt",
        meta={"review_type": "security", "priority": "high"}
    )
```
For simple cases, you can pass a string directly to PromptResult:
```python
return PromptResult("Please help me with this task")  # auto-converts to a single user Message
```
You can still return plain str or list[Message | str] from your prompt functions—PromptResult is opt-in for when you need to include metadata.
Parameters in your function signature are considered required unless they have a default value.
```python
@mcp.prompt
def data_analysis_prompt(
    data_uri: str,  # Required - no default value
    analysis_type: str = "summary",  # Optional - has default value
    include_charts: bool = False  # Optional - has default value
) -> str:
    """Creates a request to analyze data with specific parameters."""
    prompt = f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
    if include_charts:
        prompt += " Include relevant charts and visualizations."
    return prompt
```
In this example, the client must provide data_uri. If analysis_type or include_charts are omitted, their default values will be used.
You can control which prompts are enabled for clients using server-level enabled control. Disabled prompts don't appear in list_prompts and can't be called.
```python
from fastmcp import FastMCP

mcp = FastMCP("MyServer")

@mcp.prompt(tags={"public"})
def public_prompt(topic: str) -> str:
    return f"Discuss: {topic}"

@mcp.prompt(tags={"internal"})
def internal_prompt() -> str:
    return "Internal system prompt"

# Disable specific prompts by key
mcp.disable(keys={"prompt:internal_prompt"})

# Disable prompts by tag
mcp.disable(tags={"internal"})

# Or use allowlist mode - only enable prompts with specific tags
mcp.enable(tags={"public"}, only=True)
```
See Visibility for the complete visibility control API including key formats, tag-based filtering, and provider-level control.
FastMCP supports both standard (def) and asynchronous (async def) functions as prompts. Synchronous functions automatically run in a threadpool to avoid blocking the event loop.
```python
import aiohttp  # required for the async example below

# Synchronous prompt (runs in a threadpool)
@mcp.prompt
def simple_question(question: str) -> str:
    """Generates a simple question to ask the LLM."""
    return f"Question: {question}"

# Asynchronous prompt
@mcp.prompt
async def data_based_prompt(data_id: str) -> str:
    """Generates a prompt based on data that needs to be fetched."""
    # In a real scenario, you might fetch data from a database or API
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://api.example.com/data/{data_id}") as response:
            data = await response.json()
            return f"Analyze this data: {data['content']}"
```
Use async def when your prompt function performs I/O operations like network requests or database queries, since async is more efficient than threadpool dispatch.
Prompts can access additional MCP information and features through the Context object. To access it, add a parameter to your prompt function with a type annotation of Context:
```python
from fastmcp import FastMCP, Context

mcp = FastMCP(name="PromptServer")

@mcp.prompt
async def generate_report_request(report_type: str, ctx: Context) -> str:
    """Generates a request for a report."""
    return f"Please create a {report_type} report. Request ID: {ctx.request_id}"
```
For full documentation on the Context object and all its capabilities, see the Context documentation.
FastMCP automatically sends notifications/prompts/list_changed notifications to connected clients when prompts are added, enabled, or disabled. This allows clients to stay up-to-date with the current prompt set without manually polling for changes.
```python
@mcp.prompt
def example_prompt() -> str:
    return "Hello!"

# These operations trigger notifications:
mcp.add_prompt(example_prompt)  # Sends prompts/list_changed notification
mcp.disable(keys={"prompt:example_prompt"})  # Sends prompts/list_changed notification
mcp.enable(keys={"prompt:example_prompt"})  # Sends prompts/list_changed notification
```
Notifications are only sent when these operations occur within an active MCP request context (e.g., when called from within a tool or other MCP operation). Operations performed during server initialization do not trigger notifications.
Clients can handle these notifications using a message handler to automatically refresh their prompt lists or update their interfaces.
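As an illustration of such a handler, the sketch below reacts to the `prompts/list_changed` method on a stand-in message object. How the coroutine is registered with a real client (and the exact message shape, assumed here to expose the method under `message.root`) is described in the client documentation:

```python
import asyncio
from types import SimpleNamespace

refresh_needed = False

async def handle_message(message) -> None:
    """Flag a prompt-list refresh when a list_changed notification arrives."""
    global refresh_needed
    # Assumption: notification objects wrap the payload under `.root`
    method = getattr(getattr(message, "root", message), "method", None)
    if method == "notifications/prompts/list_changed":
        refresh_needed = True  # e.g. re-fetch prompts on next use

# Simulate receiving a prompts/list_changed notification
fake = SimpleNamespace(root=SimpleNamespace(method="notifications/prompts/list_changed"))
asyncio.run(handle_message(fake))
```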
You can configure how the FastMCP server handles attempts to register multiple prompts with the same name. Use the on_duplicate_prompts setting during FastMCP initialization.
```python
from fastmcp import FastMCP

mcp = FastMCP(
    name="PromptServer",
    on_duplicate_prompts="error"  # Raise an error if a prompt name is duplicated
)

@mcp.prompt
def greeting(): return "Hello, how can I help you today?"

# This registration attempt will raise a ValueError because
# "greeting" is already registered and the behavior is "error".
# @mcp.prompt
# def greeting(): return "Hi there! What can I do for you?"
```
The duplicate behavior options are:

- `"warn"` (default): Logs a warning, and the new prompt replaces the old one.
- `"error"`: Raises a `ValueError`, preventing the duplicate registration.
- `"replace"`: Silently replaces the existing prompt with the new one.
- `"ignore"`: Keeps the original prompt and ignores the new registration attempt.

Prompts support versioning, allowing you to maintain multiple implementations under the same name while clients automatically receive the highest version. See Versioning for complete documentation on version comparison, retrieval, and migration patterns.