docs/clients/prompts.mdx
import { VersionBadge } from '/snippets/version-badge.mdx'

<VersionBadge version="2.0.0" />

Use this when you need to retrieve server-defined message templates for LLM interactions.
Prompts are reusable message templates exposed by MCP servers. They can accept arguments to generate personalized message sequences for LLM interactions.
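The examples on this page assume a connected `Client`. A minimal setup might look like the following sketch, where `prompt_server.py` is a stand-in for your own server:

```python
from fastmcp import Client

# "prompt_server.py" is a hypothetical server script; Client infers the
# transport from what you pass it (script path, URL, or server instance).
client = Client("prompt_server.py")
```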
Request a rendered prompt with `get_prompt()`:
```python
async with client:
    # Simple prompt without arguments
    result = await client.get_prompt("welcome_message")
    # result -> mcp.types.GetPromptResult

    # Access the generated messages
    for message in result.messages:
        print(f"Role: {message.role}")
        print(f"Content: {message.content}")
```
Pass arguments to customize the prompt:
```python
async with client:
    result = await client.get_prompt("user_greeting", {
        "name": "Alice",
        "role": "administrator"
    })

    for message in result.messages:
        print(f"Generated message: {message.content}")
```
FastMCP automatically serializes complex arguments to JSON strings as required by the MCP specification. You can pass typed objects directly:
```python
from dataclasses import dataclass

@dataclass
class UserData:
    name: str
    age: int

async with client:
    result = await client.get_prompt("analyze_user", {
        "user": UserData(name="Alice", age=30),  # Automatically serialized
        "preferences": {"theme": "dark"},        # Dict serialized
        "scores": [85, 92, 78],                  # List serialized
        "simple_name": "Bob"                     # Strings unchanged
    })
```
The client handles serialization using `pydantic_core.to_json()` for consistent formatting. FastMCP servers automatically deserialize these JSON strings back to the expected types.
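You can reproduce the serialization yourself to see exactly what the server receives. A quick illustration (outputs shown as comments):

```python
from dataclasses import dataclass

from pydantic_core import to_json

@dataclass
class UserData:
    name: str
    age: int

# to_json() returns bytes; decode for a readable string
print(to_json(UserData(name="Alice", age=30)).decode())  # {"name":"Alice","age":30}
print(to_json({"theme": "dark"}).decode())               # {"theme":"dark"}
```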
The `get_prompt()` method returns a `GetPromptResult` containing a list of messages:
```python
async with client:
    result = await client.get_prompt("conversation_starter", {"topic": "climate"})

    for i, message in enumerate(result.messages):
        print(f"Message {i + 1}:")
        print(f"  Role: {message.role}")
        print(f"  Content: {message.content.text if hasattr(message.content, 'text') else message.content}")
```
Prompts can generate different message types. System messages configure LLM behavior:
```python
async with client:
    result = await client.get_prompt("system_configuration", {
        "role": "helpful assistant",
        "expertise": "python programming"
    })

    # Access the returned messages
    message = result.messages[0]
    print(f"Prompt: {message.content}")
```
Conversation templates generate multi-turn flows:
```python
async with client:
    result = await client.get_prompt("interview_template", {
        "candidate_name": "Alice",
        "position": "Senior Developer"
    })

    # Multiple messages for a conversation flow
    for message in result.messages:
        print(f"{message.role}: {message.content}")
```
When a server exposes multiple versions of a prompt, you can request a specific version:
```python
async with client:
    # Get the highest version (default)
    result = await client.get_prompt("summarize", {"text": "..."})

    # Get a specific version
    result_v1 = await client.get_prompt("summarize", {"text": "..."}, version="1.0")
```
See Metadata for how to discover available versions.
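One way to explore what a server offers is to list its prompts and inspect whatever metadata they carry; a hedged sketch (the exact field that holds version information is server-dependent):

```python
async with client:
    prompts = await client.list_prompts()
    for prompt in prompts:
        # Version info, when provided, travels in prompt metadata;
        # the exact shape is covered in Metadata
        meta = getattr(prompt, "meta", None)
        print(f"{prompt.name}: {meta}")
```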
When using multi-server clients, prompts are accessible directly without prefixing:
```python
async with client:  # Multi-server client
    result1 = await client.get_prompt("weather_prompt", {"city": "London"})
    result2 = await client.get_prompt("assistant_prompt", {"query": "help"})
```
For complete control, use `get_prompt_mcp()`, which returns the full MCP protocol object:
```python
async with client:
    result = await client.get_prompt_mcp("example_prompt", {"arg": "value"})
    # result -> mcp.types.GetPromptResult
```