# MCP Server User Guide
Prompt Optimizer supports the Model Context Protocol (MCP), enabling integration with MCP-capable AI applications such as Claude Desktop.
## Docker Deployment (Recommended)

Docker is the simplest deployment method; the Web interface and the MCP server start together:
```bash
# Basic deployment
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your-openai-key \
  -e MCP_DEFAULT_MODEL_PROVIDER=openai \
  --name prompt-optimizer \
  linshen/prompt-optimizer

# Access URLs
# Web Interface: http://localhost:8081
# MCP Server:    http://localhost:8081/mcp
```
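A quick way to confirm the deployment came up, using the container name and port mapping from the command above:

```bash
# Check that the container is running and inspect its startup logs
docker ps --filter "name=prompt-optimizer"
docker logs prompt-optimizer

# Any HTTP response here (even an error status for a bare GET)
# means the /mcp route is being served
curl -i http://localhost:8081/mcp
```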
## Local Development Deployment

Note: This method is intended for developers, for development and debugging only. Regular users should use the Docker deployment.
```bash
# 1. Clone the project
git clone https://github.com/your-repo/prompt-optimizer.git
cd prompt-optimizer

# 2. Install dependencies
pnpm install

# 3. Configure environment variables (copy and edit .env.local)
cp env.local.example .env.local

# 4. Start the MCP server
pnpm mcp:dev
```
The server will start at http://localhost:3000/mcp. Developers can refer to the Developer Documentation for more development-related information.
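To confirm the dev server is listening, a bare reachability check is enough. The exact status code returned for a plain GET depends on the transport implementation, so treat anything other than a connection error as a good sign:

```bash
curl -i http://localhost:3000/mcp
```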
## Environment Variables

At least one API key must be configured:
```bash
# Choose one or more API keys
VITE_OPENAI_API_KEY=your-openai-key
VITE_GEMINI_API_KEY=your-gemini-key
VITE_DEEPSEEK_API_KEY=your-deepseek-key
VITE_SILICONFLOW_API_KEY=your-siliconflow-key
VITE_ZHIPU_API_KEY=your-zhipu-key

# Custom API (e.g., Ollama)
VITE_CUSTOM_API_KEY=your-custom-key
VITE_CUSTOM_API_BASE_URL=http://localhost:11434/v1
VITE_CUSTOM_API_MODEL=qwen2.5:0.5b

# Preferred model provider (when multiple API keys are configured)
# Options: openai, gemini, anthropic, deepseek, siliconflow, zhipu, dashscope, openrouter, modelscope, custom
MCP_DEFAULT_MODEL_PROVIDER=openai

# Log level (optional, default: debug)
# Options: debug, info, warn, error
MCP_LOG_LEVEL=info

# HTTP port (optional, default: 3000; not needed for Docker deployment)
MCP_HTTP_PORT=3000

# Default language (optional, default: zh)
# Options: zh, en
MCP_DEFAULT_LANGUAGE=zh
```
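If you point the custom API at a local Ollama instance as in the example above, make sure the model is present and the OpenAI-compatible endpoint answers first (a sketch assuming a default Ollama install on port 11434):

```bash
# Pull the model referenced by VITE_CUSTOM_API_MODEL
ollama pull qwen2.5:0.5b

# Ollama exposes an OpenAI-compatible API under /v1
curl http://localhost:11434/v1/models
```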
## Claude Desktop Configuration

Claude Desktop stores service configuration in a platform-specific directory:

- Windows: `%APPDATA%\Claude\services`
- macOS: `~/Library/Application Support/Claude/services`
- Linux: `~/.config/Claude/services`

Create or edit the `services.json` file there:
```json
{
  "services": [
    {
      "name": "Prompt Optimizer",
      "url": "http://localhost:8081/mcp"
    }
  ]
}
```
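A malformed file is a common reason a client silently ignores the configuration; if Claude Desktop does not pick it up, first confirm the JSON parses (assumes Python is available):

```bash
# Run from the directory containing services.json
python3 -m json.tool services.json
```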
Note: If you are using the local developer deployment (port 3000), change the URL to `http://localhost:3000/mcp`.
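Either way, you can sanity-check the endpoint with a raw JSON-RPC `initialize` request before restarting Claude Desktop. This is a sketch: the required headers and `protocolVersion` value depend on the MCP SDK version the server uses, and the URL should match your deployment.

```bash
curl -s -X POST http://localhost:8081/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.0"}}}'
```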
## Other MCP Clients

The MCP server implements the standard MCP protocol and can be used by any compatible client:

- Docker deployment: `http://localhost:8081/mcp`
- Local development: `http://localhost:3000/mcp`

## Testing with MCP Inspector

MCP Inspector is the official testing tool:
```bash
# 1. Start the MCP server
pnpm mcp:dev

# 2. Start Inspector in another terminal
npx @modelcontextprotocol/inspector
```
In the Inspector web UI:

1. Select the `Streamable HTTP` transport type
2. Enter the server URL `http://localhost:3000/mcp` and connect

## Troubleshooting

Error: `Error: listen EADDRINUSE: address already in use`
Solution: The port is already in use; change the port or stop the process occupying it:
```bash
# Check port usage (Windows)
netstat -ano | findstr :3000

# Change the port
MCP_HTTP_PORT=3001 pnpm mcp:dev
```
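On macOS or Linux, `lsof` fills the same role as the `netstat`/`findstr` check above:

```bash
lsof -i :3000
```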
Error: `No enabled models found`

Solution: Check the API key configuration:
```bash
# Ensure at least one valid API key is configured
echo $VITE_OPENAI_API_KEY
```
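For Docker deployments the variables live inside the container, so check there instead (assuming the container name used earlier):

```bash
docker exec prompt-optimizer printenv | grep VITE_
```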
Error: The wrong model is being used

Solution: Check the `MCP_DEFAULT_MODEL_PROVIDER` setting:
```bash
# Ensure the provider name is correct (lowercase)
MCP_DEFAULT_MODEL_PROVIDER=openai  # not "OpenAI"
```
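For a local deployment, you can check what is actually set:

```bash
grep MCP_DEFAULT_MODEL_PROVIDER .env.local
```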
Issue: After enabling `ACCESS_PASSWORD` in a Docker deployment, the MCP Inspector connection fails with a 401 error.

Cause: When password protection is enabled in a Docker deployment, Nginx enables Basic authentication for all routes, including the `/mcp` route.

Solutions:

- The `/mcp` route is now configured to bypass Basic authentication; pull the latest image if you still see 401s
- Remove the `ACCESS_PASSWORD` environment variable
- Expose the MCP server directly, e.g. `docker run -p 3000:3000 ...`

Technical details: `auth_basic off;` is set for the `/mcp` route in `docker/nginx.conf`.
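You can confirm the bypass with two requests (assuming the port mapping from the basic deployment above): the `/mcp` route should answer without credentials, while other routes still require Basic auth when `ACCESS_PASSWORD` is set.

```bash
# Should not return 401 once the bypass is in place
curl -i http://localhost:8081/mcp

# The Web interface still requires Basic auth
curl -i http://localhost:8081/
```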
Solution steps:

Enable verbose logging:
```bash
# Development environment
MCP_LOG_LEVEL=debug pnpm mcp:dev

# Docker environment
docker run -e MCP_LOG_LEVEL=debug ...
```
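For a container that is already running, logs can be followed directly (assuming the container name used earlier):

```bash
docker logs -f prompt-optimizer
```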
If you encounter issues: