# MCP Simple Chatbot
This example demonstrates how to integrate the Model Context Protocol (MCP) into a simple CLI chatbot. The implementation showcases MCP's flexibility by supporting multiple tools through MCP servers and is compatible with any LLM provider that follows OpenAI API standards.
This example depends on the following packages:

- python-dotenv
- requests
- mcp
- uvicorn

Install the dependencies:

```bash
pip install -r requirements.txt
```
Set up environment variables:
Create a .env file in the root directory and add your API key:
```plaintext
LLM_API_KEY=your_api_key_here
```
Note: The current implementation is configured to use the Groq API endpoint (https://api.groq.com/openai/v1/chat/completions) with the llama-3.2-90b-vision-preview model. If you plan to use a different LLM provider, you'll need to modify the LLMClient class in main.py to use the appropriate endpoint URL and model parameters.
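To illustrate what such a modification involves, here is a minimal sketch of an OpenAI-compatible chat client. The class name `LLMClient` comes from main.py, but the constructor parameters and `get_response` method shown here are assumptions for illustration, not the actual implementation; swapping providers amounts to changing the `url` and `model` values.

```python
import json
import urllib.request


class LLMClient:
    """Hypothetical sketch of an OpenAI-compatible chat client.

    The real LLMClient in main.py may differ; this only shows the
    request shape you would adapt for another provider.
    """

    def __init__(
        self,
        api_key: str,
        url: str = "https://api.groq.com/openai/v1/chat/completions",
        model: str = "llama-3.2-90b-vision-preview",
    ) -> None:
        self.api_key = api_key
        self.url = url
        self.model = model

    def get_response(self, messages: list[dict]) -> str:
        # Standard OpenAI-style chat completions request body
        payload = json.dumps({"model": self.model, "messages": messages}).encode()
        request = urllib.request.Request(
            self.url,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {self.api_key}",
            },
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        # OpenAI-compatible endpoints return choices[0].message.content
        return body["choices"][0]["message"]["content"]
```

Pointing this at another provider only requires substituting its completions URL and a model name it serves, as long as the provider follows the same request/response schema.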
Configure servers:
The servers_config.json follows the same structure as Claude Desktop, allowing for easy integration of multiple servers.
Here's an example:
```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./test.db"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
Environment variables are supported as well. Pass them as you would with the Claude Desktop App.
Example:
```json
{
  "mcpServers": {
    "server_name": {
      "command": "uvx",
      "args": ["mcp-server-name", "--additional-args"],
      "env": {
        "API_KEY": "your_api_key_here"
      }
    }
  }
}
```
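The configuration above can be consumed with a small loader like the following sketch. The function name `load_server_config` is hypothetical (the real parsing lives in main.py); it shows how each server's `env` block would be layered over the parent environment before the server process is spawned.

```python
import json
import os


def load_server_config(path: str = "servers_config.json") -> dict:
    """Hypothetical loader for the servers_config.json format shown above.

    For each configured server, merge its `env` block over the parent
    process environment, mirroring how Claude Desktop passes variables.
    """
    with open(path) as f:
        config = json.load(f)

    servers = {}
    for name, spec in config["mcpServers"].items():
        servers[name] = {
            "command": spec["command"],
            "args": spec.get("args", []),
            # Server-specific entries take precedence over inherited ones
            "env": {**os.environ, **spec.get("env", {})},
        }
    return servers
```

Merging rather than replacing the environment lets servers inherit things like `PATH` while still receiving their own secrets from the config file.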
Run the client:
```bash
python main.py
```
Interact with the assistant:
The assistant will automatically detect available tools and can respond to queries based on the tools provided by the configured servers.
Exit the session:
Type `quit` or `exit` to end the session.
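The interaction loop described above can be sketched as follows. The function `chat_loop` and its parameters are hypothetical, not taken from main.py; the real loop additionally routes tool calls to the configured MCP servers.

```python
def chat_loop(llm_client, get_user_input=input) -> list[dict]:
    """Hypothetical sketch of the chatbot REPL.

    Reads user input, stops on `quit` or `exit`, and otherwise forwards
    the running conversation to the LLM client. Returns the message
    history so the conversation can be inspected or persisted.
    """
    messages: list[dict] = []
    while True:
        user_input = get_user_input("You: ").strip()
        # Either keyword, case-insensitively, ends the session
        if user_input.lower() in ("quit", "exit"):
            break
        messages.append({"role": "user", "content": user_input})
        reply = llm_client.get_response(messages)
        messages.append({"role": "assistant", "content": reply})
        print(f"Assistant: {reply}")
    return messages
```

Keeping the full `messages` list and resending it on every turn is what gives the assistant conversational memory under the OpenAI-style chat API.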
Tool Integration:
Runtime Flow: