docs/customize/deep-dives/mcp.mdx
As AI systems improve, they remain limited by their training data and cannot access real-time information or specialized tools on their own. The Model Context Protocol (MCP) addresses this by letting AI models connect to outside data sources, tools, and environments, enabling smooth sharing of information and capabilities between AI systems and the wider digital world. This standard, created by Anthropic to unify prompts, context, and tool use, is key to building truly useful AI experiences, and it is how custom tools are configured in Continue.
MCP servers can be added to hub configs using the `mcpServers` block. You can explore available MCP servers on the Continue hub.
<Info>MCP can only be used in agent mode.</Info>
Below is a quick example of setting up a new MCP server for use in your config:

1. Create a folder called `.continue/mcpServers` at the top level of your workspace
2. Add a file called `playwright-mcp.yaml` to this folder
3. Write the following contents to `playwright-mcp.yaml` and save:

```yaml
name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"
```
Now test your MCP server with a prompt such as:

> Open the browser and navigate to Hacker News. Save the top 10 headlines in a hn.txt file.

The result will be a generated file called `hn.txt` in the current working directory.
<Info>
You can set up an MCP server to search the Continue documentation directly from your config. This is particularly useful for getting help with Continue configuration and features. For complete setup instructions, troubleshooting, and usage examples, see the Continue MCP Reference. For example, place your JSON MCP config file at `.continue/mcpServers/mcp.json` in your workspace.
</Info>
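As an illustration, a JSON version of the Playwright server above might look like the following. The exact JSON shape here is an assumption inferred from the YAML fields; see the Continue MCP Reference for the authoritative schema:

```json
{
  "mcpServers": [
    {
      "name": "Browser search",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  ]
}
```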
To set up your own MCP server, read the MCP quickstart and then add an `mcpServers` block to your config file:
```yaml
# ...
mcpServers:
  - name: SQLite MCP
    command: npx
    args:
      - "-y"
      - "mcp-sqlite"
      - "/path/to/your/database.db"
# ...
```
MCP blocks include a few additional properties specific to MCP servers:

- `name`: A display name for the MCP server
- `type`: The type of the MCP server: `sse`, `stdio`, or `streamable-http`
- `command`: The command to run to start the MCP server
- `args`: Arguments to pass to the command
- `env`: Secrets to be injected into the command as environment variables

MCP now supports remote server connections through HTTP-based transports, expanding beyond the traditional local stdio transport method. This enables integration with cloud-hosted MCP servers and distributed architectures.
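As a sketch, a single entry using every property might look like this (the server name, package name, and `EXAMPLE_TOKEN` secret are placeholders, not a real server):

```yaml
mcpServers:
  - name: Example MCP            # display name shown in Continue
    type: stdio                  # one of: sse, stdio, streamable-http
    command: npx                 # command that starts the server
    args:                        # arguments passed to the command
      - "-y"
      - "example-mcp-server"
    env:                         # secrets injected as environment variables
      EXAMPLE_TOKEN: ${{ secrets.EXAMPLE_TOKEN }}
```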
### SSE (`sse`)

For real-time streaming communication, use the SSE transport:

```yaml
# ...
mcpServers:
  - name: Name
    type: sse
    url: https://....
# ...
```
### Standard I/O (`stdio`)

For local MCP servers that communicate via standard input and output:

```yaml
# ...
mcpServers:
  - name: Name
    type: stdio
    command: npx
    args:
      - "@modelcontextprotocol/server-sqlite"
      - "/path/to/your/database.db"
# ...
```
### Streamable HTTP (`streamable-http`)

For standard HTTP-based communication with streaming capabilities:

```yaml
# ...
mcpServers:
  - name: Name
    type: streamable-http
    url: https://....
# ...
```
These remote transport options allow you to connect to MCP servers hosted on remote infrastructure, enabling more flexible deployment architectures and shared server resources across multiple clients.
For detailed information about transport mechanisms and their use cases, refer to the official MCP documentation on transports.
Some MCP servers require API keys or other secrets. You can leverage locally stored environment secrets as well as hosted secrets in Continue Mission Control. To leverage Hub secrets, use the `inputs` property in your MCP `env` block instead of `secrets`.
```yaml
# ...
mcpServers:
  - name: Supabase MCP
    command: npx
    args:
      - -y
      - "@supabase/mcp-server-supabase@latest"
      - --access-token
      - ${{ secrets.SUPABASE_TOKEN }}
    env:
      SUPABASE_TOKEN: ${{ secrets.SUPABASE_TOKEN }}
  - name: GitHub
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-github"
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.GITHUB_PERSONAL_ACCESS_TOKEN }}
# ...
```