---
title: MCP
---
The Omi MCP server enables AI assistants like Claude to interact with your Omi data using natural language. Query your memories, manage conversations, and automate workflows through AI-powered tools.
<CardGroup cols={3}> <Card title="Read Memories" icon="brain"> Retrieve and search your memories </Card> <Card title="Manage Data" icon="pen-to-square"> Create, edit, and delete memories </Card> <Card title="Access Conversations" icon="comments"> Browse full conversation transcripts </Card> </CardGroup>

### Hosted Server (SSE)

The easiest way to use Omi as an MCP server is our hosted endpoint with SSE (Server-Sent Events).
<Steps> <Step title="Generate an API Key" icon="key"> Open the Omi app and navigate to **Settings → Developer → MCP** to generate your API key. </Step> <Step title="Configure your Client" icon="sliders"> Use the following connection details in your MCP client (like [Poke](https://poke.com)):

- **Server URL:** `https://api.omi.me/v1/mcp/sse` (or the custom URL shown in the app)
- **API Key:** `omi_mcp_...` (your generated key)

</Step> </Steps>
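For clients that only speak stdio, you can bridge to the hosted SSE endpoint with the `mcp-remote` adapter. A sketch for `claude_desktop_config.json` — whether the Omi endpoint accepts the key as a `Bearer` header is an assumption; use the exact URL and key shown in the app:

```json
{
  "mcpServers": {
    "omi": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://api.omi.me/v1/mcp/sse",
        "--header",
        "Authorization: Bearer omi_mcp_your_key_here"
      ]
    }
  }
}
```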
### Example: Poke
### Local Setup (Docker)

If you prefer to run the MCP server locally:
<Steps> <Step title="Generate an API Key" icon="key"> Open the Omi app and navigate to **Settings → Developer → MCP** to generate your API key. </Step> <Step title="Install Docker" icon="docker"> Install Docker to run the MCP server. We recommend [OrbStack](https://orbstack.dev/) for macOS. </Step> <Step title="Configure Claude Desktop" icon="gear"> Add the following to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "omi": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "-e", "OMI_API_KEY=your_api_key_here", "omiai/mcp-server"]
    }
  }
}
```

Replace `your_api_key_here` with the key you generated in Step 1. </Step> </Steps>
### Tools

#### Get Memories

**Parameters:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| `limit` | number | No | Maximum number of memories to retrieve (default: 100) |
| `categories` | array | No | Categories to filter by (default: []) |
**Returns:** JSON object containing list of memories
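Under the hood, an MCP client invokes a tool with a standard `tools/call` JSON-RPC request. A sketch of the wire format — the tool name `get_memories` and the argument values are illustrative, not confirmed by this page:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_memories",
    "arguments": { "limit": 10, "categories": [] }
  }
}
```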
#### Create Memory

**Parameters:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| `content` | string | Yes | Content of the memory |
| `category` | MemoryFilterOptions | Yes | Category of the memory |
**Returns:** Created memory object
#### Edit Memory

**Parameters:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| `memory_id` | string | Yes | ID of the memory to edit |
| `content` | string | Yes | New content for the memory |
**Returns:** Status of the operation
#### Delete Memory

**Parameters:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| `memory_id` | string | Yes | ID of the memory to delete |
**Returns:** Status of the operation
#### Get Conversations

**Parameters:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| `include_discarded` | boolean | No | Include discarded conversations (default: false) |
| `limit` | number | No | Maximum number of conversations (default: 25) |
**Returns:** List of conversation objects containing transcripts, timestamps, geolocation, and structured summaries
### Debugging

Test the server with the MCP Inspector:

```bash
npx @modelcontextprotocol/inspector uvx mcp-server-omi
```
For local development:
```bash
cd path/to/servers/src/omi
npx @modelcontextprotocol/inspector uv run mcp-server-omi
```
To follow the server's logs from Claude Desktop:

```bash
tail -n 20 -f ~/Library/Logs/Claude/mcp-server-omi.log
```
If you are self-hosting the Omi backend, specify the API endpoint by setting the `OMI_API_BASE_URL` environment variable:

```bash
export OMI_API_BASE_URL="https://your-backend-url.com"
```
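Clients of this kind typically fall back to the hosted endpoint when the variable is unset; a minimal sketch of that resolution logic (how the published server actually reads the variable is an assumption):

```python
import os

def resolve_base_url() -> str:
    # Fall back to the hosted endpoint when OMI_API_BASE_URL is unset;
    # strip any trailing slash so path joining stays predictable.
    return os.environ.get("OMI_API_BASE_URL", "https://api.omi.me").rstrip("/")

os.environ["OMI_API_BASE_URL"] = "https://your-backend-url.com/"
print(resolve_base_url())  # https://your-backend-url.com
```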
### MCP vs. Developer API

| Feature | MCP | Developer API |
|---|---|---|
| Purpose | AI assistant integration | Direct HTTP API access |
| Access | Read/write with AI context | Read & write user data |
| Use Case | Claude Desktop, AI agents | Custom apps, dashboards |
| Best For | AI-powered workflows | Batch operations, integrations |