# MCP Server
The testing backend includes an embedded MCP (Model Context Protocol) server that allows MCP-compatible clients (e.g. Claude Code) to inspect and interact with a running Slint application over HTTP. This document covers the architecture and internals for developers working on `internal/backends/testing/`.

The MCP server shares a common introspection layer with the system-testing (protobuf/TCP) transport. Both transports use the same `IntrospectionState` for window and element tracking, the same protobuf-derived types for data structures, and the same `ElementHandle` API for interacting with the UI. The MCP transport adds a thin JSON-RPC/HTTP wrapper on top.
```
┌─────────────────────────────────────────────┐
│       Slint Application (event loop)        │
├─────────────────────────────────────────────┤
│              introspection.rs               │
│             (IntrospectionState:            │
│           window/element arenas)            │
├──────────────────────┬──────────────────────┤
│     systest.rs       │    mcp_server.rs     │
│   (TCP/protobuf)     │   (HTTP/JSON-RPC)    │
│   system-testing     │     mcp feature      │
│      feature         │                      │
└──────────────────────┴──────────────────────┘
```
The MCP server is controlled by two layers:
- **Cargo feature `mcp`** — compiles the MCP server code. Defined in `internal/backends/testing/Cargo.toml` and forwarded through `internal/backends/selector/Cargo.toml`. Not currently exposed through the public `slint` crate.
- **Environment variable `SLINT_MCP_PORT`** — controls whether the server actually starts at runtime. If not set, `mcp_server::init()` returns immediately with no overhead.
See the README for setup instructions.
## Initialization

Initialization is triggered from the backend selector (`internal/backends/selector/lib.rs`) after the platform is successfully created:
1. `mcp_server::init()` checks `SLINT_MCP_PORT`; if the variable is absent, it returns early.
2. `introspection::ensure_window_tracking()` installs a window-shown hook that registers windows with the shared `IntrospectionState`.
3. The server task is spawned onto the event loop via `context.spawn_local()`.

The lazy start via `OnceCell` ensures the server only binds the port once the application has an event loop running and a window to inspect.
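The `SLINT_MCP_PORT` gate can be sketched as follows. This is a hypothetical simplification: `mcp_port_from` is an illustrative helper, not the actual code, and the real `init()` also installs the window-tracking hook and spawns the server task.

```rust
use std::env;

/// Illustrative helper: the server only starts when the variable is set
/// and parses as a valid port number.
fn mcp_port_from(var: Option<&str>) -> Option<u16> {
    var?.parse().ok()
}

fn init() {
    match mcp_port_from(env::var("SLINT_MCP_PORT").ok().as_deref()) {
        // In the real backend this is where the server future would be
        // spawned with context.spawn_local(); here we just report the decision.
        Some(port) => println!("would bind MCP server on 127.0.0.1:{port}"),
        None => {} // variable absent: return early, zero overhead
    }
}
```

The point of the design is that the `None` branch does nothing at all, so applications built with the `mcp` feature but run without the variable pay no runtime cost.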
## Shared introspection state (`introspection.rs`)

The central data structure, stored as a thread-local `Rc<IntrospectionState>`:
- `windows` — `Arena<TrackedWindow>`: tracks live windows via weak references to their `WindowAdapter`.
- `element_handles` — `Arena<ElementHandle>`: maps arena indices to `ElementHandle` instances.
- `element_handle_order` — `VecDeque<Index>`: tracks insertion order for FIFO eviction.

Both transports use `generational_arena::Index` internally. The proto `Handle` type (`{index, generation}`) is the wire format; `index_to_handle()` and `handle_to_index()` convert between them.
Handles are generational: if an element is evicted and its arena slot reused, stale handles are detected because the generation won't match.
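The generational scheme can be illustrated with a toy arena. This is a from-scratch sketch of the concept, not the actual `generational_arena` crate: a slot's generation is bumped whenever it is reused, so handles minted for the old occupant no longer resolve.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle { index: usize, generation: u64 }

struct Slot<T> { generation: u64, value: Option<T> }

struct MiniArena<T> { slots: Vec<Slot<T>> }

impl<T> MiniArena<T> {
    fn new() -> Self { Self { slots: Vec::new() } }

    fn insert(&mut self, value: T) -> Handle {
        // Reuse the first free slot, bumping its generation.
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if slot.value.is_none() {
                slot.generation += 1;
                slot.value = Some(value);
                return Handle { index: i, generation: slot.generation };
            }
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, h: Handle) -> Option<T> {
        let slot = self.slots.get_mut(h.index)?;
        if slot.generation != h.generation { return None; }
        slot.value.take()
    }

    fn get(&self, h: Handle) -> Option<&T> {
        let slot = self.slots.get(h.index)?;
        if slot.generation != h.generation { return None; }
        slot.value.as_ref()
    }
}
```

A stale handle and a fresh handle can point at the same `index` yet behave differently, because only the fresh one carries the slot's current generation.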
The element arena is capped at 10,000 entries (`ELEMENT_HANDLE_CAP`). When the cap is exceeded, the oldest handles are evicted (FIFO order), with one exception: root element handles for tracked windows are never evicted — they are pushed to the back of the queue instead.
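The eviction policy might be sketched like this. `evict_one` and the `u32` handle stand-ins are illustrative, not the real types; the real code walks `element_handle_order` the same way, skipping root handles by recycling them to the back.

```rust
use std::collections::VecDeque;

/// Pop the oldest evictable handle from the FIFO order, recycling root
/// handles to the back instead of evicting them. Returns None if only
/// root handles remain.
fn evict_one(order: &mut VecDeque<u32>, roots: &[u32]) -> Option<u32> {
    // Bounded loop: look at each queued handle at most once.
    for _ in 0..order.len() {
        let oldest = order.pop_front()?;
        if roots.contains(&oldest) {
            order.push_back(oldest); // never evict a window's root handle
        } else {
            return Some(oldest);
        }
    }
    None
}
```

Recycling roots to the back keeps the queue a strict FIFO for everything else while guaranteeing that a window's root element stays addressable for as long as the window is tracked.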
When a handle is resolved via `IntrospectionState::element()`, the returned `ElementHandle` is checked with `is_valid()`. If the underlying UI element has been destroyed (e.g. the component was removed), the stale handle is cleaned up and an error is returned.
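The liveness check can be modeled with `Weak` references. This is an illustrative sketch under assumed names — `StoredHandle` and `resolve` are hypothetical, not the actual API — but it captures the behavior: a handle holds a weak link to the element, and resolution fails once the element is dropped.

```rust
use std::rc::{Rc, Weak};

struct Element; // stand-in for a live UI element

/// Hypothetical stored handle: weakly references the element it points at.
struct StoredHandle { element: Weak<Element> }

impl StoredHandle {
    fn is_valid(&self) -> bool { self.element.strong_count() > 0 }
}

/// Resolving a handle upgrades the weak reference; a destroyed element
/// yields an error instead of a dangling pointer.
fn resolve(handle: &StoredHandle) -> Result<Rc<Element>, &'static str> {
    handle.element.upgrade().ok_or("stale handle: element was destroyed")
}
```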
## HTTP server (`mcp_server.rs`)

The server implements MCP's Streamable HTTP transport:
- Endpoint: `POST /mcp` (or `POST /`)
- Content type: `application/json`

The server is stateless (no session management). Each request is a single JSON-RPC call; batch requests are rejected.
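The batch-rejection rule amounts to a check on the body's first JSON token. This is an illustrative helper, not the actual server code:

```rust
/// A single JSON-RPC call is a JSON object; a JSON array is a batch,
/// which this transport does not support.
fn classify_body(body: &str) -> Result<&str, &'static str> {
    match body.trim_start().chars().next() {
        Some('{') => Ok(body), // single JSON-RPC call: pass through
        Some('[') => Err("batch requests are not supported"),
        _ => Err("invalid JSON-RPC body"),
    }
}
```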
The HTTP server is built directly on `async-net` (async TCP) and `httparse` (HTTP/1.1 parsing), with no framework dependency. It supports:
- CORS preflight (`OPTIONS`) requests for browser-based clients
- Origin validation: only `localhost`, `127.0.0.1`, and `::1` origins are accepted
- Binding to `127.0.0.1` only, not `0.0.0.0`

Tool calls arrive as `tools/call` JSON-RPC methods. The `handle_tool_call()` function dispatches by tool name. All tools deserialize parameters into proto request types (leveraging `pbjson`-generated `Deserialize` impls), call methods on `IntrospectionState`, and serialize the response back to JSON.
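The localhost-only origin rule could be implemented along these lines; `origin_allowed` is a hypothetical helper written for illustration, not the server's actual function.

```rust
/// Accept only loopback origins: localhost, 127.0.0.1, and ::1
/// (bracketed as [::1] in origin URLs), with an optional port.
fn origin_allowed(origin: &str) -> bool {
    let host = origin
        .strip_prefix("http://")
        .or_else(|| origin.strip_prefix("https://"))
        .unwrap_or(origin);
    // Strip an optional :port; IPv6 hosts are bracketed in origins.
    let host = if let Some(rest) = host.strip_prefix('[') {
        rest.split(']').next().unwrap_or(rest)
    } else {
        host.split(':').next().unwrap_or(host)
    };
    matches!(host, "localhost" | "127.0.0.1" | "::1")
}
```

Together with binding to `127.0.0.1`, this keeps the server unreachable from other machines and from arbitrary web pages.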
The `initialize` response includes a detailed `instructions` field that guides MCP clients through the workflow, handle format, enum values, and query syntax. This is the primary documentation that AI clients see when connecting.
## Build pipeline (`build.rs`)

Both the `system-testing` and `mcp` features trigger the same build pipeline:
1. `protox` compiles `slint_systest.proto` (pure Rust, no external `protoc` needed).
2. `prost-build` generates Rust structs from the proto descriptors → `proto.rs`.
3. `pbjson-build` generates `Serialize`/`Deserialize` impls → `proto.serde.rs`.

The MCP transport uses the `serde_json`-based serialization, while the system-testing transport uses prost's binary encoding. Both share the same proto types.
## Adding a new tool

1. Define the request message in `slint_systest.proto`. The build pipeline will auto-generate the JSON schema for the MCP tool's `inputSchema`.
2. Add a `ToolDef` entry to the `TOOLS` table in `mcp_server.rs` with name, description, proto request type, and optional fields.
3. Handle the new tool in `handle_tool_call()`.
4. Implement the underlying operation on `IntrospectionState` in `introspection.rs` so both transports can use it.
5. Update the `instructions` string in the `initialize` response if the new tool changes the recommended workflow.

## Key files

| File | Purpose |
|---|---|
| `internal/backends/testing/introspection.rs` | Shared `IntrospectionState`, arena management, window/element operations |
| `internal/backends/testing/mcp_server.rs` | HTTP server, JSON-RPC dispatch, MCP tool definitions |
| `internal/backends/testing/systest.rs` | System-testing TCP/protobuf transport (shares introspection layer) |
| `internal/backends/testing/slint_systest.proto` | Protobuf definitions (source of truth for data types) |
| `internal/backends/testing/build.rs` | Proto compilation pipeline |
| `internal/backends/selector/lib.rs` | Backend initialization, MCP server startup hook |
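For orientation, a `ToolDef` entry in the `TOOLS` table (step 2 of adding a tool) might look roughly like this. The field names and the `find_elements` tool are illustrative assumptions, not the actual definitions in `mcp_server.rs`:

```rust
/// Hypothetical shape of a tool-table entry: a static description of one
/// MCP tool, pointing at the proto request type used for its inputSchema.
struct ToolDef {
    name: &'static str,
    description: &'static str,
    request_type: &'static str,              // proto message name
    optional_fields: &'static [&'static str], // fields not required in the schema
}

// Illustrative single-entry table; the real TOOLS table lists every tool.
const TOOLS: &[ToolDef] = &[ToolDef {
    name: "find_elements",
    description: "Find elements matching a query (illustrative entry)",
    request_type: "FindElementsRequest",
    optional_fields: &["window"],
}];
```

Keeping the table static lets the `tools/list` response and the dispatch in `handle_tool_call()` be driven from the same source.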