# MCP surface (minecraft-debug-mcp)

- Skill reference: `services/minecraft/codex-skills/minecraft-debug-mcp/references/mcp-surface.md`
- Implementation source: `/path/to/project/root/services/minecraft/src/debug/mcp-repl-server.ts`
## Server

- Base URL: `http://localhost:3001`
- SSE endpoint: `http://localhost:3001/sse`
- Transport: `GET /sse` + `POST /messages`

The bot starts this server during normal runtime from:
`/path/to/project/root/services/minecraft/src/cognitive/index.ts`

## Resources

A minimal client sketch follows this list.

- `brain://state`
- `brain://context`
- `brain://history`
- `brain://logs`
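A minimal connection sketch, assuming the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`). The client name/version strings are placeholders; the endpoint and resource URI come from this document.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  // Name/version are arbitrary placeholders for this probe client.
  const client = new Client({ name: "mcp-surface-probe", version: "0.0.0" });
  const transport = new SSEClientTransport(new URL("http://localhost:3001/sse"));
  await client.connect(transport);

  // Read one of the brain:// resources listed above.
  const state = await client.readResource({ uri: "brain://state" });
  console.log(state.contents);

  await client.close();
}

main().catch(console.error);
```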
## Tools

A call sketch follows the list.

- `get_state()`
- `get_last_prompt()`: returns the compacted `systemPrompt` and drops `messages` items with `role: "system"`.
- `get_logs(limit?: number)`
- `get_llm_trace(limit?: number, turnId?: number)`: pass `turnId` to isolate the trace for one injected test event; the trace drops `messages` items with `role: "system"` to save tokens.
- `execute_repl(code: string)`: use `forget_conversation()` here for a conversation-memory reset.
- `inject_chat(username: string, message: string)`
- `inject_event(type, payload, source)`
  - `type: perception | feedback | world_update | system_alert`
  - `source.type: minecraft | airi | system`
  - `source.id: string`
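A sketch of tool invocation, reusing the connected `client` from the sketch above. The chat message text is illustrative; remember that `inject_chat` is not passive, per the operational notes below.

```ts
// Inject a chat line into the cognition pipeline (this can make the bot act).
const chat = await client.callTool({
  name: "inject_chat",
  arguments: { username: "codex-live-test", message: "please report your status" },
});

// Pull recent logs to see the resulting turn.
const logs = await client.callTool({
  name: "get_logs",
  arguments: { limit: 10 },
});
console.log(chat.content, logs.content);
```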
## Operational notes

- Ensure `pnpm dev` is running in the service directory; the log line `MCP REPL server running at http://localhost:3001` confirms startup.
- Use `/sse` as the MCP entrypoint.
- If results look stale, trigger fresh activity (e.g. `inject_chat`) and retry `get_last_prompt` or `get_logs`.
- `inject_chat` is not a passive write: it enters the normal cognition pipeline and can cause the bot to send chat/actions.
- `get_last_prompt` may be very large (full system prompt + history); avoid repeated calls unless needed.
- `get_last_prompt` is now MCP-compacted (no raw system prompt text), which makes it cheaper for automation checks.
- The `execute_repl` response includes metadata (`source`, `durationMs`, `actions`, `logs`) and a stringified `returnValue`.

## REPL helpers

`execute_repl` code has a `query` helper in scope:

- `query.self()`
- `query.inventory().count(name)`
- `query.inventory().has(name, atLeast?)`
- `query.inventory().summary()`
- `query.snapshot(range?)`

It also has a `patterns` helper for known working recipes (see the sketch below):
- `patterns.get(id)`
- `patterns.find(query, limit?)`
- `patterns.ids()`
- `patterns.list(limit?)`

After an `inject_chat(...)`, `get_logs(limit: 10)` should show the flow `turn_input -> llm_attempt -> feedback -> repl_result`.
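A sketch of driving these helpers through `execute_repl`, again assuming the connected `client`. The code strings run inside the bot's REPL, where `query` and `patterns` are in scope; the argument to `patterns.find` is a guess, since the query type is not specified above.

```ts
// Read-only inventory summary via the REPL query helper.
const summary = await client.callTool({
  name: "execute_repl",
  arguments: { code: "query.inventory().summary()" },
});

// Search known working recipes; the string query is an assumption.
const recipes = await client.callTool({
  name: "execute_repl",
  arguments: { code: "patterns.find('dirt', 3)" },
});
console.log(summary.content, recipes.content);
```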
## Live validation sequence

Use this exact sequence for fast live validation (a scripted version is sketched after the list):

1. `get_state()`
2. `execute_repl("query.inventory().list().map(i => ({ name: i.name, count: i.count }))")`
3. `execute_repl("forget_conversation()")`
4. `inject_chat({ username: "codex-live-test", message: "please gather 3 dirt blocks" })`
5. `get_logs({ limit: 10 })`: expect `collectBlocks` success feedback plus a REPL summary.
6. `get_llm_trace({ limit: 5 })`: inspect the model output for the injected turn (e.g. `await skip()`) and confirm the trace has no `role: "system"` entries.
7. Re-run the inventory `execute_repl` call from step 2 and compare item counts.
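A scripted version of the same sequence, assuming the connected `client` from the connection sketch. The fixed wait before reading logs is an assumption about how long gathering takes; tune it for your world.

```ts
const inventoryCode =
  "query.inventory().list().map(i => ({ name: i.name, count: i.count }))";
const call = (name: string, args: Record<string, unknown> = {}) =>
  client.callTool({ name, arguments: args });

await call("get_state");                                            // 1
const before = await call("execute_repl", { code: inventoryCode }); // 2
await call("execute_repl", { code: "forget_conversation()" });      // 3
await call("inject_chat", {                                         // 4
  username: "codex-live-test",
  message: "please gather 3 dirt blocks",
});
// Guessed delay: give the cognition pipeline time to act before reading logs.
await new Promise((resolve) => setTimeout(resolve, 15_000));
const logs = await call("get_logs", { limit: 10 });                 // 5
const trace = await call("get_llm_trace", { limit: 5 });            // 6
const after = await call("execute_repl", { code: inventoryCode });  // 7
console.log({ before: before.content, after: after.content, logs, trace });
```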
To validate read->action behavior, inject input and confirm the resulting activity in the logs and trace (`get_logs`/`get_llm_trace`); a hypothetical `inject_event` probe is sketched below. `inject_chat` refreshes reflex context first, so stale reflex context should not appear in normal MCP chat-injection tests.
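A hypothetical read->action probe. The `type` and `source` values come from the `inject_event` enums above; the payload shape is a placeholder, since it is not specified in this document.

```ts
// Inject a synthetic event, then check the trace for the resulting turn.
await client.callTool({
  name: "inject_event",
  arguments: {
    type: "world_update",
    payload: { note: "synthetic world update for read->action test" }, // guessed shape
    source: { type: "system", id: "codex-live-test" },
  },
});
const trace = await client.callTool({
  name: "get_llm_trace",
  arguments: { limit: 5 },
});
console.log(trace.content);
```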