services/minecraft/codex-skills/minecraft-debug-mcp/SKILL.md
Use this skill to run the local bot and interact with its MCP debug interface safely and quickly.
## Setup

- Run `pnpm dev` from `/path/to/project/root/services/minecraft` and keep it running.
- Confirm `MCP REPL server running at http://localhost:3001` appears in the logs.
- Connect an MCP client to `http://localhost:3001/sse`.
- Read the `brain://state` resource, or call `get_state`.

## Principles

- Prefer `get_state`, `get_last_prompt`, and `get_logs` for diagnostics before `execute_repl`.
- Use `get_llm_trace` for structured per-attempt reasoning/content inspection.
- Keep `execute_repl` snippets minimal and reversible.
- Use `inject_chat` for conversational simulation and `inject_event` only when specific event-shape testing is required.
- Treat `inject_chat` as side-effectful: it can trigger actual in-game bot replies/actions.

## Workflow

- Confirm `pnpm dev` is still running and port 3001 is free.
- Call `get_state` to inspect queue/processing state and available tools/actions (it skips REPL builtins by default; pass `{ includeBuiltins: true }` to include them).
- Call `get_logs` with a small limit first.
- Call `get_last_prompt` to inspect the latest LLM input.
- Use `execute_repl` for deep object inspection or one-off targeted calls on the running brain.
- Use `inject_chat` to simulate player chat and verify the behavior loop.
- Use `get_llm_trace` to assert REPL behavior in automation (for example, to detect repeated `await skip()` on specific events).
- Use `execute_repl("forget_conversation()")` to clear conversation memory before prompt-engineering tests.
- Read `references/mcp-surface.md` for exact tool/resource names and argument schemas.
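The tools above are invoked over MCP's JSON-RPC transport. As a minimal sketch, a client-side `tools/call` payload for `get_logs` might be built like this — the tool names come from this skill, but the wire shape shown is an assumption based on the standard MCP request format, not on this server's code:

```typescript
// Sketch of a JSON-RPC 2.0 "tools/call" payload, the standard MCP shape for
// invoking a named tool. The exact transport framing is handled by MCP client
// SDKs; this only illustrates the request structure.
type ToolCall = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown> = {},
): ToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Example: ask for the last 10 log entries.
const payload = buildToolCall(1, "get_logs", { limit: 10 });
console.log(JSON.stringify(payload));
```

In practice you would not send these frames by hand; an MCP client SDK connected to `http://localhost:3001/sse` constructs them for you.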
## Tool notes

- `get_state` returns available tools/actions and runtime state (it skips REPL builtins like `skip`, `use`, and `log` by default to reduce noise; pass `{ includeBuiltins: true }` if you need to inspect them).
- `get_last_prompt` can return very large payloads; call it only when prompt-level debugging is needed.
- `execute_repl` returns a structured result whose `returnValue` is stringified; treat it as display output, not typed JSON.
- `get_logs(limit=10)` is enough to verify whether an injected event reached the REPL/executor.
- `get_llm_trace(limit, turnId?)` gives structured attempt-level trace data (messages, content, reasoning, usage, duration).
- `get_last_prompt` and `get_llm_trace` are compacted for MCP: the system prompt and system-role messages are omitted to reduce token cost.

## REPL helpers

- `query.self()` for bot status.
- `query.inventory().has(name, n)` / `query.inventory().count(name)` for checks.
- `query.inventory().summary()` for stable aggregated item output.
- `query.snapshot(range?)` for one-shot world+inventory capture.
- `forget_conversation()` is available as a runtime function in the REPL/global context and clears only conversation memory.

## Example smoke test

- Call `get_state`.
- Call `execute_repl` with `query.inventory().list().map(i => ({ name: i.name, count: i.count }))`.
- Call `inject_chat` with a clear instruction (example: "please gather 3 dirt blocks").
- Call `get_logs(limit=10)` and check for:
  - the injected event reaching the REPL/executor
  - a resulting action invocation (for example `collectBlocks`)
- Call `get_llm_trace(limit=5)` when you need exact model output/reasoning for assertions.

Use this workflow when validating behavior changes, tool wiring, or regressions in planning/execution loops.
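The automation idea above — asserting on `get_llm_trace` output, such as detecting repeated `await skip()` — can be sketched as follows. The `TraceAttempt` shape here is a hypothetical simplification of the trace fields this skill lists (messages, content, reasoning, usage, duration); check `references/mcp-surface.md` for the real schema:

```typescript
// Hypothetical, simplified attempt record; real get_llm_trace entries carry
// more fields (reasoning, usage, duration) per this skill's notes.
type TraceAttempt = { content: string };

// Count attempts whose model output contains `await skip()` — the repeated-skip
// pattern this skill suggests detecting in automated checks.
function countSkipAttempts(attempts: TraceAttempt[]): number {
  return attempts.filter((a) => a.content.includes("await skip()")).length;
}

// Example: two of three attempts skipped.
const trace: TraceAttempt[] = [
  { content: "await skip()" },
  { content: "await collectBlocks('dirt', 3)" },
  { content: "await skip()" },
];
console.log(countSkipAttempts(trace)); // 2
```

A test harness could fail the run when the count crosses a threshold, which turns the manual "read the trace" step into a regression check.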