.opencode/commands/investigate.md
Load the test-driven-investigation, investigation-notes, find-and-run-tests,
parent-project-skills, and dad-jokes skills, then investigate: $ARGUMENTS
Use the Sentry MCP tools when given a Sentry issue ID or URL. The Sentry MCP connection requires a
user-specific X-Sentry-Token header configured in ~/.config/opencode/opencode.json under
mcp.sentry.headers. If the Sentry tools fail with auth errors, tell the user to check their token
configuration and stop — do not guess at issue details.
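As a rough sketch only (the exact key layout is an assumption inferred from the description above — verify it against your opencode version's documentation), the relevant fragment of ~/.config/opencode/opencode.json might look like:

```json
{
  "mcp": {
    "sentry": {
      "headers": {
        "X-Sentry-Token": "<your-user-specific-token>"
      }
    }
  }
}
```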
The argument can be:
- A numeric issue ID (e.g. 6181478) — fetch from Sentry
- A Sentry short ID (e.g. EDGEWORKER-RUNTIME-4MS) — fetch from Sentry
- A Sentry URL (e.g. https://sentry.io/organizations/.../issues/...) — extract the issue ID,
  fetch from Sentry
- A plain-text error description (e.g. "concurrent write()s not allowed" in kj/compat/http.c++) —
  skip Sentry, go straight to orientation

Create a tracking document in the investigation-notes tool to keep track of hypotheses, code read,
and test results. Always actively consult and update this document throughout to avoid losing
insights, going in circles, or forgetting what you've tried. See the "Investigation Notes" section
below for format and rules.
If Sentry issue: call sentry_get_sentry_issue, then sentry_list_sentry_issue_events (limit 1), then
sentry_get_sentry_event to get the full stack trace.
If plain text: parse the error message and file reference from the description.
Output to user: the error message, crash site, entry point, time range, and status. One short paragraph. Do not go deeper yet.
Find five things:
1. The crash site source. Read the assertion/crash line and its immediate context (~50 lines).
   Understand what invariant was violated and what state would cause it. If the crash is in a C++
   class method, use the cross-reference tool to quickly locate the header, implementation
   files, JSG registration, and test files for that class.
2. Recent changes. If the incident being investigated started, re-occurred, or increased in rate
   recently, look at the git history around the crash site to see whether recent changes may have
   caused the bug. Use git blame to find when the crash line or the code around it was last
   modified, and git log to see recent commits in that file.
3. The test file. Use /find-test on the source file containing the crash site (the
   cross-reference output may already list relevant test files). If no test exists, identify the
   nearest test file in the same directory.
4. Existing feature tests. Search for existing tests that exercise the feature involved in the
   bug — not just tests near the crash site file. The crash may be in pipeline.c++ but the
   relevant working test may be an integration test in a completely different directory. These
   existing tests encode setup, verification, and framework patterns you need. They are your
   starting template.
5. The build command. Construct the exact bazel test invocation to run a single test case from
   that test file.
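As an illustration only (the target label and filter name below are hypothetical placeholders, not taken from any real repository), a single-case bazel invocation usually takes this shape:

```sh
# Placeholders: substitute the real target and test-case name found above.
bazel test //src/workerd/api:http-test \
  --test_filter=ConcurrentWriteTest \
  --test_output=streamed
```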
Output to user: The crash site with a one-sentence explanation of the invariant, the test file path, and the build command.
Form a hypothesis in the format:
"If I do X after Y, Z will happen because W."
This does not need to be correct. It needs to be testable. State it to the user.
Ask for clarification or additional details if you cannot form a hypothesis with the information you have. But do not ask for more information just to delay writing a test.
Start from an existing test if one exists (from step 2.3). Clone it and modify the single variable that your hypothesis targets (disable an autogate, change a config flag, alter the setup). This is almost always faster and more correct than writing from scratch, because existing tests already have the right verification (subrequest checks, expected log patterns, shutdown handling).
If no existing test is suitable, write a new one. Keep it short. Prefer the public API. Do not try
to reproduce the full production call stack.
Do not interrupt your flow to investigate tangents while writing the test. If you realize you need to understand something else to write the test, make a note of it and move on — you can investigate it in the next iteration if the test doesn't reproduce the bug.
Build and run using the command from step 2. Start the build immediately. Do not read more code before starting the build.
While waiting for the build, use parallel sub-agents to read code that would inform the next test
iteration if this one doesn't reproduce the bug.
After every test run, record the result in the investigation notes, then decide the next step based
on the result.
Repeat until the bug mechanism is confirmed or you've exhausted reasonable hypotheses (at which point, report what you've tried and what you've ruled out).
When the mechanism is confirmed, output:
- The root cause as file:line with an explanation
- A dad joke (via the dad-jokes skill). Don't overdo it.
When summarizing, always preserve any jokes from the subagent output, and always include
the intro prefix ("Here's a dad joke for you:", etc.) so the user knows it's intentional.