docs/checks/iterating.mdx
Checks improve over time as you review results and provide feedback. There are two main levers: metrics to identify which checks need work, and rejection feedback to tune behavior.
The Metrics page shows acceptance and rejection rates per check across your repositories. A high rejection rate tells you a check is producing unhelpful suggestions and needs refinement.
Start there to identify which checks to focus on before diving into feedback or prompt edits.
When you reject a check result on the PR review page, a dialog appears where you can explain why the suggestion was wrong. This feedback is saved and included in the check's system prompt on future runs, so the check learns from your corrections.
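Conceptually, this works like prompt assembly: saved rejection notes are appended to the check's base instructions before the next run. Here is a minimal, hypothetical sketch of that idea — the `RejectionFeedback` shape and `buildSystemPrompt` function are illustrative names, not the product's actual API:

```typescript
// Hypothetical sketch: fold saved rejection feedback into a check's
// system prompt on later runs. All names here are invented for illustration.

interface RejectionFeedback {
  checkId: string;
  note: string; // the explanation entered in the rejection dialog
}

function buildSystemPrompt(basePrompt: string, feedback: RejectionFeedback[]): string {
  if (feedback.length === 0) return basePrompt;
  const corrections = feedback.map((f) => `- ${f.note}`).join("\n");
  return `${basePrompt}\n\nPast reviewer corrections to respect:\n${corrections}`;
}

// Example: a check whose console.log suggestions were rejected in test files
const prompt = buildSystemPrompt("Flag stray debug logging.", [
  { checkId: "no-debug-logs", note: "Don't flag console.log in test files" },
]);
```

The point of the sketch is only that each rejection note becomes a standing instruction, which is why specific, rule-like feedback compounds better than a vague complaint.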
To leave feedback:

1. On the PR review page, reject the check result.
2. In the dialog that appears, explain why the suggestion was wrong.
3. Submit. The explanation is saved and applied on future runs of that check.
Good feedback is specific and actionable:
| Good feedback | Bad feedback |
|---|---|
| "Don't flag console.log in test files" | "Be better" |
| "Ignore TODO comments in draft PRs" | "Too many false positives" |
| "Only flag missing error handling in public API endpoints, not internal helpers" | "Wrong" |
The more precise your feedback, the faster the check improves.
The rejection dialog also lets you set a sensitivity level for the check, ranging from Conservative (fewer, higher-confidence suggestions) to Thorough (more suggestions, including borderline ones).
If a check is producing too many low-value suggestions, try lowering the sensitivity to Conservative. If it's missing real issues, raise it to Thorough.
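One way to picture sensitivity is as a confidence threshold that gates which suggestions get surfaced. The sketch below is purely illustrative — the level names beyond Conservative and Thorough, the `shouldReport` function, and the numeric thresholds are all assumptions, not the product's real implementation:

```typescript
// Hypothetical illustration: sensitivity as a minimum-confidence gate.
// Level names, thresholds, and function names are invented for this sketch.

type Sensitivity = "conservative" | "default" | "thorough";

// Minimum confidence a suggestion needs before it is surfaced.
const minConfidence: Record<Sensitivity, number> = {
  conservative: 0.8, // fewer, higher-certainty suggestions
  default: 0.5,
  thorough: 0.2, // surface more, at the cost of some noise
};

function shouldReport(confidence: number, level: Sensitivity): boolean {
  return confidence >= minConfidence[level];
}
```

Under this framing, lowering sensitivity raises the bar a suggestion must clear (cutting low-value noise), while raising it lowers the bar (catching issues the check was previously holding back).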