docs/explanation/adversarial-review.md
Force deeper analysis by requiring problems to be found.
A review technique where the reviewer must find issues. No "looks good" allowed. The reviewer adopts a cynical stance - assume problems exist and find them.
This isn't about being negative. It's about forcing genuine analysis instead of a cursory glance that rubber-stamps whatever was submitted.
The core rule: You must find issues. Zero findings triggers a halt - re-analyze or explain why.
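The halt rule can be sketched as a simple gate in a review pipeline. This is an illustrative sketch only - the `Finding` type and `enforce_findings` function below are hypothetical, not part of any real workflow API:

```python
from dataclasses import dataclass

# Hypothetical finding record - fields mirror the severity/location/description
# shape used in the example review output below.
@dataclass(frozen=True)
class Finding:
    severity: str      # "HIGH" | "MEDIUM" | "LOW"
    location: str      # e.g. "login.ts:47"
    description: str

def enforce_findings(findings: list[Finding]) -> list[Finding]:
    """Treat zero findings as a failed review, not a clean one."""
    if not findings:
        # Halt: the reviewer must re-analyze or justify the empty result.
        raise RuntimeError(
            "Zero findings - halt. Re-analyze the artifact "
            "or explain why no issues exist."
        )
    return findings
```

The point of raising instead of returning an empty list is that "nothing found" can never silently pass downstream - someone has to either look again or argue the case.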
Normal reviews suffer from confirmation bias: you skim the work, nothing jumps out, you approve it. The "find problems" mandate breaks this pattern.
Adversarial review appears throughout BMad workflows - code review, implementation readiness checks, spec validation, and others. Sometimes it's a required step, sometimes optional (like advanced elicitation or party mode). The pattern adapts to whatever artifact needs scrutiny.
Because the AI is instructed to find problems, it will find problems - even when they don't exist. Expect false positives: nitpicks dressed as issues, misunderstandings of intent, or outright hallucinated concerns.
You decide what's real. Review each finding, dismiss the noise, fix what matters.
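That triage step can be sketched as a split over human verdicts. Again a hypothetical sketch - `triage` and the `verdicts` mapping are illustrative names, not a real interface:

```python
def triage(findings: list[str], verdicts: dict[str, bool]) -> tuple[list[str], list[str]]:
    """Split findings into (confirmed, dismissed) using human verdicts.

    Findings with no verdict yet stay in 'confirmed' so nothing is
    silently dropped before a human has looked at it.
    """
    confirmed = [f for f in findings if verdicts.get(f, True)]
    dismissed = [f for f in findings if not verdicts.get(f, True)]
    return confirmed, dismissed
```

Defaulting unreviewed findings to "confirmed" is deliberate: with an adversarial reviewer, the noise filter is you, and it should fail toward keeping a finding, not discarding it.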
Instead of:
"The authentication implementation looks reasonable. Approved."
An adversarial review produces:
- HIGH - `login.ts:47` - No rate limiting on failed attempts
- HIGH - Session token stored in localStorage (XSS vulnerable)
- MEDIUM - Password validation happens client-side only
- MEDIUM - No audit logging for failed login attempts
- LOW - Magic number `3600` should be `SESSION_TIMEOUT_SECONDS`
The first review might miss a security vulnerability. The second caught four.
After addressing findings, consider running the review again. A second pass usually catches more, and a third can still be worth it. But each pass takes time, and eventually you hit diminishing returns - just nitpicks and false findings.
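One way to decide when to stop is to loop until a pass stops surfacing new findings. A hypothetical sketch - `review_pass` stands in for whatever produces findings (an agent, a linter, a human) and is not a real API:

```python
def review_until_diminishing(review_pass, max_passes: int = 3, min_new: int = 2) -> set:
    """Run review passes until a pass yields fewer than min_new
    previously-unseen findings, or max_passes is reached."""
    seen: set = set()
    for _ in range(max_passes):
        new = [f for f in review_pass() if f not in seen]
        if len(new) < min_new:
            # Mostly repeats or nitpicks - diminishing returns, stop here.
            break
        seen.update(new)
    return seen
```

The `min_new` threshold encodes the trade-off above: stopping only at zero new findings wastes passes on noise, so you cut off once a pass is mostly repeats.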
:::tip[Better Reviews]
Assume problems exist. Look for what's missing, not just what's wrong.
:::