docs/users/features/code-review.md
Review code changes for correctness, security, performance, and code quality using /review.
```shell
# Review local uncommitted changes
/review

# Review a pull request (by number or URL)
/review 123
/review https://github.com/org/repo/pull/123

# Review and post inline comments on the PR
/review 123 --comment

# Review a specific file
/review src/utils/auth.ts
```
If there are no uncommitted changes, /review will let you know and stop — no agents are launched.
The /review command runs a multi-stage pipeline:
```text
Step 1:  Determine scope (local diff / PR worktree / file)
Step 2:  Load project review rules
Step 3:  Run deterministic analysis (linter, typecheck)      [zero LLM cost]
Step 4:  9 parallel review agents                            [9 LLM calls]
         |-- Agent 1: Correctness
         |-- Agent 2: Security
         |-- Agent 3: Code Quality
         |-- Agent 4: Performance & Efficiency
         |-- Agent 5: Test Coverage
         |-- Agent 6: Undirected Audit (3 personas: 6a/6b/6c)
         '-- Agent 7: Build & Test (runs shell commands)
Step 5:  Deduplicate --> Batch verify --> Aggregate          [1 LLM call]
Step 6:  Iterative reverse audit (1-3 rounds, gap finding)   [1-3 LLM calls]
Step 7:  Present findings + verdict
Step 8:  Autofix (user-confirmed, optional)
Step 9:  Post PR inline comments (if requested)
Step 10: Save report + incremental cache
Step 11: Clean up (remove worktree + temp files)
```
| Agent | Focus |
|---|---|
| Agent 1: Correctness | Logic errors, edge cases, null handling, race conditions, type safety |
| Agent 2: Security | Injection, XSS, SSRF, auth bypass, sensitive data exposure |
| Agent 3: Code Quality | Style consistency, naming, duplication, dead code |
| Agent 4: Performance & Efficiency | N+1 queries, memory leaks, unnecessary re-renders, bundle size |
| Agent 5: Test Coverage | Untested code paths in the diff, missing branch coverage, weak assertions |
| Agent 6: Undirected Audit | 3 parallel personas (attacker / 3am-oncall / maintainer) — catches cross-dimensional issues |
| Agent 7: Build & Test | Runs build and test commands, reports failures |
All agents run in parallel; Agent 6 launches its three persona variants concurrently, for a total of 9 parallel tasks in same-repo reviews. Findings from Agents 1-6 are verified in a single batch pass: one agent reviews all findings at once, so verification cost stays fixed regardless of finding count.

After verification, the iterative reverse audit runs 1-3 rounds of gap finding. Each round receives the cumulative finding list from prior rounds, so successive rounds focus on whatever is still undiscovered. The loop stops as soon as a round returns "No issues found", or after 3 rounds (hard cap). Reverse audit findings skip verification (the auditing agent already has full context) and are included as high-confidence results.
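The reverse-audit loop can be sketched as follows. This is an illustrative model, not the actual implementation; `run_round` stands in for one reverse-audit LLM call, and the finding shape is assumed:

```python
MAX_ROUNDS = 3  # hard cap from Step 6

def iterative_reverse_audit(run_round, initial_findings):
    """Run 1-3 gap-finding rounds; each round sees the cumulative finding list."""
    findings = list(initial_findings)
    for _ in range(MAX_ROUNDS):
        # Each round receives everything found so far, so it focuses on gaps.
        new_findings = run_round(findings)
        if not new_findings:  # the round reported "No issues found"
            break
        findings.extend(new_findings)
    return findings

# Example: a fake round function that finds one gap, then nothing.
rounds = [[{"title": "missing null check"}], []]
result = iterative_reverse_audit(lambda seen: rounds.pop(0), [{"title": "XSS"}])
# result now holds the initial finding plus the one gap; the loop ran 2 rounds
```

Most reviews exit on the first empty round, which is why the LLM cost converges to the lower end of the 1-3 range.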
Before the LLM agents run, /review automatically runs your project's existing linters and type checkers:
| Language | Tools detected |
|---|---|
| TypeScript/JavaScript | tsc --noEmit, npm run lint, eslint |
| Python | ruff, mypy, flake8 |
| Rust | cargo clippy |
| Go | go vet, golangci-lint |
| Java | mvn compile, checkstyle, spotbugs, pmd |
| C/C++ | clang-tidy (if compile_commands.json available) |
| Other | Auto-discovered from CI config (.github/workflows/*.yml, etc.) |
For projects that don't match standard patterns (e.g., OpenJDK), /review reads CI configuration files to discover what lint/check commands the project uses. No user configuration needed.
Deterministic findings are tagged with [linter] or [typecheck] and skip LLM verification — they are ground truth.
If a tool is not installed or times out, it is skipped with an informational note.
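The skip-on-missing behavior can be sketched like this (the command list, tags, and return shape are simplified illustrations, not the real implementation):

```python
import shutil
import subprocess

def run_deterministic_checks(commands, timeout=120):
    """Run each configured linter/typechecker; skip missing or timed-out tools."""
    findings, skipped = [], []
    for tag, argv in commands:
        if shutil.which(argv[0]) is None:
            skipped.append((tag, "not installed"))  # informational note, not an error
            continue
        try:
            proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            skipped.append((tag, "timed out"))
            continue
        if proc.returncode != 0:
            # Deterministic output is ground truth: tagged, never LLM-verified.
            findings.append((tag, proc.stdout + proc.stderr))
    return findings, skipped

findings, skipped = run_deterministic_checks([
    ("typecheck", ["definitely-not-a-real-tool", "--check"]),
])
# The missing tool is skipped with a note instead of failing the review.
```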
| Severity | Meaning | Posted as PR comment? |
|---|---|---|
| Critical | Must fix before merging (bugs, security, data loss, build failures) | Yes (high-confidence only) |
| Suggestion | Recommended improvement | Yes (high-confidence only) |
| Nice to have | Optional optimization | No (terminal only) |
Low-confidence findings appear in a separate "Needs Human Review" section in the terminal and are never posted as PR comments.
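The posting rules in the table reduce to a single predicate. A minimal sketch (severity labels are lowercased here for illustration):

```python
def should_post_as_pr_comment(severity, high_confidence):
    """Posting rules from the severity table: only high-confidence Critical and
    Suggestion findings become PR comments; everything else is terminal-only."""
    return severity in ("critical", "suggestion") and high_confidence

# Low-confidence and nice-to-have findings never leave the terminal.
assert should_post_as_pr_comment("critical", True)
assert not should_post_as_pr_comment("critical", False)
assert not should_post_as_pr_comment("nice-to-have", True)
```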
After presenting findings, /review offers to auto-apply fixes for Critical and Suggestion findings that have clear solutions:
```text
Found 3 issues with auto-fixable suggestions. Apply auto-fixes? (y/n)
```
Fixes are applied with the edit tool (targeted replacements, not full-file rewrites).

When reviewing a PR, /review creates a temporary git worktree (.qwen/tmp/review-pr-<number>) instead of switching your current branch. This means:

- Dependencies are installed (npm ci, etc.) so linting and build/test work
- Re-running /review of the same PR automatically cleans up the stale worktree before starting fresh

You can review PRs from other repositories by passing the full URL:
```shell
/review https://github.com/other-org/other-repo/pull/456
```
This runs in lightweight mode — no worktree, no linter, no build/test, no autofix. The review is based on the diff text only (fetched via GitHub API). PR comments can still be posted if you have write access.
| Capability | Same-repo | Cross-repo |
|---|---|---|
| LLM review (Agents 1-6 + verify + iterative reverse audit) | ✅ | ✅ |
| Agent 7: Build & test | ✅ | ❌ (no local codebase) |
| Deterministic analysis (linter/typecheck) | ✅ | ❌ |
| Cross-file impact analysis | ✅ | ❌ |
| Autofix | ✅ | ❌ |
| PR inline comments | ✅ | ✅ (if you have write access) |
| Incremental review cache | ✅ | ❌ |
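The capability split above follows from a single check: is the PR's repository the one you are sitting in? A sketch (repo slugs and the returned keys are illustrative):

```python
def review_capabilities(pr_repo, local_repo):
    """Same-repo reviews get the full pipeline; cross-repo PRs run in
    lightweight mode (diff text only, no local codebase)."""
    same_repo = pr_repo == local_repo
    return {
        "worktree": same_repo,
        "deterministic_analysis": same_repo,
        "build_and_test": same_repo,   # Agent 7 needs a local checkout
        "autofix": same_repo,
        "incremental_cache": same_repo,
        "llm_agents": 9 if same_repo else 8,  # Agent 7 skipped cross-repo
    }

mode = review_capabilities("other-org/other-repo", "org/repo")
# Cross-repo: 8 agents, no worktree, no deterministic analysis, no autofix
```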
Use --comment to post findings directly on the PR:
```shell
/review 123 --comment
```
Or, after running /review 123, type post comments to publish findings without re-running the review.
What gets posted: high-confidence Critical and Suggestion findings, as inline comments on the relevant lines.

What stays terminal-only: Nice-to-have findings and low-confidence findings (the "Needs Human Review" section).
Self-authored PRs: GitHub does not allow you to submit APPROVE or REQUEST_CHANGES reviews on your own pull request — both fail with HTTP 422. When /review detects that the PR author matches the current authenticated user, it automatically downgrades the API event to COMMENT regardless of verdict, so the submission still succeeds. The terminal still shows the honest verdict ("Approve" / "Request changes" / "Comment") — only the GitHub-side review event is neutralized. The actual findings still appear as inline comments on specific lines, so substantive feedback is unchanged.
Re-reviewing a PR with prior Qwen Code comments: when /review runs on a PR that already has previous Qwen Code review comments, it classifies them before posting new ones. Only same-line overlap (an existing comment on the same (path, line) as a new finding) prompts you to confirm — that's the case where you'd see a visual duplicate on the same code line. Comments from older commits, replied-to comments (treated as resolved), and comments that simply don't overlap with any new finding are silently skipped, with a terminal log line so you know what was filtered.
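The classification of prior comments can be sketched as follows. The comment and finding shapes are assumptions for illustration, not the real data model:

```python
def classify_existing_comment(comment, new_findings, head_sha):
    """Decide what to do with a prior Qwen Code review comment before
    posting new findings. Only same-line overlap prompts the user."""
    if comment.get("has_reply"):
        return "skip"   # replied-to comments are treated as resolved
    if comment["commit_sha"] != head_sha:
        return "skip"   # comment belongs to an older commit
    same_line = any(
        (f["path"], f["line"]) == (comment["path"], comment["line"])
        for f in new_findings
    )
    # Same (path, line) would look like a visual duplicate -> confirm first.
    return "prompt" if same_line else "skip"
```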
CI / build status check before APPROVE: if the verdict is "Approve", /review queries the PR's check-runs and commit statuses before submitting. If any check has failed (or all checks are still pending), the API event is automatically downgraded from APPROVE to COMMENT, with the review body explaining why. Rationale: the LLM review reads code statically and cannot see runtime test failures; approving while CI is red would be misleading. The inline findings are still posted unchanged. If you want to approve anyway (e.g., a known-flaky CI failure), submit the GitHub approval manually after verifying.
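Both automatic downgrades (self-authored PRs and red/pending CI) act on the GitHub-side review event only. A sketch of the combined logic, with assumed parameter names:

```python
def final_review_event(verdict_event, is_self_authored, ci_green):
    """GitHub-side review event after the automatic downgrades; the verdict
    shown in the terminal is unchanged. verdict_event is one of
    "APPROVE", "REQUEST_CHANGES", or "COMMENT"."""
    if is_self_authored and verdict_event in ("APPROVE", "REQUEST_CHANGES"):
        return "COMMENT"  # GitHub rejects these on your own PR (HTTP 422)
    if verdict_event == "APPROVE" and not ci_green:
        return "COMMENT"  # don't approve while CI is failing or still pending
    return verdict_event
```

Note that only APPROVE is gated on CI status; a "Request changes" verdict goes through unchanged since it does not assert the code is ready to merge.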
After the review, context-aware tips appear as ghost text. Press Tab to accept:
| State after review | Tip | What happens |
|---|---|---|
| Local review with unfixed findings | fix these issues | LLM interactively fixes each finding |
| PR review with findings | post comments | Posts PR inline comments (no re-review) |
| PR review, zero findings | post comments | Approves the PR on GitHub (LGTM) |
| Local review, all clear | commit | Commits your changes |
Note: fix these issues is only available for local reviews. For PR reviews, use Autofix (Step 8) — the worktree is cleaned up after the review, so post-review interactive fixing is not possible.
You can customize review criteria per project. /review reads rules from these files (in order):
1. .qwen/review-rules.md (Qwen Code native)
2. .github/copilot-instructions.md (preferred) or copilot-instructions.md (fallback — only one is loaded, not both)
3. AGENTS.md — ## Code Review section
4. QWEN.md — ## Code Review section

Rules are injected into the LLM review agents (1-6) as additional criteria. For PR reviews, rules are read from the base branch to prevent a malicious PR from injecting bypass rules.
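The file discovery order can be sketched like this (an illustrative model; the actual loader also extracts only the ## Code Review section from AGENTS.md and QWEN.md, which is omitted here):

```python
import os

def discover_rule_files(root):
    """Return rule files to load, in precedence order. The two
    copilot-instructions locations are alternatives: only one is loaded."""
    files = []
    if os.path.exists(os.path.join(root, ".qwen/review-rules.md")):
        files.append(".qwen/review-rules.md")
    for candidate in (".github/copilot-instructions.md", "copilot-instructions.md"):
        if os.path.exists(os.path.join(root, candidate)):
            files.append(candidate)
            break  # the fallback is skipped when the preferred file exists
    for sectioned in ("AGENTS.md", "QWEN.md"):
        if os.path.exists(os.path.join(root, sectioned)):
            files.append(sectioned)
    return files
```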
Example .qwen/review-rules.md:
```markdown
# Review Rules

- All API endpoints must validate authentication
- Database queries must use parameterized statements
- React components must not use inline styles
- Error messages must not expose internal paths
```
When reviewing a PR that was previously reviewed, /review only examines changes since the last review:
```shell
# First review — full review, cache created
/review 123

# PR updated with new commits — only new changes reviewed
/review 123
```
If you switch models (via /model) and re-review the same PR, /review detects the model change and runs a full review instead of skipping:
```shell
# Review with model A
/review 123

# Switch model
/model

# Review again — full review with model B (not skipped)
/review 123
# → "Previous review used qwen3-coder. Running full review with gpt-4o for a second opinion."
```
Cache is stored in .qwen/review-cache/ and tracks both the commit SHA and model ID. Make sure this directory is in your .gitignore (a broader rule like .qwen/* also works). If the cached commit was rebased away, it falls back to a full review.
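The cache decision can be sketched as follows. The cache-entry shape ({"sha": ..., "model": ...}) and the return labels are assumptions for illustration, not the real on-disk format:

```python
def cache_decision(cached, head_sha, model_id, cached_sha_reachable):
    """Choose between full, incremental, or skipped review from the cache entry.
    cached_sha_reachable is False when the cached commit was rebased away."""
    if cached is None:
        return "full"                                # first review: cache created
    if cached["model"] != model_id:
        return "full (model changed)"                # second opinion with new model
    if not cached_sha_reachable:
        return "full (cached commit rebased away)"
    if cached["sha"] == head_sha:
        return "skip (no new commits)"
    return "incremental"                             # only changes since cached sha
```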
For same-repo reviews, results are saved as a Markdown file in your project's .qwen/reviews/ directory (cross-repo lightweight reviews skip report persistence):
```text
.qwen/reviews/2026-04-06-143022-pr-123.md
.qwen/reviews/2026-04-06-150510-local.md
```
Reports include: timestamp, diff stats, deterministic analysis results, all findings with verification status, and the verdict.
When code changes modify exported functions, classes, or interfaces, the review agents automatically search for all callers and check compatibility.
For large diffs (>10 modified symbols), analysis prioritizes functions with signature changes.
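The prioritization rule can be sketched as follows. The symbol shape and the trim-to-cap behavior are assumptions for illustration; the source only states that signature changes are prioritized past 10 modified symbols:

```python
def prioritize_symbols(modified_symbols, cap=10):
    """For large diffs (> cap modified symbols), keep signature changes first
    and analyze at most `cap` symbols for caller impact."""
    if len(modified_symbols) <= cap:
        return modified_symbols
    sig_changed = [s for s in modified_symbols if s["signature_changed"]]
    others = [s for s in modified_symbols if not s["signature_changed"]]
    return (sig_changed + others)[:cap]
```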
The review pipeline uses a bounded number of LLM calls regardless of how many findings are produced:
| Stage | LLM calls | Notes |
|---|---|---|
| Deterministic analysis (Step 3) | 0 | Shell commands only |
| Review agents (Step 4) | 9 (or 8) | Run in parallel; Agent 7 skipped in cross-repo mode |
| Batch verification (Step 5) | 1 | Single agent verifies all findings at once |
| Iterative reverse audit (Step 6) | 1-3 | Loops until "No issues found" or 3-round cap |
| Total | 11-13 (10-12) | Same-repo: 11-13; cross-repo: 10-12 (no Agent 7) |
Most PRs converge to the lower end of the range (1 reverse audit round); the cap prevents runaway cost on pathological cases.
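The bound in the table is simple arithmetic, written out here as a sketch (the function name is illustrative):

```python
def llm_call_budget(cross_repo, reverse_audit_rounds):
    """Total LLM calls per review, per the cost table above."""
    review_agents = 8 if cross_repo else 9  # Agent 7 skipped cross-repo
    batch_verification = 1                  # fixed, regardless of finding count
    return review_agents + batch_verification + reverse_audit_rounds

assert llm_call_budget(False, 1) == 11   # typical same-repo review
assert llm_call_budget(False, 3) == 13   # hard cap
assert llm_call_budget(True, 1) == 10    # cross-repo lightweight mode
```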
The review intentionally excludes low-value feedback. Silence is better than noise — every comment should be worth the reader's time.