.agents/skills/improve-code-quality/SKILL.md
Perform a structured code quality audit on a target path in the monorepo. The workflow is: run automated tools, do deeper manual analysis, present a report, then apply fixes only after the user approves.
`/improve-code-quality <path>`
The path can be a directory (e.g., crates/task-runner), a single file (e.g.,
crates/task-runner/src/lib.rs), or the name of a package (e.g., moon_task_runner).
First, verify the target path exists. If it doesn't, tell the user and stop.
Then detect what languages are present:

- **Rust** — the target is under `crates/`, `wasm/`, or `legacy/`, or contains `.rs` files or a `Cargo.toml`

Tell the user what you detected before continuing.
If the target is a single file, skip the automated tool phase and go straight to manual analysis.
Read the source files in the target path and look for issues that automated tools miss. Organize findings by category:

- `unsafe` blocks without justification comments
- `.unwrap()` on user-facing code paths (not tests — unwrap is fine in tests)
- `std::collections::HashMap` or `std::collections::HashSet` (refer to conventions below)
- `.clone()` where borrowing would work
- Unnecessary `.collect()` in iterator chains
- `.iter()` instead of `.into_iter()` for `Copy` types
- Unnecessarily boxed values
- Unused dependencies in `Cargo.toml`

When suggesting fixes, prefer:

- `&T` over `.clone()` unless ownership transfer is required
- passing `Copy` types (≤24 bytes) by value
- `Cow<'_, T>` when ownership is ambiguous

For the automated tool phase, run the appropriate tools from the repository root (without asking for permission if possible). Capture all output, including errors.
Find the crate name by reading the Cargo.toml in the target directory (the [package] name
field). If the target is a subdirectory within a crate, walk up to find the crate root.
```shell
# Type check
cargo check -p <crate_name> 2>&1

# Linting
cargo clippy -p <crate_name> --all-targets -- -D warnings 2>&1

# Format check
cargo fmt -p <crate_name> --check 2>&1

# Testing
cargo nextest run -p <crate_name> --no-fail-fast -j 4 2>&1
```
For crates under wasm/, run from the wasm/ directory instead.
If clippy can't run because of build errors, capture those errors — they become the highest priority findings. Don't abort the rest of the analysis.
If the target is the repo root or a very broad path, warn the user it will take a while and suggest narrowing scope.
Show all findings in a structured report:
```markdown
# Code quality report: <target_path>

**Language(s):** Rust
**Files analyzed:** <count>

---

## Automated tool results

### Checking
<summarize warnings/errors>

### Linting
<summarize warnings/errors with file:line references>

### Formatting
<list files with issues, or "All files properly formatted">

### Testing
<list failing tests>

---

## Manual analysis

### Critical — Security
| # | File:Line | Finding |
|---|-----------|---------|
| 1 | path/file.rs:42 | Description |

### High — Performance
| # | File:Line | Finding |
|---|-----------|---------|

### Medium — Readability & Structure
| # | File:Line | Finding |
|---|-----------|---------|

### Low — Robustness
| # | File:Line | Finding |
|---|-----------|---------|

### Info — Dependencies
| # | File:Line | Finding |
|---|-----------|---------|

---

## Summary
- Critical: <n> | High: <n> | Medium: <n> | Low: <n> | Info: <n>
- **Total: <n>**

## Recommended fix order
1. <highest priority>
2. ...
```
If a category has no findings, show the header with "No issues found."
Severity guide:

- **Critical** — security and safety issues (e.g., unjustified `unsafe`, `.unwrap()` in non-test production code)
- **High** — performance problems (e.g., unnecessary `.clone()` or `.collect()` calls)
- **Medium** — readability and structure
- **Low** — robustness
- **Info** — dependency hygiene
After showing the report, ask: "Which findings should I fix? (all / critical+high / specific numbers / skip)"
Wait for the user's response. Do not apply anything without explicit approval.
Run auto-fix tools first:

```shell
cargo clippy -p <crate> --all-targets --fix --allow-dirty --allow-staged
cargo fmt -p <crate> -- --emit=files
```

Then apply manual fixes one at a time, stating what changed and why for each.
Re-run automated tools to verify no regressions were introduced.
Do not commit. Tell the user the changes are ready for their review.
After applying fixes, check that the target path adheres to best practices by running the
/rust-skills skill. If this skill does not exist, skip this step.
If you notice any new issues or deviations from conventions, add them to the report and ask the user if they want to address those as well.
Always enforce these project rules during analysis:

- Never use `std::collections::HashMap`/`HashSet` — use `rustc_hash::FxHashMap`/`FxHashSet`
- A crate may need std collections in its public API, or an alternative map type (e.g., `indexmap`), but even then internal code should prefer `FxHashMap`/`FxHashSet`
- Use the toolchain pinned in `rust-toolchain.toml`
- Use `cargo-nextest` for running tests (`cargo nextest`, not `cargo test`)