external/ag-shared/prompts/skills/reflect/SKILL.md
Analyse the current conversation to find where agentic configuration caused friction, wasted context, or produced wrong results. Output a structured report with actionable recommendations, then optionally apply improvements.
Parse optional flags from the invocation:

- `--focus skills|rules|agents|commands|all` — narrow analysis scope (default: `all`)

Examples: `/reflect`, `/reflect --focus skills`, `/reflect the jira skill kept loading when I was just editing code`
Scan the conversation history above this skill invocation. For each friction signal, record the evidence (what happened), the config source (which rule, skill, or agent was involved), and the impact (wasted tokens, wrong output, user correction needed, time lost).
**User corrections**
- The user explicitly corrected the agent's approach or output — "no", "wrong", "don't do that", "use X instead", "I said...".
- The agent made assumptions a rule should have prevented, or lacked knowledge a rule should have provided.

Likely root cause: Rule gap, rule ambiguity, or rule not loaded for the relevant file pattern.
**Skill misfires**
- A skill was invoked but didn't match the user's intent.
- A skill was invoked multiple times for the same task (a retry signal).
- A skill loaded but its guidance was ignored or contradicted by the agent.
- The user had to manually name a skill that should have auto-triggered, or a skill auto-triggered when it shouldn't have.

Likely root cause: Skill description too broad or too narrow, missing trigger phrases, wrong `invocable` setting, or skill body missing key guidance.
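A misfiring skill can often be fixed in its frontmatter alone. Below is a sketch of narrowing an over-broad description, using a hypothetical `jira` skill; the exact field name (`description`) and frontmatter schema are assumptions to verify against `.rulesync/README.md` before copying:

```yaml
# Hypothetical SKILL.md frontmatter for a "jira" skill (schema assumed)
---
# Before: too broad, fires whenever a ticket ID appears in conversation
#   description: Work with Jira tickets and issues.

# After: names the concrete trigger and an explicit non-trigger
description: >-
  Create, update, or transition Jira tickets when the user explicitly
  asks for a Jira operation. Do not trigger for code edits that merely
  mention a ticket ID.
---
```

The "do not trigger" clause matters as much as the trigger: it gives the skill-matching step a documented reason to stay out of unrelated conversations.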
**Rule gaps**
- The agent lacked knowledge that a rule should have provided.
- The agent violated a convention that exists in a rule, but the rule wasn't loaded because the glob pattern didn't match the files being edited.
- The agent had to ask the user for information that's already documented somewhere.

Likely root cause: Missing rule, wrong glob pattern, rule content incomplete or outdated.
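When the root cause is a glob mismatch, the fix is usually one line of frontmatter. A sketch, assuming rulesync-style rule files with a `globs` key (verify the key name against `.rulesync/README.md`); the rule name and paths are hypothetical:

```yaml
# Hypothetical .rulesync/rules/api-conventions.md frontmatter
---
# Before: never loads for the TypeScript files actually being edited
#   globs: ["src/api/**/*.js"]

# After: matches both the JS and TS handlers
globs: ["src/api/**/*.{js,ts}", "src/routes/**/*.ts"]
---
```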
**Agent focus**
- A sub-agent produced off-topic or low-quality output.
- A sub-agent lacked necessary tools or permissions.
- A sub-agent duplicated work already done by the main agent or another sub-agent.

Likely root cause: Agent description too vague, wrong tool list, insufficient context in the agent prompt.
**Context waste**
- Large rules loaded that were irrelevant to the task.
- Skills auto-triggered when they shouldn't have.
- Redundant information loaded from overlapping rules.
- The conversation burned significant tokens on a dead-end approach that better guidance would have prevented.

Likely root cause: Overly broad globs, missing `invocable: user-only`, rule overlap, or missing early-exit guidance.
**Permission gaps**
- Tool calls denied that should have been allowed.
- MCP servers unavailable when needed.
- Hooks blocked a legitimate workflow.

Likely root cause: `.claude-settings.json` gaps, MCP config issues, hook logic too strict.
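A permission gap often reduces to a missing allowlist entry. The sketch below assumes a Claude-style `permissions.allow`/`permissions.deny` schema; the actual shape of `.claude-settings.json` in this repo may differ, so treat the field names and tool patterns as assumptions to check before applying:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(./external/ag-shared/scripts/setup-prompts/verify-rulesync.sh)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

When recommending a change like this, quote the exact denied tool call from the conversation as evidence, so the added pattern is no broader than the workflow that was blocked.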
**Missing capabilities**
- The user needed a workflow that no skill or command covers.
- The agent had to improvise a multi-step process that should be codified.
- A repeatable task had no existing automation.

Likely root cause: Missing skill or command.
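When the recommendation is a new skill, the deliverable is small: a directory plus a SKILL.md. A minimal skeleton, assuming the layout this file itself lives in (`external/ag-shared/prompts/skills/<name>/SKILL.md`); the frontmatter schema is an assumption to confirm via /skill-creator or `.rulesync/README.md`:

```markdown
---
description: >-
  One paragraph stating what the user asks for, in the user's own words,
  so auto-triggering matches real phrasing.
---

Step-by-step guidance the agent should follow once this skill loads,
including any early-exit conditions to avoid context waste.
```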
For each friction signal, trace it to the responsible configuration file.
Read `.rulesync/README.md` to understand what's available. Then, only for signals that need deeper investigation, read the specific config files involved.
Do not read all config files upfront — only the ones relevant to identified friction.
For each friction signal, determine:
| Field | Description |
|---|---|
| Category | `skill-misfire`, `rule-gap`, `rule-overload`, `agent-focus`, `context-waste`, `permission-gap`, or `missing-capability` |
| Severity | high (blocked work or significant rework), medium (friction but worked around), low (minor annoyance) |
| Config file | Exact path to the responsible file, or "N/A — new file needed" |
| Evidence | Brief description of what happened in the conversation |
| Recommendation | Specific change: text to add/remove/modify, glob to adjust, frontmatter to change, or new file to create |
Present findings using this structure:
# Reflection Report
**Conversation focus**: {brief description of what the session was about}
**Analysis scope**: {all | skills | rules | agents | commands}
**Findings**: {N} issue(s) across {M} categories
---
## High Severity
### H1: {Short title}
**Category**: {category}
**Evidence**: {what happened}
**Config**: `{file path}`
**Problem**: {why the current config caused this}
**Recommendation**: {specific change — be concrete enough to act on}
---
## Medium Severity
### M1: {Short title}
{Same structure}
---
## Low Severity
### L1: {Short title}
{Same structure}
---
## Recommended Changes
| # | File | Change | Severity |
|---|------|--------|----------|
| 1 | `.rulesync/rules/foo.md` | Add section on X | High |
| 2 | `.rulesync/skills/bar/SKILL.md` | Narrow description trigger | Medium |
## New Capabilities Needed
{List any missing skills, commands, or rules that should be created, with a one-paragraph spec for each}
If no friction was found, say so — a clean conversation is valuable signal too.
After presenting the report, ask the user what they'd like to do:

1. **Apply the changes now**, following `.rulesync/` conventions (from the /rulesync skill)
2. **Save the learnings** via /remember (Project Memory Path) as rule/skill updates
3. **Draft the missing capabilities**: if /skill-creator is available, use it to draft them

Use AskUserQuestion with these options. Wait for the user's response.
For each approved change, follow `.rulesync/` conventions:

- Never edit `.claude/` directly — it's generated output.
- Never edit `CLAUDE.md`, `AGENTS.md`, or `root: true` files directly.
- Edit the sources under `external/ag-shared/prompts/` or `external/prompts/`, not `.rulesync/` directly.
- Regenerate with `./external/ag-shared/scripts/setup-prompts/setup-prompts.sh`.
- Verify with `./external/ag-shared/scripts/setup-prompts/verify-rulesync.sh`.
If the report identified missing capabilities and the user chose option 3, invoke /skill-creator with the spec as context.

Keep this skill's boundaries with its neighbours distinct:

- /validate-prompts — that skill checks structural path hygiene. This skill checks functional effectiveness.
- /optimise-context — that command audits token budget. This skill audits conversation-level friction.

| Skill | Boundary |
|---|---|
/remember | Reflect identifies what to improve; Remember persists the learnings |
/validate-prompts | Validates structural correctness; Reflect validates functional effectiveness |
/optimise-context | Optimises token budget; Reflect optimises guidance quality |
/rulesync | Provides the conventions Reflect follows when applying changes |
/skill-creator | Drafts new skills that Reflect identifies as missing capabilities |