agents/rules/writing-skills.md
This rule applies to any addition or modification to the `agents/` directory. When creating new skills or rules, adhere to the following best practices.
Every skill must follow this directory structure:
skill-name/
├── SKILL.md # Required: Metadata + core instructions (<500 lines)
├── scripts/ # Executable code (Python/Bash) designed as tiny CLIs
├── references/ # Supplementary context (schemas, cheatsheets)
└── assets/ # Templates or static files used in output
Keep bundled files one level deep relative to SKILL.md. The `name` and `description` in the frontmatter of your SKILL.md are the only fields that the agent sees before triggering a skill.
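For illustration, a minimal frontmatter block might look like the sketch below; the skill name and the description text are made-up examples, not required values.

```yaml
---
name: processing-pdfs
description: Extracts text and tables from PDF files. Use when the user asks to read, summarize, or convert PDF documents.
---
```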
The `name` field must be 1-64 characters, contain only lowercase letters, numbers, and hyphens (no consecutive hyphens), and must exactly match the parent directory name (e.g., `name: wiz-testing` must live in `wiz-testing/SKILL.md`). Consider using gerund form (verb + -ing) for skill names (e.g., `processing-pdfs`). Maintain a pristine context window by loading information only when needed.
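The naming rules above can be sketched as a small validation helper. This is an illustrative assumption, not part of any official tooling; the function name and regex are made up for the example.

```python
import re
from pathlib import Path

# Lowercase letters, digits, and single hyphens between runs:
# this pattern also rejects leading, trailing, and consecutive hyphens.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name: str, skill_dir: str) -> bool:
    """Check a frontmatter `name` against the rules in this document."""
    if not (1 <= len(name) <= 64):
        return False                      # must be 1-64 characters
    if not NAME_RE.fullmatch(name):
        return False                      # lowercase/digits/hyphens, no '--'
    return name == Path(skill_dir).name   # must match the parent directory

print(is_valid_skill_name("wiz-testing", "skills/wiz-testing"))   # True
print(is_valid_skill_name("Wiz--Testing", "skills/wiz-testing"))  # False
```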
Keep bundled context lean:
- Keep reference paths shallow (e.g., `references/schema.md`, not `references/db/v1/schema.md`).
- Keep each file small enough, and linked from SKILL.md, to ensure complete file reads.
- Tell the agent when to consult each reference (e.g., "See `references/auth-flow.md` for specific error codes").
- Use forward slashes (`/`) in all paths, regardless of the OS. Avoid Windows-style paths.
- Do not ship human-facing files such as README.md, CHANGELOG.md, or installation guides. Delete redundant logic.
- Create instructions for LLMs instead of humans.
Put templates and static files in `assets/` and instruct the agent to copy them. Do not write instructions in SKILL.md that require the agent to generate complex parsing logic or boilerplate code from scratch.
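As an illustration of the tiny-CLI style recommended for `scripts/`, here is a minimal sketch; the script's task (counting JSON Lines records), its name, and its flags are assumptions for the example, not a required interface.

```python
"""Illustrative sketch of a scripts/ entry written as a tiny CLI.

The task and flag names are assumptions for illustration only.
"""
import argparse
import json

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Count records in a JSON Lines file.")
    parser.add_argument("path", help="input .jsonl file")
    parser.add_argument("--field",
                        help="count only records containing this field")
    args = parser.parse_args(argv)

    count = 0
    with open(args.path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue  # skip blank lines
            record = json.loads(line)
            if args.field is None or args.field in record:
                count += 1
    print(count)
    return 0
```

A real script would add an `if __name__ == "__main__": sys.exit(main())` guard so the agent can invoke it directly, e.g. `python scripts/count_records.py data.jsonl --field user_id` (a hypothetical filename).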
Provide scripts in `scripts/` for operations like parsing complex datasets or querying databases, and instruct the agent to execute these scripts rather than generating code.

When creating or modifying a skill, you must validate your draft before declaring it complete. If you have the capability to invoke subagents or start new conversations, use the prompts below to automate this validation. If not, perform a self-simulation of these steps.
Discovery Validation: Invoke a subagent with the following prompt (fill in your frontmatter):
I am building an Agent Skill based on the spec. Agents will decide whether to load this skill based entirely on the YAML metadata below.
[Paste your YAML frontmatter here]
Based strictly on this description:
1. Generate 3 realistic user prompts that you are 100% confident should trigger this skill.
2. Generate 3 user prompts that sound similar but should NOT trigger this skill.
3. Critique the description: Is it too broad? Suggest an optimized rewrite.
Logic Validation: Invoke a subagent with the following prompt (fill in your content):
Here is the full draft of my SKILL.md and the directory tree of its supporting files.
[Paste your directory structure and SKILL.md here]
Act as an autonomous agent that has just triggered this skill. Simulate your execution step-by-step based on a request to: [Insert a sample request here]
For each step, write out your internal monologue:
1. What exactly are you doing?
2. Which specific file/script are you reading or running?
3. Flag any Execution Blockers: Point out the exact line where you are forced to guess or hallucinate because my instructions are ambiguous.
Edge Case Testing: Invoke a subagent with the following prompt:
Act as a ruthless QA tester. Your goal is to break this skill.
Ask me 3 to 5 highly specific, challenging questions about edge cases, failure states, or missing fallbacks in the SKILL.md. Focus on fragile operations, environment assumptions, and unsupported configurations.
Do not fix these issues yet. Just ask the numbered questions and wait for my answer.
Before finalizing a skill, verify:
- SKILL.md body is under 500 lines.
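The line-budget check can be sketched as a small helper, assuming the 500-line limit applies to the body after any YAML frontmatter; the function name and frontmatter handling are assumptions, not part of any official tooling.

```python
from pathlib import Path

MAX_BODY_LINES = 500  # budget from the checklist above

def skill_md_within_budget(skill_md: str) -> bool:
    """Return True if the SKILL.md body (excluding frontmatter) fits the budget."""
    lines = Path(skill_md).read_text(encoding="utf-8").splitlines()
    body = lines
    if lines and lines[0].strip() == "---":
        try:
            end = lines.index("---", 1)   # closing frontmatter marker
            body = lines[end + 1:]
        except ValueError:
            pass                          # unterminated frontmatter: count all
    return len(body) <= MAX_BODY_LINES
```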