.qwen/skills/feat-dev/SKILL.md
Use this workflow when implementing a feature in qwen-code that needs design, behavioral validation, or coordinated changes across multiple files. Each phase produces a concrete artifact. Do not combine phases; the output of each phase feeds the next.
Use `.qwen/` paths for planning artifacts:

- `.qwen/design/<feature>.md`
- `.qwen/e2e-tests/<feature>.md`

Understand the requested behavior and the current qwen-code implementation.
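As a minimal sketch of the planning-artifact layout above (the feature slug here is hypothetical), both paths derive from one feature name:

```shell
# Hypothetical feature slug; real runs substitute the actual feature name.
feature="session-resume"
design=".qwen/design/${feature}.md"     # design doc
plan=".qwen/e2e-tests/${feature}.md"    # E2E test plan
echo "$design"
echo "$plan"
```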
Use a code exploration agent when available; ask it to inspect the relevant qwen-code areas for the current behavior, its constraints, and the key files involved.
In parallel, inspect docs, issues, tests, and nearby implementations that define or constrain the expected behavior. If no exploration agent is available, do the same investigation locally.
Output: mental model of current behavior, desired behavior, constraints, and key file paths with line numbers.
Write a design doc covering the current behavior, the desired behavior, the constraints, and the planned changes.
Use prose, tables, and bullets. Avoid code snippets unless essential for a key data structure. JSON config examples are acceptable.
Output: design doc on disk.
Use the e2e-testing skill to choose test modes. Then write an E2E test plan covering the independent test groups, the commands to run, and which groups can be delegated to test-engineer agents.

Output: test plan on disk.
Validate the test plan against the current baseline using the globally installed
qwen CLI, not the local build.
Spawn test-engineer agents for independent test groups when the runtime
supports it. The feature is not implemented yet, so tests should either fail or
show the gap. Iterate the test plan if the dry-run reveals broken commands,
wrong filters, or false positives.
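The global-CLI-versus-local-build split above can be sketched as a small helper; the function name is an illustration, not part of the workflow:

```shell
# Pick the CLI for a test phase: the baseline dry-run must use the globally
# installed qwen, later validation the local build.
cli_for_phase() {
  case "$1" in
    baseline)   echo "qwen" ;;               # globally installed CLI
    validation) echo "node dist/cli.js" ;;   # local build
  esac
}
cli_for_phase baseline
cli_for_phase validation
```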
Output: confirmed-working test plan with accurate pre-implementation baseline.
Read the relevant source files before editing. Implement the changes described in the design doc and follow project conventions.
After implementation:
```shell
npm run build
npm run typecheck
npm run bundle
```
Also run focused unit tests for changed files from the relevant package directory.
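A hedged sketch of deriving the focused-test invocation for one changed file; the package layout and the `npm run test -- <file>` form are assumptions about the repo, not confirmed conventions:

```shell
# Map a changed source file to its package dir and sibling test file.
changed="packages/core/src/tools/edit.ts"   # hypothetical changed file
pkg_dir="${changed%%/src/*}"                # -> packages/core
test_file="${changed%.ts}.test.ts"          # -> .../src/tools/edit.test.ts
echo "cd ${pkg_dir} && npm run test -- ${test_file#$pkg_dir/}"
```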
Output: local implementation that builds and passes focused tests.
Run the full E2E test plan against the local build with `node dist/cli.js`.
Spawn independent test-engineer agents when useful and available.
If tests fail, diagnose, fix, rebuild, re-bundle, and re-test until all groups pass.
Output: E2E results appended to the test plan.
Run `/review` with a review task listing all changed files. Triage each comment before acting on it.
After fixes, re-run unit tests and a quick E2E sanity check.
Output: clean implementation with valid review findings addressed.
Skip unless the user asks. Create the branch, commit with Conventional Commits, push, and create a draft PR using the project PR template. Post E2E results as a separate PR comment when applicable.
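The commit step can be sketched as assembling a Conventional Commits message; the type, scope, and subject here are hypothetical:

```shell
# Conventional Commits shape: <type>(<scope>): <subject>
type="feat"; scope="cli"; subject="add session resume"
msg="${type}(${scope}): ${subject}"
echo "$msg"
```

The message then feeds `git commit -m "$msg"`; the draft PR itself can be opened with the GitHub CLI (`gh pr create --draft`), assuming `gh` is available in the environment.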