.agents/skills/analyze-github-action-logs/SKILL.md
Fetch and analyze recent GitHub Actions runs for a given workflow. Review agent/step performance, identify wasted effort and mistakes, and produce a report with actionable improvements.
You need:
- workflow (required) — The workflow file name or ID (e.g., issue-triage.yml, deploy.yml).
- repo (optional) — The GitHub repository in OWNER/REPO format. Defaults to withastro/astro.
- count (optional) — Number of recent completed runs to analyze. Defaults to 5.

Fetch the most recent completed runs for the workflow, filtering by --status=completed:

```
gh run list --workflow=<workflow> -R <repo> --status=completed -L <count>
```
Present the list to orient yourself: run IDs, titles, status (success/failure), and duration. Pick the runs to analyze — prefer a mix of successes and failures if available, and prefer runs that exercised more steps (longer runs tend to go through more stages, while shorter runs may exit early).
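If it helps to compare runs at a glance, one optional approach (assuming your gh version supports --json on gh run list; the field names below are assumptions, so verify them against `gh run list --json` output) is to fetch the run metadata as JSON:

```
# Optional sketch: run metadata as JSON so you can compare status and timestamps.
# Field names are assumptions -- check `gh run list --json` for the exact list.
gh run list --workflow=<workflow> -R <repo> --status=completed -L <count> \
  --json databaseId,displayTitle,conclusion,createdAt,updatedAt
```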
For each run you want to analyze, save the full log to a temp file:
```
gh run view <run_id> -R <repo> --log > /tmp/actions-run-<run_id>.log
```
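If you picked several runs, a small loop keeps this tidy (the run IDs below are placeholders for the ones you selected):

```
# Placeholder run IDs -- substitute the runs chosen in the previous step.
for run_id in <run_id_1> <run_id_2>; do
  gh run view "$run_id" -R <repo> --log > "/tmp/actions-run-${run_id}.log"
done
```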
Search each log file for markers that indicate where each step or skill starts and ends. The markers depend on the workflow — look for patterns like:
[flue] skill("..."): starting / completedSTART/END or similar delimiters the workflow usesgrep -n "skill(\|step\|START\|END\|starting\|completed" /tmp/actions-run-<run_id>.log | head -50
From this, determine which line ranges correspond to each step/skill. Also find any result markers:
grep -n "RESULT_START\|RESULT_END\|extractResult" /tmp/actions-run-<run_id>.log
Note: Some log files may contain binary/null bytes. Use grep -a if needed.
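Once the line ranges are known, one way to carve a section into its own file for a subagent is a simple sed slice (the range below is a placeholder for the lines you identified):

```
# Hypothetical range -- use the line numbers found with grep above.
sed -n '<start_line>,<end_line>p' /tmp/actions-run-<run_id>.log > /tmp/actions-run-<run_id>-<step>.log
```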
For each step/skill that ran, launch a subagent to analyze that section's log. This is critical to avoid polluting your context with thousands of log lines.
For each subagent, provide the path to the saved log file, the line range covering its step/skill, and the name of that step/skill.
Tell each subagent to evaluate what the step did, how long it took, whether it achieved its goal, and where time or effort was wasted (errors, retries, or unnecessary work).
Tell each subagent to return a structured response with: Summary, Time Analysis, Issues Found (with estimated time wasted for each), and Suggestions for Improvement.
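A minimal prompt sketch for each subagent, assuming the section files described above (the wording is illustrative, not prescriptive):

```
Analyze the GitHub Actions log in /tmp/actions-run-<run_id>-<step>.log
(lines <start_line>-<end_line> of the full run log), covering the "<step>" step/skill.

Evaluate what the step did, how long it took, whether it achieved its goal,
and where time or effort was wasted.

Return: Summary, Time Analysis, Issues Found (with estimated time wasted for
each), and Suggestions for Improvement.
```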
After all subagents return, synthesize their findings into a single report. Structure it as:
For each run analyzed, include a table:
| Step/Skill | Time | Result | Time Wasted | Top Issue |
|---|---|---|---|---|
Identify issues that appeared across multiple runs or multiple steps. These are the highest-value improvements. Common patterns to look for:
- Tool misuse or missing tools: curl instead of gh, jq not found, etc.

Rank your improvement suggestions by estimated time savings across all runs. For each recommendation, state the issue it addresses, the change you suggest, and the estimated time it would save.
Present the full consolidated report. Do NOT edit any workflow or skill files — only report findings and recommendations. The user will decide which changes to apply.