.agents/skills/make-custom-agent/SKILL.md
This skill guides you through creating a custom GitHub Copilot agent — an @-invokable chat participant that extends Copilot with domain-specific expertise. Custom agents are distinct from Agent Skills: skills provide reusable instructions loaded on demand, while agents own the full conversational interaction and can orchestrate tools, call APIs, and maintain their own prompt strategies.
Don't reach for a custom agent when a lighter mechanism fits: for reusable on-demand instructions, use Agent Skills (`.agents/skills/`) instead; for path-scoped rules, use instruction files (`.github/instructions/`) instead; for repository-wide guidance, use `.github/copilot-instructions.md` instead.

There are three ways to build a custom agent:

| Type | Location | Best for |
|---|---|---|
| Declarative (prompt file) | `.github/agents/<name>.md` | Simple prompt-driven cross-surface agents with no code |
| Extension-based (chat participant) | VS Code extension project | Full control, tool calling, VS Code API access |
| GitHub App (Copilot Extension) | Hosted service + GitHub App | Cross-surface agents (github.com, VS Code, Visual Studio) |
If the agent only needs a scoped system prompt and doesn't require custom code, start with a declarative agent.
Declarative agents are Markdown files in .github/agents/. VS Code and GitHub Copilot discover them automatically.
```
.github/agents/
└── <agent-name>.md    # Agent definition
```
Template:
```markdown
---
name: my-agent
description: A short description of what this agent does and when to use it.
---

# <Agent Title>

You are an expert in <domain>. Your job is to:

- <behavior 1>
- <behavior 2>

## Guidelines

- <guideline 1>
- <guideline 2>

## Workflow

1. <step 1>
2. <step 2>

## Constraints

- <constraint 1>
- <constraint 2>
```
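For instance, a review-focused agent filled in from the template (the name and contents here are illustrative, not a shipped example):

```markdown
---
name: api-reviewer
description: Reviews REST API changes for naming, versioning, and error-handling consistency.
---

# API Reviewer

You are an expert in REST API design. Your job is to:

- Review changed endpoints for consistent naming and versioning
- Flag breaking changes and missing error responses

## Constraints

- Comment only on the API surface, not implementation details
```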
Supported frontmatter fields:
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Lowercase, hyphens allowed. Used for @-mention. |
| `description` | Yes | What the agent does and when to use it. Shown in the participant list. |
| `target` | No | Target environment: `vscode` or `github-copilot` (defaults to both) |
| `tools` | No | List of allowed tools/tool sets |
| `model` | No | LLM name or prioritized array of models |
| `user-invokable` | No | Show in agents dropdown (default: `true`) |
| `disable-model-invocation` | No | Prevent subagent invocation (default: `false`) |
| `mcp-servers` | No | MCP server configs for the GitHub Copilot target |
| `metadata` | No | Key-value mapping for additional arbitrary metadata |
| `argument-hint` | No | Hint text guiding user interaction (VS Code only) |
| `agents` | No | List of allowed subagents (`*` for all, `[]` for none; VS Code only) |
| `handoffs` | No | List of next-step agent transitions (VS Code only) |
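A frontmatter block combining several of the optional fields might look like this (the field values are illustrative, not recommendations):

```yaml
---
name: release-planner
description: Plans release steps and hands off to an implementation agent.
target: vscode
tools:
  - search
  - codebase
model: GPT-5.2 (copilot)
user-invokable: true
disable-model-invocation: true
---
```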
Tips for instructions:

- Reference tools directly in the agent body with the `#tool:<tool-name>` syntax.

Specify which tools the agent can use in the `tools` field:
```yaml
tools:
  - search      # Built-in tool
  - fetch       # Built-in tool
  - codebase    # Tool set
  - myServer/*  # All tools from an MCP server
```
Common tool patterns:

- `['search', 'fetch', 'codebase']` for read-only research agents
- `['*']` or specific editing tools for agents that edit code

Configure transitions to other agents with `handoffs`:
```yaml
handoffs:
  - label: Start Implementation
    agent: implementation
    prompt: Implement the plan outlined above.
    send: false
    model: GPT-5.2 (copilot)
```
Handoff fields:

- `label`: Button text displayed to the user
- `agent`: Target agent identifier
- `prompt`: Pre-filled prompt for the target agent
- `send`: Auto-submit the prompt (default: `false`)
- `model`: Optional model override for the handoff

For full control, implement a VS Code extension with a chat participant.
In `package.json`:

```json
{
  "contributes": {
    "chatParticipants": [
      {
        "id": "my-extension.my-agent",
        "name": "my-agent",
        "fullName": "My Agent",
        "description": "Short description shown in chat input",
        "isSticky": false,
        "commands": [
          {
            "name": "explain",
            "description": "Explain the selected code"
          }
        ]
      }
    ]
  }
}
```
In `extension.ts`:

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const agent = vscode.chat.createChatParticipant('my-extension.my-agent', handler);
  agent.iconPath = vscode.Uri.joinPath(context.extensionUri, 'icon.png');
}

const handler: vscode.ChatRequestHandler = async (
  request: vscode.ChatRequest,
  context: vscode.ChatContext,
  stream: vscode.ChatResponseStream,
  token: vscode.CancellationToken
) => {
  // Forward the user's prompt to the model selected in the chat UI
  const model = request.model;
  const messages = [
    vscode.LanguageModelChatMessage.User(request.prompt)
  ];
  const response = await model.sendRequest(messages, {}, token);
  // Stream the reply back fragment by fragment
  for await (const fragment of response.text) {
    stream.markdown(fragment);
  }
};
```
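The `commands` declared in `package.json` arrive on `request.command`. One way to branch on them is a small helper like this (`buildPrompt` is a hypothetical name, not part of the API):

```typescript
// Hypothetical helper: prepend command-specific framing to the user's
// prompt before sending it to the model. `command` is the slash command
// name (e.g. "explain") or undefined for a plain prompt.
function buildPrompt(command: string | undefined, userPrompt: string): string {
  switch (command) {
    case 'explain':
      return `Explain the following code in plain language:\n${userPrompt}`;
    default:
      return userPrompt;
  }
}
```

Inside the handler, pass `buildPrompt(request.command, request.prompt)` to `LanguageModelChatMessage.User` instead of the raw prompt.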
Declare a dependency on Copilot Chat in `package.json` so the contribution point is available:

```json
"extensionDependencies": ["github.copilot-chat"]
```
Agents can invoke language model tools registered by other extensions:
```typescript
import * as chatUtils from '@vscode/chat-extension-utils';

// Pick up tools contributed by other extensions, filtered by tag
const tools = vscode.lm.tools.filter(tool => tool.tags.includes('my-domain'));

const result = await chatUtils.sendChatParticipantRequest(request, context, {
  prompt: 'You are an expert in <domain>.',
  tools,
  responseStreamOptions: { stream, references: true, responseText: true }
}, token);
return await result.result;
```
If the agent should be available on GitHub.com, Visual Studio, JetBrains, and VS Code simultaneously, implement a GitHub App that acts as a Copilot Extension. The app registers a webhook endpoint, receives chat requests, and streams responses back.
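The streamed reply uses Server-Sent Events. A sketch of formatting a single response chunk, assuming the OpenAI-style streaming payload that Copilot Extensions use (the field names here should be checked against the current Copilot Extensions reference):

```typescript
// Sketch: one SSE data line carrying a streamed text fragment.
// The payload shape (choices/delta/content) is an assumption based on
// the OpenAI-style streaming format; verify against the official spec.
function sseChunk(text: string): string {
  const payload = {
    choices: [{ index: 0, delta: { role: 'assistant', content: text } }],
  };
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```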
Key considerations:
After creating or modifying an agent, verify:
- `name` is lowercase, uses hyphens (no spaces), and is unique
- `description` clearly describes what the agent does and when to invoke it
- Declarative agents live in `.github/agents/`
- For extensions, the participant `id` matches between `package.json` and the `createChatParticipant` call
- The name doesn't collide with built-in participants (`@workspace`, `@vscode`, `@terminal`)

| Pitfall | Solution |
|---|---|
| Agent name conflicts with built-in participants | Use a unique prefix (domain name) |
| Description is too vague | Include specific keywords users would naturally say |
| System prompt is too long | Keep instructions to essential behaviors; move reference material to Agent Skills |
| Agent requires VS Code API but is authored as declarative | Switch to extension-based participant |
| Using `isSticky: true` unnecessarily | Only set sticky if the agent should persist between turns by default |
| No `extensionDependencies` on `github.copilot-chat` | Add it; otherwise the contribution point may not be available |
| Agent invoked as subagent unexpectedly | Set `disable-model-invocation: true` |
| Subagent appears in the dropdown | Set `user-invokable: false` |
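The naming rule in the checklist can be enforced mechanically in a lint step; this regex is an assumption matching "lowercase, hyphens allowed, no spaces", not an official grammar:

```typescript
// Hypothetical validator for agent names: lowercase letters and digits,
// hyphen-separated segments, no leading/trailing or doubled hyphens.
function isValidAgentName(name: string): boolean {
  return /^[a-z][a-z0-9]*(-[a-z0-9]+)*$/.test(name);
}
```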