# Tool Descriptions
Your Go doc comments become the documentation that AI agents read when deciding how to call your service. Better descriptions lead to fewer errors, faster task completion, and a better user experience.
When an AI agent receives a user request like "create a task for Alice", it:

1. Scans the available tool descriptions to pick the right tool.
2. Reads the input schema to work out which parameters to send.
3. Follows the example payload to format values correctly.

If any of these are missing or unclear, the agent guesses — and often guesses wrong.
Every handler method needs three things: a doc comment, a realistic `@example`, and `description` tags on request fields.

### 1. The doc comment

```go
// Create creates a new task with the given title and description.
// Returns the created task with a generated ID and initial status of "todo".
// The assignee field is optional; if omitted, the task is unassigned.
```

Rules:

- Start with the method name, in standard Go doc style ("Create creates ...").
- Say what is returned, including generated fields and default values.
- Note which fields are optional and what happens when they are omitted.

### 2. The example payload (`@example`)

```go
// @example {"title": "Fix login bug", "description": "Users can't log in with SSO", "assignee": "alice"}
```

Rules:

- Use realistic values, never placeholders like "string" or "test".
- Include every field an agent would typically supply.

### 3. Field descriptions (the `description` tag)

```go
type CreateRequest struct {
	Title    string `json:"title" description:"Task title (required, max 100 chars)"`
	Assignee string `json:"assignee,omitempty" description:"Username to assign (optional)"`
}
```

Rules:

- State constraints in the description: required or optional, limits, valid values.
- Mark optional fields as optional in both places: the JSON tag (`omitempty`) and the description.

## Good vs. bad

Good:
```go
// GetUser retrieves a user by their unique ID from the database.
// Returns the full profile including name, email, and preferences.
// Returns an error if the user does not exist.
//
// @example {"id": "user-123"}
func (s *UserService) GetUser(ctx context.Context, req *GetRequest, rsp *GetResponse) error {
```

Bad:

```go
// Gets user
func (s *UserService) GetUser(ctx context.Context, req *GetRequest, rsp *GetResponse) error {
```

The bad version forces the agent to guess what "gets user" means, what parameters are needed, and what format the ID takes.
Good:

```go
type SearchRequest struct {
	Query   string `json:"query" description:"Search query string (min 2 chars, max 200)"`
	Page    int    `json:"page,omitempty" description:"Page number, starting from 1 (default: 1)"`
	PerPage int    `json:"per_page,omitempty" description:"Results per page, 1-100 (default: 20)"`
	SortBy  string `json:"sort_by,omitempty" description:"Sort field: relevance, date, or name (default: relevance)"`
}
```

Bad:

```go
type SearchRequest struct {
	Q string `json:"q"`
	P int    `json:"p"`
	N int    `json:"n"`
	S string `json:"s"`
}
```
Good:

```go
// @example {"query": "microservices architecture", "page": 1, "per_page": 10, "sort_by": "relevance"}
```

Bad:

```go
// @example {"q": "string", "p": 0, "n": 0}
```
## Templates

Create:

```go
// Create creates a new [resource].
// Returns the created [resource] with a generated ID.
//
// @example {realistic create payload}
```

Get:

```go
// Get retrieves a [resource] by ID.
// Returns an error if the [resource] does not exist.
//
// @example {"id": "realistic-id"}
```

List:

```go
// List returns all [resources], optionally filtered by [criteria].
// Returns an empty list if no [resources] match.
//
// @example {"status": "active"}
```

Update:

```go
// Update modifies an existing [resource].
// Only the provided fields are updated; omitted fields are unchanged.
// Returns an error if the [resource] does not exist.
//
// @example {"id": "realistic-id", "field": "new-value"}
```

Delete:

```go
// Delete removes a [resource] by ID. This action is irreversible.
// Returns an error if the [resource] does not exist.
//
// @example {"id": "realistic-id"}
```

Search:

```go
// Search finds [resources] matching the query string.
// Supports full-text search across [fields].
// Results are paginated; use page and per_page to control pagination.
// Returns results sorted by relevance by default.
//
// @example {"query": "realistic search term", "page": 1, "per_page": 20}
```

Send email:

```go
// SendEmail sends an email notification to the specified recipient.
// This triggers an actual email delivery — use with caution.
// Returns an error if the email address is invalid or the mail server is unavailable.
//
// @example {"to": "[email protected]", "subject": "Task assigned", "body": "You have a new task."}
```

Create report:

```go
// CreateReport generates a report for the specified date range and metrics.
// Processing may take up to 30 seconds for large date ranges.
// Valid metrics: cpu_usage, memory_usage, request_count, error_rate.
// Date format: YYYY-MM-DD (e.g., "2026-01-15").
//
// @example {"start_date": "2026-01-01", "end_date": "2026-01-31", "metrics": ["cpu_usage", "error_rate"]}
```
| Documentation Quality | First-Call Success Rate | Avg Calls to Complete |
|---|---|---|
| No docs | ~25% | 3-4 calls |
| Basic (name only) | ~50% | 2-3 calls |
| Good (description + types) | ~80% | 1-2 calls |
| Excellent (description + types + example) | ~95% | 1 call |
## Testing your descriptions

### `micro mcp list`

Check what agents will see:

```sh
micro mcp list
```
Verify each tool has a description and the schema looks correct.
### `micro mcp docs`

Generate the full documentation:

```sh
micro mcp docs
```
Read through it as if you were an AI agent. Does it make sense without seeing the code?
### Try it with a real agent

The ultimate test — add your service to Claude Code and try natural language commands:

- "Create a task for Alice to fix the login bug"
- "What tasks are assigned to Bob?"
- "Mark task-1 as done"

If Claude gets it right on the first try, your docs are good.
### `micro mcp test`

Test individual tools with specific inputs:

```sh
micro mcp test tasks.TaskService.Create
```
## Overriding descriptions

If you can't modify the source code (e.g., third-party services), override descriptions at handler registration:

```go
handler := service.Server().NewHandler(
	new(LegacyService),
	server.WithEndpointDocs("LegacyService.Process", server.EndpointDocs{
		Description: "Process a payment transaction. Charges the specified amount to the customer's payment method on file.",
		Example:     `{"customer_id": "cust-123", "amount_cents": 4999, "currency": "USD"}`,
	}),
)
```
Manual docs take precedence over auto-extracted comments. This is useful for:

- Third-party or generated handlers whose source you can't edit
- Legacy services that were written without doc comments
- Refining a description without touching upstream code
## Exporting for other frameworks

You can export tool descriptions in different formats for use with agent frameworks:

```sh
# Human-readable documentation
micro mcp docs

# JSON for custom tooling
micro mcp export --format json

# LangChain Python format
micro mcp export --format langchain

# OpenAPI specification
micro mcp export --format openapi
```
## Common mistakes

- Using placeholder values like "string" or "test" instead of realistic values in examples.
- Forgetting `omitempty` on optional fields, or not noting "(optional)" in the field description.