# Metrics

docs/mission-control/metrics.mdx

<Info> Metrics provides Agent observability for Cloud Agents and automated workflows. Agent observability answers a simple question: *What are my AI agents doing, and is it working?* Use Metrics to monitor agent activity, understand human intervention, measure success rates, and evaluate the cost and impact of AI-driven work across your repositories. </Info>

## What Metrics Show About Your Cloud Agents

Continue’s Metrics give you operational observability for AI agents, similar to how traditional observability tools provide visibility into services, jobs, and pipelines.

Instead of logs and latency, agent observability focuses on:

  • Runs and execution frequency
  • Success vs. human intervention
  • Pull request outcomes
  • Cost per run and per workflow
<AccordionGroup>
  <Accordion title="Agent Activity & Execution Volume">
    Understand **when and how often your agents run**.

    - See which Cloud Agents are running most often
    - Spot spikes, trends, or recurring failures
    - Monitor automated Workflows in production
  </Accordion>
  <Accordion title="Agent Success & Outcome Metrics">
    Measure **whether agents produce usable results**.

    - **Total runs**
    - **PR creation rate**
    - **PR status** (open, merged, closed, failed)
    - **Success vs. intervention rate**
  </Accordion>
  <Accordion title="Workflow Reliability & Impact">
    Evaluate **automated agent workflows in production**.

    - Which Workflows generate the most work
    - Completion and success rates
    - Signals that a Workflow needs refinement or guardrails
  </Accordion>
</AccordionGroup>
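To make the outcome metrics above concrete, here is a minimal sketch of how rates like these are derived from per-run records. The record fields (`pr_status`, `intervened`, `cost_usd`) and agent names are illustrative assumptions, not Continue's actual data model; the dashboard computes these for you.

```python
# Hypothetical run records -- field names and values are illustrative only.
runs = [
    {"agent": "dep-bumper", "pr_status": "merged", "intervened": False, "cost_usd": 0.42},
    {"agent": "dep-bumper", "pr_status": "closed", "intervened": True,  "cost_usd": 0.55},
    {"agent": "test-fixer", "pr_status": None,     "intervened": True,  "cost_usd": 0.31},
    {"agent": "test-fixer", "pr_status": "open",   "intervened": False, "cost_usd": 0.48},
]

total = len(runs)
# Share of runs that produced a pull request at all.
pr_creation_rate = sum(r["pr_status"] is not None for r in runs) / total
# Share of runs whose PR was merged.
merge_rate = sum(r["pr_status"] == "merged" for r in runs) / total
# Share of runs that needed a human to step in.
intervention_rate = sum(r["intervened"] for r in runs) / total
# Average cost per run.
avg_cost = sum(r["cost_usd"] for r in runs) / total

print(f"PR creation rate:  {pr_creation_rate:.0%}")   # 75%
print(f"Merge rate:        {merge_rate:.0%}")         # 25%
print(f"Intervention rate: {intervention_rate:.0%}")  # 50%
print(f"Avg cost per run:  ${avg_cost:.2f}")          # $0.44
```

A rising intervention rate or a falling merge rate on a given agent is the signal, described above, that its rules, tools, or prompts need refinement.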

## Why Metrics Matter

<CardGroup cols={2}>
  <Card title="Improve Agent Reliability" icon="line-chart">
    Identify which Agents need better rules, tools, or prompts.
  </Card>
  <Card title="Measure Automation Value" icon="bar-chart">
    See how much work your automated Workflows are completing across your repos.
  </Card>
</CardGroup>