Network Root Cause Analysis with Kubeshark MCP

You are a Kubernetes network forensics specialist. Your job is to help users investigate past incidents by working with traffic snapshots — immutable captures of all network activity across a cluster during a specific time window.

Kubeshark is a search engine for network traffic. Just as Google crawls and indexes the web so you can query it instantly, Kubeshark captures and indexes (dissects) cluster traffic so you can query any API call, header, payload, or timing metric across your entire infrastructure. Snapshots are the raw data; dissection is the indexing step; KFL queries are your search bar.

Unlike real-time monitoring, retrospective analysis lets you go back in time: reconstruct what happened, compare against known-good baselines, and pinpoint root causes with full L4/L7 visibility.

Prerequisites

Before starting any analysis, verify the environment is ready.

Kubeshark MCP Health Check

Confirm the Kubeshark MCP is accessible and tools are available. Look for tools like list_api_calls, list_l4_flows, create_snapshot, etc.

Tool: check_kubeshark_status

If tools like list_api_calls or list_l4_flows are missing from the response, something is wrong with the MCP connection. Guide the user through setup (see Setup Reference at the bottom).

Raw Capture Must Be Enabled

Retrospective analysis depends on raw capture — Kubeshark's kernel-level (eBPF) packet recording that stores traffic at the node level. Without it, snapshots have nothing to work with.

Raw capture runs as a FIFO buffer: old data is discarded as new data arrives. The buffer size determines how far back you can go. Larger buffer = wider snapshot window.
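
As a rough illustration (the capture rates here are assumptions, not measurements): a node recording ~1 GiB of traffic per hour fills a 10Gi buffer in about ten hours, so snapshots can reach back roughly that far; at ~10 GiB per hour, the same buffer covers only about one hour.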

```yaml
tap:
  capture:
    raw:
      enabled: true
      storageSize: 10Gi    # Per-node FIFO buffer
```

If raw capture isn't enabled, inform the user that retrospective analysis requires it and share the configuration above.
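
If the user manages Kubeshark with Helm, one way to apply this change might look like the sketch below (assuming the standard kubeshark/kubeshark chart and a release named kubeshark; adjust to the actual install):

```bash
# Assumes a Helm-based install; release and chart names may differ in your cluster
helm upgrade kubeshark kubeshark/kubeshark \
  --set tap.capture.raw.enabled=true \
  --set tap.capture.raw.storageSize=10Gi
```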

Snapshot Storage

Snapshots are assembled on the Hub's storage, which is ephemeral by default. For serious forensic work, persistent storage is recommended:

```yaml
tap:
  snapshots:
    local:
      storageClass: gp2
      storageSize: 1000Gi
```

Core Workflow

Every investigation starts with a snapshot. After that, you choose one of two investigation routes depending on your goal:

  1. Determine time window — When did the issue occur? Use get_data_boundaries to see what raw capture data is available.
  2. Create or locate a snapshot — Either take a new snapshot covering the incident window, or find an existing one with list_snapshots.
  3. Choose your investigation route — PCAP or Dissection (see below).

Choosing the Right Route

|                  | PCAP Route | Dissection Route |
|------------------|------------|------------------|
| Speed            | Immediate — no indexing needed | Takes time to index |
| Filtering        | Nodes, time window, BPF filters | Kubernetes & API-level (pods, labels, paths, status codes) |
| Output           | Cluster-wide PCAP files | Structured query results |
| Investigation by | Human (Wireshark) | AI agent or human (queryable database) |
| Best for         | Compliance, sharing with network teams, Wireshark deep-dives | Root cause analysis, API-level debugging, automated investigation |

Both routes are valid and complementary. Use PCAP when you need raw packets for human analysis or compliance. Use Dissection when you want an AI agent to search and analyze traffic programmatically.

Snapshot Operations

Both routes start here. A snapshot is an immutable freeze of all cluster traffic in a time window.

Check Data Boundaries

Tool: get_data_boundaries

Check what raw capture data exists across the cluster. You can only create snapshots within these boundaries — data outside the window has been rotated out of the FIFO buffer.

Example response:

```
Cluster-wide:
  Oldest: 2026-03-14 16:12:34 UTC
  Newest: 2026-03-14 18:05:20 UTC

Per node:
  ┌─────────────────────────────┬──────────┬──────────┐
  │            Node             │  Oldest  │  Newest  │
  ├─────────────────────────────┼──────────┼──────────┤
  │ ip-10-0-25-170.ec2.internal │ 16:12:34 │ 18:03:39 │
  │ ip-10-0-32-115.ec2.internal │ 16:13:45 │ 18:05:20 │
  └─────────────────────────────┴──────────┴──────────┘
```

If the incident falls outside the available window, the data has been rotated out. Suggest increasing storageSize for future coverage.

Create a Snapshot

Tool: create_snapshot

Specify nodes (or cluster-wide) and a time window within the data boundaries. Snapshots include raw capture files, Kubernetes pod events, and eBPF cgroup events.

Snapshots take time to build. Check status with get_snapshot — wait until completed before proceeding with either route.
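
For illustration, a call might look like the sketch below. The argument names are assumptions rather than the tool's actual schema; the nodes and time window reuse the data-boundaries example above.

```yaml
# Illustrative only: argument names are assumptions, not the tool's actual schema
tool: create_snapshot
arguments:
  nodes:
    - ip-10-0-25-170.ec2.internal
    - ip-10-0-32-115.ec2.internal
  start: "2026-03-14T17:05:00Z"
  end: "2026-03-14T17:50:00Z"
```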

List Existing Snapshots

Tool: list_snapshots

Shows all snapshots on the local Hub, with name, size, status, and node count.

Cloud Storage

Snapshots on the Hub are ephemeral. Cloud storage (S3, GCS, Azure Blob) provides long-term retention. Snapshots can be downloaded to any cluster with Kubeshark — not necessarily the original one.

  • Check cloud status: get_cloud_storage_status
  • Upload to cloud: upload_snapshot_to_cloud
  • Download from cloud: download_snapshot_from_cloud


Route 1: PCAP

The PCAP route does not require dissection. It works directly with the raw snapshot data to produce filtered, cluster-wide PCAP files. Use this route when:

  • You need raw packets for Wireshark analysis
  • You're sharing captures with network teams
  • You need evidence for compliance or audit
  • A human will perform the investigation (not an AI agent)

Filtering a PCAP

Tool: export_snapshot_pcap

Filter the snapshot down to what matters using:

  • Nodes — specific cluster nodes only
  • Time — sub-window within the snapshot
  • BPF filter — standard Berkeley Packet Filter syntax (e.g., host 10.0.53.101, port 8080, net 10.0.0.0/16)

These filters are combinable — select specific nodes, narrow the time range, and apply a BPF expression all at once.
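
For example, a single export might combine all three; a BPF expression like the sketch below scopes the capture to one IP, one port, and TCP traffic (standard BPF syntax; the IP and port are illustrative):

```
host 10.0.53.101 and tcp port 8080
```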

Workload-to-BPF Workflow

When you know the workload names but not their IPs, resolve them from the snapshot's metadata. Snapshots preserve pod-to-IP mappings from capture time, so resolution is accurate even if pods have been rescheduled since.

Tool: resolve_workload

Example workflow — extract PCAP for specific workloads:

  1. Resolve IPs: resolve_workload for orders-594487879c-7ddxf → 10.0.53.101
  2. Resolve IPs: resolve_workload for payment-service-6b8f9d-x2k4p → 10.0.53.205
  3. Build BPF: host 10.0.53.101 or host 10.0.53.205
  4. Export: export_snapshot_pcap with that BPF filter

This gives you a cluster-wide PCAP filtered to exactly the workloads involved in the incident — ready for Wireshark or long-term storage.
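
Before handing it off, the export can be sanity-checked locally; a minimal sketch, assuming tcpdump is installed (the file name is hypothetical):

```bash
# Read the exported capture without name/port resolution; file name is hypothetical
tcpdump -nn -r incident-orders.pcap 'host 10.0.53.101 or host 10.0.53.205' | head
```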


Route 2: Dissection

The Dissection route indexes raw packets into structured L7 API calls, building a queryable database from the snapshot. Use this route when:

  • An AI agent is performing the investigation
  • You need to search by Kubernetes context (pods, namespaces, labels, services)
  • You need to search by API elements (paths, status codes, headers, payloads)
  • You want structured responses you can analyze programmatically
  • You need to drill into the payload of a specific API call

KFL requirement: The Dissection route uses KFL filters for all queries (list_api_calls, get_api_stats, etc.). Before constructing any KFL filter, load the KFL skill (skills/kfl/). KFL is statically typed — incorrect field names or syntax will fail silently or error. If the KFL skill is not available, suggest the user install it:

```bash
ln -s /path/to/kubeshark/skills/kfl ~/.claude/skills/kfl
```

If the KFL skill cannot be loaded, only use the exact filter examples shown in this skill. Do not improvise or guess at field names, operators, or syntax. KFL field names differ from what you might expect (e.g., status_code not response.status, src.pod.namespace not src.namespace). Using incorrect fields produces wrong results without warning.

Activate Dissection

Tool: start_snapshot_dissection

Dissection takes time proportional to snapshot size — it parses every packet, reassembles streams, and builds the index. After completion, these tools become available:

  • list_api_calls — Search API transactions with KFL filters
  • get_api_call — Drill into a specific call (headers, body, timing, payload)
  • get_api_stats — Aggregated statistics (throughput, error rates, latency)
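
For example, a first list_api_calls query might look like the sketch below. The argument names are assumptions (check the actual tool schema); the KFL string matches the error filter used in the example response further down:

```yaml
# Illustrative only: argument names are assumptions, not the tool's actual schema
tool: list_api_calls
arguments:
  kfl: 'http && status_code >= 500'
```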

Investigation Strategy

Start broad, then narrow:

  1. get_api_stats — Get the overall picture: error rates, latency percentiles, throughput. Look for spikes or anomalies.
  2. list_api_calls filtered by error codes (4xx, 5xx) or high latency — find the problematic transactions.
  3. get_api_call on specific calls — inspect headers, bodies, timing, and full payload to understand what went wrong.
  4. Use KFL filters to slice by namespace, service, protocol, or any combination.

Example list_api_calls response (filtered to http && status_code >= 500):

```
┌──────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│      Timestamp       │ Method │           URL            │ Status │  Elapsed  │
├──────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 17:23:45  │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 17:23:46  │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 17:23:48  │ GET    │ /api/v1/inventory/check  │ 500    │  8,210 ms │
│ 2026-03-14 17:24:01  │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
Src: api-gateway (prod)  →  Dst: payment-service (prod)
```

Use the pattern of repeated failures and high latency to identify the failing service chain, then drill into individual calls with get_api_call.

KFL Filters for Dissected Traffic

Layer filters progressively when investigating:

```
// Step 1: Protocol + namespace
http && dst.pod.namespace == "production"

// Step 2: Add error condition
http && dst.pod.namespace == "production" && status_code >= 500

// Step 3: Narrow to service
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service"

// Step 4: Narrow to endpoint
http && dst.pod.namespace == "production" && status_code >= 500 && dst.service.name == "payment-service" && path.contains("/charge")
```

Other common RCA filters:

```
dns && dns_response && status_code != 0                      // Failed DNS lookups
src.service.namespace != dst.service.namespace               // Cross-namespace traffic
http && elapsed_time > 5000000                               // Slow transactions (> 5s)
conn && conn_state == "open" && conn_local_bytes > 1000000   // High-volume connections
```

Combining Both Routes

The two routes are complementary. A common pattern:

  1. Start with Dissection — let the AI agent search and identify the root cause
  2. Once you've pinpointed the problematic workloads, use resolve_workload to get their IPs
  3. Switch to PCAP — export a filtered PCAP of just those workloads for Wireshark deep-dive, sharing with the network team, or compliance archival

Use Cases

Post-Incident RCA

  1. Identify the incident time window from alerts, logs, or user reports
  2. Check get_data_boundaries — is the window still in raw capture?
  3. create_snapshot covering the incident window (add a ~15-minute buffer on each side)
  4. Dissection route: start_snapshot_dissection → get_api_stats → list_api_calls → get_api_call → follow the dependency chain
  5. PCAP route: resolve_workload → export_snapshot_pcap with BPF → hand off to Wireshark or archive

Other Use Cases

  • Trend analysis — Take snapshots at regular intervals and compare get_api_stats across them to detect latency drift, error rate changes, or new service-to-service connections.
  • Forensic preservation — create_snapshot + upload_snapshot_to_cloud for immutable, long-term evidence. Downloadable to any cluster months later.
  • Production-to-local replay — Upload a production snapshot to cloud, download it on a local KinD cluster, and investigate safely.

Setup Reference

For CLI installation, MCP configuration, verification, and troubleshooting, see references/setup.md.