rust/capture/docs/llma-capture-implementation-plan.md
This document outlines the implementation steps for the LLM Analytics capture pipeline based on the design specified in llma-capture-overview.md.
- Add a `/i/v0/ai` endpoint in the capture service
- Route `/i/v0/ai` to the capture service (Caddy routes in docker-compose; `capture-ai` service on port 3308)
- Handle the `event.properties` multipart part on the `/i/v0/ai` endpoint
- Reference uploaded blob parts as `s3://{bucket}/{prefix}{token}/{uuid}?range={start}-{end}`
- Provision the `ai-blobs` bucket via docker-compose, writing under the `llma/` prefix
- Enforce a request size limit on the `/i/v0/ai` endpoint (`ai_max_sum_of_parts_bytes`)
- Apply billing quota enforcement on the `/i/v0/ai` endpoint (`quota_limiter.check_and_filter()`, returns a `BillingLimit` error when exceeded)
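To make the blob-reference scheme above concrete, here is a minimal Rust sketch of building an `s3://{bucket}/{prefix}{token}/{uuid}?range={start}-{end}` URL. The `BlobRef` type and `to_url` method are hypothetical illustrations, not part of the capture service's actual API; the bucket, prefix, and field values are placeholders.

```rust
/// Hypothetical reference to a byte range of a blob part stored in S3.
/// Mirrors the `s3://{bucket}/{prefix}{token}/{uuid}?range={start}-{end}`
/// scheme described in the plan.
struct BlobRef {
    bucket: String, // e.g. "ai-blobs"
    prefix: String, // e.g. "llma/" (includes trailing slash)
    token: String,  // project API token
    uuid: String,   // UUID assigned to the uploaded part
    start: u64,     // inclusive start byte offset
    end: u64,       // inclusive end byte offset
}

impl BlobRef {
    /// Render the reference as an s3:// URL string.
    fn to_url(&self) -> String {
        format!(
            "s3://{}/{}{}/{}?range={}-{}",
            self.bucket, self.prefix, self.token, self.uuid, self.start, self.end
        )
    }
}

fn main() {
    let r = BlobRef {
        bucket: "ai-blobs".into(),
        prefix: "llma/".into(),
        token: "phc_example".into(),
        uuid: "0190a3c4-0000-7000-8000-000000000000".into(),
        start: 0,
        end: 1023,
    };
    // Prints: s3://ai-blobs/llma/phc_example/0190a3c4-0000-7000-8000-000000000000?range=0-1023
    println!("{}", r.to_url());
}
```

A consumer resolving such a reference would split on the `?range=` suffix and issue a ranged `GetObject` against the bucket/key portion.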