
Opik is an open-source logging, debugging, and optimization platform for AI agents and LLM applications. If you're building AI features, you know it's easy to spin up a working prototype but harder to log, test, iterate, and monitor to meet production requirements.

Opik gives you all the tools you need to go from LLM observability to action across your AI application footprint and dev cycle. Ship measurable improvements with gorgeous logs, annotation and scoring functions, pre-configured LLM-as-a-judge eval metrics, and even automated agent optimization algorithms to maximize performance.

End-to-End AI Engineering

<Frame> </Frame> <Tip> Opik is Open Source! You can find the full source code on [GitHub](https://github.com/comet-ml/opik) and the complete self-hosting guide can be found [here](/self-host/local_deployment). </Tip>

Core Functions

<CardGroup cols={2}> <Card title="Quickstart Guide" href="/quickstart" icon="fa-solid fa-rocket" iconPosition="left"> Opik integrates with your existing AI stack through your model provider or LLM framework. </Card> <Card title="LLM Observability - Log LLM Traces" href="/tracing/advanced/log_traces" icon="fa-solid fa-eye" iconPosition="left"> Traces give you instant visibility into what's working, what's not, and why, with advanced analysis and debugging features built in. </Card> <Card title="Evaluation - Score Performance" href="/evaluation/overview" icon="fa-solid fa-chart-line" iconPosition="left"> Use LLM-as-a-judge and heuristic eval metrics to score your app or agent on hallucination, context recall, and more. </Card> <Card title="Agent Optimization" href="/development/optimization-runs/overview" icon="fa-solid fa-brain" iconPosition="left"> Choose from six advanced optimization algorithms to auto-generate and score the best prompts for the steps in your agentic system. </Card> <Card title="Prompt Engineering" href="/v1/prompt_engineering/prompt_management" icon="fa-solid fa-wand-magic-sparkles" iconPosition="left"> Store and version system prompts, compare results live in the [Prompt Playground](/v1/prompt_engineering/playground), and experiment with different models with our LLM proxy. </Card> <Card title="Self-hosting Opik" href="/self-host/overview" icon="fa-solid fa-server" iconPosition="left"> Deploy Opik on your own infrastructure with local or Kubernetes deployment options. </Card> </CardGroup>
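To give a feel for what a heuristic eval metric computes, here is a minimal plain-Python sketch of a keyword-coverage scorer. This is an illustration only, not Opik's actual metric API; the function name and signature are hypothetical.

```python
def keyword_coverage(output: str, expected_keywords: list[str]) -> float:
    """Hypothetical heuristic metric: fraction of expected keywords
    that appear (case-insensitively) in the model output."""
    if not expected_keywords:
        return 1.0
    lowered = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in lowered)
    return hits / len(expected_keywords)

# A perfect answer mentions every expected keyword:
score = keyword_coverage("Paris is the capital of France.", ["Paris", "France"])
```

Real Opik metrics (heuristic or LLM-as-a-judge) are configured through the SDK and surfaced in the evaluation UI; see the Evaluation overview linked above for the supported metrics.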

Video Tutorials

Prefer a visual guide? Follow along as we cover everything from basic setup and trace logging to LLM evaluation metrics, production monitoring, and more.

<Frame> <iframe width="100%" height="500px" src="https://www.youtube-nocookie.com/embed/TO9ar6-OJj4?rel=0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen ></iframe> </Frame> <Tip> You can find a full set of video tutorials in the [Opik University](/v1/opik-university/overview). </Tip>

Open-Source Access Meets Enterprise Performance

All Opik versions (cloud, open source, and enterprise) include the full AI engineering feature set and run on the Comet platform, with proven performance at scale supporting many of the world's largest organizations.

Compare Opik to other LLM observability tools and you'll find that traces populate faster, evaluations run smoother, and reliability comes standard — even for complex agentic systems serving millions of users in production.