
Conversation LLM Judges
=======================

.. currentmodule:: opik.evaluation.metrics

These evaluators wrap GEval-style LLM judges so you can score full conversations without manually extracting individual turns. They expect transcripts in the same format used by :class:`~opik.evaluation.metrics.ConversationThreadMetric` and typically rely on an OpenAI- or Azure-compatible backend. Refer to the relevant Fern guides for API keys, rate limits, and pricing considerations.
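As a rough illustration of the transcript shape these judges consume, the sketch below builds an OpenAI-style role/content conversation (an assumption based on the :class:`~opik.evaluation.metrics.ConversationThreadMetric` format; check the Fern guides for the exact schema). The actual judge call is shown in comments, since it requires a configured LLM backend:

.. code-block:: python

    # A minimal transcript sketch in the role/content format the
    # conversation judges consume (assumed OpenAI-style turns).
    conversation = [
        {"role": "user", "content": "How do I rotate my API key?"},
        {"role": "assistant", "content": "Open Settings > API Keys and click 'Rotate'."},
        {"role": "user", "content": "Got it, thanks!"},
    ]

    # With opik installed and an OpenAI-compatible backend configured,
    # scoring the whole thread would look roughly like this:
    #
    #   from opik.evaluation.metrics import ConversationalCoherenceMetric
    #   metric = ConversationalCoherenceMetric()
    #   result = metric.score(conversation=conversation)
    #   print(result.value, result.reason)

    # Structural sanity checks any judge-ready transcript should pass.
    assert all({"role", "content"} <= turn.keys() for turn in conversation)
    assert conversation[0]["role"] == "user"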

Core Conversation Judges
------------------------

.. autoclass:: GEvalConversationMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationalCoherenceMetric
   :special-members: __init__
   :members: score

.. autoclass:: SessionCompletenessQuality
   :special-members: __init__
   :members: score

.. autoclass:: UserFrustrationMetric
   :special-members: __init__
   :members: score

Specialized Variants
--------------------

.. autoclass:: ConversationComplianceRiskMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationDialogueHelpfulnessMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationQARelevanceMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationSummarizationCoherenceMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationSummarizationConsistencyMetric
   :special-members: __init__
   :members: score

.. autoclass:: ConversationPromptUncertaintyMetric
   :special-members: __init__
   :members: score
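Because all of these judges expose the same ``score`` interface, results from several of them can be aggregated over one thread. The sketch below is a hypothetical aggregation: ``FakeResult`` stands in for the result objects (assumed to expose a numeric ``value``) so the example runs without an LLM backend:

.. code-block:: python

    from statistics import mean

    # Stand-in for a judge's score() result; assumed to expose a
    # numeric `value` attribute, as opik metric results do.
    class FakeResult:
        def __init__(self, name, value):
            self.name = name
            self.value = value

    # In practice these would come from calls such as:
    #   results = [judge.score(conversation=thread) for judge in judges]
    results = [
        FakeResult("coherence", 0.8),
        FakeResult("helpfulness", 0.9),
        FakeResult("frustration", 0.1),
    ]

    # Simple unweighted mean across judges; real dashboards may
    # weight or invert metrics (e.g. frustration is lower-is-better).
    overall = mean(r.value for r in results)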