LLM Judge Presets
=================
.. currentmodule:: opik.evaluation.metrics

These presets wrap common ``GEval`` prompt templates so you can instantiate a judge
with a single line of code. Provide an appropriate model name (for example,
``"gpt-4o"``) and make sure the backing provider is configured via
``opik.configure``.
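As a quick illustration, a preset judge is constructed with a model name and then
called through its ``score`` method. The snippet below is a hedged sketch: the
exact parameter names (``input``, ``output``) and the shape of the returned score
object may differ, so consult the class references below for the authoritative
signatures.

.. code-block:: python

   import opik
   from opik.evaluation.metrics import QARelevanceJudge

   # Assumes provider credentials (e.g. an OpenAI API key) are already set up.
   opik.configure()

   judge = QARelevanceJudge(model="gpt-4o")  # model name is an example
   result = judge.score(
       input="What is the capital of France?",
       output="Paris is the capital of France.",
   )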
.. autoclass:: AgentTaskCompletionJudge
   :special-members: __init__
   :members: score

.. autoclass:: AgentToolCorrectnessJudge
   :special-members: __init__
   :members: score

.. autoclass:: DialogueHelpfulnessJudge
   :special-members: __init__
   :members: score

.. autoclass:: QARelevanceJudge
   :special-members: __init__
   :members: score

.. autoclass:: SummarizationCoherenceJudge
   :special-members: __init__
   :members: score

.. autoclass:: SummarizationConsistencyJudge
   :special-members: __init__
   :members: score

.. autoclass:: PromptUncertaintyJudge
   :special-members: __init__
   :members: score

.. autoclass:: ComplianceRiskJudge
   :special-members: __init__
   :members: score

.. autoclass:: DemographicBiasJudge
   :special-members: __init__
   :members: score

.. autoclass:: GenderBiasJudge
   :special-members: __init__
   :members: score

.. autoclass:: PoliticalBiasJudge
   :special-members: __init__
   :members: score

.. autoclass:: RegionalBiasJudge
   :special-members: __init__
   :members: score

.. autoclass:: ReligiousBiasJudge
   :special-members: __init__
   :members: score