✅ Answer Verifier

File: rag_system/agent/verifier.py

Objective

Assess whether an answer produced by the RAG pipeline is grounded in the retrieved context snippets.

Prompt (see `verifier.fact_check` in prompt_inventory.md)

Strict JSON schema:

```jsonc
{
  "verdict": "SUPPORTED" | "NOT_SUPPORTED" | "NEEDS_CLARIFICATION",
  "is_grounded": true | false,
  "reasoning": "<≤ 30 words>",
  "confidence_score": 0-100
}
```
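In Python, this schema maps naturally onto a small result type. A minimal sketch, assuming a dataclass shape for `VerificationResult` (the actual class in `rag_system/agent/verifier.py` may differ):

```python
import json
from dataclasses import dataclass


@dataclass
class VerificationResult:
    verdict: str           # "SUPPORTED" | "NOT_SUPPORTED" | "NEEDS_CLARIFICATION"
    is_grounded: bool
    reasoning: str         # ≤ 30 words
    confidence_score: int  # 0-100


def parse_verdict(raw: str) -> VerificationResult:
    """Parse the model's JSON reply into a VerificationResult."""
    data = json.loads(raw)
    return VerificationResult(
        verdict=data["verdict"],
        is_grounded=bool(data["is_grounded"]),
        reasoning=data["reasoning"],
        confidence_score=int(data["confidence_score"]),
    )
```

Coercing `is_grounded` and `confidence_score` guards against models that emit `"true"` or `"95"` as strings.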

Sequence Diagram

```mermaid
sequenceDiagram
    participant RP as Retrieval Pipeline
    participant V as Verifier
    participant LLM as Ollama

    RP->>V: query, context, answer
    V->>LLM: verification prompt
    LLM-->>V: JSON verdict
    V-->>RP: VerificationResult
```

Usage Sites

| Caller | Code | When |
| --- | --- | --- |
| `RetrievalPipeline.answer_stream()` | `pipelines/retrieval_pipeline.py` | If `verify=true` flag from frontend. |
| `Agent.loop.run()` | fallback path | Experimental, for composed answers. |

Config

| Flag | Default | Meaning |
| --- | --- | --- |
| `verify` | `false` | Frontend toggle; if `true`, the verifier runs. |
| `generation_model` | `qwen3:8b` | Same model as answer generation. |
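How the `verify` flag might gate the call inside the pipeline, as a sketch only (`maybe_verify` and the `verifier.verify(...)` signature are assumptions, not the actual pipeline code):

```python
def maybe_verify(verifier, query, context, answer, verify=False):
    """Run the verifier only when the frontend toggle is set.

    Returns the verifier's result when verification is enabled,
    or None when it is disabled (the pipeline then returns the
    answer unverified).
    """
    if not verify:
        return None
    return verifier.verify(query, context, answer)
```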

Failure Modes

  • If the LLM returns invalid JSON → the parse exception is caught and the result defaults to NOT_SUPPORTED.
  • If the verification call times out → the pipeline logs the timeout but still returns the answer (unverified).
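The first failure mode can be handled with a guarded parse that falls back to a NOT_SUPPORTED verdict. A hedged sketch (helper and constant names are hypothetical, not the actual implementation):

```python
import json

# Hypothetical fallback used when the model's reply cannot be parsed.
FALLBACK = {
    "verdict": "NOT_SUPPORTED",
    "is_grounded": False,
    "reasoning": "Verifier returned invalid JSON.",
    "confidence_score": 0,
}


def safe_parse(raw: str) -> dict:
    """Return the model's verdict dict, or the NOT_SUPPORTED fallback
    when the reply is not valid JSON, not an object, or missing keys."""
    try:
        data = json.loads(raw)
        if not {"verdict", "is_grounded"} <= data.keys():
            return dict(FALLBACK)
        return data
    except (json.JSONDecodeError, AttributeError, TypeError):
        return dict(FALLBACK)
```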

Keep this page updated when the schema or usage flags change.