
Guardrails Output Parsing

First, set your OpenAI API key.

python
# import os

# os.environ["OPENAI_API_KEY"] = "sk-..."

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

python
%pip install llama-index-llms-openai
%pip install llama-index-output-parsers-guardrails
python
%pip install guardrails-ai

Download Data

python
!mkdir -p 'data/paul_graham/'
!curl 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' > 'data/paul_graham/paul_graham_essay.txt'

Load documents, build the VectorStoreIndex

python
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
python
index = VectorStoreIndex.from_documents(documents, chunk_size=512)

Define Query + Guardrails Spec

python
from llama_index.output_parsers.guardrails import GuardrailsOutputParser

Define custom QA and Refine Prompts
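
The rest of this demo relies on the default QA template (reformatted through the output parser in a later cell). As a minimal sketch, a custom QA prompt could be defined along these lines; the template text below is illustrative and is not used elsewhere in this notebook.

python
from llama_index.core import PromptTemplate

# Illustrative custom QA template (an assumption, not the demo's actual prompt)
custom_qa_tmpl = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context above, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)
# If desired, it can be passed to the query engine later via text_qa_template=custom_qa_tmpl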

Define Guardrails Spec

python
# You can either define a RailSpec and initialise a Guard object from_rail_string()
# OR define Pydantic classes and initialise a Guard object from_pydantic()
# For more info: https://docs.guardrailsai.com/defining_guards/pydantic/
# Guardrails recommends Pydantic

from pydantic import BaseModel, Field
from typing import List
import guardrails as gd


class BulletPoints(BaseModel):
    # In all the fields below, you can define validators as well
    # Left out for brevity
    explanation: str = Field()
    explanation2: str = Field()
    explanation3: str = Field()


class Explanation(BaseModel):
    points: BulletPoints = Field(
        description="Bullet points regarding events in the author's life."
    )


# Define the prompt
prompt = """
Query string here.

${gr.xml_prefix_prompt}

${output_schema}

${gr.json_suffix_prompt_v2_wo_none}
"""
python
from llama_index.llms.openai import OpenAI

# Create a guard object
guard = gd.Guard.from_pydantic(output_class=Explanation, prompt=prompt)

# Create the output parser object
output_parser = GuardrailsOutputParser(guard)

# Attach the output parser to an LLM object
llm = OpenAI(output_parser=output_parser)
python
from llama_index.core.prompts.default_prompts import (
    DEFAULT_TEXT_QA_PROMPT_TMPL,
)

# take a look at the new QA template!
fmt_qa_tmpl = output_parser.format(DEFAULT_TEXT_QA_PROMPT_TMPL)
print(fmt_qa_tmpl)

Query Index

python
query_engine = index.as_query_engine(
    llm=llm,
)
response = query_engine.query(
    "What are the three items the author did growing up?",
)
python
print(response)
python
# View a summary of what the guard did
guard.history.last.tree
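
Beyond the tree view, the guard history also exposes the raw and validated outputs. As a sketch (assuming the standard guardrails-ai call-history API), they can be inspected directly:

python
# Inspect the last call's outputs (assumption: guardrails-ai history attributes)
print(guard.history.last.raw_outputs)  # raw LLM completion(s)
print(guard.history.last.validated_output)  # output after schema validation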