Completion Prompts Customization

docs/examples/customization/prompts/completion_prompts.ipynb


If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index
```

Prompt Setup

By default, the query engine's prompts tell the model to answer only from the retrieved context. Let's customize them to always answer, even if the context is not helpful!

Using RichPromptTemplate, we can define Jinja-formatted prompts.

```python
from llama_index.core.prompts import RichPromptTemplate

text_qa_template_str = """Context information is below:
<context>
{{ context_str }}
</context>

Using both the context information and also using your own knowledge, answer the question:
{{ query_str }}
"""
text_qa_template = RichPromptTemplate(text_qa_template_str)

refine_template_str = """New context information has been provided:
<context>
{{ context_msg }}
</context>

We also have an existing answer generated using previous context:
<existing_answer>
{{ existing_answer }}
</existing_answer>

Using the new context, either update the existing answer, or repeat it if the new context is not relevant, when answering this query:
{{ query_str }}
"""
refine_template = RichPromptTemplate(refine_template_str)
```
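The templates above use Jinja-style `{{ ... }}` placeholders, which RichPromptTemplate fills in at query time. As a rough stand-in for what that substitution does (a minimal sketch, not LlamaIndex's actual implementation), you can picture it like this:

```python
import re


def render(template: str, **variables: str) -> str:
    # Replace each {{ name }} placeholder with the matching keyword value.
    # This is a hypothetical, simplified version of Jinja rendering.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: variables[m.group(1)],
        template,
    )


prompt = render(
    "Context information is below:\n"
    "<context>\n{{ context_str }}\n</context>\n"
    "Answer the question: {{ query_str }}",
    context_str="Paul Graham co-founded Y Combinator.",
    query_str="Who founded Y Combinator?",
)
print(prompt)
```

Real Jinja (and RichPromptTemplate) also supports conditionals and loops inside the template, which is why it is a useful upgrade over plain f-string prompts.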

Using the Prompts

Now, we use the prompts in an index query!

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."
```
```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
```

Download Data

```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data/paul_graham/").load_data()

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
```

Before Adding Templates

Let's look at the default prompts first:

```python
query_engine.get_prompts()
```

And how does the engine respond when asked about a concept unrelated to the indexed essay?

```python
print(query_engine.query("Who is Joe Biden?"))
```

After Adding Templates

Now, we can update the templates and observe the change in response!

```python
query_engine.update_prompts(
    {
        "response_synthesizer:text_qa_template": text_qa_template,
        "response_synthesizer:refine_template": refine_template,
    }
)
```
```python
print(query_engine.query("Who is Joe Biden?"))
```