Chat Prompts Customization


[Open in Colab](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/customization/prompts/chat_prompts.ipynb)

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

python
%pip install llama-index

Prompt Setup

Let's customize the default question-answering and refine prompts so the LLM always answers, even if the retrieved context is not helpful!

Using RichPromptTemplate, we can define Jinja-formatted prompts.

python
from llama_index.core.prompts import RichPromptTemplate

chat_text_qa_prompt_str = """
{% chat role="system" %}
Always answer the question, even if the context isn't helpful.
{% endchat %}

{% chat role="user" %}
The following is some retrieved context:

<context>
{{ context_str }}
</context>

Using the context, answer the provided question:
{{ query_str }}
{% endchat %}
"""
text_qa_template = RichPromptTemplate(chat_text_qa_prompt_str)

# Refine Prompt
chat_refine_prompt_str = """
{% chat role="system" %}
Always answer the question, even if the context isn't helpful.
{% endchat %}

{% chat role="user" %}
The following is some new retrieved context:

<context>
{{ context_msg }}
</context>

And here is an existing answer to the query:
<existing_answer>
{{ existing_answer }}
</existing_answer>

Using both the new retrieved context and the existing answer, either update or repeat the existing answer to this query:
{{ query_str }}
{% endchat %}
"""
refine_template = RichPromptTemplate(chat_refine_prompt_str)
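
Before wiring these into a query engine, we can sanity-check how the chat blocks render. RichPromptTemplate can render a template into chat messages via format_messages(); the context and question below are just placeholder values for illustration:

python
# Render the QA template into chat messages using sample values
messages = text_qa_template.format_messages(
    context_str="Paul Graham co-founded Y Combinator.",
    query_str="Who is Paul Graham?",
)
for message in messages:
    print(message.role, ":", message.content)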

Using the Prompts

Now, we use the prompts in an index query!

python
import os

os.environ["OPENAI_API_KEY"] = "sk-proj-..."
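
If you'd rather not hardcode the key in the notebook, a minimal standard-library alternative is to prompt for it at runtime:

python
import os
from getpass import getpass

# Ask for the key interactively instead of storing it in the notebook
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")
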
python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

Download Data

python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
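
If wget isn't available in your environment (e.g., on Windows), the same download can be done in pure Python:

python
import os
import urllib.request

# Create the data directory and fetch the essay with the standard library
os.makedirs("data/paul_graham", exist_ok=True)
url = "https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt"
urllib.request.urlretrieve(url, "data/paul_graham/paul_graham_essay.txt")
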
python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data/paul_graham/").load_data()

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()

Before Customizing Templates

Let's look at the default prompts:

python
query_engine.get_prompts()
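
get_prompts() returns a dictionary keyed by module and prompt name. Listing the keys shows the exact names we'll pass to update_prompts() below:

python
# Inspect which prompts the query engine exposes
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict.keys()))
# e.g. ['response_synthesizer:text_qa_template',
#       'response_synthesizer:refine_template']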

And how does the query engine respond when asked about an unrelated topic?

python
print(query_engine.query("Who is Joe Biden?"))

After Customizing Templates

Now, we can update the templates and observe the change in response!

python
query_engine.update_prompts(
    {
        "response_synthesizer:text_qa_template": text_qa_template,
        "response_synthesizer:refine_template": refine_template,
    }
)
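
update_prompts() swaps the templates on an existing engine. If you know the templates up front, an equivalent setup is to pass them when building the query engine, since as_query_engine forwards these keyword arguments to the response synthesizer:

python
# Equivalent: supply the custom templates at construction time
query_engine = index.as_query_engine(
    text_qa_template=text_qa_template,
    refine_template=refine_template,
)
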
python
print(query_engine.query("Who is Joe Biden?"))