docs/examples/customization/prompts/chat_prompts.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/customization/prompts/chat_prompts.ipynb" target="_parent">Open In Colab</a>
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index
Let's customize the default question-answering and refine prompts so the LLM always answers, even if the retrieved context is not helpful!
Using RichPromptTemplate, we can define Jinja-formatted prompts.
from llama_index.core.prompts import RichPromptTemplate
chat_text_qa_prompt_str = """
{% chat role="system" %}
Always answer the question, even if the context isn't helpful.
{% endchat %}
{% chat role="user" %}
The following is some retrieved context:
<context>
{{ context_str }}
</context>
Using the context, answer the provided question:
{{ query_str }}
{% endchat %}
"""
text_qa_template = RichPromptTemplate(chat_text_qa_prompt_str)
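Each `{% chat %}` block in the template becomes one chat message with the given role, and the `{{ ... }}` placeholders are filled in at query time. A minimal plain-Python approximation of that rendering (the `render_chat` helper and sample values are illustrative, not part of the LlamaIndex API):

```python
import re

def render_chat(template: str, **kwargs) -> list[dict]:
    """Roughly emulate RichPromptTemplate rendering: split {% chat %}
    blocks into role-tagged messages and substitute {{ var }} values."""
    messages = []
    for role, body in re.findall(
        r'{%\s*chat role="(\w+)"\s*%}(.*?){%\s*endchat\s*%}', template, re.DOTALL
    ):
        for key, value in kwargs.items():
            body = body.replace("{{ " + key + " }}", str(value))
        messages.append({"role": role, "content": body.strip()})
    return messages

msgs = render_chat(
    """
{% chat role="system" %}
Always answer the question, even if the context isn't helpful.
{% endchat %}
{% chat role="user" %}
<context>
{{ context_str }}
</context>
{{ query_str }}
{% endchat %}
""",
    context_str="Paul Graham co-founded Y Combinator.",
    query_str="Who founded Y Combinator?",
)
print([m["role"] for m in msgs])  # ['system', 'user']
```

The real template renders the same shape: one system message with the instruction, and one user message combining the retrieved context and the question.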
# Refine Prompt
chat_refine_prompt_str = """
{% chat role="system" %}
Always answer the question, even if the context isn't helpful.
{% endchat %}
{% chat role="user" %}
The following is some new retrieved context:
<context>
{{ context_msg }}
</context>
And here is an existing answer to the query:
<existing_answer>
{{ existing_answer }}
</existing_answer>
Using both the new retrieved context and the existing answer, either update or repeat the existing answer to this query:
{{ query_str }}
{% endchat %}
"""
refine_template = RichPromptTemplate(chat_refine_prompt_str)
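For context on why there are two templates: the refine response mode answers from the first retrieved chunk using the QA prompt, then feeds each additional chunk plus the running answer through the refine prompt. A schematic of that loop (hypothetical helper names; the real synthesizer calls the LLM at each step):

```python
def synthesize(chunks, query, answer_fn, refine_fn):
    """Schematic refine loop: QA prompt on the first chunk,
    refine prompt on every later chunk."""
    answer = answer_fn(context_str=chunks[0], query_str=query)
    for chunk in chunks[1:]:
        answer = refine_fn(
            context_msg=chunk, existing_answer=answer, query_str=query
        )
    return answer

# Stub "LLM" calls that just record which prompt was used.
trace = []
qa = lambda **kw: trace.append("qa") or "draft"
refine = lambda **kw: trace.append("refine") or "revised"

final = synthesize(["chunk 1", "chunk 2", "chunk 3"], "a query", qa, refine)
print(final, trace)  # revised ['qa', 'refine', 'refine']
```

This is why both templates above carry the same system instruction: either one may produce the final answer, depending on how many chunks are retrieved.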
Now, we use the prompts in an index query!
import os
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.embed_model = OpenAIEmbedding(model_name="text-embedding-3-small")
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
Let's view the default prompts:
query_engine.get_prompts()
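The returned dict is flat, keyed by the sub-module name and the prompt name joined with a colon; `update_prompts` below accepts the same keys. A toy illustration of the keying scheme (the dict values here are stand-ins, not real output):

```python
# Stand-in for the dict shape returned by get_prompts():
prompts = {
    "response_synthesizer:text_qa_template": "<PromptTemplate>",
    "response_synthesizer:refine_template": "<PromptTemplate>",
}
for key in prompts:
    module, name = key.split(":")
    print(module, "->", name)
```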
And how does the query engine respond when asked about an unrelated concept?
print(query_engine.query("Who is Joe Biden?"))
Now, we can update the templates and observe the change in response!
query_engine.update_prompts(
{
"response_synthesizer:text_qa_template": text_qa_template,
"response_synthesizer:refine_template": refine_template,
}
)
print(query_engine.query("Who is Joe Biden?"))