Chat Engine - Condense Question Mode

<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/chat_engine/chat_engine_condense_question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Condense question is a simple chat mode built on top of a query engine over your data.
For each chat interaction (one turn is sketched in code below):

- first generate a standalone question from the conversation context and the last message, then
- query the query engine with the condensed question for a response.

This approach is simple and works well for questions directly related to the knowledge base. Since it always queries the knowledge base, it can have difficulty answering meta questions like "what did I ask you before?"
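Conceptually, each turn runs that two-step pipeline. The snippet below is a rough sketch, not the library's exact internals; condense_prompt, history, and last_message are illustrative names:

# Conceptual sketch of one chat turn (illustrative names, not library internals)
# Step 1: condense chat history + new message into a standalone question
standalone_question = llm.predict(
    condense_prompt, chat_history=history, question=last_message
)
# Step 2: answer the condensed question with the underlying query engine
response = query_engine.query(standalone_question)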
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-llms-openai
!pip install llama-index
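These examples use OpenAI models by default, so an OpenAI API key needs to be available. One way to set it (the key below is a placeholder, not a real value):

import os

os.environ["OPENAI_API_KEY"] = "sk-..."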
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Load data and build index
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
data = SimpleDirectoryReader(input_dir="./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
Configure chat engine
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
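Equivalently, you can construct the engine directly from a query engine with CondenseQuestionChatEngine.from_defaults. This sketch shows the explicit form, which is handy when you want to customize the underlying query engine first; keyword arguments may vary slightly across versions:

from llama_index.core.chat_engine import CondenseQuestionChatEngine

# Build a query engine explicitly, then wrap it in the condense-question chat engine
query_engine = index.as_query_engine()
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine, verbose=True
)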
Chat with your data
response = chat_engine.chat("What did Paul Graham do after YC?")
print(response)
Ask a follow-up question
response = chat_engine.chat("What about after that?")
print(response)
response = chat_engine.chat("Can you tell me more?")
print(response)
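To see the limitation mentioned above in action, try a meta question about the conversation itself. Because the engine always condenses the turn into a standalone query over the essay, the answer tends to come from the knowledge base rather than the chat history (illustrative; output will vary):

response = chat_engine.chat("What was the first thing I asked you?")
print(response)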
Reset conversation state
chat_engine.reset()
response = chat_engine.chat("What about after that?")
print(response)
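Streaming support

The same chat mode also supports token-by-token streaming. Rebuild the engine with an explicit LLM and call stream_chat instead of chat: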
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
data = SimpleDirectoryReader(input_dir="../data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
chat_engine = index.as_chat_engine(
chat_mode="condense_question", llm=llm, verbose=True
)
response = chat_engine.stream_chat("What did Paul Graham do after YC?")
for token in response.response_gen:
    print(token, end="")