LongContextReorder

docs/examples/node_postprocessor/LongContextReorder.ipynb

Models struggle to access significant details located in the middle of long contexts. A study observed that performance is typically best when crucial information is positioned at the beginning or end of the input context. Additionally, performance degrades notably as the input context grows longer, even in models designed for long contexts.

This module will re-order the retrieved nodes, which can be helpful in cases where a large top-k is needed. The reordering process works as follows:

  1. Input nodes are sorted based on their relevance scores.
  2. Sorted nodes are then reordered in an alternating pattern:
    • Even-indexed nodes are placed at the beginning of the new list.
    • Odd-indexed nodes are placed at the end of the new list.

This approach ensures that the highest-scored (most relevant) nodes are positioned at the beginning and end of the list, with lower-scored nodes in the middle.
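The alternating pattern above can be sketched in plain Python. This is a minimal illustration using bare scores in place of retrieved nodes (the actual `LongContextReorder` implementation operates on node objects and may differ in detail): sorting ascending and inserting even-indexed items at the front means the highest-scored items end up at the edges of the list.

```python
def reorder_by_score(scores):
    """Place the highest-scored items at the beginning and end,
    leaving the lowest-scored items in the middle."""
    ordered = sorted(scores)  # ascending: best items processed last
    reordered = []
    for i, item in enumerate(ordered):
        if i % 2 == 0:
            reordered.insert(0, item)  # even index -> front of list
        else:
            reordered.append(item)  # odd index -> back of list
    return reordered


print(reorder_by_score([0.9, 0.7, 0.5, 0.3, 0.1]))
# -> [0.9, 0.5, 0.1, 0.3, 0.7]
```

Note how the two best scores (0.9 and 0.7) land at the start and end, while the weakest (0.1) sits in the middle, which is exactly the placement the "lost in the middle" observation suggests.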

Setup

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-openai
python
!pip install llama-index
python
import os
import openai

os.environ["OPENAI_API_KEY"] = "sk-..."
python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

Settings.llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.1)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

Download Data

python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
python
from llama_index.core import SimpleDirectoryReader

documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
python
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(documents)

Run Query

python
from llama_index.core.postprocessor import LongContextReorder

reorder = LongContextReorder()

reorder_engine = index.as_query_engine(
    node_postprocessors=[reorder], similarity_top_k=5
)
base_engine = index.as_query_engine(similarity_top_k=5)
python
from llama_index.core.response.notebook_utils import display_response

base_response = base_engine.query("Did the author meet Sam Altman?")
display_response(base_response)
python
reorder_response = reorder_engine.query("Did the author meet Sam Altman?")
display_response(reorder_response)

Inspect Order Differences

python
print(base_response.get_formatted_sources())
python
print(reorder_response.get_formatted_sources())