OceanBase Vector Store

OceanBase Database is a distributed relational database developed entirely by Ant Group. It runs on clusters of commodity servers and, building on the Paxos protocol and its distributed architecture, provides high availability and linear scalability without depending on specific hardware.

This notebook describes in detail how to use the OceanBase vector store functionality in LlamaIndex.

Setup

python
%pip install llama-index-vector-stores-oceanbase
%pip install llama-index
# Choose DashScope for the embedding model and LLM; you can also use the default OpenAI or other models to test
%pip install llama-index-embeddings-dashscope
%pip install llama-index-llms-dashscope

Deploy a standalone OceanBase server with Docker

python
!docker run --name=ob433 -e MODE=slim -p 2881:2881 -d oceanbase/oceanbase-ce:4.3.3.0-100000142024101215
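
The container can take a while to bootstrap. A minimal readiness check, assuming the oceanbase-ce image prints "boot success!" to its logs once initialization completes:

python
import subprocess
import time

# Poll the container logs until OceanBase reports it has finished bootstrapping.
while True:
    logs = subprocess.run(
        ["docker", "logs", "ob433"], capture_output=True, text=True
    ).stdout
    if "boot success!" in logs:
        print("OceanBase is ready.")
        break
    time.sleep(10)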

Creating ObVecClient

python
from pyobvector import ObVecClient

# Connect to the OceanBase instance started above
# (pyobvector defaults to 127.0.0.1:2881).
client = ObVecClient()

# Allow OceanBase to use up to 30% of tenant memory for vector indexes.
client.perform_raw_text_sql(
    "ALTER SYSTEM ob_vector_memory_limit_percentage = 30"
)
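
ObVecClient() above relies on pyobvector's defaults, which match the Docker deployment. For a non-local server, the connection parameters can be passed explicitly; a sketch, where the values assume the slim Docker image's default tenant:

python
client = ObVecClient(
    uri="127.0.0.1:2881",  # host:port of OceanBase's MySQL-protocol endpoint
    user="root@test",  # default user@tenant in the slim Docker image
    password="",
    db_name="test",
)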

Configure the DashScope embedding model and LLM.

python
# Set the embedding model
import os
from llama_index.core import Settings
from llama_index.embeddings.dashscope import DashScopeEmbedding

# Global Settings
Settings.embed_model = DashScopeEmbedding()

# Configure the LLM
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels

dashscope_llm = DashScope(
    model_name=DashScopeGenerationModels.QWEN_MAX,
    api_key=os.environ.get("DASHSCOPE_API_KEY", ""),
)
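
The vector store created later uses dim=1536, so the embedding model must emit 1536-dimensional vectors. A quick sanity check using LlamaIndex's standard embedding call:

python
# Verify the embedding dimension matches the vector store's dim parameter.
vec = Settings.embed_model.get_text_embedding("hello world")
print(len(vec))  # expected: 1536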

Load documents

python
from llama_index.core import (
    SimpleDirectoryReader,
    load_index_from_storage,
    VectorStoreIndex,
    StorageContext,
)
from llama_index.vector_stores.oceanbase import OceanBaseVectorStore

Download Data & Load Data

python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'

python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
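
A quick check that the essay loaded; documents[0].doc_id is also what the delete example at the end references:

python
print(f"Loaded {len(documents)} document(s); first doc_id: {documents[0].doc_id}")
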
python
oceanbase = OceanBaseVectorStore(
    client=client,
    dim=1536,  # dimension of stored vectors; must match the embedding model
    drop_old=True,  # drop any previously stored data for this collection
    normalize=True,  # L2-normalize embeddings before storing
)

storage_context = StorageContext.from_defaults(vector_store=oceanbase)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
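
If the data has already been ingested into OceanBase, the index can also be reconstructed from the existing vector store without re-loading documents, using the standard LlamaIndex API:

python
# Rebuild an index on top of a previously populated vector store.
existing_index = VectorStoreIndex.from_vector_store(vector_store=oceanbase)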

Query Index

python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response
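
The comment above refers to LlamaIndex's usual logging setup; enabling it before querying prints retrieval and LLM call details:

python
import logging
import sys

# Route LlamaIndex's internal logs to stdout at DEBUG level.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)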

Metadata Filtering

OceanBase Vector Store supports metadata filtering at query time with the following operators: =, >, <, !=, >=, <=, in, not in, like, and IS NULL.

python
from llama_index.core.vector_stores import (
    MetadataFilters,
    MetadataFilter,
)

query_engine = index.as_query_engine(
    llm=dashscope_llm,
    filters=MetadataFilters(
        filters=[
            MetadataFilter(key="book", value="paul_graham", operator="!="),
        ]
    ),
    similarity_top_k=10,
)

res = query_engine.query("What did the author learn?")
res.response
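
Filters can also be combined. A sketch using FilterCondition.AND, together with the file_name metadata key that SimpleDirectoryReader attaches to each document:

python
from llama_index.core.vector_stores import FilterCondition

query_engine = index.as_query_engine(
    llm=dashscope_llm,
    filters=MetadataFilters(
        filters=[
            MetadataFilter(
                key="file_name", value="paul_graham_essay.txt", operator="=="
            ),
            MetadataFilter(key="book", value="paul_graham", operator="!="),
        ],
        condition=FilterCondition.AND,  # both conditions must hold
    ),
    similarity_top_k=10,
)
res = query_engine.query("What did the author learn?")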

Delete Documents

python
# Delete by ref_doc_id: removes every chunk derived from the first source document.
oceanbase.delete(documents[0].doc_id)

query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response