docs/examples/vector_stores/OceanBaseVectorStore.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/vector_stores/OceanBaseVectorStore.ipynb" target="_parent">Open In Colab</a>
OceanBase Database is a distributed relational database developed entirely by Ant Group. It runs on clusters of commodity servers and, through the Paxos protocol and its distributed architecture, provides high availability and linear scalability without depending on specific hardware.
This notebook describes in detail how to use the OceanBase vector store functionality in LlamaIndex.
%pip install llama-index-vector-stores-oceanbase
%pip install llama-index
# choose DashScope as the embedding and LLM model; you can also use the default OpenAI or another model to test
%pip install llama-index-embeddings-dashscope
%pip install llama-index-llms-dashscope
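Deploy a standalone OceanBase server with Docker. The command below starts the community edition image in slim mode and exposes the default SQL port 2881: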
!docker run --name=ob433 -e MODE=slim -p 2881:2881 -d oceanbase/oceanbase-ce:4.3.3.0-100000142024101215
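Once the container is up, connect with pyobvector's `ObVecClient` (by default it connects to `127.0.0.1:2881` as `root@test`) and raise `ob_vector_memory_limit_percentage` so the instance reserves memory for vector indexes: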
from pyobvector import ObVecClient
client = ObVecClient()
client.perform_raw_text_sql(
"ALTER SYSTEM ob_vector_memory_limit_percentage = 30"
)
Configure the DashScope embedding model and LLM.
# set embedding model
import os
from llama_index.core import Settings
from llama_index.embeddings.dashscope import DashScopeEmbedding
# Global Settings
Settings.embed_model = DashScopeEmbedding()  # reads DASHSCOPE_API_KEY from the environment
# configure the LLM
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels
dashscope_llm = DashScope(
model_name=DashScopeGenerationModels.QWEN_MAX,
api_key=os.environ.get("DASHSCOPE_API_KEY", ""),
)
from llama_index.core import (
SimpleDirectoryReader,
load_index_from_storage,
VectorStoreIndex,
StorageContext,
)
from llama_index.vector_stores.oceanbase import OceanBaseVectorStore
Download Data & Load Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
oceanbase = OceanBaseVectorStore(
    client=client,
    dim=1536,  # must match the embedding dimension; DashScope's default text embedding model outputs 1536-dim vectors
    drop_old=True,  # drop and recreate the table if it already exists
    normalize=True,  # L2-normalize vectors so similarity scores behave like cosine similarity
)
storage_context = StorageContext.from_defaults(vector_store=oceanbase)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
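Building the index embeds the documents and writes them into OceanBase. If the vectors are already stored, you can reconnect to the existing table instead of re-ingesting; a minimal sketch, assuming the table created above still exists and therefore passing `drop_old=False`:
# Reconnect to an existing table; drop_old=False keeps the stored vectors
existing_store = OceanBaseVectorStore(
    client=client,
    dim=1536,
    drop_old=False,
    normalize=True,
)
existing_index = VectorStoreIndex.from_vector_store(vector_store=existing_store)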
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response
The OceanBase vector store supports metadata filtering at query time with the following operators: =, >, <, !=, >=, <=, in, not in, like, and IS NULL.
from llama_index.core.vector_stores import (
MetadataFilters,
MetadataFilter,
)
query_engine = index.as_query_engine(
llm=dashscope_llm,
filters=MetadataFilters(
filters=[
MetadataFilter(key="book", value="paul_graham", operator="!="),
]
),
similarity_top_k=10,
)
res = query_engine.query("What did the author learn?")
res.response
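The `!=` filter above excludes any document whose `book` metadata equals `paul_graham`. The other operators work the same way; a sketch using `in` (the metadata key and value list here are purely illustrative):
from llama_index.core.vector_stores import FilterOperator

query_engine = index.as_query_engine(
    llm=dashscope_llm,
    filters=MetadataFilters(
        filters=[
            MetadataFilter(
                key="book",
                value=["paul_graham", "another_book"],
                operator=FilterOperator.IN,
            ),
        ]
    ),
)
You can also delete documents from the store by doc id and query again: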
oceanbase.delete(documents[0].doc_id)
query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response