# Python API Reference

A complete reference for RAGFlow's Python APIs. Before proceeding, please ensure you have your RAGFlow API key ready for authentication.
:::tip NOTE
Run the following command to download the Python SDK:

```bash
pip install ragflow-sdk
```
:::
## ERROR CODES

| Code | Message | Description |
|---|---|---|
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Unauthorized access |
| 403 | Forbidden | Access denied |
| 404 | Not Found | Resource not found |
| 500 | Internal Server Error | Server internal error |
| 1001 | Invalid Chunk ID | Invalid Chunk ID |
| 1002 | Chunk Update Failed | Chunk update failed |
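A client can map these codes back to the short messages in the table above when inspecting a failed call. `ERROR_MESSAGES` and `describe_error` below are hypothetical helpers for illustration, not part of the SDK:

```python
# Client-side lookup mirroring the error-code table above (the table is the
# source of truth; this dict is a hypothetical convenience, not an SDK API).
ERROR_MESSAGES = {
    400: "Bad Request",
    401: "Unauthorized",
    403: "Forbidden",
    404: "Not Found",
    500: "Internal Server Error",
    1001: "Invalid Chunk ID",
    1002: "Chunk Update Failed",
}

def describe_error(code: int) -> str:
    """Return the short message for a RAGFlow error code, if known."""
    return ERROR_MESSAGES.get(code, f"Unknown error code: {code}")

print(describe_error(404))   # Not Found
print(describe_error(1001))  # Invalid Chunk ID
```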
## OpenAI-Compatible API

### Create chat completion

Creates a model response for a given historical chat conversation, through an OpenAI-compatible API.

#### Parameters

##### chat_id: `string`, *Required*

The ID of an existing chat assistant. This value is part of the request path: `/api/v1/openai/<chat_id>/chat/completions`.

##### model: `string`, *Required*

The model used to generate the response. You may also use the legacy placeholder value `"model"` to keep using the chat assistant's configured model.

##### messages: `list[object]`, *Required*

A list of historical chat messages used to generate the response. It must contain at least one message with the `user` role.

##### stream: `boolean`

Whether to receive the response as a stream. Set this to `false` explicitly if you prefer to receive the entire response in one go instead of as a stream.
#### Returns

- Success: A response message following the OpenAI API format.
- Failure: `Exception`

#### Examples

```python
from openai import OpenAI
import json

model = "glm-4-flash@ZHIPU-AI"
client = OpenAI(
    api_key="ragflow-api-key",
    base_url="http://ragflow_address/api/v1/openai/<chat_id>/chat",
)

stream = True
reference = True

request_kwargs = dict(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "I am an AI assistant named..."},
        {"role": "user", "content": "Can you tell me how to install neovim"},
    ],
    extra_body={
        "reference": reference,
        "reference_metadata": {
            "include": True,
            "fields": ["author", "year", "source"],
        },
    },
)

if stream:
    completion = client.chat.completions.create(stream=True, **request_kwargs)
    for chunk in completion:
        print(chunk)
else:
    resp = client.chat.completions.with_raw_response.create(
        stream=False, **request_kwargs
    )
    print("status:", resp.http_response.status_code)
    raw_text = resp.http_response.text
    print("raw:", raw_text)
    data = json.loads(raw_text)
    print("assistant:", data["choices"][0]["message"].get("content"))
    print("reference:", data["choices"][0]["message"].get("reference"))
```
When extra_body.reference is true, the streamed final chunk may include choices[0].delta.reference, and the non-stream response may include choices[0].message.reference.
When extra_body.reference_metadata.include is true, each reference chunk may include a document_metadata object in both streaming and non-streaming responses.
## DATASET MANAGEMENT

### Create dataset

```python
RAGFlow.create_dataset(
    name: str,
    avatar: Optional[str] = None,
    description: Optional[str] = None,
    embedding_model: Optional[str] = "BAAI/bge-large-zh-v1.5@BAAI",
    permission: str = "me",
    chunk_method: str = "naive",
    parser_config: DataSet.ParserConfig = None
) -> DataSet
```
Creates a dataset.
#### Parameters

##### name: `string`, *Required*

The unique name of the dataset to create.

##### avatar: `string`

Base64 encoding of the avatar. Defaults to `None`.

##### description: `string`

A brief description of the dataset to create. Defaults to `None`.

##### permission: `string`

Specifies who can access the dataset to create. Available options:

- `"me"`: (Default) Only you can manage the dataset.
- `"team"`: All team members can manage the dataset.

##### chunk_method: `string`

The chunking method of the dataset to create. Available options:

- `"naive"`: General (default)
- `"manual"`: Manual
- `"qa"`: Q&A
- `"table"`: Table
- `"paper"`: Paper
- `"book"`: Book
- `"laws"`: Laws
- `"presentation"`: Presentation
- `"picture"`: Picture
- `"one"`: One
- `"email"`: Email

##### parser_config: `DataSet.ParserConfig`

The parser configuration of the dataset. A `ParserConfig` object's attributes vary based on the selected `chunk_method`:

- `chunk_method`=`"naive"`: `{"chunk_token_num":512,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False},"parent_child":{"use_parent_child":False,"children_delimiter":"\\n"}}`
- `chunk_method`=`"qa"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"manual"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"table"`: `None`
- `chunk_method`=`"paper"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"book"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"laws"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"picture"`: `None`
- `chunk_method`=`"presentation"`: `{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"one"`: `None`
- `chunk_method`=`"knowledge-graph"`: `{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
- `chunk_method`=`"email"`: `None`

#### Returns

- Success: A `DataSet` object.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="kb_1")
```
### Delete datasets

```python
RAGFlow.delete_datasets(ids: list[str] | None = None, delete_all: bool = False)
```
Deletes datasets by ID.
#### Parameters

##### ids: `list[str]` or `None`

The IDs of the datasets to delete. Defaults to `None`. If `None` or an empty list, no datasets are deleted.

##### delete_all: `bool`

Whether to delete all datasets owned by the current user when `ids` is omitted, `None`, or an empty list. Defaults to `False`.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
rag_object.delete_datasets(ids=["d94a8dc02c9711f0930f7fbc369eab6d", "e94a8dc02c9711f0930f7fbc369eab6e"])
rag_object.delete_datasets(delete_all=True)
```
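The interplay between `ids` and `delete_all` can be sketched as a small decision helper mirroring the documented semantics. `resolve_delete_scope` is a hypothetical illustration, not part of the SDK:

```python
def resolve_delete_scope(ids=None, delete_all=False):
    """Mirror the documented semantics of delete_datasets(ids, delete_all).

    Returns "listed", "all", or "none" (hypothetical helper, illustration only).
    """
    if ids:                # a non-empty list deletes exactly those datasets
        return "listed"
    if delete_all:         # ids omitted/None/empty + delete_all=True deletes everything
        return "all"
    return "none"          # ids None or empty, delete_all False: nothing is deleted

print(resolve_delete_scope(ids=["d94a8dc02c9711f0930f7fbc369eab6d"]))  # listed
print(resolve_delete_scope(delete_all=True))                           # all
print(resolve_delete_scope())                                          # none
```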
### List datasets

```python
RAGFlow.list_datasets(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None,
    include_parsing_status: bool = False
) -> list[DataSet]
```
Lists datasets.
#### Parameters

##### page: `int`

Specifies the page on which the datasets will be displayed. Defaults to `1`.

##### page_size: `int`

The number of datasets on each page. Defaults to `30`.

##### orderby: `string`

The field by which datasets should be sorted. Available options:

- `"create_time"` (default)
- `"update_time"`

##### desc: `bool`

Indicates whether the retrieved datasets should be sorted in descending order. Defaults to `True`.

##### id: `string`

The ID of the dataset to retrieve. Defaults to `None`.

##### name: `string`

The name of the dataset to retrieve. Defaults to `None`.

##### include_parsing_status: `bool`

Whether to include document parsing status counts in each returned `DataSet` object. Defaults to `False`. When set to `True`, each `DataSet` object includes the following additional attributes:

- `unstart_count`: `int` Number of documents not yet started parsing.
- `running_count`: `int` Number of documents currently being parsed.
- `cancel_count`: `int` Number of documents whose parsing was cancelled.
- `done_count`: `int` Number of documents that have been successfully parsed.
- `fail_count`: `int` Number of documents whose parsing failed.

#### Returns

- Success: A list of `DataSet` objects.
- Failure: `Exception`

#### Examples

```python
for dataset in rag_object.list_datasets():
    print(dataset)

dataset = rag_object.list_datasets(id="id_1")
print(dataset[0])

for dataset in rag_object.list_datasets(include_parsing_status=True):
    print(dataset.done_count, dataset.fail_count, dataset.running_count)
```
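Because the five counters returned with `include_parsing_status=True` partition a dataset's documents, aggregate progress can be derived from them. `parsing_progress` below is a hypothetical helper, not an SDK method:

```python
def parsing_progress(unstart: int, running: int, cancel: int, done: int, fail: int) -> float:
    """Fraction of a dataset's documents that have parsed successfully."""
    total = unstart + running + cancel + done + fail
    # Empty dataset: report 0.0 rather than dividing by zero.
    return done / total if total else 0.0

# e.g. 6 documents parsed out of 10 overall
print(parsing_progress(unstart=2, running=1, cancel=0, done=6, fail=1))  # 0.6
```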
### Update dataset

```python
DataSet.update(update_message: dict)
```
Updates configurations for the current dataset.
#### Parameters

##### update_message: `dict[str, str|int]`, *Required*

A dictionary representing the attributes to update, with the following keys:

- `"name"`: `string` The revised name of the dataset.
- `"avatar"`: `string` Base64 encoding of the avatar.
- `"embedding_model"`: `string` The updated embedding model name, in `model_name@model_factory` format. Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
- `"permission"`: `string` Specifies who can access the dataset:
  - `"me"`: (Default) Only you can manage the dataset.
  - `"team"`: All team members can manage the dataset.
- `"pagerank"`: `int` The PageRank value, ranging from `0` (default) to `100`.
- `"chunk_method"`: `enum<string>` The chunking method:
  - `"naive"`: General (default)
  - `"book"`: Book
  - `"email"`: Email
  - `"laws"`: Laws
  - `"manual"`: Manual
  - `"one"`: One
  - `"paper"`: Paper
  - `"picture"`: Picture
  - `"presentation"`: Presentation
  - `"qa"`: Q&A
  - `"table"`: Table
  - `"tag"`: Tag

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_name")
dataset = dataset[0]
dataset.update({"embedding_model": "BAAI/bge-zh-v1.5", "chunk_method": "manual"})
```
## FILE MANAGEMENT WITHIN DATASET

### Upload documents

```python
DataSet.upload_documents(document_list: list[dict])
```
Uploads documents to the current dataset.
#### Parameters

##### document_list: `list[dict]`, *Required*

A list of dictionaries representing the documents to upload, each containing the following keys:

- `"display_name"`: (Optional) The file name to display in the dataset.
- `"blob"`: (Optional) The binary content of the file to upload.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
dataset = rag_object.create_dataset(name="kb_name")
dataset.upload_documents([
    {"display_name": "1.txt", "blob": "<BINARY_CONTENT_OF_THE_DOC>"},
    {"display_name": "2.pdf", "blob": "<BINARY_CONTENT_OF_THE_DOC>"},
])
```
### Update document

```python
Document.update(update_message: dict)
```
Updates configurations for the current document.
#### Parameters

##### update_message: `dict[str, str|dict[]]`, *Required*

A dictionary representing the attributes to update, with the following keys:

- `"display_name"`: `string` The name of the document to update.
- `"meta_fields"`: `dict[str, Any]` The meta fields of the document.
- `"chunk_method"`: `string` The parsing method to apply to the document:
  - `"naive"`: General
  - `"manual"`: Manual
  - `"qa"`: Q&A
  - `"table"`: Table
  - `"paper"`: Paper
  - `"book"`: Book
  - `"laws"`: Laws
  - `"presentation"`: Presentation
  - `"picture"`: Picture
  - `"one"`: One
  - `"email"`: Email
- `"parser_config"`: `dict[str, Any]` The parsing configuration for the document. Its attributes vary based on the selected `"chunk_method"`:
  - `"chunk_method"`=`"naive"`: `{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False},"parent_child":{"use_parent_child":False,"children_delimiter":"\\n"}}`
  - `"chunk_method"`=`"qa"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"manual"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"table"`: `None`
  - `"chunk_method"`=`"paper"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"book"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"laws"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"presentation"`: `{"raptor": {"use_raptor": False}}`
  - `"chunk_method"`=`"picture"`: `None`
  - `"chunk_method"`=`"one"`: `None`
  - `"chunk_method"`=`"knowledge-graph"`: `{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
  - `"chunk_method"`=`"email"`: `None`

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="id")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
doc.update({"parser_config": {"chunk_token_num": 256}, "chunk_method": "manual"})
```
### Download document

```python
Document.download() -> bytes
```
Downloads the current document.
#### Returns

- Success: The downloaded document in `bytes`.
- Failure: `Exception`

#### Examples

```python
import os
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="id")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
# "~" is not expanded by open(); expand it explicitly.
with open(os.path.expanduser("~/ragflow.txt"), "wb") as f:
    f.write(doc.download())
print(doc)
```
### List documents

```python
DataSet.list_documents(
    id: str = None,
    keywords: str = None,
    page: int = 1,
    page_size: int = 30,
    order_by: str = "create_time",
    desc: bool = True,
    create_time_from: int = 0,
    create_time_to: int = 0
) -> list[Document]
```
Lists documents in the current dataset.
#### Parameters

##### id: `string`

The ID of the document to retrieve. Defaults to `None`.

##### keywords: `string`

The keywords used to match document titles. Defaults to `None`.

##### page: `int`

Specifies the page on which the documents will be displayed. Defaults to `1`.

##### page_size: `int`

The maximum number of documents on each page. Defaults to `30`.

##### order_by: `string`

The field by which documents should be sorted. Available options:

- `"create_time"` (default)
- `"update_time"`

##### desc: `bool`

Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.

##### create_time_from: `int`

Unix timestamp for filtering documents created after this time. `0` means no filter. Defaults to `0`.

##### create_time_to: `int`

Unix timestamp for filtering documents created before this time. `0` means no filter. Defaults to `0`.

#### Returns

- Success: A list of `Document` objects.
- Failure: `Exception`

A `Document` object contains the following attributes:

- `id`: The document ID. Defaults to `""`.
- `name`: The document name. Defaults to `""`.
- `thumbnail`: The thumbnail image of the document. Defaults to `None`.
- `dataset_id`: The dataset ID associated with the document. Defaults to `None`.
- `chunk_method`: The chunking method name. Defaults to `"naive"`.
- `source_type`: The source type of the document. Defaults to `"local"`.
- `type`: Type or category of the document. Defaults to `""`. Reserved for future use.
- `created_by`: `string` The creator of the document. Defaults to `""`.
- `size`: `int` The document size in bytes. Defaults to `0`.
- `token_count`: `int` The number of tokens in the document. Defaults to `0`.
- `chunk_count`: `int` The number of chunks in the document. Defaults to `0`.
- `progress`: `float` The current processing progress as a percentage. Defaults to `0.0`.
- `progress_msg`: `string` A message indicating the current progress status. Defaults to `""`.
- `process_begin_at`: `datetime` The start time of document processing. Defaults to `None`.
- `process_duration`: `float` Duration of the processing in seconds. Defaults to `0.0`.
- `run`: `string` The document's processing status:
  - `"UNSTART"` (default)
  - `"RUNNING"`
  - `"CANCEL"`
  - `"DONE"`
  - `"FAIL"`
- `status`: `string` Reserved for future use.
- `parser_config`: `ParserConfig` Configuration object for the parser. Its attributes vary based on the selected `chunk_method`:
  - `chunk_method`=`"naive"`: `{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`
  - `chunk_method`=`"qa"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"manual"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"table"`: `None`
  - `chunk_method`=`"paper"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"book"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"laws"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"presentation"`: `{"raptor": {"use_raptor": False}}`
  - `chunk_method`=`"picture"`: `None`
  - `chunk_method`=`"one"`: `None`
  - `chunk_method`=`"email"`: `None`

#### Examples

```python
import os
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="kb_1")

filename1 = "~/ragflow.txt"
blob = open(os.path.expanduser(filename1), "rb").read()
dataset.upload_documents([{"display_name": filename1, "blob": blob}])
for doc in dataset.list_documents(keywords="rag", page=1, page_size=12):
    print(doc)
```
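The `create_time_from` / `create_time_to` filters take Unix timestamps. A sketch of deriving them from calendar dates; `to_unix` is a hypothetical helper, and the assumption here is that the server compares timestamps in UTC:

```python
from datetime import datetime, timezone

def to_unix(year: int, month: int, day: int) -> int:
    # Seconds since 1970-01-01 00:00 UTC for midnight of the given date.
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

create_time_from = to_unix(2024, 1, 1)
create_time_to = to_unix(2025, 1, 1)
# docs = dataset.list_documents(create_time_from=create_time_from,
#                               create_time_to=create_time_to)
print(create_time_from)  # 1704067200
```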
### Delete documents

```python
DataSet.delete_documents(ids: list[str] | None = None, delete_all: bool = False)
```
Deletes documents by ID.
#### Parameters

##### ids: `list[str]` or `None`

The IDs of the documents to delete. Defaults to `None`. If `None` or an empty list, no documents are deleted.

##### delete_all: `bool`

Whether to delete all documents in the current dataset when `ids` is omitted, `None`, or an empty list. Defaults to `False`.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_1")
dataset = dataset[0]
dataset.delete_documents(ids=["id_1", "id_2"])
dataset.delete_documents(delete_all=True)
```
### Parse documents

```python
DataSet.async_parse_documents(document_ids: list[str]) -> None
```
Parses documents in the current dataset.
#### Parameters

##### document_ids: `list[str]`, *Required*

The IDs of the documents to parse.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="dataset_name")
documents = [
    {"display_name": "test1.txt", "blob": open("./test_data/test1.txt", "rb").read()},
    {"display_name": "test2.txt", "blob": open("./test_data/test2.txt", "rb").read()},
    {"display_name": "test3.txt", "blob": open("./test_data/test3.txt", "rb").read()},
]
dataset.upload_documents(documents)
documents = dataset.list_documents(keywords="test")
ids = [document.id for document in documents]
dataset.async_parse_documents(ids)
print("Async bulk parsing initiated.")
```
### Parse documents and wait for completion

```python
DataSet.parse_documents(document_ids: list[str]) -> list[tuple[str, str, int, int]]
```

Parses documents in the current dataset and waits for all parsing tasks to complete.

This method encapsulates `async_parse_documents()`. It awaits the completion of all parsing tasks before returning detailed results, including the parsing status and statistics for each document. If a keyboard interruption occurs (e.g., Ctrl+C), all pending parsing tasks are cancelled gracefully.
#### Parameters

##### document_ids: `list[str]`, *Required*

The IDs of the documents to parse.

#### Returns

A list of tuples with detailed parsing results:

```python
[
    (document_id: str, status: str, chunk_count: int, token_count: int),
    ...
]
```

- `status`: The final parsing state (e.g., success, failed, cancelled).
- `chunk_count`: The number of content chunks created from the document.
- `token_count`: The total number of tokens processed.

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="dataset_name")
documents = dataset.list_documents(keywords="test")
ids = [doc.id for doc in documents]

try:
    finished = dataset.parse_documents(ids)
    for doc_id, status, chunk_count, token_count in finished:
        print(f"Document {doc_id} parsing finished with status: {status}, chunks: {chunk_count}, tokens: {token_count}")
except KeyboardInterrupt:
    print("\nParsing interrupted by user. All pending tasks have been cancelled.")
except Exception as e:
    print(f"Parsing failed: {e}")
```
### Stop parsing documents

```python
DataSet.async_cancel_parse_documents(document_ids: list[str]) -> None
```
Stops parsing specified documents.
#### Parameters

##### document_ids: `list[str]`, *Required*

The IDs of the documents for which parsing should be stopped.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.create_dataset(name="dataset_name")
documents = [
    {"display_name": "test1.txt", "blob": open("./test_data/test1.txt", "rb").read()},
    {"display_name": "test2.txt", "blob": open("./test_data/test2.txt", "rb").read()},
    {"display_name": "test3.txt", "blob": open("./test_data/test3.txt", "rb").read()},
]
dataset.upload_documents(documents)
documents = dataset.list_documents(keywords="test")
ids = [document.id for document in documents]
dataset.async_parse_documents(ids)
print("Async bulk parsing initiated.")
dataset.async_cancel_parse_documents(ids)
print("Async bulk parsing cancelled.")
```
## CHUNK MANAGEMENT WITHIN DATASET

### Add chunk

```python
Document.add_chunk(
    content: str,
    important_keywords: list[str] = [],
    questions: list[str] = [],
    image_base64: str = None,
    *,
    tag_kwd: list[str] = []
) -> Chunk
```
Adds a chunk to the current document.
#### Parameters

##### content: `string`, *Required*

The text content of the chunk.

##### important_keywords: `list[str]`

The key terms or phrases to tag with the chunk.

##### questions: `list[str]`

Optional questions to use when embedding the chunk.

##### image_base64: `string`

A base64-encoded image to associate with the chunk. If the chunk already has an image, the new image is vertically concatenated below the existing one.

##### tag_kwd: `list[str]`

Tag keywords to associate with the chunk.

#### Returns

- Success: A `Chunk` object.
- Failure: `Exception`

A `Chunk` object contains the following attributes:

- `id`: `string` The chunk ID.
- `content`: `string` The text content of the chunk.
- `important_keywords`: `list[str]` A list of key terms or phrases tagged with the chunk.
- `tag_kwd`: `list[str]` A list of tag keywords associated with the chunk.
- `questions`: `list[str]` A list of questions associated with the chunk.
- `image_id`: `string` The image ID associated with the chunk (an empty string if there is no image).
- `create_time`: `string` The time when the chunk was created (added to the document).
- `create_timestamp`: `float` The timestamp representing the creation time of the chunk, expressed in seconds since January 1, 1970.
- `dataset_id`: `string` The ID of the associated dataset.
- `document_name`: `string` The name of the associated document.
- `document_id`: `string` The ID of the associated document.
- `available`: `bool` The chunk's availability status in the dataset. Value options:
  - `False`: Unavailable
  - `True`: Available (default)

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(id="123")
dataset = datasets[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
```

Adding a chunk with an image:

```python
import base64

with open("image.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

chunk = doc.add_chunk(content="description of image", image_base64=img_b64)
```
### List chunks

```python
Document.list_chunks(keywords: str = None, page: int = 1, page_size: int = 30, id: str = None) -> list[Chunk]
```
Lists chunks in the current document.
#### Parameters

##### keywords: `string`

The keywords used to match chunk content. Defaults to `None`.

##### page: `int`

Specifies the page on which the chunks will be displayed. Defaults to `1`.

##### page_size: `int`

The maximum number of chunks on each page. Defaults to `30`.

##### id: `string`

The ID of the chunk to retrieve. Defaults to `None`.

#### Returns

- Success: A list of `Chunk` objects.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
docs = dataset.list_documents(keywords="test", page=1, page_size=12)
for chunk in docs[0].list_chunks(keywords="rag", page=1, page_size=12):
    print(chunk)
```
### Delete chunks

```python
Document.delete_chunks(ids: list[str] | None = None, delete_all: bool = False)
```
Deletes chunks by ID.
#### Parameters

##### ids: `list[str]` or `None`

The IDs of the chunks to delete. Defaults to `None`. If `None` or an empty list, no chunks are deleted.

##### delete_all: `bool`

Whether to delete all chunks in the current document when `ids` is omitted, `None`, or an empty list. Defaults to `False`.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
doc.delete_chunks(["id_1", "id_2"])
doc.delete_chunks(delete_all=True)
```
### Update chunk

```python
Chunk.update(update_message: dict)
```
Updates content or configurations for the current chunk.
#### Parameters

##### update_message: `dict[str, str|list[str]|bool]`, *Required*

A dictionary representing the attributes to update, with the following keys:

- `"content"`: `string` The text content of the chunk.
- `"important_keywords"`: `list[str]` A list of key terms or phrases to tag with the chunk.
- `"questions"`: `list[str]` A list of questions associated with the chunk.
- `"tag_kwd"`: `list[str]` A list of tag keywords to associate with the chunk.
- `"positions"`: `list` Updated source positions for the chunk.
- `"available"`: `bool` The chunk's availability status in the dataset. Value options:
  - `False`: Unavailable
  - `True`: Available (default)
- `"image_base64"`: `string` Base64-encoded image content to associate with the chunk.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(id="123")
dataset = dataset[0]
doc = dataset.list_documents(id="wdfxb5t547d")
doc = doc[0]
chunk = doc.add_chunk(content="xxxxxxx")
chunk.update({"content": "sdfx..."})
```
### Retrieve chunks

```python
RAGFlow.retrieve(
    question: str = "",
    dataset_ids: list[str] = None,
    document_ids: list[str] = None,
    page: int = 1,
    page_size: int = 30,
    similarity_threshold: float = 0.2,
    vector_similarity_weight: float = 0.3,
    top_k: int = 1024,
    rerank_id: str = None,
    keyword: bool = False,
    cross_languages: list[str] = None,
    metadata_condition: dict = None
) -> list[Chunk]
```
Retrieves chunks from specified datasets.
#### Parameters

##### question: `string`, *Required*

The user query or query keywords. Defaults to `""`.

##### dataset_ids: `list[str]`, *Required*

The IDs of the datasets to search. Defaults to `None`.

##### document_ids: `list[str]`

The IDs of the documents to search. Defaults to `None`. Ensure that all selected documents use the same embedding model; otherwise, an error will occur.

##### page: `int`

The page of chunks to retrieve. Defaults to `1`.

##### page_size: `int`

The maximum number of chunks to retrieve. Defaults to `30`.

##### similarity_threshold: `float`

The minimum similarity score. Defaults to `0.2`.

##### vector_similarity_weight: `float`

The weight of vector cosine similarity. Defaults to `0.3`. If x is the weight assigned to vector cosine similarity, then (1 - x) is the weight assigned to term similarity.

##### top_k: `int`

The number of chunks engaged in vector cosine computation. Defaults to `1024`.

##### rerank_id: `string`

The ID of the rerank model. Defaults to `None`.

##### keyword: `bool`

Indicates whether to enable keyword-based matching:

- `True`: Enable keyword-based matching.
- `False`: Disable keyword-based matching (default).

##### cross_languages: `list[string]`

The target languages into which the query keywords should be translated, enabling keyword retrieval across languages. Defaults to `None`.

##### metadata_condition: `dict`

A filter condition applied to the documents' `meta_fields`. Defaults to `None`.

#### Returns

- Success: A list of `Chunk` objects representing the retrieved document chunks.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="ragflow")
dataset = dataset[0]
path = "./test_data/ragflow_test.txt"
documents = [{"display_name": "test_retrieve_chunks.txt", "blob": open(path, "rb").read()}]
docs = dataset.upload_documents(documents)
doc = docs[0]
doc.add_chunk(content="This is a chunk addition test")
for c in rag_object.retrieve(dataset_ids=[dataset.id], document_ids=[doc.id]):
    print(c)
```
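The `vector_similarity_weight` parameter blends the two similarity signals into a single score. A sketch of the weighting; `hybrid_score` is illustrative only, and the exact server-side scoring may differ:

```python
def hybrid_score(vector_sim: float, term_sim: float, weight: float = 0.3) -> float:
    # weight applies to vector cosine similarity; (1 - weight) to term similarity.
    return weight * vector_sim + (1 - weight) * term_sim

print(round(hybrid_score(vector_sim=0.9, term_sim=0.5), 2))  # 0.62
```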
## CHAT ASSISTANT MANAGEMENT

### Create chat assistant

```python
RAGFlow.create_chat(
    name: str,
    icon: str = "",
    dataset_ids: list[str] | None = None,
    llm_id: str | None = None,
    llm_setting: dict | None = None,
    prompt_config: dict | None = None,
    **kwargs
) -> Chat
```
Creates a chat assistant.
#### Parameters

##### name: `string`, *Required*

The name of the chat assistant.

##### icon: `string`

Base64 encoding of the avatar. Defaults to `""`.

##### dataset_ids: `list[str]`

The IDs of the associated datasets. Defaults to `[]`. When omitted or empty, the SDK creates an empty chat assistant, and you can attach datasets later.

##### llm_id: `str` | `None`

The LLM model name/ID to use. If `None`, the user's default chat model is used. Defaults to `None`.

##### llm_setting: `dict` | `None`

Configuration for LLM generation parameters. Defaults to `None` (server-side defaults apply). Supported keys:

- `"temperature"`: `float` Controls the randomness of the model's output. Higher values increase creativity, while lower values make responses more deterministic. Defaults to `0.1`.
- `"top_p"`: `float` Sets the nucleus sampling threshold. The model considers only the tokens comprising the top `top_p` probability mass. Defaults to `0.3`.
- `"presence_penalty"`: `float` Penalizes tokens based on whether they have appeared in the text so far, increasing the likelihood of the model talking about new topics. Defaults to `0.4`.
- `"frequency_penalty"`: `float` Penalizes tokens based on their existing frequency in the text, decreasing the likelihood of repeating the same lines. Defaults to `0.7`.
- `"max_token"`: `int` The maximum number of tokens to generate in the response. Defaults to `512`.

##### prompt_config: `dict` | `None`

Instructions and behavioral settings for the LLM. Defaults to `None` (server-side defaults apply). Supported keys:

- `"system"`: `string` The core system prompt or instructions defining the assistant's persona.
- `"empty_response"`: `string` The specific message returned when no relevant information is retrieved. If left blank, the LLM will generate its own response. Defaults to `None`.
- `"prologue"`: `string` The initial greeting displayed to the user. Defaults to `"Hi! I’m your assistant. What can I do for you?"`.
- `"quote"`: `boolean` Determines whether the assistant should include citations or source references in its responses. Defaults to `True`.
- `"parameters"`: `list[dict]` A list of variables utilized within the system prompt. Each entry must include a `"key"` (string) and an `"optional"` (boolean) status. The `knowledge` key is reserved for retrieved context chunks. Default: `[{"key": "knowledge", "optional": true}]`.

#### Returns

- Success: A `Chat` object representing the chat assistant.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(name="kb_1")
dataset_ids = [dataset.id for dataset in datasets]
assistant = rag_object.create_chat("Miss R", dataset_ids=dataset_ids)
```
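The `llm_setting` and `prompt_config` dictionaries can be assembled explicitly before the call. The values below simply restate the documented defaults; the system prompt text is a made-up example:

```python
# Generation parameters, using the documented default values.
llm_setting = {
    "temperature": 0.1,
    "top_p": 0.3,
    "presence_penalty": 0.4,
    "frequency_penalty": 0.7,
    "max_token": 512,
}

# Prompt behavior; "system" here is an illustrative example, not a required value.
prompt_config = {
    "system": "Answer using only the retrieved context: {knowledge}",
    "prologue": "Hi! I'm your assistant. What can I do for you?",
    "quote": True,
    "parameters": [{"key": "knowledge", "optional": True}],
}

# Passing the configuration dicts (requires a running RAGFlow server):
# assistant = rag_object.create_chat(
#     "Miss R", dataset_ids=dataset_ids,
#     llm_setting=llm_setting, prompt_config=prompt_config,
# )
```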
### Update chat assistant

```python
Chat.update(update_message: dict)
```
Performs a partial update to the configuration settings for the current chat assistant.
Chat.update() utilizes the PATCH /api/v1/chats/{chat_id} endpoint. Only the specified keys are modified, while all other existing fields are preserved.
#### Parameters

##### update_message: `dict`, *Required*

A dictionary containing the attributes to be updated. Supported keys include:

- `"name"`: `string` The updated name of the chat assistant.
- `"icon"`: `string` A Base64-encoded string representing the assistant's avatar.
- `"dataset_ids"`: `list[string]` A list of unique identifiers for the datasets associated with the assistant.
- `"llm_id"`: `string` The unique identifier or name of the LLM to be used.
- `"llm_setting"`: `dict` Configuration for LLM generation parameters:
  - `"temperature"`: `float` Controls the randomness of the model's output.
  - `"top_p"`: `float` Sets the nucleus sampling threshold.
  - `"presence_penalty"`: `float` Penalizes tokens based on whether they have already appeared in the text.
  - `"frequency_penalty"`: `float` Penalizes tokens based on their existing frequency in the text.
  - `"max_token"`: `int` The maximum number of tokens to generate in the response.
- `"prompt_config"`: `dict` Instructions and behavioral settings for the LLM:
  - `"system"`: `string` The core system prompt or instructions defining the assistant's persona.
  - `"empty_response"`: `string` The message returned when no relevant information is retrieved. Leave blank to allow the LLM to improvise.
  - `"prologue"`: `string` The initial greeting displayed to the user.
  - `"quote"`: `boolean` Determines whether the assistant should include citations or source references.
  - `"parameters"`: `list[dict]` Variables used within the system prompt (e.g., the reserved `knowledge` key).
- `"similarity_threshold"`: `float` The minimum similarity score required for retrieved context chunks. Defaults to `0.2`.
- `"vector_similarity_weight"`: `float` The weight assigned to vector cosine similarity within the hybrid search score. Defaults to `0.3`.
- `"top_n"`: `int` The number of top-ranked chunks provided to the LLM as context. Defaults to `6`.
- `"top_k"`: `int` The size of the initial candidate pool retrieved for reranking. Defaults to `1024`.
- `"rerank_id"`: `string` The unique identifier for the reranking model. If left empty, standard vector cosine similarity is used for ranking.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
datasets = rag_object.list_datasets(name="kb_1")
dataset_id = datasets[0].id
assistant = rag_object.create_chat("Miss R", dataset_ids=[dataset_id])
assistant.update({"name": "Stefan", "llm_setting": {"temperature": 0.8}, "top_n": 8})
```
### Delete chat assistants

```python
RAGFlow.delete_chats(ids: list[str] | None = None, delete_all: bool = False)
```
Deletes chat assistants by ID.
#### Parameters

##### ids: `list[str]` or `None`

The IDs of the chat assistants to delete. Defaults to `None`. If `None` or an empty list, no chat assistants are deleted.

##### delete_all: `bool`

Whether to delete all chat assistants owned by the current user when `ids` is omitted, `None`, or an empty list. Defaults to `False`.

#### Returns

- Success: No value is returned.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.delete_chats(ids=["id_1", "id_2"])
rag_object.delete_chats(delete_all=True)
```
### List chat assistants

```python
RAGFlow.list_chats(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str | None = None,
    name: str | None = None,
    keywords: str | None = None,
    owner_ids: str | list[str] | None = None,
    parser_id: str | None = None
) -> list[Chat]
```
Lists chat assistants.
#### Parameters

##### page: `int`

Specifies the page on which the chat assistants will be displayed. Defaults to `1`.

##### page_size: `int`

The number of chat assistants on each page. Defaults to `30`.

##### orderby: `string`

The attribute by which the results are sorted. Available options:

- `"create_time"` (default)
- `"update_time"`

##### desc: `bool`

Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `True`.

##### id: `string` | `None`

Exact match on chat assistant ID. Defaults to `None`.

##### name: `string` | `None`

Filters results by the exact name of the chat assistant. Defaults to `None`.

##### keywords: `string` | `None`

Performs a case-insensitive fuzzy search against chat assistant names. Defaults to `None`.

##### owner_ids: `string` | `list[string]` | `None`

Filters results by one or more owner tenant IDs. Defaults to `None`.

##### parser_id: `string` | `None`

Filters results by a specific parser type identifier. Defaults to `None`.

If `id` or `name` is specified, exact filtering takes precedence over the fuzzy matching provided by `keywords`.

#### Returns

- Success: A list of `Chat` objects.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for assistant in rag_object.list_chats():
    print(assistant)
```
## SESSION MANAGEMENT

### Create session with chat assistant

```python
Chat.create_session(name: str = "New session") -> Session
```
Creates a session with the current chat assistant.
#### Parameters

##### name: `string`

The name of the chat session to create.

#### Returns

- Success: A `Session` object containing the following attributes:
  - `id`: `string` The auto-generated unique identifier of the created session.
  - `name`: `string` The name of the created session.
  - `message`: `list[Message]` The opening message of the created session. Default: `[{"role": "assistant", "content": "Hi! I am your assistant, can I help you?"}]`
  - `chat_id`: `string` The ID of the associated chat assistant.
- Failure: `Exception`

#### Examples

```python
from ragflow_sdk import RAGFlow

rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()
```
Session.update(update_message: dict)
Updates the current session of the current chat assistant.
dict[str, Any], RequiredA dictionary representing the attributes to update, with only one key:
"name": string The revised name of the session.Exceptionfrom ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session("session_name")
session.update({"name": "updated_name"})
Chat.list_sessions(
page: int = 1,
page_size: int = 30,
orderby: str = "create_time",
desc: bool = True,
id: str | None = None,
name: str | None = None,
user_id: str | None = None
) -> list[Session]
Lists sessions associated with the current chat assistant.
intSpecifies the page on which the sessions will be displayed. Defaults to 1.
intThe number of sessions on each page. Defaults to 30.
stringThe field by which sessions should be sorted. Available options:
"create_time" (default)"update_time"boolIndicates whether the retrieved sessions should be sorted in descending order. Defaults to True.
stringThe ID of the chat session to retrieve. Defaults to None.
stringThe name of the chat session to retrieve. Defaults to None.
string | NoneThe optional user-defined ID to filter sessions by. Defaults to None.
Session objects associated with the current chat assistant.Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
for session in assistant.list_sessions():
print(session)
Chat.delete_sessions(ids: list[str] | None = None, delete_all: bool = False)
Deletes sessions of the current chat assistant by ID.
list[str] or NoneThe IDs of the sessions to delete. Defaults to None. If ids is None or an empty list, no sessions are deleted.
boolWhether to delete all sessions of the current chat assistant when ids is omitted, or set to None or an empty list. Defaults to False.
Exceptionfrom ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
assistant.delete_sessions(ids=["id_1","id_2"])
assistant.delete_sessions(delete_all=True)
Session.ask(question: str = "", stream: bool = False, **kwargs) -> Optional[Message | iter[Message]]
Asks a specified chat assistant a question to start an AI-powered conversation.
:::tip NOTE In streaming mode, not all responses include a reference, as this depends on the system's judgement. :::
string, RequiredThe question to start an AI-powered conversation. Defaults to "".
boolIndicates whether to output responses in a streaming way:
True: Enable streaming.False: Disable streaming (default).The parameters in prompt (system).
Message object containing the response to the question if stream is set to False.message objects (iter[Message]) if stream is set to TrueThe following shows the attributes of a Message object:
stringThe auto-generated message ID.
stringThe content of the message. Defaults to "Hi! I am your assistant, can I help you?".
list[Chunk]A list of Chunk objects representing references to the message, each containing the following attributes:
id stringcontent stringimg_id stringdocument_id stringdocument_name stringdocument_metadata dictextra_body.reference_metadata.include is true.position list[str]dataset_id stringsimilarity float0 to 1, with a higher value indicating greater similarity. It is the weighted sum of vector_similarity and term_similarity.vector_similarity float0 to 1, with a higher value indicating greater similarity between vector embeddings.term_similarity float0 to 1, with a higher value indicating greater similarity between keywords.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()
print("\n==================== Miss R =====================\n")
print("Hello. What can I do for you?")
while True:
question = input("\n==================== User =====================\n> ")
print("\n==================== Miss R =====================\n")
cont = ""
for ans in session.ask(question, stream=True):
print(ans.content[len(cont):], end='', flush=True)
cont = ans.content
Agent.create_session(**kwargs) -> Session
Creates a session with the current agent.
The parameters in begin component.
Also supports:
release (bool | str, optional): When set to True (or "true"), creates a session with the published agent app only.Session object containing the following attributes:
id: string The auto-generated unique identifier of the created session.message: list[Message] The messages of the created session assistant. Default: [{"role": "assistant", "content": "Hi! I am your assistant, can I help you?"}]agent_id: string The ID of the associated agent.Exceptionfrom ragflow_sdk import RAGFlow, Agent
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.get_agent(agent_id)
session = agent.create_session()
# Or create in release mode:
# session = agent.create_session(release=True)
Session.ask(question: str = "", stream: bool = False, **kwargs) -> Optional[Message | iter[Message]]
Asks a specified agent through the unified completion endpoint.
:::tip NOTE In streaming mode, not all responses include a reference, as this depends on the system's judgement. :::
stringThe user message sent to the agent. If the Begin component takes parameters, question can be an empty string.
boolIndicates whether to output responses in a streaming way:
True: Enable streaming.False: Disable streaming (default).dictAdditional request parameters forwarded to the completion API. Common options:
inputs: Variables defined in the Begin component.session_id: Continue an existing session instead of creating a new one.release: Use the latest published version of the agent.return_trace: Include execution trace information in the response.Message object containing the response to the question if stream is set to Falsemessage objects (iter[Message]) if stream is set to TrueThe following shows the attributes of a Message object:
stringThe auto-generated message ID.
stringThe content of the message. Defaults to "Hi! I am your assistant, can I help you?".
list[Chunk]A list of Chunk objects representing references to the message, each containing the following attributes:
id stringcontent stringimage_id stringdocument_id stringdocument_name stringdocument_metadata dictextra_body.reference_metadata.include is true.position list[str]dataset_id stringsimilarity float0 to 1, with a higher value indicating greater similarity. It is the weighted sum of vector_similarity and term_similarity.vector_similarity float0 to 1, with a higher value indicating greater similarity between vector embeddings.term_similarity float0 to 1, with a higher value indicating greater similarity between keywords.from ragflow_sdk import RAGFlow, Agent
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.get_agent(agent_id)
session = agent.create_session()
print("\n===== Miss R ====\n")
print("Hello. What can I do for you?")
while True:
question = input("\n===== User ====\n> ")
print("\n==== Miss R ====\n")
cont = ""
for ans in session.ask(question, stream=True):
print(ans.content[len(cont):], end='', flush=True)
cont = ans.content
Use Begin inputs and request trace output:
from ragflow_sdk import RAGFlow, Agent
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent = rag_object.get_agent("AGENT_ID")
session = agent.create_session()
message = session.ask(
"",
stream=False,
inputs={
"line_var": {
"type": "line",
"value": "I am line_var",
}
},
return_trace=True,
)
print(message.content)
print(message.reference)
Agent.list_sessions(
page: int = 1,
page_size: int = 30,
orderby: str = "update_time",
desc: bool = True,
id: str | None = None
) -> list[Session]
Lists sessions associated with the current agent.
intSpecifies the page on which the sessions will be displayed. Defaults to 1.
intThe number of sessions on each page. Defaults to 30.
stringThe field by which sessions should be sorted. Available options:
"create_time""update_time"(default)boolIndicates whether the retrieved sessions should be sorted in descending order. Defaults to True.
stringThe ID of the agent session to retrieve. Defaults to None.
Session objects associated with the current agent.Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.get_agent(agent_id)
sessions = agent.list_sessions()
for session in sessions:
print(session)
Agent.delete_sessions(ids: list[str] | None = None, delete_all: bool = False)
Deletes sessions of an agent by ID.
list[str] or NoneThe IDs of the sessions to delete. Defaults to None. If ids is None or an empty list, no sessions are deleted.
boolWhether to delete all sessions of the current agent when ids is omitted, or set to None or an empty list. Defaults to False.
Exceptionfrom ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent_id = "AGENT_ID"
agent = rag_object.get_agent(agent_id)
agent.delete_sessions(ids=["id_1","id_2"])
agent.delete_sessions(delete_all=True)
RAGFlow.list_agents(
page: int = 1,
page_size: int = 30,
orderby: str = "update_time",
desc: bool = True
) -> list[Agent]
Lists agents. This is a collection API and always returns a list.
intSpecifies the page on which the agents will be displayed. Defaults to 1.
intThe number of agents on each page. Defaults to 30.
stringThe attribute by which the results are sorted. Available options:
"create_time""update_time" (default)boolIndicates whether the retrieved agents should be sorted in descending order. Defaults to True.
Agent objects.Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for agent in rag_object.list_agents():
print(agent)
RAGFlow.get_agent(agent_id: str) -> Agent
Gets a single agent by ID and returns the detailed agent payload.
stringThe ID of the agent to retrieve.
Agent object.Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
agent = rag_object.get_agent("AGENT_ID")
print(agent)
RAGFlow.create_agent(
title: str,
dsl: dict,
description: str | None = None
) -> None
Creates an agent.
stringSpecifies the title of the agent.
dictSpecifies the canvas DSL of the agent.
stringThe description of the agent. Defaults to None.
Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.create_agent(
title="Test Agent",
description="A test agent",
dsl={
# ... canvas DSL here ...
}
)
RAGFlow.update_agent(
agent_id: str,
title: str | None = None,
description: str | None = None,
dsl: dict | None = None
) -> None
Updates an agent.
stringSpecifies the ID of the agent to be updated.
stringSpecifies the new title of the agent. None if you do not want to update this.
dictSpecifies the new canvas DSL of the agent. None if you do not want to update this.
stringThe new description of the agent. None if you do not want to update this.
Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.update_agent(
agent_id="58af890a2a8911f0a71a11b922ed82d6",
title="Test Agent",
description="A test agent",
dsl={
# ... canvas DSL here ...
}
)
RAGFlow.delete_agent(
agent_id: str
) -> None
Deletes an agent.
stringSpecifies the ID of the agent to be deleted.
Exception.from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.delete_agent("58af890a2a8911f0a71a11b922ed82d6")
RAGFlow.create_memory(
name: str,
memory_type: list[str],
embd_id: str,
llm_id: str
) -> Memory
Creates a new memory.
string, RequiredThe unique name of the memory to create. It must adhere to the following requirements:
list[str], RequiredSpecifies the types of memory to extract. Available options:
raw: The raw dialogue content between the user and the agent. Required by default.semantic: General knowledge and facts about the user and world.episodic: Time-stamped records of specific events and experiences.procedural: Learned skills, habits, and automated procedures.string, RequiredThe name of the embedding model to use. For example: "BAAI/bge-large-zh-v1.5@BAAI"
model_name@model_factory formatstring, RequiredThe name of the chat model to use. For example: "glm-4-flash@ZHIPU-AI"
model_name@model_factory formatSuccess: A Memory object.
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory = rag_object.create_memory("name", ["raw"], "BAAI/bge-large-zh-v1.5@SILICONFLOW", "glm-4-flash@ZHIPU-AI")
Memory.update(
update_dict: dict
) -> Memory
Updates configurations for a specified memory.
dict, RequiredConfigurations to update. Available configurations:
name: string, Optional
The revised name of the memory.
avatar: string, Optional
The updated base64 encoding of the avatar.
permission: enum<string>, Optional
The updated memory permission. Available options:
"me": (Default) Only you can manage the memory."team": All team members can manage the memory.llm_id: string, Optional
The name of the chat model to use. For example: "glm-4-flash@ZHIPU-AI"
model_name@model_factory formatdescription: string, Optional
The description of the memory. Defaults to None.
memory_size: int, Optional
Defaults to 5*1024*1024 Bytes. Accounts for each message's content + its embedding vector (≈ Content + Dimensions × 8 Bytes). Example: A 1 KB message with 1024-dim embedding uses ~9 KB. The 5 MB default limit holds ~500 such messages.
forgetting_policy: enum<string>, Optional
Evicts existing data based on the chosen policy when the size limit is reached, freeing up space for new messages. Available options:
"FIFO": (Default) Prioritize messages with the earliest forget_at time for removal. When the pool of messages that have forget_at set is insufficient, it falls back to selecting messages in ascending order of their valid_at (oldest first).temperature: (Body parameter), float, Optional
Adjusts output randomness. Lower = more deterministic; higher = more creative.
system_prompt: (Body parameter), string, Optional
Defines the system-level instructions and role for the AI assistant. It is automatically assembled based on the selected memory_type by PromptAssembler in memory/utils/prompt_util.py. This prompt sets the foundational behavior and context for the entire conversation.
OUTPUT REQUIREMENTS and OUTPUT FORMAT parts unchanged.user_prompt: (Body parameter), string, Optional
Represents the user's custom setting, which is the specific question or instruction the AI needs to respond to directly. Defaults to None.
Success: A Memory object.
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.update({"name": "New_name"})
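The memory_size accounting described above reduces to simple arithmetic. The sketch below illustrates the documented estimate (content bytes plus 8 bytes per embedding dimension); it is an illustration only, not SDK code:

```python
def approx_message_bytes(content_bytes: int, embedding_dims: int) -> int:
    # Documented estimate: content size + dimensions * 8 bytes.
    return content_bytes + embedding_dims * 8

# A 1 KB message with a 1024-dimension embedding uses ~9 KB.
per_message = approx_message_bytes(1024, 1024)  # 1024 + 8192 = 9216 bytes

# The 5 MB default pool therefore holds on the order of 500 such messages.
default_pool = 5 * 1024 * 1024
capacity = default_pool // per_message
```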
RAGFlow.list_memory(
page: int = 1,
page_size: int = 50,
tenant_id: str | list[str] | None = None,
memory_type: str | list[str] | None = None,
storage_type: str | None = None,
keywords: str | None = None
) -> dict
Lists memories.
int, OptionalSpecifies the page on which the memories will be displayed. Defaults to 1.
int, OptionalThe number of memories on each page. Defaults to 50.
string or list[str], OptionalThe owner's ID. Supports filtering by multiple IDs.
string or list[str], OptionalThe type of memory (as set during creation). A memory matches if its type is included in the provided value(s). Available options:
rawsemanticepisodicproceduralstring, OptionalThe storage format of messages. Available options:
table: (Default)string, OptionalThe name of the memory to retrieve. Supports fuzzy search.
Success: A dict of Memory object list and total count.
{"memory_list": list[Memory], "total_count": int}
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.list_memory()
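Note that list_memory returns a dict rather than a bare list. A sketch of consuming the documented response shape, with placeholder values standing in for a real server response:

```python
import math

# Placeholder response in the documented shape:
# {"memory_list": list[Memory], "total_count": int}
response = {"memory_list": ["memory_a", "memory_b"], "total_count": 120}

page_size = 50  # the documented default
# total_count covers all matches, so the page count follows from it.
total_pages = math.ceil(response["total_count"] / page_size)

for memory in response["memory_list"]:
    # Each entry is a Memory object in a real response.
    print(memory)
```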
Memory.get_config()
Get the configuration of a specified memory.
None
Success: A Memory object.
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.get_config()
RAGFlow.delete_memory(
memory_id: str
) -> None
Deletes a specified memory.
string, RequiredThe ID of the memory.
Success: Nothing
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.delete_memory("your memory_id")
Memory.list_memory_messages(
agent_id: str | list[str] | None = None,
keywords: str | None = None,
page: int = 1,
page_size: int = 50
) -> dict
Lists the messages of a specified memory.
string or list[str], OptionalFilters messages by the ID of their source agent. Supports multiple values.
string, OptionalFilters messages by their session ID. This field supports fuzzy search.
int, OptionalSpecifies the page on which the messages will be displayed. Defaults to 1.
int, OptionalThe number of messages on each page. Defaults to 50.
Success: a dict of messages and meta info.
{"messages": {"message_list": [{message dict}], "total_count": int}, "storage_type": "table"}
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.list_memory_messages()
RAGFlow.add_message(
memory_id: list[str],
agent_id: str,
session_id: str,
user_input: str,
agent_response: str,
user_id: str = ""
) -> str
Adds a message to the specified memories.
list[str], RequiredThe IDs of the memories in which to save the message.
string, RequiredThe ID of the message's source agent.
string, RequiredThe ID of the message's session.
string, RequiredThe text input provided by the user.
string, RequiredThe text response generated by the AI agent.
string, OptionalThe user participating in the conversation with the agent. Defaults to "".
Success: A text "All add to task."
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
message_payload = {
    "memory_id": ["your memory_id"],
    "agent_id": "your agent_id",
    "session_id": "your session_id",
    "user_id": "",
    "user_input": "Your question here",
    "agent_response": """
Your agent response here
""",
}
rag_object.add_message(**message_payload)
Memory.forget_message(message_id: int) -> bool
Forgets a specified message. Once forgotten, the message will not be retrieved by agents and will be prioritized for cleanup by the forgetting policy.
int, RequiredThe ID of the message to forget.
Success: True
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.forget_message(message_id)
Memory.update_message_status(message_id: int, status: bool) -> bool
Updates the status of a message, enabling or disabling it. Once a message is disabled, it will not be retrieved by agents.
int, RequiredThe ID of the message to enable or disable.
bool, RequiredThe status of the message. True = enabled, False = disabled.
Success: True
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.update_message_status(message_id, True)
RAGFlow.search_message(
query: str,
memory_id: list[str],
agent_id: str | None = None,
session_id: str | None = None,
user_id: str | None = None,
similarity_threshold: float = 0.2,
keywords_similarity_weight: float = 0.7,
top_n: int = 10
) -> list[dict]
Searches and retrieves messages from memory based on the provided query and other configuration parameters.
string, RequiredThe search term or natural language question used to find relevant messages.
list[str], RequiredThe IDs of the memories to search. Supports multiple values.
string, OptionalThe ID of the message's source agent. Defaults to None.
string, OptionalThe ID of the message's session. Defaults to None.
string, OptionalThe user participating in the conversation with the agent. Defaults to None.
float, OptionalThe minimum cosine similarity score required for a message to be considered a match. A higher value yields more precise but fewer results. Defaults to 0.2.
float, OptionalControls the influence of keyword matching versus semantic (embedding-based) matching in the final relevance score. A value of 0.5 gives them equal weight. Defaults to 0.7.
int, OptionalThe maximum number of most relevant messages to return. This limits the result set size for efficiency. Defaults to 10.
Success: A list of message dicts.
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.search_message("your question", ["your memory_id"])
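The interaction between similarity_threshold and keywords_similarity_weight can be illustrated with plain arithmetic. This is an interpretation of the documented weighting for illustration, not the SDK's internal scoring code:

```python
def blended_score(keyword_sim: float, vector_sim: float,
                  keyword_weight: float = 0.7) -> float:
    # keyword_weight controls keyword matching; the remainder goes to the
    # embedding-based (semantic) similarity.
    return keyword_weight * keyword_sim + (1.0 - keyword_weight) * vector_sim

# Strong keyword overlap, weaker semantic match:
score = blended_score(keyword_sim=0.9, vector_sim=0.5)  # 0.63 + 0.15 = 0.78

# Messages scoring below similarity_threshold (default 0.2) are filtered out.
is_match = score >= 0.2
```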
RAGFlow.get_recent_messages(
memory_id: list[str],
agent_id: str | None = None,
session_id: str | None = None,
limit: int = 10
) -> list[dict]
Retrieves the most recent messages from the specified memories. Use the limit parameter to control the number of messages returned.
list[str], RequiredThe IDs of the memories to search. Supports multiple values.
string, OptionalThe ID of the message's source agent. Defaults to None.
string, OptionalThe ID of the message's session. Defaults to None.
int, OptionalThe maximum number of messages to return. Defaults to 10.
Success: A list of message dicts.
Failure: Exception
from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
rag_object.get_recent_messages(["your memory_id"])
Memory.get_message_content(message_id: int)
Retrieves the full content and embedding vector of a specific message using its unique message ID.
int, RequiredThe ID of the message to retrieve.
Success: A message dict.
Failure: Exception
from ragflow_sdk import RAGFlow, Memory
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
memory_object = Memory(rag_object, {"id": "your memory_id"})
memory_object.get_message_content(message_id)