Most of RAGFlow's chat assistants and Agents are based on datasets. Each of RAGFlow's datasets serves as a knowledge source, parsing files uploaded from your local machine, as well as file references created in RAGFlow's File system, into the 'knowledge' used in future AI chats. This guide demonstrates some basic usages of the dataset feature, covering the following topics:
With multiple datasets, you can build more flexible, diversified question answering. To create your first dataset:
Each time a dataset is created, a folder with the same name is generated in the root/.knowledgebase directory.
The following screenshot shows the configuration page of a dataset. A proper configuration of your dataset is crucial for future AI chats. For example, choosing the wrong embedding model or chunking method would cause unexpected semantic loss or mismatched answers in chats.
This section covers the following topics:
RAGFlow offers multiple built-in chunking templates to facilitate chunking files of different layouts and ensure semantic integrity. From the Built-in chunking method dropdown under Parse type, you can choose the default template that suits the layouts and formats of your files. The following table shows the description and the compatible file formats of each supported chunking template:
| Template | Description | File format |
|---|---|---|
| General | Files are consecutively chunked based on a preset chunk token number. | MD, MDX, DOCX, XLSX, XLS (Excel 97-2003), PPT, PDF, TXT, JPEG, JPG, PNG, TIF, GIF, CSV, JSON, EML, HTML |
| Q&A | Files are split into question-answer pairs, with each pair forming a chunk. | XLSX, XLS (Excel 97-2003), CSV/TXT |
| Resume | Enterprise edition only. You can also try it out on cloud.ragflow.io. | DOCX, PDF, TXT |
| Manual | | PDF |
| Table | The table mode uses TSI technology for efficient data parsing. | XLSX, XLS (Excel 97-2003), CSV/TXT |
| Paper | | PDF |
| Book | | DOCX, PDF, TXT |
| Laws | | DOCX, PDF, TXT |
| Presentation | | PDF, PPTX |
| Picture | | JPEG, JPG, PNG, TIF, GIF |
| One | Each document is chunked in its entirety (as one chunk). | DOCX, XLSX, XLS (Excel 97-2003), PDF, TXT |
| Tag | The dataset functions as a tag set for other datasets. | XLSX, CSV/TXT |
You can also change a file's chunking method on the Files page.
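The fixed-size chunking behind the General template can be sketched in plain Python. This is an illustrative sketch only: the whitespace "tokenizer" and the default chunk size of 128 stand in for RAGFlow's actual tokenizer and its configurable chunk token number.

```python
def chunk_by_tokens(text, chunk_token_num=128):
    """Sketch of fixed-size chunking: split text into consecutive pieces
    of roughly chunk_token_num tokens. Whitespace splitting stands in
    for a real tokenizer here."""
    tokens = text.split()
    chunks = []
    for i in range(0, len(tokens), chunk_token_num):
        chunks.append(" ".join(tokens[i:i + chunk_token_num]))
    return chunks

# 300 tokens with a chunk size of 128 yields chunks of 128, 128, and 44 tokens.
chunks = chunk_by_tokens("word " * 300, chunk_token_num=128)
print(len(chunks))  # 3
```

A smaller chunk token number produces more, finer-grained chunks, which tends to sharpen retrieval at the cost of context per chunk.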
<details>
<summary>From v0.21.0 onward, RAGFlow supports ingestion pipelines for customized data ingestion and cleansing workflows.</summary>

To use a customized data pipeline:

1. On the Agent page, click + Create agent > Create from blank.
2. Select Ingestion pipeline and name your data pipeline in the popup, then click Save to show the data pipeline canvas.
3. After updating your data pipeline, click Save on the top right of the canvas.
4. Navigate to the Configuration page of your dataset and select Choose pipeline under Ingestion pipeline.

Your saved data pipeline will appear in the dropdown menu below.

</details>
An embedding model converts chunks into embeddings. It cannot be changed once the dataset has chunks; to switch to a different embedding model, you must first delete all existing chunks in the dataset. The reason is that all files in a dataset must be converted to embeddings by the same embedding model, so that their vectors are compared within the same embedding space.
:::danger IMPORTANT
Some embedding models are optimized for specific languages, so performance may be compromised if you use them to embed documents in other languages.
:::
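The "same embedding space" requirement is easy to see with cosine similarity, the standard way a query vector is compared against chunk vectors. The sketch below is illustrative; scores are only meaningful when both vectors come from the same embedding model, and vectors from different models often do not even share a dimension.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for a query and a chunk embedded by the SAME model:
query = [0.1, 0.9, 0.2]
chunk = [0.2, 0.8, 0.1]
score = cosine_similarity(query, chunk)

# Vectors from DIFFERENT models may not even have the same dimension,
# in which case the comparison is simply undefined:
other_model_vector = [0.3, 0.1, 0.5, 0.2]  # 4-dim vs 3-dim: not comparable
```

This is why mixing chunks embedded by two different models in one dataset would silently corrupt retrieval rankings.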
While uploading files directly to a dataset seems more convenient, we highly recommend uploading files to RAGFlow's File system and then linking them to the target datasets. This way, you can avoid permanently deleting files uploaded to the dataset.
File parsing is a crucial topic in dataset configuration. In RAGFlow, file parsing is twofold: chunking files based on their layout, and building embedding and full-text (keyword) indexes on those chunks. After selecting the chunking method and embedding model, you can start parsing a file:
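The full-text half of this twofold indexing can be illustrated with a toy inverted index that maps keywords to the chunks containing them; the embedding index is built analogously, storing one vector per chunk. This is a conceptual sketch, not RAGFlow's actual index structure.

```python
from collections import defaultdict

def build_inverted_index(chunks):
    """Map each lowercased word to the set of chunk ids containing it,
    so keyword queries can look up matching chunks directly."""
    index = defaultdict(set)
    for chunk_id, text in enumerate(chunks):
        for word in text.lower().split():
            index[word].add(chunk_id)
    return index

chunks = ["RAGFlow parses files into chunks",
          "Chunks are embedded and indexed"]
index = build_inverted_index(chunks)
print(sorted(index["chunks"]))  # [0, 1] -- both chunks contain "chunks"
```

At query time, full-text search intersects or unions these posting sets, while vector search scores the per-chunk embeddings; the results of both are later fused during retrieval.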
RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:
Click a file that has completed parsing to view its chunking results:
You are taken to the Chunk page:
Hover over each snapshot for a quick view of each chunk.
Double-click the chunked texts to add keywords, questions, tags, or make manual changes where necessary:
:::caution NOTE
You can add keywords to a file chunk to increase its ranking for queries containing those keywords. This increases the chunk's keyword weight and can improve its position in the search results.
:::
In Retrieval testing, ask a quick question in Test text to double-check if your configurations work:
As you can tell from the following, RAGFlow responds with truthful citations.
RAGFlow uses multiple recall of both full-text search and vector search in its chats. Prior to setting up an AI chat, consider adjusting the following parameters to ensure that the intended information always turns up in answers:
See Run retrieval test for details.
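The multiple-recall idea, blending a full-text (keyword) similarity score with a vector similarity score under a configurable weight and dropping chunks below a similarity threshold, can be sketched as follows. The 0.7/0.3 split and the 0.3 threshold are illustrative values, not fixed RAGFlow defaults.

```python
def hybrid_score(keyword_sim, vector_sim, keyword_weight=0.7):
    """Weighted blend of full-text and vector similarity scores."""
    return keyword_weight * keyword_sim + (1 - keyword_weight) * vector_sim

# Candidate chunks with (keyword_sim, vector_sim) pairs from the two recalls:
candidates = {"chunk_a": (0.9, 0.4), "chunk_b": (0.2, 0.95)}
threshold = 0.3  # chunks scoring below this are excluded from answers

ranked = sorted(
    ((hybrid_score(k, v), cid) for cid, (k, v) in candidates.items()
     if hybrid_score(k, v) >= threshold),
    reverse=True,
)
print(ranked[0][1])  # chunk_a: 0.7*0.9 + 0.3*0.4 = 0.75
```

Raising the keyword weight favors chunks with exact term overlap; lowering it favors semantically similar chunks, which is why tuning these parameters changes which chunks surface in answers.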
As of RAGFlow v0.25.1, the search feature is still in a rudimentary form, supporting only dataset search by name.
You are allowed to delete a dataset. Hover your mouse over the three-dot icon of the intended dataset card and the Delete option appears. Once you delete a dataset, the associated folder under the root/.knowledgebase directory is AUTOMATICALLY REMOVED. The consequences are: