docs/version3.x/module_usage/text_recognition.en.md
The text recognition module is the core part of the OCR (Optical Character Recognition) system, responsible for extracting text information from text regions in images. The performance of this module directly affects the accuracy and efficiency of the entire OCR system. The text recognition module usually receives the bounding boxes of text regions output by the text detection module as input, and then converts the text in the images into editable and searchable electronic text through complex image processing and deep learning algorithms. The accuracy of text recognition results is crucial for subsequent applications such as information extraction and data mining.
<table> <tr> <th>Model</th><th>Model Download Links</th> <th>Recognition Avg Accuracy(%)</th> <th>GPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>CPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>Model Storage Size (MB)</th> <th>Introduction</th> </tr> <tr> <td>PP-OCRv5_server_rec</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams">Pretrained Model</a></td> <td>86.38</td> <td>8.46 / 2.36</td> <td>31.21 / 31.21</td> <td>81</td> <td rowspan="2">PP-OCRv5_rec is a new generation text recognition model. It is designed to efficiently and accurately support the recognition of Simplified Chinese, Traditional Chinese, English, Japanese, as well as complex text scenarios such as handwriting, vertical text, pinyin, and rare characters with a single model.
While maintaining recognition performance, it also balances inference speed and model robustness, providing efficient and accurate technical support for document understanding in various scenarios.</td> </tr> <tr> <td>PP-OCRv5_mobile_rec</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_mobile_rec_pretrained.pdparams">Pretrained Model</a></td> <td>81.29</td> <td>5.43 / 1.46</td> <td>21.20 / 5.32</td> <td>16</td> </tr> <tr> <td>PP-OCRv4_server_rec_doc</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_doc_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_doc_pretrained.pdparams">Pretrained Model</a></td> <td>86.58</td> <td>8.69 / 2.78</td> <td>37.93 / 37.93</td> <td>182</td> <td>PP-OCRv4_server_rec_doc is trained on a mixed dataset of more Chinese document data and PP-OCR training data, building upon PP-OCRv4_server_rec. It enhances the recognition capabilities for some Traditional Chinese characters, Japanese characters, and special symbols, supporting over 15,000 characters.
In addition to improving document-related text recognition, it also enhances general text recognition capabilities.</td> </tr> <tr> <td>PP-OCRv4_mobile_rec</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_rec_pretrained.pdparams">Pretrained Model</a></td> <td>78.74</td> <td>5.26 / 1.12</td> <td>17.48 / 3.61</td> <td>10.5</td> <td>A lightweight recognition model of PP-OCRv4 with high inference efficiency, suitable for deployment on various hardware devices, including edge devices.</td> </tr> <tr> <td>PP-OCRv4_server_rec</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_pretrained.pdparams">Pretrained Model</a></td> <td>85.19</td> <td>8.75 / 2.49</td> <td>36.93 / 36.93</td> <td>173</td> <td>The server-side model of PP-OCRv4, offering high inference accuracy and deployable on various servers.</td> </tr> <tr> <td>en_PP-OCRv4_mobile_rec</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/en_PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/en_PP-OCRv4_mobile_rec_pretrained.pdparams">Pretrained Model</a></td> <td>70.39</td> <td>4.81 / 1.23</td> <td>17.20 / 4.18</td> <td>7.5</td> <td>An ultra-lightweight English recognition model trained based on the PP-OCRv4 recognition model, supporting English and numeric character recognition.</td> </tr> </table>The inference time only includes the model inference time and does not include the time for pre- or post-processing.
The "Normal Mode" values correspond to the local <code>paddle_static</code> inference engine.
<details><summary> 👉Model List Details</summary>❗ The above lists the <b>6 core models</b> mainly supported by the text recognition module. The module supports a total of <b>20 full models</b>, including multiple multilingual text recognition models. The complete model list is as follows:
<strong>Test Environment Description:</strong>
<ul> <li><b>Performance Test Environment</b> <ul> <li><strong>Test Dataset:</strong> <ul> <li> Chinese Recognition Models: A self-built Chinese dataset by PaddleOCR, covering street views, online images, documents, handwriting, with 11,000 images for text recognition. </li> <li> ch_SVTRv2_rec: <a href="https://aistudio.baidu.com/competition/detail/1131/0/introduction">PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task</a> Leaderboard A evaluation set. </li> <li> ch_RepSVTR_rec: <a href="https://aistudio.baidu.com/competition/detail/1131/0/introduction">PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task</a> Leaderboard B evaluation set. </li> <li> English Recognition Models: A self-built English dataset by PaddleOCR. </li> <li> Multilingual Recognition Models: A self-built multilingual dataset by PaddleOCR. </li> </ul> </li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA Tesla T4</li> <li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6</li> <li>paddlepaddle-gpu 3.0.0 / paddleocr 3.0.3</li> </ul> </li> </ul> </li> <li><b>Explanation of Inference Modes</b></li> </ul> <table border="1"> <thead> <tr> <th>Mode</th> <th>GPU Configuration</th> <th>CPU Configuration</th> <th>Acceleration Technology Combination</th> </tr> </thead> <tbody> <tr> <td>Normal Mode</td> <td>FP32 Precision / No TRT Acceleration</td> <td>FP32 Precision / 8 Threads</td> <td>PaddleInference</td> </tr> <tr> <td>High-Performance Mode</td> <td>Optimal combination of precision type and acceleration strategy</td> <td>FP32 Precision / 8 Threads</td> <td>Selection of the optimal backend (Paddle/OpenVINO/TRT, etc.)</td> </tr> </tbody> </table> </details>❗ Before starting, please install the PaddleOCR wheel package. For details, please refer to the Installation Guide.
You can quickly experience it with one command:
```bash
paddleocr text_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following command:
```bash
# Use the transformers engine for inference
paddleocr text_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png \
    --engine transformers
```
In most scenarios, the default paddle_static inference engine delivers better inference performance and is the recommended first choice.
<b>Note:</b> The official PaddleOCR models are downloaded from HuggingFace by default. If you cannot access HuggingFace, you can change the model source to BOS by setting the environment variable PADDLE_PDX_MODEL_SOURCE="BOS". More mainstream model sources will be supported in the future.
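For example, to switch the model source for the current shell session before running the quick-experience command above:

```shell
# Download official models from BOS instead of HuggingFace.
# Set this in the same shell session (or shell profile) that runs paddleocr.
export PADDLE_PDX_MODEL_SOURCE="BOS"

# Commands run afterwards in this session will fetch models from BOS, e.g.:
# paddleocr text_recognition -i general_ocr_rec_001.png
```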
You can also integrate the model inference of the text recognition module into your project. Before running the following code, please download the sample image to your local machine.
```python
from paddleocr import TextRecognition

model = TextRecognition(model_name="PP-OCRv5_server_rec")
output = model.predict(input="general_ocr_rec_001.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following code:
```python
from paddleocr import TextRecognition

model = TextRecognition(
    model_name="PP-OCRv5_server_rec",
    engine="transformers",
)
output = model.predict(input="general_ocr_rec_001.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
In most scenarios, the default paddle_static inference engine delivers better inference performance and is the recommended first choice.
If you want to use the trained model with the paddle_dynamic or transformers engine, refer to the Weight Conversion part of the Inference Engine section below to convert the model from the pdparams format to the safetensors format using PaddleX.
After running, the result is as follows:
```
{'res': {'input_path': 'general_ocr_rec_001.png', 'page_index': None, 'rec_text': '绿洲仕格维花园公寓', 'rec_score': 0.9823867082595825}}
```
The meanings of the parameters in the result are as follows:
- `input_path`: The path of the input text line image to be predicted
- `page_index`: If the input is a PDF file, indicates which page of the PDF the current text line is from; otherwise, it is `None`
- `rec_text`: The predicted text of the text line image
- `rec_score`: The confidence score of the predicted text for the text line image

The visualized image is as follows:
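For downstream filtering, these fields can be read straight from the result dictionary. A minimal sketch using the sample output above (the dictionary literal is copied from that example, not produced by running the model here, and the 0.9 threshold is an illustrative choice):

```python
# Example result in the format shown above (copied from the sample output,
# not produced by running the model in this snippet).
result = {
    "res": {
        "input_path": "general_ocr_rec_001.png",
        "page_index": None,
        "rec_text": "绿洲仕格维花园公寓",
        "rec_score": 0.9823867082595825,
    }
}

res = result["res"]
# Keep the text only when the model is confident, e.g. score >= 0.9.
confident_text = res["rec_text"] if res["rec_score"] >= 0.9 else None
print(confident_text)
```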
Descriptions of related methods and parameters are as follows:
<table> <thead> <tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr> </thead> <tbody> <tr> <td><code>device</code></td> <td><b>Meaning:</b> Device used for inference. <b>Description:</b> Supports specifying a device type, optionally with a card index. <b>Examples:</b> <code>"cpu"</code>, <code>"gpu"</code>, <code>"npu"</code>, <code>"gpu:0"</code>, <code>"gpu:0,1"</code>.
If multiple devices are specified, inference will be performed in parallel.
By default, GPU 0 is used; if it is unavailable, the CPU is used.
</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine</code></td> <td><b>Meaning:</b> Inference engine. <b>Description:</b> Supports <code>None</code> (the default), <code>paddle</code>, <code>paddle_static</code>, <code>paddle_dynamic</code>, and <code>transformers</code>. When left as <code>None</code>, local inference uses the <code>paddle_static</code> engine by default. For detailed descriptions, supported values, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine_config</code></td> <td><b>Meaning:</b> Inference-engine configuration. <b>Description:</b> Recommended together with <code>engine</code>. For supported fields, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>enable_hpi</code></td> <td><b>Meaning:</b> Whether to enable high performance inference.</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>use_tensorrt</code></td> <td><b>Meaning:</b> Whether to enable the TensorRT subgraph engine of Paddle Inference.<b>Description:</b> For Paddle with CUDA 11.8, the compatible TensorRT version is 8.x (x>=6), recommended 8.6.1.6.
</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>precision</code></td> <td><b>Meaning:</b> Precision for TensorRT when using the Paddle Inference TensorRT subgraph engine. <b>Options:</b> <code>"fp32"</code>, <code>"fp16"</code>.</td>
<td><code>str</code></td> <td><code>"fp32"</code></td> </tr> <tr> <td><code>enable_mkldnn</code></td> <td><b>Meaning:</b> Whether to enable MKL-DNN acceleration for inference. <b>Description:</b> If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set.</td>
<td><code>bool</code></td> <td><code>True</code></td> </tr> <tr> <td><code>mkldnn_cache_capacity</code></td> <td><b>Meaning:</b> MKL-DNN cache capacity.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>cpu_threads</code></td> <td><b>Meaning:</b> Number of threads to use for inference on CPUs.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>input_shape</code></td> <td><b>Meaning:</b> Input image size for the model in the format <code>(C, H, W)</code>.</td> <td><code>tuple|None</code></td> <td><code>None</code></td> </tr> </tbody> </table>

<table> <tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr> <tr> <td><code>input</code></td> <td><b>Meaning:</b> Data to be predicted. <b>Description:</b> <ul> <li><b>Python Var</b>: Image data represented by <code>numpy.ndarray</code></li> <li><b>str</b>: Local path of an image or PDF file, e.g. <code>/root/data/img.jpg</code>; <b>URL link</b>: network URL of an image or PDF file, e.g. <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png">Example</a>; <b>Local directory</b>: a directory containing the images to be predicted, e.g. <code>/root/data/</code> (prediction of PDF files inside a directory is currently not supported; PDF files must be specified by their exact file path)</li> <li><b>list</b>: Elements of the list must be data of the above types, e.g. <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li> </ul> </td> <td><code>Python Var|str|list</code></td> <td></td> </tr> <tr> <td><code>batch_size</code></td> <td>Batch size, can be set to any positive integer.</td> <td><code>int</code></td> <td><code>1</code></td> </tr> </table>

If the above models do not perform well in your scenario, you can try the following steps for secondary development. Here, training PP-OCRv5_server_rec is taken as an example; for other models, simply replace the corresponding configuration file.

First, prepare a dataset for text recognition. You can refer to the format of the Text Recognition Demo Data. Once the data is ready, you can train and export the model as follows; after export, the model can be quickly integrated into the API described above. This example uses the Text Recognition Demo Data.

Before training the model, please make sure you have installed the dependencies required by PaddleOCR as described in the Installation Guide.
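For intuition, the effect of `batch_size` is to split the list of inputs into fixed-size chunks that are run through the model together. A framework-free sketch of that chunking logic (the `chunked` helper is illustrative, not part of the PaddleOCR API):

```python
def chunked(items, batch_size):
    """Yield successive batches of at most batch_size items."""
    if batch_size < 1:
        raise ValueError("batch_size must be a positive integer")
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Example: 5 image paths grouped into batches of 2 (last batch is smaller).
paths = [f"img_{i}.jpg" for i in range(5)]
batches = list(chunked(paths, 2))
```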
```bash
# Download the example dataset
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ocr_rec_dataset_examples.tar
tar -xf ocr_rec_dataset_examples.tar

# Download the PP-OCRv5_server_rec pre-trained model
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams
```
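The demo dataset's recognition labels use PaddleOCR's tab-separated annotation format: each line holds an image path, a tab, and the ground-truth text. A minimal sketch that writes and re-parses such a label file (file name and labels here are illustrative, not taken from the demo data):

```python
from pathlib import Path

# Each line: <relative image path>\t<ground-truth text>
lines = [
    "train/word_001.png\tHello",
    "train/word_002.png\tWorld",
]
label_file = Path("train_list_demo.txt")
label_file.write_text("\n".join(lines), encoding="utf-8")

# Parsing mirrors what a recognition dataloader does with this format:
# split on the first tab only, so labels may themselves contain tabs-free text.
samples = []
for line in label_file.read_text(encoding="utf-8").splitlines():
    img_path, text = line.split("\t", 1)
    samples.append((img_path, text))
```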
PaddleOCR modularizes its code. To train the PP-OCRv5_server_rec recognition model, you need to use its configuration file.
The training commands are as follows:
```bash
# Single-GPU training (default training method)
python3 tools/train.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml \
    -o Global.pretrained_model=./PP-OCRv5_server_rec_pretrained.pdparams

# Multi-GPU training, specify GPU IDs via the --gpus parameter
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
    -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml \
    -o Global.pretrained_model=./PP-OCRv5_server_rec_pretrained.pdparams
```
You can evaluate the trained weights, such as output/xxx/xxx.pdparams, using the following command:
```bash
# Note: Set the path of pretrained_model to a local path. If you use a model you
# trained and saved yourself, modify the path and file name to {path/to/weights}/{model_name}.
# Demo test set evaluation
python3 tools/eval.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml -o \
    Global.pretrained_model=output/xxx/xxx.pdparams
```

After evaluation, you can export the trained weights as a static graph inference model:

```bash
python3 tools/export_model.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml -o \
    Global.pretrained_model=output/xxx/xxx.pdparams \
    Global.save_inference_dir="./PP-OCRv5_server_rec_infer/"
```
After exporting the model, the static graph model will be stored in ./PP-OCRv5_server_rec_infer/ in the current directory. In this directory, you will see the following files:
```
./PP-OCRv5_server_rec_infer/
├── inference.json
├── inference.pdiparams
└── inference.yml
```
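Before integrating, you can sanity-check that the export produced the files listed above. A small illustrative helper (the `is_complete_export` function is not part of PaddleOCR, and the demo directory is created locally just for the example):

```python
import os
import tempfile

# File names follow the exported directory layout shown above.
EXPECTED = {"inference.json", "inference.pdiparams", "inference.yml"}

def is_complete_export(model_dir):
    """Return True if model_dir contains all expected exported files."""
    try:
        present = set(os.listdir(model_dir))
    except (FileNotFoundError, NotADirectoryError):
        return False
    return EXPECTED.issubset(present)

# Demo on a throwaway directory that mimics the export layout.
with tempfile.TemporaryDirectory() as d:
    for name in EXPECTED:
        open(os.path.join(d, name), "w").close()
    ok = is_complete_export(d)
```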
At this point, the secondary development is complete. This static graph model can be directly integrated into the PaddleOCR API.
If you want to use the paddle_dynamic or transformers engine with the trained model, please refer to the Weight Conversion section in Inference Engine later in this document to convert the model from the pdparams format to the safetensors format using PaddleX.
For detailed descriptions, values, compatibility rules, and examples of the inference engine, please refer to <a href="../inference_engine.en.md">Inference Engine and Configuration Description</a>.
<strong>Test Environment Description:</strong>
<ul> <li><strong>Test Data:</strong> <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.jpg">Sample Image</a></li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA A100 40G</li> <li>CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 22.04 / CUDA 12.6 / cuDNN 9.5</li> <li>paddlepaddle-gpu 3.2.1 / paddleocr 3.5 / transformers 5.4.0 / torch 2.10</li> </ul> </li> </ul>

When using the inference engine, the system will automatically download the official pre-trained model. If you need to use a self-trained model with the paddle_dynamic or transformers engine, please refer to the PaddleX Weight Conversion documentation to convert the model from the pdparams format to the safetensors format using PaddleX. This allows seamless integration into the PaddleOCR API for inference.