docs/version3.x/module_usage/seal_text_detection.en.md
The seal text detection module is a subtask of OCR (Optical Character Recognition) responsible for locating and marking the regions containing seal (stamp) text within an image. It typically outputs multi-point polygonal boxes around the text regions, which are then passed to the distortion correction and text recognition modules for subsequent processing to recognize the textual content of the seal. Seal text recognition is an integral part of document processing and is used in scenarios such as contract comparison, inventory audit, and invoice reimbursement verification. The performance of this module directly affects the accuracy and efficiency of the entire seal text OCR system.
<table> <thead> <tr> <th>Model Name</th><th>Model Download Link</th> <th>Hmean(%)</th> <th>GPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>CPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>Model Storage Size (MB)</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>PP-OCRv4_server_seal_det</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_seal_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_seal_det_pretrained.pdparams">Training Model</a></td> <td>98.40</td> <td>124.64 / 91.57</td> <td>545.68 / 439.86</td> <td>109</td> <td>The server-side seal text detection model of PP-OCRv4 boasts higher accuracy and is suitable for deployment on better-equipped servers.</td> </tr> <tr> <td>PP-OCRv4_mobile_seal_det</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_mobile_seal_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_seal_det_pretrained.pdparams">Training Model</a></td> <td>96.36</td> <td>9.70 / 3.56</td> <td>50.38 / 19.64</td> <td>4.6</td> <td>The mobile-side seal text detection model of PP-OCRv4, on the other hand, offers greater efficiency and is suitable for deployment on end devices.</td> </tr> </tbody> </table>The inference time only includes the model inference time and does not include the time for pre- or post-processing. The "Normal Mode" values correspond to the local <code>paddle_static</code> inference engine.
<strong>Test Environment Description:</strong>
<ul> <li><b>Performance Test Environment</b> <ul> <li><strong>Test Dataset:</strong> A self-built internal dataset containing 500 images of circular stamps.</li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA Tesla T4</li> <li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6</li> <li>paddlepaddle-gpu 3.0.0 / paddleocr 3.0.3</li> </ul> </li> </ul> </li> <li><b>Inference Mode Description</b></li> </ul> <table border="1"> <thead> <tr> <th>Mode</th> <th>GPU Configuration</th> <th>CPU Configuration</th> <th>Acceleration Technology Combination</th> </tr> </thead> <tbody> <tr> <td>Normal Mode</td> <td>FP32 Precision / No TRT Acceleration</td> <td>FP32 Precision / 8 Threads</td> <td>PaddleInference</td> </tr> <tr> <td>High-Performance Mode</td> <td>Optimal combination of pre-selected precision types and acceleration strategies</td> <td>FP32 Precision / 8 Threads</td> <td>Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.)</td> </tr> </tbody> </table>

❗ Before quick integration, please install the PaddleOCR wheel package. For detailed instructions, refer to the PaddleOCR Local Installation Tutorial.
You can quickly experience the seal text detection module with a single command:
paddleocr seal_text_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
<b>Note: </b>By default, the official models are downloaded from HuggingFace. If HuggingFace is not accessible, you can set the environment variable <code>PADDLE_PDX_MODEL_SOURCE="BOS"</code> to switch the model source to BOS. Support for more model sources will be added in the future.
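For instance, the environment variable can be set from Python before the model is created (a minimal sketch; exporting the variable in your shell before running the script works just as well):

import os

# Switch the default model download source from HuggingFace to BOS
os.environ["PADDLE_PDX_MODEL_SOURCE"] = "BOS"

from paddleocr import SealTextDetection

model = SealTextDetection(model_name="PP-OCRv4_server_seal_det")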
You can also integrate the model inference of the seal text detection module into your project. Before running the following code, please download the example image to your local machine.
from paddleocr import SealTextDetection

# Instantiate the seal text detection model
model = SealTextDetection(model_name="PP-OCRv4_server_seal_det")
# Run inference on the local example image
output = model.predict("seal_text_det.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
After running, the result is:
{'res': {'input_path': 'seal_text_det.png', 'page_index': None, 'dt_polys': [array([[463, 477],
...,
[428, 505]]), array([[297, 444],
...,
[230, 443]]), array([[457, 346],
...,
[267, 345]]), array([[325, 38],
...,
[322, 37]])], 'dt_scores': [0.9912680344777314, 0.9906849624837963, 0.9847219455533163, 0.9914791724153904]}}
The meanings of the fields in the prediction result are as follows:
<ul> <li> <code>input_path</code>: represents the path of the input image to be predicted</li> <li> <code>dt_polys</code>: represents the predicted text detection boxes, where each detection box is a polygon with multiple vertices, and each vertex is a list of two elements representing its x and y coordinates</li> <li> <code>dt_scores</code>: represents the confidence scores of the predicted text detection boxes</li> </ul>The visualization image is as follows:
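In addition to the saved visualization, the detection results can be consumed directly for downstream processing. Below is a minimal, illustrative sketch (the polygon and score are hypothetical values in the same format as <code>dt_polys</code> and <code>dt_scores</code> above) showing how each polygon could be reduced to an axis-aligned bounding rectangle:

import numpy as np

# Hypothetical detection result in the same format as shown above
result = {
    "dt_polys": [np.array([[463, 477], [497, 479], [495, 507], [428, 505]])],
    "dt_scores": [0.991],
}

for poly, score in zip(result["dt_polys"], result["dt_scores"]):
    # Each polygon is an array of (x, y) vertices; take the per-axis extremes
    x_min, y_min = poly.min(axis=0)
    x_max, y_max = poly.max(axis=0)
    print(f"score={score:.3f}, box=({x_min}, {y_min}) -> ({x_max}, {y_max})")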
The explanations of related methods and parameters are as follows:
<code>SealTextDetection</code> instantiation parameters:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>model_name</code></td>
<td><b>Meaning:</b> Model name. <b>Description:</b> If set to <code>None</code>, <code>PP-OCRv4_mobile_seal_det</code> will be used.</td>
<td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>model_dir</code></td> <td><b>Meaning:</b>Model storage path.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>device</code></td> <td><b>Meaning:</b>Device for inference.<b>Description:</b> <b>For example:</b> <code>"cpu"</code>, <code>"gpu"</code>, <code>"npu"</code>, <code>"gpu:0"</code>, <code>"gpu:0,1"</code>.
If multiple devices are specified, parallel inference will be performed.
By default, GPU 0 is used if available; otherwise, CPU is used.
</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine</code></td> <td><b>Meaning:</b> Inference engine. <b>Description:</b> Supports <code>None</code> (the default), <code>paddle</code>, <code>paddle_static</code>, <code>paddle_dynamic</code>, and <code>transformers</code>. When left as <code>None</code>, local inference uses the <code>paddle_static</code> engine by default. For detailed descriptions, supported values, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine_config</code></td> <td><b>Meaning:</b> Inference-engine configuration. <b>Description:</b> Recommended together with <code>engine</code>. For supported fields, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>enable_hpi</code></td> <td><b>Meaning:</b>Whether to enable high-performance inference.</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>use_tensorrt</code></td> <td><b>Meaning:</b>Whether to use the Paddle Inference TensorRT subgraph engine.<b>Description:</b> If the model does not support acceleration through TensorRT, setting this flag will not enable acceleration.
For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6.
</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>precision</code></td> <td><b>Meaning:</b>Computation precision when using the TensorRT subgraph engine in Paddle Inference.<b>Description:</b> <b>Options:</b> <code>"fp32"</code>, <code>"fp16"</code>.</td>
<td><code>str</code></td> <td><code>"fp32"</code></td> </tr> <tr> <td><code>enable_mkldnn</code></td> <td> <b>Meaning:</b>Whether to enable MKL-DNN acceleration for inference.<b>Description:</b> If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set.
</td> <td><code>bool</code></td> <td><code>True</code></td> </tr> <tr> <td><code>mkldnn_cache_capacity</code></td> <td> <b>Meaning:</b>MKL-DNN cache capacity. </td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>cpu_threads</code></td> <td><b>Meaning:</b>Number of threads to use for inference on CPUs.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>limit_side_len</code></td> <td><b>Meaning:</b>Limit on the side length of the input image for detection.<b>Description:</b> <code>int</code> specifies the value. If set to <code>None</code>, the model's default configuration will be used.</td>
<td><code>int|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>limit_type</code></td> <td><b>Meaning:</b>Type of image side length limitation.<b>Description:</b> <code>"min"</code> ensures the shortest side of the image is no less than <code>limit_side_len</code>; <code>"max"</code> ensures the longest side is no greater than <code>limit_side_len</code>. If set to <code>None</code>, the model's default configuration will be used.</td>
<td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>thresh</code></td> <td><b>Meaning:</b>Pixel score threshold.<b>Description:</b> Pixels in the output probability map with scores greater than this threshold are considered text pixels. Accepts any float value greater than 0. If set to <code>None</code>, the model's default configuration will be used.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>box_thresh</code></td> <td><b>Meaning:</b>If the average score of all pixels inside the bounding box is greater than this threshold, the result is considered a text region.<b>Description:</b> Accepts any float value greater than 0. If set to <code>None</code>, the model's default configuration will be used.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>unclip_ratio</code></td> <td><b>Meaning:</b>Expansion ratio for the Vatti clipping algorithm, used to expand the text region.<b>Description:</b>Accepts any float value greater than 0. If set to <code>None</code>, the model's default configuration will be used.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>input_shape</code></td> <td><b>Meaning:</b>Input image size for the model in the format <code>(C, H, W)</code>.<b>Description:</b> If set to <code>None</code>, the model's default size will be used.</td>
<td><code>tuple|None</code></td> <td><code>None</code></td> </tr> </tbody> </table>

<code>predict()</code> method parameters:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>input</code></td>
<td><b>Meaning:</b> Data to be predicted. <b>Description:</b> Supports multiple input types:<ul>
<li><b>Python Var</b>: e.g., <code>numpy.ndarray</code> representing image data</li> <li><b>str</b>: <ul> <li>Local image or PDF file path: <code>/root/data/img.jpg</code>;</li> <li><b>URL</b> of image or PDF file: e.g., <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg">example</a>;</li> <li><b>Local directory</b>: directory containing images for prediction, e.g., <code>/root/data/</code> (Note: directories containing PDF files are not supported; PDFs must be specified by exact file path)</li> </ul> </li> <li><b>list</b>: Elements must be of the above types, e.g., <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li> </ul> </td> <td><code>Python Var|str|list</code></td> <td></td> </tr> <tr> <td><code>batch_size</code></td> <td><b>Meaning:</b>Batch size.<b>Description:</b> Can be set to any positive integer.</td>
<td><code>int</code></td> <td>1</td> </tr> <tr> <td><code>limit_side_len</code></td> <td><b>Meaning:</b>Same meaning as the instantiation parameters.<b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>int|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>limit_type</code></td> <td><b>Meaning:</b>Same meaning as the instantiation parameters.<b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>thresh</code></td> <td><b>Meaning:</b>Same meaning as the instantiation parameters.<b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>box_thresh</code></td> <td><b>Meaning:</b>Same meaning as the instantiation parameters.<b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>unclip_ratio</code></td> <td><b>Meaning:</b>Same meaning as the instantiation parameters.<b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>float|None</code></td> <td><code>None</code></td> </tr> </tbody> </table>

If the above models still do not perform well in your scenario, you can try the following steps for secondary development. Here, training PP-OCRv4_server_seal_det is used as an example; for other models, replace the corresponding configuration file. First, prepare a text detection dataset; you can refer to the format of the seal text detection demo data used in this example. Once the data is ready, follow the steps below for model training and export, after which the model can be quickly integrated into the API described above. Before training, please make sure you have installed the dependencies required by PaddleOCR as described in the installation documentation. Download the demo dataset and the pretrained weights:
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ocr_curve_det_dataset_examples.tar -P ./dataset
tar -xf ./dataset/ocr_curve_det_dataset_examples.tar -C ./dataset/
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_seal_det_pretrained.pdparams
PaddleOCR organizes its training code in a modular way; to train the PP-OCRv4_server_seal_det model, use its configuration file <code>configs/det/PP-OCRv4/PP-OCRv4_server_seal_det.yml</code>.
The training commands are as follows:
# Single GPU training (default training method)
python3 tools/train.py -c configs/det/PP-OCRv4/PP-OCRv4_server_seal_det.yml \
-o Global.pretrained_model=./PP-OCRv4_server_seal_det_pretrained.pdparams \
Train.dataset.data_dir=./dataset/ocr_curve_det_dataset_examples Train.dataset.label_file_list=./dataset/ocr_curve_det_dataset_examples/train.txt \
Eval.dataset.data_dir=./dataset/ocr_curve_det_dataset_examples Eval.dataset.label_file_list=./dataset/ocr_curve_det_dataset_examples/val.txt
# Multi-GPU training, specify GPU ids using the --gpus parameter
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/det/PP-OCRv4/PP-OCRv4_server_seal_det.yml \
-o Global.pretrained_model=./PP-OCRv4_server_seal_det_pretrained.pdparams \
Train.dataset.data_dir=./dataset/ocr_curve_det_dataset_examples Train.dataset.label_file_list=./dataset/ocr_curve_det_dataset_examples/train.txt \
Eval.dataset.data_dir=./dataset/ocr_curve_det_dataset_examples Eval.dataset.label_file_list=./dataset/ocr_curve_det_dataset_examples/val.txt
You can evaluate the trained weights, such as output/xxx/xxx.pdparams, using the following command:
# Make sure to set the pretrained_model path to the local path. If using a model that was trained and saved by yourself, be sure to modify the path and filename to {path/to/weights}/{model_name}.
# Demo test set evaluation
python3 tools/eval.py -c configs/det/PP-OCRv4/PP-OCRv4_server_seal_det.yml -o \
Global.pretrained_model=output/xxx/xxx.pdparams
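After evaluation, you can export the trained weights to a static graph (inference) model with the following command: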
python3 tools/export_model.py -c configs/det/PP-OCRv4/PP-OCRv4_server_seal_det.yml -o \
Global.pretrained_model=output/xxx/xxx.pdparams \
Global.save_inference_dir="./PP-OCRv4_server_seal_det_infer/"
After exporting the model, the static graph model will be stored in the ./PP-OCRv4_server_seal_det_infer/ directory. In this directory, you will see the following files:
./PP-OCRv4_server_seal_det_infer/
├── inference.json
├── inference.pdiparams
├── inference.yml
With this, the secondary development is complete, and the static graph model can be directly integrated into PaddleOCR's API.
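For reference, here is a minimal sketch of loading the exported model through the same Python API (the directory below assumes the export path used above; adjust it to your own path if it differs):

from paddleocr import SealTextDetection

# Use the locally exported static graph model instead of downloading the official weights
model = SealTextDetection(
    model_name="PP-OCRv4_server_seal_det",
    model_dir="./PP-OCRv4_server_seal_det_infer",
)
output = model.predict("seal_text_det.png", batch_size=1)
for res in output:
    res.print()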