docs/version3.x/module_usage/text_image_unwarping.en.md
Text image rectification applies geometric transformations to document images to correct distortions, skew, perspective deformation, and similar issues, so that subsequent text recognition is more accurate.
<table> <thead> <tr> <th>Model</th><th>Model Download Link</th> <th>CER</th> <th>GPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>CPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>Model Storage Size (MB)</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>UVDoc</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/UVDoc_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/UVDoc_pretrained.pdparams">Training Model</a></td> <td>0.179</td> <td>19.05 / 19.05</td> <td>- / 869.82</td> <td>30.3</td> <td>High-accuracy text image rectification model</td> </tr> </tbody> </table>

The inference time covers only model inference and excludes pre- and post-processing. The "Normal Mode" values correspond to the local <code>paddle_static</code> inference engine.
<strong>Test Environment Description:</strong>
<ul> <li><b>Performance Test Environment</b> <ul> <li><strong>Test Dataset:</strong> <a href="https://www3.cs.stonybrook.edu/~cvl/docunet.html">DocUNet benchmark</a> dataset.</li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA Tesla T4</li> <li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6</li> <li>paddlepaddle-gpu 3.0.0 / paddleocr 3.0.3</li> </ul> </li> </ul> </li> <li><b>Inference Mode Explanation</b></li> </ul> <table border="1"> <thead> <tr> <th>Mode</th> <th>GPU Configuration</th> <th>CPU Configuration</th> <th>Acceleration Technology Combination</th> </tr> </thead> <tbody> <tr> <td>Normal Mode</td> <td>FP32 Precision / No TRT Acceleration</td> <td>FP32 Precision / 8 Threads</td> <td>PaddleInference</td> </tr> <tr> <td>High-Performance Mode</td> <td>Optimal combination of pre-selected precision types and acceleration strategies</td> <td>FP32 Precision / 8 Threads</td> <td>Pre-selected optimal backends (Paddle/OpenVINO/TRT, etc.)</td> </tr> </tbody> </table>

❗ Before getting started, please install the PaddleOCR wheel package. For details, refer to the installation tutorial.
You can quickly experience it with one command:
```bash
paddleocr text_image_unwarping -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/doc_test.jpg
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following command:
```bash
# Use the transformers engine for inference
paddleocr text_image_unwarping -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/doc_test.jpg \
    --engine transformers
```
In most scenarios, the default <code>paddle_static</code> inference engine delivers better performance and is the recommended first choice.
<b>Note:</b> The official models are downloaded from HuggingFace by default. If you cannot access HuggingFace, set the environment variable <code>PADDLE_PDX_MODEL_SOURCE="BOS"</code> to switch the model source to BOS. More model sources will be supported in the future.
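If you use the Python API shown below, one way to set this variable is from within your script before the model is created. This is only a minimal sketch: the variable name comes from the note above, and it is assumed that setting it via <code>os.environ</code> before constructing the model is early enough for it to take effect.

```python
import os

# Switch the official model source from HuggingFace to BOS
# (assumed to take effect as long as it is set before the model files are downloaded).
os.environ["PADDLE_PDX_MODEL_SOURCE"] = "BOS"

from paddleocr import TextImageUnwarping

model = TextImageUnwarping(model_name="UVDoc")
```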
You can also integrate the model inference from the image rectification module into your project. Before running the following code, please download the sample image locally.
```python
from paddleocr import TextImageUnwarping

model = TextImageUnwarping(model_name="UVDoc")
output = model.predict("doc_test.jpg", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following code:
```python
from paddleocr import TextImageUnwarping

model = TextImageUnwarping(
    model_name="UVDoc",
    engine="transformers",
)
output = model.predict("doc_test.jpg", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
In most scenarios, the default <code>paddle_static</code> inference engine delivers better performance and is the recommended first choice.
After running, the result obtained is:
```
{'res': {'input_path': 'doc_test.jpg', 'page_index': None, 'doctr_img': '...'}}
```
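The <code>res.save_to_json(save_path="./output/res.json")</code> call in the code above writes this result to disk. As a minimal sketch (assuming the file name used above, and that the saved file mirrors the printed structure, possibly without the outer <code>res</code> key), you can reload it with the standard library to inspect the fields:

```python
import json

# Reload the result previously written by res.save_to_json().
with open("./output/res.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# The printed form nests the fields under "res"; the saved file may store them flat.
fields = data.get("res", data)
print(list(fields.keys()))
```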
The meanings of the parameters in the result are as follows:
<ul> <li><code>input_path</code>: Indicates the path of the image to be rectified</li> <li><code>doctr_img</code>: Indicates the rectified image result. Because the image data is large, it is not printed directly and is replaced here with <code>...</code>. You can use <code>res.save_to_img()</code> to save the prediction result as an image and <code>res.save_to_json()</code> to save it as a JSON file.</li> </ul>

The visualized image is as follows:
The relevant methods, parameters, etc., are described as follows:
<table>
<thead>
<tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr>
</thead>
<tbody>
<tr> <td><code>device</code></td> <td><b>Meaning:</b> Device(s) to use for inference. <b>Description:</b> <b>Examples:</b> <code>cpu</code>, <code>gpu</code>, <code>npu</code>, <code>gpu:0</code>, <code>gpu:0,1</code>.
If multiple devices are specified, inference will be performed on them in parallel; note that parallel inference is not always supported. A usage sketch is shown after the parameter tables.
By default, GPU 0 is used if available; otherwise, the CPU is used.
</td> <td><code>str</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine</code></td> <td><b>Meaning:</b> Inference engine. <b>Description:</b> Supports <code>None</code> (the default), <code>paddle</code>, <code>paddle_static</code>, <code>paddle_dynamic</code>, and <code>transformers</code>. When left as <code>None</code>, local inference uses the <code>paddle_static</code> engine by default. For detailed descriptions, supported values, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine_config</code></td> <td><b>Meaning:</b> Inference-engine configuration. <b>Description:</b> Recommended together with <code>engine</code>. For supported fields, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>enable_hpi</code></td> <td><b>Meaning:</b> Whether to enable high-performance inference.</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>use_tensorrt</code></td> <td><b>Meaning:</b> Whether to use the Paddle Inference TensorRT subgraph engine. <b>Description:</b> If the model does not support acceleration through TensorRT, setting this flag will not enable acceleration.
For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6.
</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>precision</code></td> <td><b>Meaning:</b> Precision for TensorRT when using the Paddle Inference TensorRT subgraph engine. <b>Description:</b> Options include <code>"fp32"</code>, <code>"fp16"</code>, etc.</td>
<td><code>str</code></td> <td><code>"fp32"</code></td> </tr> <tr> <td><code>enable_mkldnn</code></td> <td> <b>Meaning:</b> Whether to enable MKL-DNN acceleration for inference. <b>Description:</b> If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set.
</td> <td><code>bool</code></td> <td><code>True</code></td> </tr> <tr> <td><code>mkldnn_cache_capacity</code></td> <td> <b>Meaning:</b> MKL-DNN cache capacity. </td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>cpu_threads</code></td> <td><b>Meaning:</b> Number of threads to use for inference on CPUs.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> </tbody> </table>

The parameters of the <code>predict()</code> method are described as follows:

<table>
<thead>
<tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr>
</thead>
<tbody>
<tr> <td><code>input</code></td> <td><b>Meaning:</b> Data to be predicted. <b>Description:</b> Supports multiple input types:
<ul> <li><b>Python Var</b>: e.g., <code>numpy.ndarray</code> representing image data</li> <li><b>str</b>: <ul> <li>Local image or PDF file path: <code>/root/data/img.jpg</code>;</li> <li><b>URL</b> of image or PDF file: e.g., <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_doc_preprocessor_002.png">example</a>;</li> <li><b>Local directory</b>: directory containing images for prediction, e.g., <code>/root/data/</code> (Note: directories containing PDF files are not supported; PDFs must be specified by exact file path)</li> </ul> </li> <li><b>list</b>: Elements must be of the above types, e.g., <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li> </ul> </td> <td><code>Python Var|str|list</code></td> <td></td> </tr> <tr> <td><code>batch_size</code></td> <td><b>Meaning:</b> Batch size. <b>Description:</b> Any positive integer.</td>
<td><code>int</code></td> <td><code>1</code></td> </tr> </tbody> </table>

The current module does not yet support fine-tuning; it only supports inference integration. Support for fine-tuning this module is planned for the future.
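To tie the tables above back to code, here is a hedged sketch that passes some of the listed parameters when constructing the module and when calling <code>predict()</code>. It assumes that <code>device</code> and <code>cpu_threads</code> are accepted by the <code>TextImageUnwarping</code> constructor as the first table describes; the specific values, the dummy image array, and the file paths are placeholders for illustration only.

```python
import numpy as np
from paddleocr import TextImageUnwarping

model = TextImageUnwarping(
    model_name="UVDoc",
    device="gpu:0",   # illustrative; could also be "cpu", "npu", or "gpu:0,1" for parallel inference where supported
    cpu_threads=8,    # only relevant when running inference on CPU
)

# `input` accepts image data (numpy.ndarray), a file path or URL, a directory, or a list of these.
image = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder image array
for res in model.predict(image, batch_size=1):
    res.save_to_img(save_path="./output/")

# A list of local file paths is also accepted (these paths are placeholders).
for res in model.predict(["img1.jpg", "img2.jpg"], batch_size=2):
    res.save_to_json(save_path="./output/")
```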
For detailed descriptions, values, compatibility rules, and examples of the inference engine, please refer to <a href="../inference_engine.en.md">Inference Engine and Configuration Description</a>.
<strong>Test Environment Description:</strong>
<ul> <li><strong>Test Data:</strong> <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/doc_test.jpg">Sample Image</a></li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA A100 40G</li> <li>CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 22.04 / CUDA 12.6 / cuDNN 9.5</li> <li>paddlepaddle-gpu 3.2.1 / paddleocr 3.5 / transformers 5.4.0 / torch 2.10</li> </ul> </li> </ul>