# Document Image Orientation Classification Module
The Document Image Orientation Classification Module is primarily designed to distinguish the orientation of document images so that they can be corrected through post-processing. During processes such as document scanning or ID photo capture, the device is sometimes rotated to obtain a clearer image, which results in images with various orientations. Standard OCR pipelines may not handle such images effectively. By leveraging image classification techniques, the orientation of documents or IDs containing text regions can be determined in advance and corrected, thereby improving the accuracy of OCR processing.
<table> <thead> <tr> <th>Model</th><th>Model Download Links</th> <th>Top-1 Acc (%)</th> <th>GPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>CPU Inference Time (ms) [Normal Mode / High-Performance Mode]</th> <th>Model Size (MB)</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>PP-LCNet_x1_0_doc_ori</td> <td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-LCNet_x1_0_doc_ori_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-LCNet_x1_0_doc_ori_pretrained.pdparams">Pretrained Model</a></td> <td>99.06</td> <td>2.62 / 0.59</td> <td>3.24 / 1.19</td> <td>7</td> <td>A document image classification model based on PP-LCNet_x1_0, with four categories: 0°, 90°, 180°, and 270°.</td> </tr> </tbody> </table>

The inference time only includes the model inference time and does not include the time for pre- or post-processing. The "Normal Mode" values correspond to the local <code>paddle_static</code> inference engine.
<strong>Test Environment Description:</strong>
<ul> <li><b>Performance Test Environment</b> <ul> <li><strong>Test Dataset:</strong> Self-built multi-scenario dataset (1000 images, including ID/document scenarios)</li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA Tesla T4</li> <li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6</li> <li>paddlepaddle-gpu 3.0.0 / paddleocr 3.0.3</li> </ul> </li> </ul> </li> <li><b>Inference Mode Description</b></li> </ul> <table border="1"> <thead> <tr> <th>Mode</th> <th>GPU Configuration</th> <th>CPU Configuration</th> <th>Acceleration Technology Combination</th> </tr> </thead> <tbody> <tr> <td>Normal Mode</td> <td>FP32 Precision / No TRT Acceleration</td> <td>FP32 Precision / 8 Threads</td> <td>PaddleInference</td> </tr> <tr> <td>High-Performance Mode</td> <td>Optimal combination of precision type and acceleration strategy</td> <td>FP32 Precision / 8 Threads</td> <td>Optimal backend selected (Paddle/OpenVINO/TRT, etc.)</td> </tr> </tbody> </table>

❗ Before starting, please install the PaddleOCR wheel package. For details, refer to the Installation Guide.
You can quickly experience it with one command:
```bash
paddleocr doc_img_orientation_classification -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following command:
```bash
# Use the transformers engine for inference
paddleocr doc_img_orientation_classification -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg \
    --engine transformers
```
In most scenarios, the default paddle_static inference engine delivers better inference performance and is the recommended first choice.
<b>Note: </b>The official models are downloaded from HuggingFace by default. If HuggingFace is not accessible, set the environment variable <code>PADDLE_PDX_MODEL_SOURCE="BOS"</code> to switch the model source to BOS. More model sources will be supported in the future.
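For example, here is a minimal Python sketch of switching the model source (the variable must be set before the model is instantiated, so that the first download is routed to BOS):

```python
import os

# Must be set before the model weights are downloaded for the first time.
os.environ["PADDLE_PDX_MODEL_SOURCE"] = "BOS"

from paddleocr import DocImgOrientationClassification

# The official weights are now fetched from BOS instead of HuggingFace.
model = DocImgOrientationClassification(model_name="PP-LCNet_x1_0_doc_ori")
```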
You can also integrate the model inference of the Document Image Orientation Classification Module into your project. Before running the following code, please download the sample image to your local machine.
```python
from paddleocr import DocImgOrientationClassification

model = DocImgOrientationClassification(model_name="PP-LCNet_x1_0_doc_ori")
output = model.predict("img_rot180_demo.jpg", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_img("./output/demo.png")
    res.save_to_json("./output/res.json")
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following code:
```python
from paddleocr import DocImgOrientationClassification

model = DocImgOrientationClassification(
    model_name="PP-LCNet_x1_0_doc_ori",
    engine="transformers",
)
output = model.predict("img_rot180_demo.jpg", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_img("./output/demo.png")
    res.save_to_json("./output/res.json")
```
In most scenarios, the default paddle_static inference engine delivers better inference performance and is the recommended first choice.
If you want to use the trained model with the paddle_dynamic or transformers engine, refer to the Weight Conversion part of the Inference Engine section below to convert the model from the pdparams format to the safetensors format using PaddleX.
After running, the result will be:
```bash
{'res': {'input_path': 'img_rot180_demo.jpg', 'page_index': None, 'class_ids': array([2], dtype=int32), 'scores': array([0.88164], dtype=float32), 'label_names': ['180']}}
```
The meaning of the output parameters is as follows:
<ul> <li><code>input_path</code>: Represents the path of the input image.</li> <li><code>class_ids</code>: Represents the predicted class ID, with four categories: 0°, 90°, 180°, and 270°.</li> <li><code>scores</code>: Represents the confidence of the prediction result.</li> <li><code>label_names</code>: Represents the category names of the prediction results.</li> </ul>

Here is the visualization of the image:
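Since the module reports the orientation but does not rotate the input itself, a minimal post-processing sketch is shown below. It assumes Pillow is installed and that the JSON saved by <code>save_to_json()</code> follows the structure of the output shown above:

```python
import json
from PIL import Image

# Load the prediction saved by save_to_json() above; depending on the
# version, the fields may or may not be nested under a "res" key.
with open("./output/res.json", "r", encoding="utf-8") as f:
    data = json.load(f)
res = data.get("res", data)

# label_names holds the predicted rotation as a string, e.g. ["180"].
angle = int(res["label_names"][0])

# Rotate back to upright. PIL rotates counterclockwise for positive
# angles, so verify the sign convention against your own data.
img = Image.open("img_rot180_demo.jpg")
corrected = img.rotate(-angle, expand=True)
corrected.save("./output/img_corrected.jpg")
```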
The explanations of relevant methods and parameters are as follows:

<code>DocImgOrientationClassification</code> instantiates the document image orientation classification model, with the following parameters:

<table> <thead> <tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr> </thead> <tbody> <tr> <td><code>model_name</code></td> <td><b>Meaning:</b> Model name. <b>Description:</b> If set to <code>None</code>, <code>PP-LCNet_x1_0_doc_ori</code> will be used.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>model_dir</code></td> <td><b>Meaning:</b> Model storage path.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>device</code></td> <td><b>Meaning:</b> Device for inference. <b>Description:</b> For example: <code>"cpu"</code>, <code>"gpu"</code>, <code>"npu"</code>, <code>"gpu:0"</code>, <code>"gpu:0,1"</code>.
If multiple devices are specified, parallel inference will be performed.
By default, GPU 0 is used if available; otherwise, CPU is used.
</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine</code></td> <td><b>Meaning:</b> Inference engine. <b>Description:</b> Supports <code>None</code> (the default), <code>paddle</code>, <code>paddle_static</code>, <code>paddle_dynamic</code>, and <code>transformers</code>. When left as <code>None</code>, local inference uses the <code>paddle_static</code> engine by default. For detailed descriptions, supported values, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine_config</code></td> <td><b>Meaning:</b> Inference-engine configuration. <b>Description:</b> Recommended together with <code>engine</code>. For supported fields, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>enable_hpi</code></td> <td><b>Meaning:</b> Whether to enable high-performance inference.</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>use_tensorrt</code></td> <td><b>Meaning:</b> Whether to use the Paddle Inference TensorRT subgraph engine. <b>Description:</b> If the model does not support acceleration through TensorRT, setting this flag will not enable acceleration.
For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6.
</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>precision</code></td> <td><b>Meaning:</b> Computation precision when using the TensorRT subgraph engine in Paddle Inference. <b>Description:</b> Options: <code>"fp32"</code>, <code>"fp16"</code>.</td>
<td><code>str</code></td> <td><code>"fp32"</code></td> </tr> <tr> <td><code>enable_mkldnn</code></td> <td><b>Meaning:</b> Whether to enable MKL-DNN acceleration for inference. <b>Description:</b> If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set.
</td> <td><code>bool</code></td> <td><code>True</code></td> </tr> <tr> <td><code>mkldnn_cache_capacity</code></td> <td><b>Meaning:</b> MKL-DNN cache capacity.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>cpu_threads</code></td> <td><b>Meaning:</b> Number of threads to use for inference on CPUs.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> </tbody> </table>

The <code>predict()</code> method of the model object is called to perform inference, with the following parameters:

<table> <thead> <tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr> </thead> <tbody> <tr> <td><code>input</code></td> <td><b>Meaning:</b> Data to be predicted. <b>Description:</b> Supports multiple input types:
<ul> <li><b>Python Var</b>: e.g., <code>numpy.ndarray</code> representing image data</li> <li><b>str</b>: Local image or PDF file path, e.g., <code>/root/data/img.jpg</code>; <b>URL</b> of an image or PDF file, e.g., <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg">example</a>; <b>Local directory</b>: a directory containing images for prediction, e.g., <code>/root/data/</code> (Note: directories containing PDF files are not supported; PDFs must be specified by exact file path)</li> <li><b>list</b>: Elements must be of the above types, e.g., <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li> </ul> </td> <td><code>Python Var|str|list</code></td> <td></td> </tr> <tr> <td><code>batch_size</code></td> <td><b>Meaning:</b> Batch size. <b>Description:</b> Any positive integer.</td>
<td><code>int</code></td> <td><code>1</code></td> </tr> </tbody> </table>

Since PaddleOCR does not directly provide training functionality for document image orientation classification, if you need to train a document image orientation classification model, you can refer to the PaddleX Secondary Development for Document Image Orientation Classification section for training guidance. The trained model can be seamlessly integrated into PaddleOCR's API for inference purposes.
If you want to use the paddle_dynamic or transformers engine with the trained model, please refer to the Weight Conversion section in Inference Engine later in this document to convert the model from the pdparams format to the safetensors format using PaddleX.
For detailed descriptions, values, compatibility rules, and examples of the inference engine, please refer to <a href="../inference_engine.en.md">Inference Engine and Configuration Description</a>.
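Putting the parameter tables above into practice, here is an illustrative sketch (the values are examples, not recommendations):

```python
from paddleocr import DocImgOrientationClassification

# Explicit device and CPU settings; parameter names follow the tables above.
model = DocImgOrientationClassification(
    model_name="PP-LCNet_x1_0_doc_ori",
    device="cpu",        # force CPU inference instead of the default GPU 0
    cpu_threads=10,      # number of CPU inference threads
    enable_mkldnn=True,  # MKL-DNN acceleration; ignored if unsupported
)

# predict() also accepts a list of image paths, as described above.
output = model.predict(["img_rot180_demo.jpg"], batch_size=1)
for res in output:
    res.print(json_format=False)
```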
<strong>Test Environment Description:</strong>
<ul> <li><strong>Test Data:</strong> <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg">Sample Image</a></li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA A100 40G</li> <li>CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 22.04 / CUDA 12.6 / cuDNN 9.5</li> <li>paddlepaddle-gpu 3.2.1 / paddleocr 3.5 / transformers 5.4.0 / torch 2.10</li> </ul> </li> </ul>

When using the inference engine, the system automatically downloads the official pre-trained model. If you need to use a self-trained model with the paddle_dynamic or transformers engine, please refer to the PaddleX Document Image Orientation Classification Module Weight Conversion section to convert the model from the pdparams format to the safetensors format using PaddleX. This allows seamless integration into the PaddleOCR API for inference.