docs/version3.x/module_usage/layout_detection.en.md
The core task of structure analysis is to parse and segment the content of input document images. The module identifies different elements in the image (such as text, charts, and images), classifies them into predefined categories (e.g., plain-text, title, table, image, and list regions), and determines the position and size of these regions in the document.
The inference time only includes the model inference time and does not include the time for pre- or post-processing. The "Normal Mode" values correspond to the local <code>paddle_static</code> inference engine.
<details><summary> 👉 Details of Model List</summary>❗ The above list includes the <b>4 core models</b> of the layout detection module. The module supports a total of <b>12 models</b>, including several predefined models covering different category sets. The complete model list is as follows:
<strong>Test Environment Description:</strong>
<ul> <li><b>Performance Test Environment</b> <ul> <li><strong>Test Dataset:</strong> <ul> <li>20-Class Layout Detection Model: a self-built layout detection dataset by PaddleOCR, containing 1,300 images of document types such as Chinese and English papers, magazines, newspapers, research reports, PPTs, exam papers, and textbooks.</li> <li>1-Class Layout Region Detection Model: a self-built layout region detection dataset by PaddleOCR, containing 1,000 images of document types such as Chinese and English papers, magazines, newspapers, research reports, PPTs, exam papers, and textbooks.</li> <li>23-Class Layout Detection Model: a self-built layout detection dataset by PaddleOCR, containing 500 images of common document types such as Chinese and English papers, magazines, contracts, books, exam papers, and research reports.</li> <li>Table Layout Detection Model: a self-built table detection dataset by PaddleOCR, containing 7,835 images of Chinese and English paper documents with tables.</li> <li>3-Class Layout Detection Model: a self-built layout detection dataset by PaddleOCR, comprising 1,154 images of common document types such as Chinese and English papers, magazines, and research reports.</li> <li>5-Class English Document Area Detection Model: the evaluation dataset of <a href="https://developer.ibm.com/exchanges/data/all/publaynet">PubLayNet</a>, containing 11,245 images of English documents.</li> <li>17-Class Area Detection Model: a self-built layout detection dataset by PaddleOCR, containing 892 images of common document types such as Chinese and English papers, magazines, and research reports.</li> </ul> </li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA Tesla T4</li> <li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6</li> <li>paddlepaddle-gpu 3.0.0 / paddleocr 3.0.3</li> </ul> </li> </ul> </li> <li><b>Inference Mode Description</b></li> </ul> <table border="1"> <thead> <tr> <th>Mode</th> <th>GPU Configuration</th> <th>CPU Configuration</th> <th>Acceleration Technology Combination</th> </tr> </thead> <tbody> <tr> <td>Normal Mode</td> <td>FP32 Precision / No TRT Acceleration</td> <td>FP32 Precision / 8 Threads</td> <td>PaddleInference</td> </tr> <tr> <td>High-Performance Mode</td> <td>Optimal combination of pre-selected precision types and acceleration strategies</td> <td>FP32 Precision / 8 Threads</td> <td>Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.)</td> </tr> </tbody> </table> </details>

❗ Before quick integration, please install the PaddleOCR wheel package. For detailed instructions, refer to the PaddleOCR Local Installation Tutorial.
You can quickly experience the module with a single command:
```bash
paddleocr layout_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following command:
```bash
# Use the transformers engine for inference
paddleocr layout_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg \
    --engine transformers
```
In most scenarios, the default <code>paddle_static</code> inference engine delivers better inference performance and is the recommended first choice.
<b>Note: </b>The official models are downloaded from HuggingFace by default. If you cannot access HuggingFace, set the environment variable <code>PADDLE_PDX_MODEL_SOURCE="BOS"</code> to switch the model source to BOS. More model sources will be supported in the future.
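For example, to switch the model source to BOS before running the quick-experience command above:

```bash
# Switch the model download source to BOS, then run layout detection
export PADDLE_PDX_MODEL_SOURCE="BOS"
paddleocr layout_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg
```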
You can also integrate the model inference of the layout detection module into your own project. Before running the following code, please download the example image to your local machine.
```python
from paddleocr import LayoutDetection

model = LayoutDetection(model_name="PP-DocLayout_plus-L")
output = model.predict("layout.jpg", batch_size=1, layout_nms=True)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
The example above uses the <code>paddle_static</code> inference engine by default. To run it, first install PaddlePaddle by following PaddlePaddle Framework Installation.
If you choose transformers as the inference engine, make sure the Transformers environment is configured, and then run the following code:
```python
from paddleocr import LayoutDetection

model = LayoutDetection(
    model_name="PP-DocLayout_plus-L",
    engine="transformers",
)
output = model.predict("layout.jpg", batch_size=1, layout_nms=True)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
In most scenarios, the default <code>paddle_static</code> inference engine delivers better inference performance and is the recommended first choice.
If you want to use a trained model with the <code>paddle_dynamic</code> or <code>transformers</code> engine, refer to the Weight Conversion part of the Inference Engine section below to convert the model from the <code>pdparams</code> format to the <code>safetensors</code> format using PaddleX.
After running, the result obtained is:
```
{'res': {'input_path': 'layout.jpg', 'page_index': None, 'boxes': [{'cls_id': 2, 'label': 'text', 'score': 0.9870226979255676, 'coordinate': [34.101906, 349.85275, 358.59213, 611.0772]}, {'cls_id': 2, 'label': 'text', 'score': 0.9866003394126892, 'coordinate': [34.500324, 647.1585, 358.29367, 848.66797]}, {'cls_id': 2, 'label': 'text', 'score': 0.9846674203872681, 'coordinate': [385.71445, 497.40973, 711.2261, 697.84265]}, {'cls_id': 8, 'label': 'table', 'score': 0.984126091003418, 'coordinate': [73.76879, 105.94899, 321.95303, 298.84888]}, {'cls_id': 8, 'label': 'table', 'score': 0.9834211468696594, 'coordinate': [436.95642, 105.81531, 662.7168, 313.48462]}, {'cls_id': 2, 'label': 'text', 'score': 0.9832247495651245, 'coordinate': [385.62787, 346.2288, 710.10095, 458.77127]}, {'cls_id': 2, 'label': 'text', 'score': 0.9816061854362488, 'coordinate': [385.7802, 735.1931, 710.56134, 849.9764]}, {'cls_id': 6, 'label': 'figure_title', 'score': 0.9577341079711914, 'coordinate': [34.421448, 20.055151, 358.71283, 76.53663]}, {'cls_id': 6, 'label': 'figure_title', 'score': 0.9505634307861328, 'coordinate': [385.72278, 20.053688, 711.29333, 74.92744]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.9001723527908325, 'coordinate': [386.46344, 477.03488, 699.4023, 490.07474]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8845751285552979, 'coordinate': [35.413048, 627.73596, 185.58383, 640.52264]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8837394118309021, 'coordinate': [387.17603, 716.3423, 524.7841, 729.258]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8508939743041992, 'coordinate': [35.50064, 331.18445, 141.6444, 344.81097]}]}}
```
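As a minimal sketch of consuming this output programmatically, you can load the file written by <code>res.save_to_json</code> above and filter the detected boxes. This assumes the saved JSON mirrors the printed structure; the <code>data.get("res", data)</code> fallback also covers a file saved without the outer <code>'res'</code> key:

```python
import json

# Load the result written by res.save_to_json(save_path="./output/res.json")
with open("./output/res.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# The printed result wraps fields in a 'res' key; fall back if the file does not
res = data.get("res", data)

# Keep only high-confidence table regions
tables = [box for box in res["boxes"] if box["label"] == "table" and box["score"] >= 0.9]
for box in tables:
    xmin, ymin, xmax, ymax = box["coordinate"]
    print(f"table at ({xmin:.1f}, {ymin:.1f}) - ({xmax:.1f}, {ymax:.1f})")
```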
The meanings of the parameters are as follows:
<ul> <li><code>input_path</code>: The path to the input image for prediction.</li> <li><code>page_index</code>: If the input is a PDF file, this indicates which page of the PDF it is; otherwise, it is <code>None</code>.</li> <li><code>boxes</code>: Information about the predicted bounding boxes, a list of dictionaries. Each dictionary represents a detected object and contains the following information: <ol start="1" type="1"> <li><code>cls_id</code>: Class ID, an integer.</li> <li><code>label</code>: Class label, a string.</li> <li><code>score</code>: Confidence score of the bounding box, a float.</li> <li><code>coordinate</code>: Coordinates of the bounding box, a list of floats in the format <code>[xmin, ymin, xmax, ymax]</code>.</li> </ol> </li> </ul>

The visualized image is as follows:
Relevant methods, parameters, and explanations are as follows:

<table border="1"> <thead> <tr> <th>Parameter</th> <th>Description</th> <th>Type</th> <th>Default</th> </tr> </thead> <tbody> <tr> <td><code>model_name</code></td> <td><b>Meaning:</b> Model name. <b>Description:</b> If set to <code>None</code>, <code>PP-DocLayout-L</code> will be used.</td>
<td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>model_dir</code></td> <td><b>Meaning:</b> Model storage path.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>device</code></td> <td><b>Meaning:</b> Device for inference. <b>Description:</b> For example: <code>"cpu"</code>, <code>"gpu"</code>, <code>"npu"</code>, <code>"gpu:0"</code>, <code>"gpu:0,1"</code>.
If multiple devices are specified, parallel inference will be performed.
By default, GPU 0 is used if available; otherwise, CPU is used.
</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine</code></td> <td><b>Meaning:</b> Inference engine. <b>Description:</b> Supports <code>None</code> (the default), <code>paddle</code>, <code>paddle_static</code>, <code>paddle_dynamic</code>, and <code>transformers</code>. When left as <code>None</code>, local inference uses the <code>paddle_static</code> engine by default. For detailed descriptions, supported values, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>str|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>engine_config</code></td> <td><b>Meaning:</b> Inference-engine configuration. <b>Description:</b> Recommended together with <code>engine</code>. For supported fields, compatibility rules, and examples, see <a href="../inference_engine.en.md">Inference Engine and Configuration</a>.</td> <td><code>dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>enable_hpi</code></td> <td><b>Meaning:</b> Whether to enable high-performance inference.</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>use_tensorrt</code></td> <td><b>Meaning:</b> Whether to use the Paddle Inference TensorRT subgraph engine. <b>Description:</b> If the model does not support acceleration through TensorRT, setting this flag will not enable acceleration.
For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6.
</td> <td><code>bool</code></td> <td><code>False</code></td> </tr> <tr> <td><code>precision</code></td> <td><b>Meaning:</b> Computation precision when using the TensorRT subgraph engine in Paddle Inference. <b>Description:</b> Options: <code>"fp32"</code>, <code>"fp16"</code>.</td>
<td><code>str</code></td> <td><code>"fp32"</code></td> </tr> <tr> <td><code>enable_mkldnn</code></td> <td><b>Meaning:</b> Whether to enable MKL-DNN acceleration for inference. <b>Description:</b> If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set.
</td> <td><code>bool</code></td> <td><code>True</code></td> </tr> <tr> <td><code>mkldnn_cache_capacity</code></td> <td><b>Meaning:</b> MKL-DNN cache capacity.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>cpu_threads</code></td> <td><b>Meaning:</b> Number of threads to use for inference on CPUs.</td> <td><code>int</code></td> <td><code>10</code></td> </tr> <tr> <td><code>img_size</code></td> <td><b>Meaning:</b> Input image size. <b>Description:</b>
<ul> <li><b>int</b>: e.g. <code>640</code>, resizes the input image to 640x640.</li> <li><b>list</b>: e.g. <code>[640, 512]</code>, resizes the input image to width 640 and height 512.</li> </ul> </td> <td><code>int|list|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>threshold</code></td> <td><b>Meaning:</b> Threshold for filtering low-confidence predictions. <b>Description:</b>
<ul> <li><b>float</b>: e.g. <code>0.2</code>, filters out all boxes with confidence below 0.2.</li> <li><b>dict</b>: The key is <code>int</code> (class id), the value is <code>float</code> (threshold). For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> means class 0 uses threshold 0.45, class 2 uses 0.48, and class 7 uses 0.4.</li> <li><b>None</b>: uses the model's default configuration.</li> </ul> </td> <td><code>float|dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_nms</code></td> <td><b>Meaning:</b> Whether to use NMS post-processing to filter overlapping boxes. <b>Description:</b>
<ul> <li><b>bool</b>: whether to use NMS post-processing to filter overlapping boxes.</li> <li><b>None</b>: uses the model's default configuration.</li> </ul> </td> <td><code>bool|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_unclip_ratio</code></td> <td><b>Meaning:</b> Scaling factor for the side length of the detection box. <b>Description:</b>
<ul> <li><b>float</b>: a float greater than 0, e.g. <code>1.1</code>, expands width and height by 1.1 times.</li> <li><b>list</b>: e.g. <code>[1.2, 1.5]</code>, expands width by 1.2x and height by 1.5x.</li> <li><b>dict</b>: The key is <code>int</code> (class id), the value is a <code>tuple</code> of two floats (width ratio, height ratio). For example, <code>{0: (1.1, 2.0)}</code> means for class 0, width is expanded by 1.1x and height by 2.0x.</li> <li><b>None</b>: uses the model's default configuration.</li> </ul> </td> <td><code>float|list|dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_merge_bboxes_mode</code></td> <td><b>Meaning:</b> Merge mode for model output bounding boxes. <b>Description:</b>
<ul> <li><b>"large"</b>: Only keep the largest outer box among overlapping boxes, remove inner boxes.</li> <li><b>"small"</b>: Only keep the smallest inner box among overlapping boxes, remove outer boxes.</li> <li><b>"union"</b>: Keep all boxes, no filtering.</li> <li><b>dict</b>: The key is <code>int</code> (class id), the value is <code>str</code> (mode). For example, <code>{0: "large", 2: "small"}</code> means class 0 uses "large" mode, class 2 uses "small" mode.</li> <li><b>None</b>: Use the model's default configuration.</li> </ul> </td> <td><code>str|dict|None</code></td> <td><code>None</code></td> </tr> </tbody> </table><b>Description:</b> Supports multiple input types:<ul>
<li><b>Python Var</b>: e.g., <code>numpy.ndarray</code> representing image data</li> <li><b>str</b>: <ul> <li>Local image or PDF file path: <code>/root/data/img.jpg</code>;</li> <li><b>URL</b> of image or PDF file: e.g., <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/img_rot180_demo.jpg">example</a>;</li> <li><b>Local directory</b>: directory containing images for prediction, e.g., <code>/root/data/</code> (Note: directories containing PDF files are not supported; PDFs must be specified by exact file path)</li> </li> </ul> <li><b>list</b>: Elements must be of the above types, e.g., <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li> </ul> </td> <td><code>Python Var|str|list</code></td> <td></td> </tr> <tr> <td><code>batch_size</code></td> <td><b>Meaning:</b>Batch size.<b>Description:</b> positive integer.</td>
<td><code>int</code></td> <td><code>1</code></td> </tr> <tr> <td><code>threshold</code></td> <td><b>Meaning:</b> Same as the instantiation parameter. <b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>float|dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_nms</code></td> <td><b>Meaning:</b> Same as the instantiation parameter. <b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>bool|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_unclip_ratio</code></td> <td><b>Meaning:</b> Same as the instantiation parameter. <b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>float|list|dict|None</code></td> <td><code>None</code></td> </tr> <tr> <td><code>layout_merge_bboxes_mode</code></td> <td><b>Meaning:</b> Same as the instantiation parameter. <b>Description:</b> If set to <code>None</code>, the instantiation value is used; otherwise, this parameter takes precedence.</td>
<td><code>str|dict|None</code></td> <td><code>None</code></td> </tr> </tbody> </table>

Since PaddleOCR does not directly provide training for the layout detection module, if you need to train a layout detection model, refer to the PaddleX Layout Detection Module Secondary Development section for training. The trained model can be seamlessly integrated into PaddleOCR's API for inference.
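The per-class dictionary forms of <code>threshold</code>, <code>layout_unclip_ratio</code>, and <code>layout_merge_bboxes_mode</code> can be combined in a single <code>predict()</code> call. A minimal sketch; the class ids and values below are illustrative and taken directly from the examples in the tables above:

```python
from paddleocr import LayoutDetection

model = LayoutDetection(model_name="PP-DocLayout_plus-L")

# Per-class post-processing; values mirror the examples in the parameter tables
output = model.predict(
    "layout.jpg",
    batch_size=1,
    threshold={0: 0.45, 2: 0.48, 7: 0.4},               # per-class confidence thresholds
    layout_unclip_ratio={0: (1.1, 2.0)},                # expand class-0 boxes: width x1.1, height x2.0
    layout_merge_bboxes_mode={0: "large", 2: "small"},  # keep outer boxes for class 0, inner for class 2
)
for res in output:
    res.print()
```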
If you want to use a trained model with the <code>paddle_dynamic</code> or <code>transformers</code> engine, refer to the Weight Conversion part of the Inference Engine section later in this document to convert the model from the <code>pdparams</code> format to the <code>safetensors</code> format using PaddleX.
For detailed descriptions, values, compatibility rules, and examples of the inference engine, please refer to <a href="../inference_engine.en.md">Inference Engine and Configuration Description</a>.
<strong>Test Environment Description:</strong>
<ul> <li><strong>Test Data:</strong> <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg">Sample Image</a></li> <li><strong>Hardware Configuration:</strong> <ul> <li>GPU: NVIDIA A100 40G</li> <li>CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz</li> </ul> </li> <li><strong>Software Environment:</strong> <ul> <li>Ubuntu 22.04 / CUDA 12.6 / cuDNN 9.5</li> <li>paddlepaddle-gpu 3.2.1 / paddleocr 3.5 / transformers 5.4.0 / torch 2.10</li> </ul> </li> </ul>

When using the inference engine, the system automatically downloads the official pre-trained model. If you need to use a self-trained model with the <code>paddle_dynamic</code> or <code>transformers</code> engine, refer to the PaddleX Layout Detection Module Weight Conversion section to convert the model from the <code>pdparams</code> format to the <code>safetensors</code> format using PaddleX. This allows seamless integration into the PaddleOCR API for inference.