# Qianfan-OCR
This model was released on 2026-03-18 and added to Hugging Face Transformers on 2026-04-16.
Qianfan-OCR is a 4B-parameter end-to-end document intelligence model developed by the Baidu Qianfan Team. It was proposed in *Qianfan-OCR: A Unified End-to-End Model for Document Intelligence* by Daxiang Dong et al.
Unlike traditional multi-stage OCR pipelines, Qianfan-OCR performs direct image-to-text conversion and supports a broad range of prompt-driven tasks — from structured document parsing and table extraction to chart understanding, document question answering, and key information extraction — all within one model.
The model adopts a multimodal bridging architecture consisting of three components.
A key innovation is Layout-as-Thought: an optional thinking phase triggered by `<think>` tokens, where the model generates structured layout representations (bounding boxes, element types, reading order) before producing final outputs. This is particularly useful for heterogeneous pages with mixed element types (exam papers, technical reports, newspapers).
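The layout representations mentioned above are easiest to picture with a toy example. The sketch below is illustrative only (it is not the model's actual output format): it sorts page elements into a simple top-to-bottom, left-to-right reading order from their bounding boxes, which is the kind of structure the thinking phase makes explicit.

```python
# Illustrative only: a toy reading-order sort over layout elements.
# Each element is (type, (x0, y0, x1, y1)) with the origin at the top-left.
elements = [
    ("table",     (320, 400, 600, 560)),
    ("title",     (40,  20,  600, 80)),
    ("paragraph", (40,  100, 300, 380)),
    ("figure",    (320, 100, 600, 380)),
]

def reading_order(elem, row_height=90):
    """Bucket elements into coarse rows, then sort left-to-right within a row."""
    _, (x0, y0, _, _) = elem
    return (y0 // row_height, x0)

ordered = [etype for etype, _ in sorted(elements, key=reading_order)]
print(ordered)  # ['title', 'paragraph', 'figure', 'table']
```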
The model achieves state-of-the-art results on several benchmarks.
This model was contributed by the Baidu Qianfan Team.
```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model = AutoModelForImageTextToText.from_pretrained("baidu/Qianfan-OCR", device_map="auto")
processor = AutoProcessor.from_pretrained("baidu/Qianfan-OCR")

image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
messages = [{"role": "user", "content": [{"type": "image", "url": image}, {"type": "text", "text": "Parse this document to Markdown."}]}]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
For documents with complex layouts, cluttered elements, or non-standard reading orders, enable thinking mode by setting `enable_thinking=True` in `apply_chat_template`. The model first generates a structured layout analysis (bounding boxes, element types, reading order), then produces the final output.
```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model = AutoModelForImageTextToText.from_pretrained("baidu/Qianfan-OCR", device_map="auto")
processor = AutoProcessor.from_pretrained("baidu/Qianfan-OCR")

image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
messages = [{"role": "user", "content": [{"type": "image", "url": image}, {"type": "text", "text": "Parse this document to Markdown."}]}]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", enable_thinking=True).to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
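With thinking mode enabled, the decoded string contains the layout analysis before the final output. Assuming the trace is delimited by the `<think>`/`</think>` tokens described above (decode with `skip_special_tokens=False` if they are registered as special tokens, or they will be stripped), it can be separated from the answer with plain string handling:

```python
def split_thinking(text, close_tag="</think>"):
    """Split a decoded generation into (thinking_trace, final_output)."""
    head, sep, tail = text.partition(close_tag)
    if not sep:  # no thinking trace present
        return "", text.strip()
    thinking = head.replace("<think>", "", 1).strip()
    return thinking, tail.strip()

decoded = "<think>[0,0,612,80] title; [0,90,612,700] body</think># Report\n..."
thinking, answer = split_thinking(decoded)
print(thinking)  # the layout analysis
print(answer)    # the final Markdown
```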
```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model = AutoModelForImageTextToText.from_pretrained("baidu/Qianfan-OCR", device_map="auto")
processor = AutoProcessor.from_pretrained("baidu/Qianfan-OCR")

image1 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
image2 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/multi_box.png"
messages = [
    [{"role": "user", "content": [{"type": "image", "url": image1}, {"type": "text", "text": "Parse this document to Markdown."}]}],
    [{"role": "user", "content": [{"type": "image", "url": image2}, {"type": "text", "text": "OCR the text in the image."}]}],
]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", padding=True).to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
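The slice `generate_ids[:, inputs["input_ids"].shape[1]:]` works because padding aligns every prompt to the same length, so each row's newly generated tokens start at the same column. A toy illustration with plain lists (assuming left padding, which is what decoder-only generation expects):

```python
PAD = 0
prompts = [[5, 6, 7], [8, 9]]                 # two tokenized prompts
width = max(len(p) for p in prompts)          # common padded length
padded = [[PAD] * (width - len(p)) + p for p in prompts]  # left padding

# Pretend generate() appended two new tokens to each row.
generated = [row + new for row, new in zip(padded, [[11, 12], [13, 14]])]

# Slicing every row at the padded prompt length isolates only the new tokens.
new_tokens = [row[width:] for row in generated]
print(new_tokens)  # [[11, 12], [13, 14]]
```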
## QianfanOCRConfig

[[autodoc]] QianfanOCRConfig

## QianfanOCRVisionConfig

[[autodoc]] QianfanOCRVisionConfig

## QianfanOCRProcessor

[[autodoc]] QianfanOCRProcessor
    - __call__

## QianfanOCRVisionModel

[[autodoc]] QianfanOCRVisionModel
    - forward

## QianfanOCRModel

[[autodoc]] QianfanOCRModel
    - forward

## QianfanOCRForConditionalGeneration

[[autodoc]] QianfanOCRForConditionalGeneration
    - forward