# PP-Chart2Table
This model was released on 2025-05-20 and added to Hugging Face Transformers on 2026-03-20.
PP-Chart2Table is a SOTA multimodal model developed by the PaddlePaddle team, specializing in chart parsing for both Chinese and English. Its high performance is driven by a novel "Shuffled Chart Data Retrieval" training task which, combined with a refined token masking strategy, significantly improves its efficiency in converting charts to data tables. The model is further strengthened by an advanced data synthesis pipeline that uses high-quality seed data, RAG, and LLM persona design to create a richer, more diverse training set. To address the challenge of large-scale unlabeled, out-of-distribution (OOD) data, the team implemented a two-stage distillation process, ensuring robust adaptability and generalization on real-world data.
PP-Chart2Table adopts a multimodal fusion architecture that combines a vision tower for chart feature extraction and a language model for table structure generation, enabling end-to-end chart-to-table conversion.
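As an illustration of this fusion pattern (a toy sketch, not PP-Chart2Table's actual implementation; all dimensions and names below are made up), the vision tower's patch features are projected into the language model's embedding space and prepended to the text token embeddings, giving the language model a single sequence to attend over:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, for illustration only (not the model's real sizes)
num_patches, vision_dim = 16, 32   # vision tower output: one feature per image patch
seq_len, hidden_dim = 8, 24        # language model: text token embeddings

# Pretend outputs of the vision tower and the text embedding layer
image_features = rng.standard_normal((num_patches, vision_dim))
text_embeds = rng.standard_normal((seq_len, hidden_dim))

# A linear projector maps vision features into the LM's embedding space
projector = rng.standard_normal((vision_dim, hidden_dim))
projected_image = image_features @ projector  # (num_patches, hidden_dim)

# Fusion: prepend projected image features to the text embeddings so the
# language model can attend to the chart while generating the table
fused_sequence = np.concatenate([projected_image, text_embeds], axis=0)
print(fused_sequence.shape)  # (24, 24): 16 image positions + 8 text positions
```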
The example below demonstrates how to convert a chart to a table with PP-Chart2Table using the [`Pipeline`] or the [`AutoModel`] class.
```py
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="PaddlePaddle/PP-Chart2Table_safetensors")

# PPChart2TableProcessor uses a hardcoded "Chart to table" instruction internally via the chat template
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/chart_parsing_02.png",
            },
        ],
    },
]

result = pipe(text=conversation)
print(result[0]["generated_text"])
```
```py
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "PaddlePaddle/PP-Chart2Table_safetensors"
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_path)

# PPChart2TableProcessor uses a hardcoded "Chart to table" instruction internally via the chat template
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/chart_parsing_02.png",
            },
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    truncation=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=256)
generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
result = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(result)
```
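The decoded result is a flat text rendering of the table. As a minimal post-processing sketch (assuming, which is not documented behavior, that rows are newline-separated and cells are `|`-separated; adjust the delimiters to match the model's actual output), the string can be split into rows of cells:

```python
def parse_table(text: str) -> list[list[str]]:
    """Split a flat table string into rows of stripped cells.

    Assumes rows are newline-separated and cells are '|'-separated;
    change the delimiters if the model's output format differs.
    """
    rows = []
    for line in text.strip().splitlines():
        cells = [cell.strip() for cell in line.split("|")]
        rows.append(cells)
    return rows

# Hypothetical model output, for illustration only
sample = "Year | Sales\n2023 | 120\n2024 | 150"
print(parse_table(sample))
# [['Year', 'Sales'], ['2023', '120'], ['2024', '150']]
```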
Batched inference works the same way with either the [`Pipeline`] or the [`AutoModel`] class; pass a list of conversations:
```py
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="PaddlePaddle/PP-Chart2Table_safetensors")

# PPChart2TableProcessor uses a hardcoded "Chart to table" instruction internally via the chat template
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/chart_parsing_02.png",
            },
        ],
    },
]

result = pipe(text=[conversation, conversation])
print(result[0][0]["generated_text"])
```
```py
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "PaddlePaddle/PP-Chart2Table_safetensors"
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_path)

# PPChart2TableProcessor uses a hardcoded "Chart to table" instruction internally via the chat template
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/chart_parsing_02.png",
            },
        ],
    },
]

batch_conversation = [conversation, conversation]
inputs = processor.apply_chat_template(
    batch_conversation,
    tokenize=True,
    add_generation_prompt=True,
    truncation=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=256)
generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
result = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(result)
```
## PPChart2TableConfig

[[autodoc]] PPChart2TableConfig

## PPChart2TableImageProcessor

[[autodoc]] PPChart2TableImageProcessor

## PPChart2TableImageProcessorPil

[[autodoc]] PPChart2TableImageProcessorPil

## PPChart2TableProcessor

[[autodoc]] PPChart2TableProcessor