docs/source/en/model_doc/vit_mae.md
This model was released on 2021-11-11 and added to Hugging Face Transformers on 2022-01-18.
ViTMAE is a self-supervised vision model that is pretrained by masking large portions of an image (~75%). An encoder processes the visible image patches and a decoder reconstructs the missing pixels from the encoded patches and mask tokens. After pretraining, the encoder can be reused for downstream tasks like image classification or object detection — often outperforming models trained with supervised learning.
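The fraction of masked patches is controlled by `mask_ratio` in [`ViTMAEConfig`]. A minimal sketch of instantiating a randomly initialized model with the default 75% masking:

```py
from transformers import ViTMAEConfig, ViTMAEForPreTraining

# mask_ratio is the fraction of patches hidden from the encoder (0.75 by default)
config = ViTMAEConfig(mask_ratio=0.75)
model = ViTMAEForPreTraining(config)  # randomly initialized, ready for pretraining
```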
You can find all the original ViTMAE checkpoints under the [AI at Meta](https://huggingface.co/facebook) organization.
> [!TIP]
> Click on the ViTMAE models in the right sidebar for more examples of how to apply ViTMAE to vision tasks.
The example below demonstrates how to reconstruct the missing pixels with the [`ViTMAEForPreTraining`] class.
```py
import requests
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTMAEForPreTraining

# Load the pretrained encoder-decoder model and its image processor
model = ViTMAEForPreTraining.from_pretrained(
    "facebook/vit-mae-base", attn_implementation="sdpa", device_map="auto"
)
processor = ViTImageProcessor.from_pretrained("facebook/vit-mae-base")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)

# Per-patch pixel predictions for the masked image
reconstruction = outputs.logits
```
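The logits are per-patch pixel predictions rather than a full image. A minimal sketch of folding them back into an image-shaped tensor with the model's `unpatchify` helper (for visualization only; depending on how the checkpoint was trained, the predictions may be normalized per patch):

```py
# Fold (batch, num_patches, patch_size**2 * 3) predictions back into (batch, 3, height, width)
reconstructed_pixels = model.unpatchify(outputs.logits)

# outputs.mask marks which patches were hidden from the encoder (1 = masked, 0 = visible)
print(reconstructed_pixels.shape, outputs.mask.shape)
```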
ViTMAE is typically used in two stages: self-supervised pretraining with [`ViTMAEForPreTraining`], then discarding the decoder and fine-tuning the encoder. After fine-tuning, the encoder weights can be plugged into a model like [`ViTForImageClassification`], as sketched below. Use [`ViTImageProcessor`] for input preparation.
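A minimal sketch of the second stage, assuming the pretrained encoder checkpoint is loaded directly into [`ViTForImageClassification`] (the decoder weights are dropped and the classification head is newly initialized, so a warning about unused and missing weights is expected):

```py
from transformers import ViTForImageClassification

# Reuse the MAE-pretrained encoder; the classifier head starts from random weights
model = ViTForImageClassification.from_pretrained(
    "facebook/vit-mae-base",
    num_labels=2,  # hypothetical number of target classes
)
# Fine-tune on a labeled dataset with Trainer or a custom training loop
```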
[[autodoc]] ViTMAEConfig

[[autodoc]] ViTMAEModel
    - forward
[[autodoc]] transformers.ViTMAEForPreTraining
    - forward