docs/source/en/model_doc/dinov2.md
This model was released on 2023-04-14 and added to Hugging Face Transformers on 2023-07-18.
DINOv2 is a vision foundation model that uses ViT as a feature extractor for multiple downstream tasks like image classification and depth estimation. It focuses on stabilizing and accelerating training through techniques like faster, memory-efficient attention, sequence packing, improved stochastic depth, Fully Sharded Data Parallel (FSDP), and model distillation.
You can find all the original DINOv2 checkpoints under the DINOv2 collection.
> [!TIP]
> Click on the DINOv2 models in the right sidebar for more examples of how to apply DINOv2 to different vision tasks.
The example below demonstrates how to classify an image with [`Pipeline`] or the [`AutoModel`] class.
```py
from transformers import pipeline

pipe = pipeline(
    task="image-classification",
    model="facebook/dinov2-small-imagenet1k-1-layer",
    device=0
)
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```
```py
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-small-imagenet1k-1-layer")
model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-small-imagenet1k-1-layer",
    device_map="auto",
    attn_implementation="sdpa"
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
logits = model(**inputs).logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to quantize only the weights to int4.
```py
# pip install torchao
import requests
from PIL import Image
from torchao.quantization import Int4WeightOnlyConfig
from transformers import AutoImageProcessor, AutoModelForImageClassification, TorchAoConfig

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-giant-imagenet1k-1-layer')

quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)
model = AutoModelForImageClassification.from_pretrained(
    'facebook/dinov2-giant-imagenet1k-1-layer',
    device_map="auto",
    quantization_config=quantization_config
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The example below shows how to split the output tensor into:

- the CLS token, useful for classification and retrieval
- patch tokens, one per 14x14 pixel patch of the input image, useful for dense tasks such as semantic segmentation

```py
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
print(image.height, image.width) # [480, 640]
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base', device_map="auto")
patch_size = model.config.patch_size
inputs = processor(images=image, return_tensors="pt").to(model.device)
print(inputs.pixel_values.shape) # [1, 3, 224, 224]
batch_size, rgb, img_height, img_width = inputs.pixel_values.shape
num_patches_height, num_patches_width = img_height // patch_size, img_width // patch_size
num_patches_flat = num_patches_height * num_patches_width
outputs = model(**inputs)
last_hidden_states = outputs[0]
print(last_hidden_states.shape) # [1, 1 + 256, 768]
assert last_hidden_states.shape == (batch_size, 1 + num_patches_flat, model.config.hidden_size)
cls_token = last_hidden_states[:, 0, :]
patch_features = last_hidden_states[:, 1:, :].unflatten(1, (num_patches_height, num_patches_width))
```
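
To illustrate how the patch tokens can serve a dense task, the sketch below projects them to three channels with a PCA (`torch.pca_lowrank`) and upsamples the patch grid back to the input resolution. The PCA projection and bilinear upsampling are illustrative choices for visualization, not part of DINOv2 itself.

```py
import torch
import torch.nn.functional as F

# project the hidden_size-dim patch features to 3 channels with PCA for visualization
flat = patch_features.reshape(-1, model.config.hidden_size)            # [256, 768]
_, _, v = torch.pca_lowrank(flat, q=3)
projected = (flat @ v).reshape(1, num_patches_height, num_patches_width, 3)

# upsample the 16x16 patch grid back to the 224x224 input resolution
feature_map = F.interpolate(
    projected.permute(0, 3, 1, 2),  # [1, 3, 16, 16]
    size=(img_height, img_width),
    mode="bilinear",
)
print(feature_map.shape)  # [1, 3, 224, 224]
```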
Use torch.jit.trace to speed up inference. Tracing can introduce small numerical mismatches; the maximum absolute difference between the original and traced model outputs is on the order of 1e-4.
```py
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base', device_map="auto")

inputs = processor(images=image, return_tensors="pt").to(model.device)
outputs = model(**inputs)
last_hidden_states = outputs[0]

# We have to force return_dict=False for tracing
model.config.return_dict = False

with torch.no_grad():
    traced_model = torch.jit.trace(model, [inputs.pixel_values])
    traced_outputs = traced_model(inputs.pixel_values)

print((last_hidden_states - traced_outputs[0]).abs().max())
```
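
If you want to reuse the traced module later, it can be serialized with `torch.jit.save` and reloaded with `torch.jit.load` without the original Python class; the file name below is just an example.

```py
# save the traced module and reload it for inference
torch.jit.save(traced_model, "dinov2_base_traced.pt")
reloaded = torch.jit.load("dinov2_base_traced.pt")

with torch.no_grad():
    reloaded_outputs = reloaded(inputs.pixel_values)
print(reloaded_outputs[0].shape)  # same shape as last_hidden_states
```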
[[autodoc]] Dinov2Config

[[autodoc]] Dinov2Model
    - forward

[[autodoc]] Dinov2ForImageClassification
    - forward