<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

This model was released on 2023-04-14 and added to Hugging Face Transformers on 2023-07-18.


# DINOv2

DINOv2 is a vision foundation model that uses ViT as a feature extractor for multiple downstream tasks like image classification and depth estimation. It focuses on stabilizing and accelerating training through techniques such as faster, memory-efficient attention, sequence packing, improved stochastic depth, Fully Sharded Data Parallel (FSDP), and model distillation.

You can find all the original DINOv2 checkpoints under the Dinov2 collection.

> [!TIP]
> Click on the DINOv2 models in the right sidebar for more examples of how to apply DINOv2 to different vision tasks.

The example below demonstrates how to classify an image with [Pipeline] or the [AutoModel] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline


pipe = pipeline(
    task="image-classification",
    model="facebook/dinov2-small-imagenet1k-1-layer",
    device=0
)

pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

</hfoption>
<hfoption id="AutoModel">

```python
import requests
from PIL import Image

from transformers import AutoImageProcessor, AutoModelForImageClassification


url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-small-imagenet1k-1-layer")
model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-small-imagenet1k-1-layer",
    device_map="auto",
    attn_implementation="sdpa"
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
logits = model(**inputs).logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

</hfoption>
</hfoptions>
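
DINOv2 also works as a plain feature extractor. The following is a minimal sketch, assuming the facebook/dinov2-base checkpoint, that uses [AutoModel] to produce a single pooled embedding per image (the final-layer CLS token) for similarity search or retrieval.

```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, AutoModel


url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base", device_map="auto")

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

# pooler_output is the final-layer CLS token, one vector per image
embedding = outputs.pooler_output  # shape: [1, 768]
print(embedding.shape)
```

Embeddings from different images can then be compared with torch.nn.functional.cosine_similarity.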

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses torchao to quantize only the weights to int4.

```python
# pip install torchao
import requests
from PIL import Image
from torchao.quantization import Int4WeightOnlyConfig

from transformers import AutoImageProcessor, AutoModelForImageClassification, TorchAoConfig


url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-giant-imagenet1k-1-layer')

quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)

model = AutoModelForImageClassification.from_pretrained(
    'facebook/dinov2-giant-imagenet1k-1-layer',
    device_map="auto",
    quantization_config=quantization_config
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])

## Notes

- The example below shows how to split the output tensor into:

  - one embedding for the whole image, commonly referred to as a CLS token, useful for classification and retrieval
  - a set of local embeddings, one for each 14x14 patch of the input image, useful for dense tasks such as semantic segmentation (see the upsampling sketch after this list)
    ```py
    from transformers import AutoImageProcessor, AutoModel
    from PIL import Image
    import requests
    
    url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
    image = Image.open(requests.get(url, stream=True).raw)
    print(image.height, image.width)  # [480, 640]
    
    processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
    model = AutoModel.from_pretrained('facebook/dinov2-base', device_map="auto")
    patch_size = model.config.patch_size
    
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    print(inputs.pixel_values.shape)  # [1, 3, 224, 224]
    batch_size, rgb, img_height, img_width = inputs.pixel_values.shape
    num_patches_height, num_patches_width = img_height // patch_size, img_width // patch_size
    num_patches_flat = num_patches_height * num_patches_width
    
    outputs = model(**inputs)
    last_hidden_states = outputs[0]
    print(last_hidden_states.shape)  # [1, 1 + 256, 768]
    assert last_hidden_states.shape == (batch_size, 1 + num_patches_flat, model.config.hidden_size)
    
    cls_token = last_hidden_states[:, 0, :]
    patch_features = last_hidden_states[:, 1:, :].unflatten(1, (num_patches_height, num_patches_width))
    ```
    
- Use torch.jit.trace to speed up inference. The traced model's outputs differ slightly from the original model's, with a maximum absolute difference on the order of 1e-4.

    ```py
    import torch
    from transformers import AutoImageProcessor, AutoModel
    from PIL import Image
    import requests
    
    url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
    image = Image.open(requests.get(url, stream=True).raw)
    
    processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
    model = AutoModel.from_pretrained('facebook/dinov2-base', device_map="auto")
    
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    outputs = model(**inputs)
    last_hidden_states = outputs[0]
    
    # We have to force return_dict=False for tracing
    model.config.return_dict = False
    
    with torch.no_grad():
        traced_model = torch.jit.trace(model, [inputs.pixel_values])
        traced_outputs = traced_model(inputs.pixel_values)
    
    print((last_hidden_states - traced_outputs[0]).abs().max())
    ```
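
The patch features from the first note above can feed dense prediction heads directly. A minimal sketch, assuming facebook/dinov2-base, that reshapes the patch embeddings into a [batch, channels, height, width] feature map and bilinearly upsamples it to the preprocessed image resolution:

```py
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, AutoModel


url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base", device_map="auto")

inputs = processor(images=image, return_tensors="pt").to(model.device)
batch_size, _, img_height, img_width = inputs.pixel_values.shape
num_patches_height = img_height // model.config.patch_size
num_patches_width = img_width // model.config.patch_size

with torch.no_grad():
    last_hidden_states = model(**inputs).last_hidden_state

# drop the CLS token and arrange the patch embeddings as a [batch, channels, height, width] map
patch_features = last_hidden_states[:, 1:, :].unflatten(1, (num_patches_height, num_patches_width))
feature_map = patch_features.permute(0, 3, 1, 2)  # [1, 768, 16, 16]

# upsample to the preprocessed image resolution for pixel-level predictions
upsampled = torch.nn.functional.interpolate(
    feature_map, size=(img_height, img_width), mode="bilinear", align_corners=False
)
print(upsampled.shape)  # [1, 768, 224, 224]
```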
    

## Dinov2Config

[[autodoc]] Dinov2Config

## Dinov2Model

[[autodoc]] Dinov2Model
    - forward

## Dinov2ForImageClassification

[[autodoc]] Dinov2ForImageClassification
    - forward