# EoMT-DINOv3
This model was released on 2025-09-09 and added to Hugging Face Transformers on 2026-02-01.
The EoMT-DINOv3 family extends the Encoder-only Mask Transformer architecture with Vision Transformers that are pre-trained using DINOv3. The update delivers stronger segmentation quality across ADE20K and COCO benchmarks while preserving the encoder-only design that made EoMT attractive for real-time applications.
Compared to the DINOv2-based models, the DINOv3 variants leverage rotary position embeddings, optional gated MLP blocks and the latest pre-training recipes from Meta AI. These changes yield measurable performance gains across semantic, instance and panoptic segmentation tasks, as highlighted in the DINOv3 model zoo.
The original EoMT architecture was introduced in the CVPR 2025 Highlight paper *Your ViT is Secretly an Image Segmentation Model* by Tommie Kerssies, Niccolò Cavagnero, Alexander Hermans, Narges Norouzi, Giuseppe Averta, Bastian Leibe, Gijs Dubbelman and Daan de Geus. The DINOv3 upgrade keeps the same lightweight segmentation head and query-based inference strategy while swapping the encoder for DINOv3 ViT checkpoints.
Tips:

- The configuration exposes the DINOv3-specific attributes `rope_theta` and `use_gated_mlp`. Large DINOv3 backbones such as `dinov3-vitg14` expect `use_gated_mlp=True`.
- Images can be prepared for the model with [`AutoImageProcessor`].

This model was contributed by nielsr. The original code can be found here.
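As a minimal sketch of how these options surface in code, the snippet below passes them as constructor arguments to `EomtDinov3Config`. The argument names mirror the attributes above, but the values are illustrative, not taken from a released checkpoint:

```python
from transformers import EomtDinov3Config, EomtDinov3ForUniversalSegmentation

# Illustrative values: enable the gated MLP blocks that large DINOv3
# backbones expect, and set the rotary position embedding base frequency.
config = EomtDinov3Config(use_gated_mlp=True, rope_theta=100.0)
model = EomtDinov3ForUniversalSegmentation(config)  # randomly initialized weights
```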
Below is a minimal example showing how to run panoptic segmentation with a DINOv3-backed EoMT model. The same image processor can be reused for semantic or instance segmentation simply by swapping the checkpoint.
```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, AutoModelForUniversalSegmentation

model_id = "tue-mps/eomt-dinov3-coco-panoptic-base-640"
processor = AutoImageProcessor.from_pretrained(model_id)
# device_map="auto" already places the model on the available device,
# so no extra .to(...) call is needed
model = AutoModelForUniversalSegmentation.from_pretrained(model_id, device_map="auto")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(model.device)

with torch.inference_mode():
    outputs = model(**inputs)

# target_sizes expects (height, width), hence the reversed PIL size
segmentation = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
list(segmentation.keys())
# ['segmentation', 'segments_info']
```
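As noted above, the same processor API covers the other tasks. Below is a hedged sketch of the semantic variant: the checkpoint name is hypothetical and only illustrates swapping the checkpoint, while `post_process_semantic_segmentation` is the standard counterpart to the panoptic call:

```python
# Hypothetical checkpoint name, shown only to illustrate swapping the task
semantic_id = "tue-mps/eomt-dinov3-ade20k-semantic-base-512"
processor = AutoImageProcessor.from_pretrained(semantic_id)
model = AutoModelForUniversalSegmentation.from_pretrained(semantic_id, device_map="auto")

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model(**inputs)

# Returns one (height, width) label map per image instead of segments_info
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```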
## EomtDinov3Config

[[autodoc]] EomtDinov3Config

## EomtDinov3PreTrainedModel

[[autodoc]] EomtDinov3PreTrainedModel
    - forward

## EomtDinov3ForUniversalSegmentation

[[autodoc]] EomtDinov3ForUniversalSegmentation