<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2020-10-08 and added to Hugging Face Transformers on 2022-09-14.


Deformable DETR

Deformable DETR improves on the original DETR by replacing standard attention with a deformable attention module, which attends to only a small set of key sampling points around a reference point. This addresses DETR's slow convergence and limited feature spatial resolution, speeding up training and improving accuracy, particularly on small objects.

<small> Deformable DETR architecture. Taken from the <a href="https://huggingface.co/papers/2010.04159">original paper</a>.</small>
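
Concretely, for a query feature `z_q` with reference point `p_q` over an input feature map `x`, the paper defines (single-scale) deformable attention as

$$
\mathrm{DeformAttn}(z_q, p_q, x) = \sum_{m=1}^{M} W_m \left[ \sum_{k=1}^{K} A_{mqk} \cdot W'_m \, x(p_q + \Delta p_{mqk}) \right]
$$

where `M` is the number of attention heads, `K` is the small number of sampled keys per head, and both the sampling offsets `Δp_mqk` and the attention weights `A_mqk` are predicted from `z_q` with linear projections. Because `K` is much smaller than the number of spatial positions, the cost no longer scales with the size of the feature maps.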

You can find all the available Deformable DETR checkpoints under the SenseTime organization.

[!TIP] This model was contributed by nielsr.

Click on the Deformable DETR models in the right sidebar for more examples of how to apply Deformable DETR to different object detection and segmentation tasks.

The examples below demonstrate how to perform object detection with the [Pipeline] API and the [AutoModel] class.

<hfoptions id="usage"> <hfoption id="Pipeline">
```python

from transformers import pipeline


pipeline = pipeline(
    "object-detection",
    model="SenseTime/deformable-detr",
    device_map=0
)

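# run detection on a COCO sample image; returns a list of dicts with score, label, and box keys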
pipeline("http://images.cocodataset.org/val2017/000000039769.jpg")
```
</hfoption> <hfoption id="AutoModel">
```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, AutoModelForObjectDetection


url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = AutoModelForObjectDetection.from_pretrained("SenseTime/deformable-detr", device_map="auto")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

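# target_sizes expects (height, width); PIL's image.size is (width, height), hence the [::-1]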
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
</hfoption> </hfoptions>
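
To inspect the predictions visually, you can draw the post-processed boxes back onto the image, for example with PIL's `ImageDraw`. A minimal sketch that continues from the AutoModel example above:

```python
from PIL import ImageDraw

# draw each detected box and its label onto a copy of the input image
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        xmin, ymin, xmax, ymax = box.tolist()
        draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=2)
        draw.text((xmin, ymin), f"{model.config.id2label[label_id.item()]}: {score.item():.2f}", fill="red")
annotated.save("detections.png")
```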

Resources

  • Refer to this set of notebooks for examples of running inference with and fine-tuning [DeformableDetrForObjectDetection] on a custom dataset.

DeformableDetrImageProcessor

[[autodoc]] DeformableDetrImageProcessor - preprocess - post_process_object_detection

DeformableDetrImageProcessorFast

[[autodoc]] DeformableDetrImageProcessorFast - preprocess - post_process_object_detection

DeformableDetrConfig

[[autodoc]] DeformableDetrConfig
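
As with other models in the library, you can instantiate a randomly initialized model from a custom configuration. A minimal sketch (the values shown are just examples; every field left unset keeps its default):

```python
from transformers import DeformableDetrConfig, DeformableDetrModel

# example overrides; all other fields keep their defaults
config = DeformableDetrConfig(num_queries=100, with_box_refine=True)
model = DeformableDetrModel(config)  # randomly initialized, not pretrained
```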

DeformableDetrModel

[[autodoc]] DeformableDetrModel - forward

DeformableDetrForObjectDetection

[[autodoc]] DeformableDetrForObjectDetection - forward