
# DAB-DETR

> [DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR](https://openreview.net/forum?id=oMI9PjOb9Jl)

<!-- [ALGORITHM] -->

## Abstract

We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR. This new formulation directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer-by-layer. Using box coordinates not only helps use explicit positional priors to improve the query-to-feature similarity and eliminate the slow training convergence issue in DETR, but also allows us to modulate the positional attention map using the box width and height information. Such a design makes it clear that queries in DETR can be implemented as performing soft ROI pooling layer-by-layer in a cascade manner. As a result, it leads to the best performance on the MS-COCO benchmark among DETR-like detection models under the same setting, e.g., AP 45.7% using ResNet50-DC5 as the backbone trained for 50 epochs. We also conducted extensive experiments to confirm our analysis and verify the effectiveness of our methods.
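
To make the query formulation above concrete, below is a minimal, self-contained PyTorch sketch of the idea: each query carries an explicit anchor box (cx, cy, w, h), the box is encoded into a sinusoidal positional query, and every decoder layer refines the box before handing it to the next layer. This is an illustrative sketch under assumed shapes and module names, not the MMDetection implementation (which, among other things, also modulates the positional attention map by box width and height).

```python
# Minimal sketch of dynamic anchor-box queries (NOT the MMDetection code).
# Shapes, module names, and the simplified attention are assumptions.
import torch
import torch.nn as nn


def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(min=eps, max=1 - eps)
    return torch.log(x / (1 - x))


def box_to_pos_embed(boxes, num_feats=128, temperature=10000):
    """Encode normalized (cx, cy, w, h) anchor boxes as sinusoidal positional queries."""
    scale = 2 * torch.pi
    dim_t = torch.arange(num_feats, dtype=torch.float32, device=boxes.device)
    dim_t = temperature ** (2 * (dim_t // 2) / num_feats)
    pos = boxes.unsqueeze(-1) * scale / dim_t                       # (..., 4, num_feats)
    pos = torch.stack((pos[..., 0::2].sin(), pos[..., 1::2].cos()), dim=-1)
    return pos.flatten(-3)                                          # (..., 4 * num_feats)


class ToyDecoderLayer(nn.Module):
    """One decoder layer that refines its anchor boxes layer-by-layer, DAB-DETR style."""

    def __init__(self, embed_dim=256):
        super().__init__()
        self.pos_proj = nn.Linear(4 * 128, embed_dim)               # box -> positional query
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
        self.box_head = nn.Linear(embed_dim, 4)                     # predicts box deltas

    def forward(self, query, memory, boxes):
        pos_query = self.pos_proj(box_to_pos_embed(boxes))          # explicit positional prior
        query, _ = self.attn(query + pos_query, memory, memory)
        # Layer-by-layer anchor update, performed in inverse-sigmoid space.
        boxes = (inverse_sigmoid(boxes) + self.box_head(query)).sigmoid()
        return query, boxes


# Usage: 300 anchor-box queries refined over a stack of 6 decoder layers.
num_query, embed_dim = 300, 256
boxes = torch.rand(1, num_query, 4)                                 # normalized (cx, cy, w, h)
query = torch.zeros(1, num_query, embed_dim)
memory = torch.rand(1, 1000, embed_dim)                             # flattened encoder features
for layer in [ToyDecoderLayer(embed_dim) for _ in range(6)]:
    query, boxes = layer(query, memory, boxes)
```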


## Results and Models

We provide the config files and models for [DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR](https://openreview.net/forum?id=oMI9PjOb9Jl).

| Backbone |  Model   | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download     |
| :------: | :------: | :-----: | :------: | :------------: | :----: | :----: | :----------: |
|   R-50   | DAB-DETR |   50e   |          |                |  42.3  | config | model \| log |
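
To try the released model, a single-image inference call with the MMDetection Python API looks like the sketch below. The config filename and the local checkpoint path are assumptions for illustration; substitute the config and checkpoint you actually downloaded from the table above.

```python
# Hedged example: single-image inference with the MMDetection Python API.
# The config name and checkpoint path are placeholders, not guaranteed paths.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/dab_detr/dab-detr_r50_8xb2-50e_coco.py'  # assumed config name
checkpoint_file = 'dab-detr_r50_coco.pth'                       # hypothetical local checkpoint

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')             # demo image shipped with mmdet
```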

## Citation

```latex
@inproceedings{
  liu2022dabdetr,
  title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},
  author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=oMI9PjOb9Jl}
}
```