D^2ETR: Decoder-Only DETR with Computationally Efficient Cross-Scale Attention (2203.00860v1)

Published 2 Mar 2022 in cs.CV

Abstract: DETR is the first fully end-to-end detector that predicts a final set of predictions without post-processing. However, it suffers from low performance and slow convergence. A series of works tackle these issues in different ways, but the computational cost remains high due to the sophisticated encoder-decoder architecture. To alleviate this, we propose a decoder-only detector called D^2ETR. In the absence of an encoder, the decoder directly attends to the fine-fused feature maps generated by the Transformer backbone with a novel, computationally efficient cross-scale attention module. D^2ETR demonstrates low computational complexity and high detection accuracy in evaluations on the COCO benchmark, outperforming DETR and its variants.
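
The decoder-only idea from the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: object queries cross-attend directly to fused multi-scale backbone features with no encoder stage in between, and the `CrossScaleFusion` module here (a per-scale linear projection followed by token concatenation) is a hypothetical stand-in for the paper's computationally efficient cross-scale attention.

```python
# Minimal sketch of a decoder-only detector in the spirit of the abstract.
# Assumptions: CrossScaleFusion, the stage dimensions, and all hyperparameters
# are illustrative choices, not the paper's actual module or settings.
import torch
import torch.nn as nn

class CrossScaleFusion(nn.Module):
    """Hypothetical fusion: project each scale to a common width, concat tokens."""
    def __init__(self, in_dims, d_model):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in in_dims])

    def forward(self, feats):
        # feats: list of (B, N_i, C_i) token maps from backbone stages
        return torch.cat([p(f) for p, f in zip(self.proj, feats)], dim=1)

class DecoderOnlyDetector(nn.Module):
    def __init__(self, in_dims=(192, 384, 768), d_model=256,
                 num_queries=100, num_classes=80, num_layers=6):
        super().__init__()
        self.fusion = CrossScaleFusion(in_dims, d_model)
        self.queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.cls_head = nn.Linear(d_model, num_classes + 1)  # +1 for "no object"
        self.box_head = nn.Linear(d_model, 4)                # (cx, cy, w, h)

    def forward(self, feats):
        memory = self.fusion(feats)  # fused cross-scale tokens, no encoder pass
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        hs = self.decoder(q, memory)  # queries attend directly to backbone tokens
        return self.cls_head(hs), self.box_head(hs).sigmoid()

# Toy multi-scale token maps standing in for a Transformer backbone's stages.
feats = [torch.randn(2, n, c) for n, c in ((1024, 192), (256, 384), (64, 768))]
logits, boxes = DecoderOnlyDetector()(feats)
print(logits.shape, boxes.shape)  # torch.Size([2, 100, 81]) torch.Size([2, 100, 4])
```

One way to see the claimed efficiency: dropping the encoder removes self-attention over all N feature tokens (quadratic in N), leaving only the decoder's cross-attention between Q queries and N tokens, which scales with Q·N and Q is typically small (e.g. 100).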

Authors (6)
  1. Junyu Lin (14 papers)
  2. Xiaofeng Mao (35 papers)
  3. Yuefeng Chen (44 papers)
  4. Lei Xu (172 papers)
  5. Yuan He (156 papers)
  6. Hui Xue (109 papers)
Citations (17)
