
Dense Distinct Query for End-to-End Object Detection (2303.12776v2)

Published 22 Mar 2023 in cs.CV

Abstract: One-to-one label assignment in object detection has successfully obviated the need for non-maximum suppression (NMS) as postprocessing and makes the pipeline end-to-end. However, it triggers a new dilemma as the widely used sparse queries cannot guarantee a high recall, while dense queries inevitably bring more similar queries and encounter optimization difficulties. As both sparse and dense queries are problematic, then what are the expected queries in end-to-end object detection? This paper shows that the solution should be Dense Distinct Queries (DDQ). Concretely, we first lay dense queries like traditional detectors and then select distinct ones for one-to-one assignments. DDQ blends the advantages of traditional and recent end-to-end detectors and significantly improves the performance of various detectors including FCN, R-CNN, and DETRs. Most impressively, DDQ-DETR achieves 52.1 AP on MS-COCO dataset within 12 epochs using a ResNet-50 backbone, outperforming all existing detectors in the same setting. DDQ also shares the benefit of end-to-end detectors in crowded scenes and achieves 93.8 AP on CrowdHuman. We hope DDQ can inspire researchers to consider the complementarity between traditional methods and end-to-end detectors. The source code can be found at \url{https://github.com/jshilong/DDQ}.


Summary

  • The paper introduces the DDQ method to balance query density and distinctiveness in end-to-end object detection.
  • It demonstrates improved performance with 52.1 AP on COCO and enhanced recall in crowded scenes using DDQ-DETR.
  • The method employs pyramid shuffle, class-agnostic NMS, and auxiliary loss to stabilize and optimize training.

Analysis of Dense Distinct Queries for End-to-End Object Detection

This paper presents a nuanced approach to object detection by introducing the concept of Dense Distinct Queries (DDQ) for end-to-end object detection models. The authors aim to address the limitations of sparse and dense queries in one-to-one label assignments by proposing a novel query formulation that balances the need for both density and distinctiveness.

Context and Methodology

Object detection models strive to detect and delineate objects in images accurately. Traditional methods rely on dense queries, leading to redundant predictions that necessitate non-maximum suppression (NMS). End-to-end detectors, such as DETR, attempt to simplify this pipeline by using sparse queries and one-to-one label assignments, eliminating NMS but introducing new challenges, such as low recall due to sparse queries and optimization difficulties.

The paper proposes Dense Distinct Queries (DDQ) as a solution. DDQ integrates dense query distribution typical of traditional detectors with the distinctiveness required for effective optimization in end-to-end models. This approach effectively enhances recall while maintaining query distinctiveness, addressing the inherent challenges faced by both traditional and more recent models.
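The dense-then-distinct idea can be illustrated with a small sketch (not the authors' code): score a dense set of candidate queries, then run class-agnostic NMS so that only mutually distinct queries survive for one-to-one assignment. Function names, the IoU threshold, and `top_k` here are illustrative choices, not values from the paper.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_distinct_queries(boxes, scores, iou_thr=0.7, top_k=300):
    """Class-agnostic NMS over dense queries: greedily keep the
    highest-scoring query and drop any remaining query that overlaps
    it beyond iou_thr, until top_k distinct queries are selected."""
    order = np.argsort(-scores)          # dense queries, best first
    keep = []
    while order.size > 0 and len(keep) < top_k:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return np.array(keep)
```

Because the filter ignores class labels, two near-identical boxes are treated as duplicates even if they would receive different class scores, which is what keeps the surviving queries distinct for one-to-one matching.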

Key Results

The introduction of DDQ improved performance across multiple detector architectures, including FCN, R-CNN, and DETRs:

  • DDQ-DETR achieved 52.1 AP on the MS-COCO dataset within 12 epochs using a ResNet-50 backbone, outperforming all existing detectors under the same training setting.
  • On the crowded-scene CrowdHuman dataset, DDQ achieved 93.8 AP and a recall of 98.7%.
  • The paper also demonstrates the robustness of the DDQ approach by validating it across distinct model architectures and datasets.

Technical Evaluation

The authors employ several refinements to implement DDQ effectively:

  • Pyramid Shuffle Operation for FCN structures to ensure cross-level interaction among dense queries, improving training stability and performance.
  • The application of class-agnostic NMS during both training and inference to enforce distinctiveness among selected queries, unlike conventional class-aware NMS, which is applied only as inference-time post-processing.
  • Inclusion of auxiliary loss for dense queries to harness the potential of the filtered queries and stabilize training.
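The cross-level interaction idea behind the pyramid shuffle can be sketched as follows. This is a minimal illustration under the assumption that the operation exchanges a subset of channels between neighbouring pyramid levels after resizing them to a common resolution; the exact operator in the paper may differ, and `nn_resize` and the half-channel split are illustrative choices.

```python
import numpy as np

def nn_resize(x, h, w):
    """Nearest-neighbour resize of a (C, H, W) feature map to (C, h, w)."""
    _, H, W = x.shape
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    return x[:, rows][:, :, cols]

def pyramid_shuffle(feats):
    """For each pyramid level, keep the first half of its channels and
    replace the second half with channels from a neighbouring level,
    resized to this level's resolution, so dense queries at every
    level see cross-level context."""
    out = []
    n = len(feats)
    for i, f in enumerate(feats):
        j = i + 1 if i + 1 < n else i - 1      # neighbouring pyramid level
        c = f.shape[0] // 2
        neighbour = nn_resize(feats[j], f.shape[1], f.shape[2])
        out.append(np.concatenate([f[:c], neighbour[c:]], axis=0))
    return out
```

Each output level keeps its original shape, so the shuffle can be dropped into an FPN-style head without changing downstream layers.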

Implications and Future Work

The results suggest that DDQ effectively blends the benefits of traditional grid-based query systems with the novel end-to-end paradigms established by models like DETR. The concept of maintaining both density and distinctiveness in queries could inspire future explorations in object detection and broader AI application domains. Future work might further explore optimizing the balance between query density and training efficiency or extend the DDQ framework to other kinds of tasks, such as segmentation or tracking.

Conclusion

The proposed Dense Distinct Queries (DDQ) offer a promising direction for advancing object detection models by reconciling the conflicting needs of query density and optimization distinctiveness. This paper not only enhances understanding of query dynamics in object detection but also sets the stage for developing even more efficient and effective detection models in the computer vision domain.
