Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection (2405.10300v2)
Abstract: This paper introduces Grounding DINO 1.5, a suite of advanced open-set object detection models developed by IDEA Research that aims to advance the "Edge" of open-set object detection. The suite comprises two models: Grounding DINO 1.5 Pro, a high-performance model designed for stronger generalization across a wide range of scenarios, and Grounding DINO 1.5 Edge, an efficient model optimized for the fast inference speed demanded by edge-deployment applications. Grounding DINO 1.5 Pro advances its predecessor by scaling up the model architecture, integrating an enhanced vision backbone, and expanding the training dataset to over 20 million images with grounding annotations, thereby achieving richer semantic understanding. Grounding DINO 1.5 Edge, while designed for efficiency with reduced feature scales, maintains robust detection capability by being trained on the same comprehensive dataset. Empirical results demonstrate the effectiveness of Grounding DINO 1.5: the Pro model attains 54.3 AP on the COCO detection benchmark and 55.7 AP on the LVIS-minival zero-shot transfer benchmark, setting new records for open-set object detection, while the Edge model, when optimized with TensorRT, reaches 75.2 FPS with a zero-shot performance of 36.2 AP on LVIS-minival, making it well suited to edge computing scenarios. Model demos and the API will be released at https://github.com/IDEA-Research/Grounding-DINO-1.5-API
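The defining trait of an open-set detector described above is that detection is steered by a free-form text prompt: the model returns boxes whose matched phrase appears in the prompt and whose confidence clears a threshold. The sketch below illustrates that interface shape only; it is a self-contained toy with hypothetical names (`Detection`, `filter_detections`, the toy predictions), not the actual Grounding DINO 1.5 API.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                        # phrase from the text prompt matched to this box
    score: float                      # confidence in [0, 1]
    box: Tuple[int, int, int, int]    # (x0, y0, x1, y1) in pixels

def filter_detections(detections: List[Detection],
                      prompt_phrases: List[str],
                      score_threshold: float = 0.3) -> List[Detection]:
    """Keep boxes whose matched phrase is in the prompt and whose
    confidence is at or above the threshold."""
    wanted = {p.lower() for p in prompt_phrases}
    return [d for d in detections
            if d.label.lower() in wanted and d.score >= score_threshold]

# Toy outputs standing in for model predictions on one image.
preds = [
    Detection("cat", 0.82, (10, 20, 120, 200)),
    Detection("dog", 0.25, (30, 40, 90, 150)),    # below threshold
    Detection("car", 0.91, (200, 50, 400, 220)),  # not in the prompt
]
kept = filter_detections(preds, ["cat", "dog"], score_threshold=0.3)
print([d.label for d in kept])  # only "cat" survives both checks
```

The point of the sketch is the contract, not the model: unlike a closed-set detector with a fixed label space, the set of detectable categories here is determined at inference time by the prompt phrases.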
Authors: Tianhe Ren, Qing Jiang, Shilong Liu, Zhaoyang Zeng, Wenlong Liu, Han Gao, Hongjie Huang, Zhengyu Ma, Xiaoke Jiang, Yihao Chen, Yuda Xiong, Hao Zhang, Feng Li, Peijun Tang, Kent Yu, Lei Zhang