Simple Open-Vocabulary Object Detection with Vision Transformers (2205.06230v2)

Published 12 May 2022 in cs.CV

Abstract: Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.

Analysis of "Simple Open-Vocabulary Object Detection with Vision Transformers"

The paper "Simple Open-Vocabulary Object Detection with Vision Transformers" by Minderer et al. presents a refined methodology for transferring large-scale image-text models to the domain of open-vocabulary object detection using Vision Transformers (ViTs). The research bridges the gap between contrastive pre-training and effective open-vocabulary detection by leveraging a standard ViT architecture with minimal adjustments to accommodate object detection tasks.

Core Contributions and Methodological Approach

The central contribution of this paper is the development of a streamlined and efficient recipe that utilizes image-level pre-trained models for open-vocabulary object detection, addressing the complexity and computational demands commonly associated with such processes. The authors achieve this by integrating a Vision Transformer with an end-to-end detection pipeline that includes contrastive image-text pre-training followed by dedicated fine-tuning for detection tasks.

Key features of the approach include adapting the ViT backbone with only minimal additions: lightweight classification and bounding-box heads that decode a prediction directly from each ViT output token. Notably, the design forgoes image-text fusion during the forward pass, so queries can be supplied either as text embeddings (for open-vocabulary classification) or as image embeddings (for one-shot detection). This adaptability is especially advantageous in text-conditioned tasks, where detecting categories never seen during training is critical; a sketch of such a head follows.
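
The following PyTorch sketch illustrates this idea under stated assumptions: the class and member names (OpenVocabDetectionHead, box_head, class_proj) are hypothetical, and the structure is a simplification of the paper's design (whose reference implementation is in JAX), showing only the per-token box and class heads and the query-embedding similarity scoring.

```python
import torch
import torch.nn as nn

class OpenVocabDetectionHead(nn.Module):
    """Lightweight heads applied to every ViT output token (illustrative)."""
    def __init__(self, embed_dim: int):
        super().__init__()
        # Small MLP predicting one box per token, in normalized (cx, cy, w, h).
        self.box_head = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, 4),
        )
        # Projects each token into the shared image-text embedding space.
        self.class_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, tokens: torch.Tensor, queries: torch.Tensor):
        # tokens:  [batch, num_tokens, embed_dim] from the ViT backbone
        # queries: [num_queries, embed_dim], text OR image embeddings
        boxes = self.box_head(tokens).sigmoid()   # [B, T, 4]
        class_emb = self.class_proj(tokens)       # [B, T, D]
        # Score each token's class embedding against every query embedding;
        # no image-text fusion happens inside the backbone itself.
        logits = torch.einsum("btd,qd->btq", class_emb, queries)
        return boxes, logits
```

Because the queries enter only at this final scoring step, swapping text embeddings for image embeddings requires no architectural change.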

Experimental Evaluation and Results

The effectiveness of the proposed methodology is empirically validated through extensive experiments on long-tailed datasets such as LVIS, as well as cross-dataset evaluations on COCO and Objects365. Notably, the model attains 31.2% Average Precision (AP) on unseen LVIS categories, illustrating robust zero-shot generalization. This is achieved without specialized mechanisms such as distillation from region proposals or multi-stage training, positioning the work as a compelling alternative to models like ViLD and GLIP.
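
As a hedged illustration of how such zero-shot evaluation proceeds, the snippet below embeds category names with the pre-trained text encoder and uses them directly as queries. Here text_encoder, vit_backbone, head (an instance of the earlier sketch), image, and the 0.3 threshold are all assumptions for the example, not values from the paper.

```python
# Zero-shot, text-conditioned inference (illustrative usage of the sketch above).
category_names = ["zebra", "fire hydrant", "unicycle"]  # never seen in training
with torch.no_grad():
    queries = text_encoder(category_names)      # [num_classes, embed_dim]
    tokens = vit_backbone(image)                # [1, num_tokens, embed_dim]
    boxes, logits = head(tokens, queries)
    scores, labels = logits.sigmoid().max(dim=-1)   # best query per token
    keep = scores[0] > 0.3                          # arbitrary score threshold
    detections = list(zip(boxes[0][keep].tolist(), labels[0][keep].tolist()))
```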

Moreover, the paper reports a substantial advance in one-shot detection on COCO splits, with relative improvements exceeding 70% in AP50 over prior benchmarks under certain configurations. This demonstrates the model's applicability to objects that are difficult to articulate textually and are more naturally specified by an example image; a sketch of such image-conditioned querying follows.
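
To make the image-conditioned path concrete, the fragment below derives a query embedding from an exemplar image instead of text and reuses the head from the earlier sketch. Mean-pooling the exemplar's class embeddings into a single query is a simplification of the paper's query-selection procedure, and all names remain hypothetical.

```python
# One-shot, image-conditioned inference (illustrative; simplified query pooling).
with torch.no_grad():
    ex_tokens = vit_backbone(exemplar_image)           # [1, T, D]
    ex_class_emb = head.class_proj(ex_tokens)          # [1, T, D]
    query = ex_class_emb[0].mean(dim=0, keepdim=True)  # pool to one query [1, D]
    boxes, logits = head(tokens, query)                # tokens: target image
```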

Implications and Future Directions

The implications of this research span both practical applications and theoretical advancements in computer vision and AI. Practically, the simplified architecture and training paradigms pave the way for more scalable and cost-effective solutions in scenarios with limited object-level annotated datasets. Theoretically, the findings underscore the importance of image-level contrastive representation learning and its transferable benefits to object detection.

This paper could serve as a foundation for future exploration into the optimization of large-scale pre-training regimes and architectural choices that enhance zero-shot and open-vocabulary detection capacity. The clear distinction between image-level and object-level improvements drawn in the paper's scaling analysis could guide subsequent research aiming to close the remaining gap and extend the capabilities of foundation models to complex vision tasks.

In summary, the work by Minderer et al. represents a solid step forward in open-vocabulary object detection, offering an efficient and simplified methodology that could influence both applied AI systems and academic inquiry into scalable detection architectures.

Authors (14)
  1. Matthias Minderer
  2. Alexey Gritsenko
  3. Austin Stone
  4. Maxim Neumann
  5. Dirk Weissenborn
  6. Alexey Dosovitskiy
  7. Aravindh Mahendran
  8. Anurag Arnab
  9. Mostafa Dehghani
  10. Zhuoran Shen
  11. Xiao Wang
  12. Xiaohua Zhai
  13. Thomas Kipf
  14. Neil Houlsby
Citations (268)