Detecting Twenty-thousand Classes using Image-level Supervision

Published 7 Jan 2022 in cs.CV (arXiv:2201.02605v3)

Abstract: Current object detectors are limited in vocabulary size due to the small scale of detection datasets. Image classifiers, on the other hand, reason about much larger vocabularies, as their datasets are larger and easier to collect. We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of thousands of concepts. Unlike prior work, Detic does not need complex assignment schemes to assign image labels to boxes based on model predictions, making it much easier to implement and compatible with a range of detection architectures and backbones. Our results show that Detic yields excellent detectors even for classes without box annotations. It outperforms prior work on both open-vocabulary and long-tail detection benchmarks. Detic provides a gain of 2.4 mAP for all classes and 8.3 mAP for novel classes on the open-vocabulary LVIS benchmark. On the standard LVIS benchmark, Detic obtains 41.7 mAP when evaluated on all classes, or only rare classes, hence closing the gap in performance for object categories with few samples. For the first time, we train a detector with all the twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without finetuning. Code is available at \url{https://github.com/facebookresearch/Detic}.

Citations (520)

Summary

  • The paper introduces Detic, which uses image-level supervision to expand object detector vocabularies without relying on bounding box annotations.
  • It decouples localization and classification by employing a novel classification loss and integrating large-scale datasets like ImageNet-21K.
  • Experimental results show significant benchmark improvements, including an 8.3-point mAP gain for novel classes on the LVIS dataset.

Detecting Twenty-thousand Classes Using Image-level Supervision

The paper "Detecting Twenty-thousand Classes Using Image-level Supervision" introduces Detic, a novel method aimed at extending the vocabulary of object detectors. By training detectors using image classification datasets, Detic addresses the limitations imposed by traditional detection datasets, which are typically smaller and less diverse in vocabulary.

Method Overview

Detic decouples localization from classification. Traditional methods rely on box annotations for both tasks, which constrains vocabulary size to what detection datasets cover. By letting image-level supervision train the classification branch alone, Detic broadens the detector's vocabulary to tens of thousands of concepts without requiring box annotations for those classes.
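The decoupling can be summarized as a routing rule over training data. The sketch below is illustrative only (the function name and batch keys are assumptions, not the official Detic API): box-annotated detection data supervises both branches, while image-labeled classification data updates only the classifier, so image labels never have to be assigned to predicted boxes.

```python
def supervision_for(batch):
    """Return which loss branches a training batch should drive.

    Illustrative sketch of Detic's decoupled supervision, not the
    official implementation.
    """
    if "boxes" in batch:
        # Detection data (e.g. LVIS): boxes supervise localization
        # and classification as in a standard detector.
        return ["box_regression", "classification"]
    # Classification data (e.g. ImageNet-21K): image labels update
    # only the classifier; localization losses are skipped entirely.
    return ["classification"]


# Usage: a box-annotated batch trains both branches,
# an image-labeled batch trains the classifier alone.
det_batch = {"boxes": [[0, 0, 5, 5]], "labels": [3]}
cls_batch = {"image_labels": [7]}
```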

The paper outlines a key innovation: the use of a simple classification loss which bypasses the need for complex assignment strategies that map image labels to bounding boxes. This approach not only simplifies implementation but also enhances compatibility across various detection architectures.

Technical Contributions

  1. Loss Function: Detic's central component is a classification loss that does not depend on model predictions. The max-size loss, for example, applies image-level supervision to the largest proposal rather than to boxes selected by the model's own outputs.
  2. Compatibility and Implementation: Detic’s architecture is compatible with existing detection backbones, facilitating integration into current systems. It effectively utilizes both image-classification datasets like ImageNet-21K and Conceptual Captions for supervision.
  3. Benchmark Performance: Detic outperforms prior techniques on open-vocabulary and long-tail detection benchmarks. Notably, it improves mAP for novel classes by 8.3 points and achieves competitive performance on the LVIS benchmark.
  4. Cross-dataset Generalization: The paper highlights Detic's ability to train on all 21,000 classes in the ImageNet dataset and transfer effectively to new datasets without retraining or fine-tuning.
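The max-size loss named above can be sketched as follows. This is a hedged, self-contained illustration: the function signature, shapes, and the use of a plain binary cross-entropy are assumptions for exposition, not the official Detic code. The key idea is that image labels supervise the single largest proposal, on the assumption that it is most likely to cover the labeled object, so no prediction-based label-to-box assignment is needed.

```python
import math

def max_size_loss(proposals, class_scores, image_labels, num_classes):
    """Sketch of a max-size image-supervision loss (illustrative only).

    proposals    : list of [x1, y1, x2, y2] boxes
    class_scores : per-proposal sigmoid scores, one list per proposal
    image_labels : set of class indices present in the image
    """
    # Select the proposal with the largest area (w * h).
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in proposals]
    biggest = areas.index(max(areas))
    scores = class_scores[biggest]

    # Per-class binary cross-entropy against the image-level label set.
    loss = 0.0
    for c in range(num_classes):
        target = 1.0 if c in image_labels else 0.0
        p = min(max(scores[c], 1e-7), 1.0 - 1e-7)  # clamp for stability
        loss += -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))
    return loss / num_classes
```

Because the supervised box is chosen by size alone, the loss is independent of the model's current predictions, which is what makes the scheme simple to implement across different detection architectures.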

Experimental Results

Detic shows significant improvements across several benchmarks:

  • LVIS Benchmark: Achieves a robust increase in mAP, particularly for rare classes, showcasing Detic’s effectiveness in long-tail scenarios.
  • Open-vocabulary LVIS and COCO Benchmarks: Dramatically enhances detection accuracy for novel classes, outperforming state-of-the-art models like ViLD and OVR-CNN.

Implications and Future Work

Detic presents a scalable solution to expanding the vocabulary of object detectors without the prohibitive costs of wider annotation. It successfully demonstrates that robust detectors can be trained using image-level annotations alone, even for classes not explicitly labeled with bounding boxes. This work opens avenues for deploying detectors in real-world applications where annotation resources are limited.

The potential for further research lies in integrating Detic with other architectural innovations, possibly exploring its applicability to tasks such as open-set recognition or few-shot learning. Additionally, future studies could refine the loss strategies or incorporate more diverse datasets to enhance generalization further.

In conclusion, by leveraging image-level supervision, Detic makes significant strides in addressing the challenges posed by large vocabulary detection, generating valuable insights for both theoretical exploration and practical application in AI-driven object detection.
