- The paper introduces Detic, which uses image-level supervision to expand object detector vocabularies without relying on bounding box annotations.
- It decouples localization and classification, applying a simple image-level classification loss and drawing supervision from large-scale classification data such as ImageNet-21K.
- Experimental results show significant benchmark improvements, including an 8.3-point mAP gain for novel classes on the LVIS dataset.
Detecting Twenty-thousand Classes Using Image-level Supervision
The paper "Detecting Twenty-thousand Classes Using Image-level Supervision" introduces Detic, a method for extending the vocabulary of object detectors by training them on image classification datasets. This sidesteps the limitations of traditional detection datasets, which are smaller and cover far fewer categories.
Method Overview
Detic decouples localization from classification. Traditional methods rely on box annotations for both tasks, which constrains the vocabulary to the classes annotated with boxes. Detic instead draws on image-level supervision, broadening the detector's vocabulary to tens of thousands of concepts without requiring box annotations for them.
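Concretely, the decoupling amounts to routing supervision by data source: box-labeled images receive the usual detection losses, while image-labeled images supervise only the classifier. A minimal sketch of that routing, with hypothetical field and function names:

```python
def detic_loss(batch, detection_losses, image_label_loss):
    """Route supervision by data source (illustrative interface).

    batch["has_boxes"]: True for detection data (e.g. LVIS),
    False for classification-only data (e.g. ImageNet-21K).
    """
    if batch["has_boxes"]:
        # Box-labeled data: full supervision (RPN, box regression,
        # and classification losses).
        return detection_losses(batch)
    # Image-labeled data: only the classification head is supervised;
    # the localization branches receive no gradient from these images.
    return image_label_loss(batch)
```

In practice, mini-batches from the two sources are interleaved during training, so the localization heads are trained only by detection data while the classifier sees the full, larger vocabulary.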
The key innovation is a simple classification loss that bypasses the complex label-to-box assignment strategies used by prior weakly supervised methods. This simplifies implementation and makes the approach compatible with a wide range of detection architectures.
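One instantiation of such a loss is the paper's max-size loss, which applies the image-level labels to the largest region proposal rather than attempting to match labels to predicted boxes. A minimal PyTorch sketch, with illustrative tensor shapes and argument names:

```python
import torch
import torch.nn.functional as F

def max_size_loss(proposal_boxes, class_logits, image_labels):
    """Sketch of a max-size classification loss (illustrative, not the
    paper's exact implementation).

    proposal_boxes: (N, 4) tensor of (x1, y1, x2, y2) region proposals
    class_logits:   (N, C) per-proposal classification logits
    image_labels:   (C,) multi-hot vector of image-level labels
    """
    # Area of each proposal; the selection depends only on geometry,
    # never on the model's predictions.
    areas = (proposal_boxes[:, 2] - proposal_boxes[:, 0]) * (
        proposal_boxes[:, 3] - proposal_boxes[:, 1]
    )
    biggest = areas.argmax()
    # Binary cross-entropy between the largest proposal's logits and the
    # image labels; no box-regression loss is applied to these images.
    return F.binary_cross_entropy_with_logits(class_logits[biggest], image_labels)
```

Because the supervised proposal is chosen by size alone, the loss avoids the noisy feedback loop of assigning image labels to whichever boxes the model currently predicts best.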
Technical Contributions
- Loss Function: Central to Detic's performance is a classification loss that does not depend on the model's predictions: the max-size loss applies image-level supervision to the largest proposal, chosen by area rather than by predicted scores.
- Compatibility and Implementation: Detic's design is compatible with existing detection backbones, facilitating integration into current systems. It draws supervision from both image-classification data (ImageNet-21K) and image-caption data (Conceptual Captions).
- Benchmark Performance: Detic outperforms prior techniques on open-vocabulary and long-tail detection benchmarks. Notably, it improves novel-class mAP by 8.3 points on the open-vocabulary LVIS benchmark and remains competitive on standard LVIS.
- Cross-dataset Generalization: The paper highlights Detic's ability to train on all 21,000 classes of ImageNet-21K and transfer effectively to new detection datasets without retraining or fine-tuning.
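Transfer without retraining is possible because Detic uses text embeddings of class names (CLIP embeddings in the paper) as the classifier weights, so adopting a new dataset's vocabulary only requires embedding its class names. A minimal sketch of such an embedding-based classifier head, with illustrative function and argument names:

```python
import torch
import torch.nn.functional as F

def build_classifier(class_name_embeddings):
    """Open-vocabulary classifier head (a sketch, not Detic's exact code).

    class_name_embeddings: (C, D) text embeddings of the class names.
    The weight matrix is these embeddings, so swapping vocabularies
    means swapping embeddings, with no retraining of the detector.
    """
    w = F.normalize(class_name_embeddings, dim=-1)  # (C, D), unit-norm rows

    def classify(region_features):
        # region_features: (N, D) RoI features from the detector.
        feats = F.normalize(region_features, dim=-1)
        return feats @ w.t()  # (N, C) cosine-similarity logits

    return classify
```

Changing the target dataset then amounts to calling `build_classifier` with embeddings of the new class names while keeping the backbone and localization heads fixed.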
Experimental Results
Detic shows significant improvements across several benchmarks:
- LVIS Benchmark: Achieves a robust increase in mAP, particularly for rare classes, showcasing Detic’s effectiveness in long-tail scenarios.
- Open-vocabulary LVIS and COCO Benchmarks: Dramatically enhances detection accuracy for novel classes, outperforming state-of-the-art models like ViLD and OVR-CNN.
Implications and Future Work
Detic presents a scalable solution to expanding the vocabulary of object detectors without the prohibitive cost of exhaustive box annotation. It demonstrates that robust detectors can be trained using image-level annotations alone, even for classes never labeled with bounding boxes. This opens avenues for deploying detectors in real-world applications where annotation resources are limited.
The potential for further research lies in integrating Detic with other architectural innovations, possibly exploring its applicability to tasks such as open-set recognition or few-shot learning. Additionally, future studies could refine the loss strategies or incorporate more diverse datasets to enhance generalization further.
In conclusion, by leveraging image-level supervision, Detic makes significant progress on large-vocabulary detection, offering insights for both research and practical deployment of AI-driven object detectors.