
Many-shot from Low-shot: Learning to Annotate using Mixed Supervision for Object Detection (2008.09694v2)

Published 21 Aug 2020 in cs.CV and eess.IV

Abstract: Object detection has witnessed significant progress by relying on large, manually annotated datasets. Annotating such datasets is highly time consuming and expensive, which motivates the development of weakly supervised and few-shot object detection methods. However, these methods largely underperform with respect to their strongly supervised counterpart, as weak training signals \emph{often} result in partial or oversized detections. Towards solving this problem we introduce, for the first time, an online annotation module (OAM) that learns to generate a many-shot set of \emph{reliable} annotations from a larger volume of weakly labelled images. Our OAM can be jointly trained with any fully supervised two-stage object detection method, providing additional training annotations on the fly. This results in a fully end-to-end strategy that only requires a low-shot set of fully annotated images. The integration of the OAM with Fast(er) R-CNN improves their performance by $17\%$ mAP, $9\%$ AP50 on PASCAL VOC 2007 and MS-COCO benchmarks, and significantly outperforms competing methods using mixed supervision.

Citations (12)

Summary

  • The paper presents an online annotation module that boosts detection performance by 17% mAP and 9% AP50 on the PASCAL VOC 2007 and MS-COCO benchmarks.
  • It integrates weak and strong supervision in a mixed self-training framework, significantly reducing the need for exhaustive manual labeling.
  • The approach uses iterative pseudo-supervision and bounding box refinement to improve training efficiency and adapt to diverse object detection scenarios.

Online Annotation with Mixed Supervision: An Advancement in Object Detection

The paper "Many-shot from Low-shot: Learning to Annotate using Mixed Supervision for Object Detection" addresses a persistent challenge in the field of object detection, which has traditionally relied heavily on large datasets with exhaustive manual annotations. This research introduces an innovative mixed supervision approach to alleviate the substantial cost and labor involved in data annotation while maintaining high performance levels in object detection.

Summary and Core Contributions

Recent advancements in object detection have primarily leveraged fully supervised learning methods that require comprehensive bounding box annotations. However, the creation of such datasets entails significant expense and effort. Consequently, the research community has explored alternatives, such as Weakly Supervised Object Detection (WSOD) and Few-Shot Object Detection (FSOD). While these methods reduce data annotation requirements, they struggle to match the performance of fully supervised models due to limited training signals and lower-quality annotations.

The paper's central contribution is an Online Annotation Module (OAM) that learns to generate accurate annotations from a larger pool of weakly labelled images. The module can be integrated with any two-stage fully supervised object detector and trained jointly with it. By generating annotations on the fly, the OAM supplies additional training signal while requiring only a small set of fully annotated images, yielding a fully end-to-end strategy.
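To make the joint training setup concrete, the following is a minimal sketch, not the authors' released code, of how an OAM-style module might be trained alongside a two-stage detector. The `detector`, `oam`, batch formats, and loss interfaces are hypothetical placeholders assumed for illustration.

```python
def joint_train_step(detector, oam, strong_batch, weak_batch, optimizer):
    """One joint update: the low-shot strong set supervises the detector
    directly, while the OAM converts weakly labelled images into
    pseudo-annotations on the fly. All interfaces here are illustrative."""
    images_s, boxes_s, labels_s = strong_batch    # few fully annotated images
    images_w, img_labels_w = weak_batch           # image-level labels only

    # 1) Standard fully supervised detection loss on the low-shot set.
    loss_det = detector.loss(images_s, boxes_s, labels_s)

    # 2) The OAM proposes pseudo-boxes for the weakly labelled images and
    #    flags which images it is confident about.
    pseudo_boxes, pseudo_labels, confident = oam.annotate(images_w, img_labels_w)

    # 3) Confidently annotated images provide extra detection supervision.
    loss_pseudo = detector.loss(images_w[confident],
                                pseudo_boxes[confident],
                                pseudo_labels[confident])

    # 4) The OAM itself is trained with mixed supervision from both sets.
    loss_oam = oam.loss(images_s, boxes_s, labels_s, images_w, img_labels_w)

    total = loss_det + loss_pseudo + loss_oam
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```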

Integrating the OAM with Fast R-CNN and Faster R-CNN yields a significant improvement in performance: a 17% increase in mean Average Precision (mAP) and a 9% improvement in AP50 on the PASCAL VOC 2007 and MS-COCO benchmarks. This boost is a strong indication of the efficacy of the mixed supervision paradigm, which combines the advantages of the FSOD and WSOD settings.

Technical Approach

The proposed framework comprises two branches: a joint detection module trained with mixed supervision and a fully supervised training component. In the former, weak and strong labels are used jointly to predict bounding boxes and class scores, combining a multiple instance learning (MIL) strategy with image-level classification objectives. In addition, an augmentation process iteratively refines the bounding-box proposals to improve the detection module's reliability.
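As an illustration of the MIL component, the snippet below shows a common WSDDN-style way of turning per-proposal class scores into an image-level prediction that can be supervised with image-level labels. It is a generic sketch of the technique under assumed tensor shapes, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def mil_image_classification_loss(proposal_scores: torch.Tensor,
                                  image_labels: torch.Tensor) -> torch.Tensor:
    """
    proposal_scores: (R, C) class logits for R proposals in one image.
    image_labels:    (C,) multi-hot vector of classes present in the image.
    """
    cls = F.softmax(proposal_scores, dim=1)   # which class for each proposal
    det = F.softmax(proposal_scores, dim=0)   # which proposal for each class
    image_scores = (cls * det).sum(dim=0).clamp(0.0, 1.0)  # (C,) image score
    return F.binary_cross_entropy(image_scores, image_labels.float())
```

In practice such an image-level loss would be optimized alongside the fully supervised detection losses computed on the low-shot annotated images.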

A key innovation lies in the pseudo-supervision generation strategy. As training progresses, the OAM identifies weakly labelled images it can annotate with high confidence and adds them to a growing pool of semi-strong images used for further training. This dynamic adaptation improves the quality of the annotations derived from the weakly labelled data, which in turn improves the detector trained on them.
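A hedged sketch of this promotion step is given below: weakly labelled images whose generated boxes all exceed a confidence threshold are moved into the semi-strong pool and thereafter treated as pseudo-ground-truth. The threshold, the `oam.annotate` interface, and the data structures are illustrative assumptions, not the paper's exact criterion.

```python
def update_semi_strong_pool(oam, weak_pool, semi_strong_pool, conf_thresh=0.8):
    """Promote weakly labelled images that the OAM can now annotate
    confidently. The confidence test below is an illustrative placeholder."""
    still_weak = []
    for image, image_labels in weak_pool:
        boxes, labels, scores = oam.annotate(image, image_labels)
        if len(scores) > 0 and min(scores) >= conf_thresh:
            # Confident on every generated box: keep as pseudo-ground-truth.
            semi_strong_pool.append((image, boxes, labels))
        else:
            still_weak.append((image, image_labels))
    return still_weak, semi_strong_pool
```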

Evaluation and Results

Extensive experiments on PASCAL VOC 2007 and MS-COCO show that the proposed framework not only surpasses WSOD and competing mixed supervision methods but also approaches the performance of fully supervised detectors. Even on the more challenging MS-COCO benchmark, the model generalizes well and detects objects accurately across diverse scenes.

Implications and Future Directions

Strategically, this work suggests a meaningful shift in the object detection landscape by blending the accessibility of weak and few-shot supervision with the robustness typically reserved for fully annotated datasets. In practical terms, this development could substantially reduce the costs associated with data labeling, thereby accelerating the deployment and adaptation of object detection systems in real-world applications.

Future research directions could explore the incorporation of active learning strategies to select the most informative samples for annotation, further optimizing training efficiency. Additionally, refining the approach to improve bounding box regression accuracy in challenging shape or context scenarios remains an open area for further investigation. The implications of this research extend beyond object detection, potentially informing related domains that handle large-scale data with limited labeled samples.
