Non-iterative optimization of pseudo-labeling thresholds for training object detection models from multiple datasets (2210.10221v1)

Published 19 Oct 2022 in cs.CV, cs.LG, and eess.IV

Abstract: We propose a non-iterative method to optimize pseudo-labeling thresholds for learning object detection from a collection of low-cost datasets, each of which is annotated for only a subset of all the object classes. A popular approach to this problem is first to train teacher models and then to use their confident predictions as pseudo ground-truth labels when training a student model. To obtain the best result, however, thresholds for prediction confidence must be adjusted. This process typically involves iterative search and repeated training of student models and is time-consuming. Therefore, we develop a method to optimize the thresholds without iterative optimization by maximizing the $F_\beta$-score on a validation dataset, which measures the quality of pseudo labels and can be measured without training a student model. We experimentally demonstrate that our proposed method achieves an mAP comparable to that of grid search on the COCO and VOC datasets.
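The core idea above — choosing a confidence threshold by maximizing the $F_\beta$-score of pseudo labels on a validation set, rather than by repeatedly retraining a student model — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detection-to-ground-truth matching step and the per-class handling are assumed to have been done beforehand, and the helper name is hypothetical.

```python
import numpy as np

def optimal_threshold(confidences, is_true_positive, num_ground_truths, beta=1.0):
    """Pick the confidence threshold maximizing the F_beta-score of
    pseudo labels on a validation set (hypothetical helper; matching
    of detections to ground-truth boxes is assumed to be precomputed).

    confidences       : detection confidence scores (one class)
    is_true_positive  : bool array, True if the detection matches a GT box
    num_ground_truths : total number of ground-truth boxes for the class
    """
    conf = np.asarray(confidences, dtype=float)
    tp_flags = np.asarray(is_true_positive, dtype=bool)

    order = np.argsort(conf)[::-1]            # sort detections by descending confidence
    tp = np.cumsum(tp_flags[order])           # true positives kept at each cutoff
    fp = np.cumsum(~tp_flags[order])          # false positives kept at each cutoff

    precision = tp / (tp + fp)
    recall = tp / num_ground_truths

    b2 = beta ** 2
    denom = b2 * precision + recall
    fbeta = np.where(denom > 0, (1 + b2) * precision * recall / denom, 0.0)

    best = int(np.argmax(fbeta))              # cutoff with the best pseudo-label quality
    return conf[order][best], fbeta[best]
```

Because the $F_\beta$-score is evaluated directly on validation predictions, each class's threshold is found in a single pass over the sorted detections, with no student-model retraining in the loop.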

Authors (3)
  1. Yuki Tanaka (11 papers)
  2. Shuhei M. Yoshida (9 papers)
  3. Makoto Terao (4 papers)
Citations (2)
