
SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning (2301.10921v2)

Published 26 Jan 2023 in cs.LG, cs.AI, and cs.CV

Abstract: The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance. In this paper, we first revisit the popular pseudo-labeling methods via a unified sample weighting formulation and demonstrate the inherent quantity-quality trade-off problem of pseudo-labeling with thresholding, which may prohibit learning. To this end, we propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further enhance the utilization of weakly-learned classes by proposing a uniform alignment approach. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
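The truncated Gaussian weighting mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: here `mu` and `sigma` are passed as fixed values, whereas SoftMatch estimates them from the unlabeled-data confidence distribution (e.g. via exponential moving averages), and `lambda_max` is an assumed weight cap.

```python
import numpy as np

def truncated_gaussian_weights(probs, mu, sigma, lambda_max=1.0):
    """Soft confidence weighting: a sketch of truncated Gaussian
    sample weights. Samples whose max predicted probability is at
    least `mu` get the full weight; less confident samples are
    down-weighted by a Gaussian decay instead of being hard-dropped
    by a fixed threshold."""
    conf = probs.max(axis=-1)  # per-sample max predicted probability
    return np.where(
        conf >= mu,
        lambda_max,
        lambda_max * np.exp(-((conf - mu) ** 2) / (2 * sigma ** 2)),
    )
```

For example, with `mu=0.8` and `sigma=0.1`, a sample predicted with confidence 0.9 keeps full weight, while samples at confidence 0.5 or 0.34 receive small but nonzero weights, so no unlabeled sample is discarded outright. This is how a soft weighting can keep pseudo-label quantity high while still discounting low-quality predictions.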

Authors (9)
  1. Hao Chen (1006 papers)
  2. Ran Tao (82 papers)
  3. Yue Fan (46 papers)
  4. Yidong Wang (43 papers)
  5. Jindong Wang (150 papers)
  6. Bernt Schiele (210 papers)
  7. Xing Xie (220 papers)
  8. Bhiksha Raj (180 papers)
  9. Marios Savvides (61 papers)
Citations (115)
