
FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning (2205.07246v3)

Published 15 May 2022 in cs.LG and cs.CV

Abstract: Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization. However, we argue that existing methods might fail to utilize the unlabeled data more effectively since they either use a pre-defined / fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to obtain intuitions on the relationship between the desirable threshold and model's learning status. Based on the analysis, we hence propose FreeMatch to adjust the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty to encourage the model for diverse predictions during the early training stage. Extensive experiments indicate the superiority of FreeMatch especially when the labeled data are extremely rare. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively. Moreover, FreeMatch can also boost the performance of imbalanced SSL. The codes can be found at https://github.com/microsoft/Semi-supervised-learning.

Overview of FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning

The paper presents FreeMatch, a novel approach aimed at enhancing the performance of semi-supervised learning (SSL) by introducing self-adaptive thresholding mechanisms. This work critically addresses the limitations seen in current SSL methods that rely on pseudo labeling and consistency regularization but often suffer from ineffective utilization of unlabeled data due to static or ad-hoc thresholding strategies.

Core Contributions

FreeMatch introduces two significant innovations: Self-Adaptive Thresholding (SAT) and Self-Adaptive class Fairness regularization (SAF). These mechanisms are designed to dynamically adjust the confidence thresholds based on the model's current learning status and to promote diverse predictions amongst all classes, respectively.

  1. Self-Adaptive Thresholding (SAT):
    • SAT leverages the model's prediction confidence as a proxy for learning status, using exponential moving averages (EMA) to dynamically compute global (dataset-specific) and local (class-specific) thresholds.
    • The global threshold reflects the overall data confidence, increasing as the model gains confidence to filter out noisy pseudo labels, thus reducing confirmation bias.
    • Local thresholds adjust the global threshold by class, accounting for inter-class variations, leading to more effective sample utilization and improved training efficiency.
  2. Self-Adaptive Fairness (SAF):
    • SAF aims to ensure prediction diversity across classes, particularly in settings with scarce labeled data. It normalizes the model's expected class distribution by the histogram of its pseudo labels, countering class imbalance within mini-batches.
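The SAT update above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names and the momentum value `m=0.999` are assumptions, and `probs` stands for softmax outputs on weakly augmented unlabeled samples.

```python
import numpy as np

def ema(prev, new, m=0.999):
    """Exponential moving average update (momentum m is illustrative)."""
    return m * prev + (1.0 - m) * new

def self_adaptive_thresholds(probs, tau_prev, p_tilde_prev, m=0.999):
    """One SAT update step on a batch of unlabeled predictions (sketch).

    probs:        (B, C) softmax outputs on weakly augmented unlabeled data.
    tau_prev:     scalar EMA of mean top-1 confidence (global threshold).
    p_tilde_prev: (C,) EMA of the mean predicted class distribution.
    """
    # Global threshold: EMA of the batch-mean top-1 confidence,
    # rising as the model grows more confident overall.
    tau = ema(tau_prev, probs.max(axis=1).mean(), m)
    # Local estimate: EMA of the mean predicted distribution per class.
    p_tilde = ema(p_tilde_prev, probs.mean(axis=0), m)
    # Per-class threshold: the global threshold scaled by the
    # max-normalized class estimate, so harder classes get lower bars.
    tau_c = (p_tilde / p_tilde.max()) * tau
    return tau, p_tilde, tau_c
```

A pseudo label for class c is then kept only when its confidence exceeds `tau_c[c]`, so the filter tightens globally as training progresses while staying looser for classes the model has learned less well.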
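The SAF penalty can likewise be sketched as a cross-entropy between two histogram-normalized distributions: the model's running expectation and the current batch estimate. The function name, the `eps` smoothing, and the exact normalization are assumptions made for illustration; the paper's construction is followed only loosely here.

```python
import numpy as np

def saf_loss(mean_probs, hist, ema_probs, ema_hist, eps=1e-8):
    """Self-adaptive fairness penalty (illustrative sketch).

    mean_probs: (C,) mean predicted distribution over the current batch.
    hist:       (C,) histogram of pseudo labels in the current batch.
    ema_probs:  (C,) EMA of the mean predicted distribution.
    ema_hist:   (C,) EMA of the pseudo-label histogram.
    """
    def sum_norm(x):
        # Normalize a nonnegative vector to sum to 1.
        return x / (x.sum() + eps)

    # Ratios of predicted mass to pseudo-label frequency, normalized,
    # so over-predicted classes are penalized relative to the EMA target.
    p = sum_norm(mean_probs / (hist + eps))
    p_bar = sum_norm(ema_probs / (ema_hist + eps))
    # Cross-entropy between the running target and the batch estimate.
    return -(p_bar * np.log(p + eps)).sum()
```

Intuitively, this term pushes the batch-level class usage toward the model's running expectation, which discourages the collapse onto a few dominant classes that is common when labeled data are extremely scarce.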

Numerical Results

The empirical evaluation demonstrates that FreeMatch significantly reduces error rates across several standard datasets. Notably:

  • FreeMatch reduces error rates by 5.78% on CIFAR-10 with only one label per class compared to FlexMatch.
  • On STL-10 with 4 labels per class, it reduces the error rate by 13.59%, highlighting its efficacy in scenarios where labeled data are extremely sparse.
  • On ImageNet with 100 labels per class, FreeMatch surpasses state-of-the-art methods by 1.28%, emphasizing its scalability and robust performance in large-scale settings.

Implications and Future Directions

The introduction of FreeMatch presents important practical and theoretical implications for SSL. Practically, the self-adaptive mechanisms facilitate more efficient and effective training, particularly in environments with limited labeled data, making it highly applicable to real-world scenarios where data labeling is costly or impractical. Theoretically, the work contributes to the understanding of threshold mechanisms in SSL, offering a more dynamic and responsive framework that better aligns with the model's learning progress.

Future research directions may include exploring further refinements of the threshold adaptation process or extending these concepts to other areas of machine learning where labeling constraints exist. Additionally, investigating the integration of self-supervised learning concepts with FreeMatch could provide even greater enhancements in model capabilities.

Overall, FreeMatch distinctly advances the field of SSL by providing a more nuanced and effective approach to utilizing unlabeled data, thereby setting a new standard for subsequent developments in this domain.

Authors (12)
  1. Yidong Wang (43 papers)
  2. Hao Chen (1006 papers)
  3. Qiang Heng (8 papers)
  4. Wenxin Hou (11 papers)
  5. Yue Fan (46 papers)
  6. Zhen Wu (79 papers)
  7. Jindong Wang (150 papers)
  8. Marios Savvides (61 papers)
  9. Takahiro Shinozaki (13 papers)
  10. Bhiksha Raj (180 papers)
  11. Bernt Schiele (210 papers)
  12. Xing Xie (220 papers)
Citations (212)