FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling (2110.08263v3)

Published 15 Oct 2021 in cs.LG and cs.CV

Abstract: The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo labels. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performances when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch achieves 13.96% and 18.96% error rate reduction over FixMatch on CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performances. We open-source our code at https://github.com/TorchSSL/TorchSSL.

Authors (7)
  1. Bowen Zhang (161 papers)
  2. Yidong Wang (43 papers)
  3. Wenxin Hou (11 papers)
  4. Hao Wu (623 papers)
  5. Jindong Wang (150 papers)
  6. Manabu Okumura (41 papers)
  7. Takahiro Shinozaki (13 papers)
Citations (746)

Summary

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling

The paper "FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling" proposes an enhancement to the FixMatch algorithm for semi-supervised learning (SSL). The primary contribution is the integration of Curriculum Pseudo Labeling (CPL), a strategy that dynamically adjusts the threshold for pseudo-labeling based on the learning status of the model. This approach seeks to address several limitations inherent in current SSL algorithms, particularly the use of a fixed threshold for all classes in pseudo-labeling.

Key Contributions and Methodology

  1. Curriculum Pseudo Labeling (CPL): CPL introduces dynamic thresholds that vary across classes and training steps according to each class's current learning status. The learning status of a class is estimated from the number of unlabeled samples predicted into that class with confidence above the fixed base threshold, and the flexible threshold for that class is scaled in proportion to this estimate. Well-learned classes therefore keep a high threshold, while harder classes admit more pseudo-labeled samples. This makes fuller use of informative unlabeled data throughout training without introducing additional parameters or extra forward/backward computation.
  2. FlexMatch Algorithm: By integrating CPL into FixMatch, the authors obtain FlexMatch, an SSL algorithm that achieves considerable improvements in both performance and convergence speed. FlexMatch adjusts the per-class thresholds at each iteration and, like FixMatch, leverages both weak and strong data augmentations. The unsupervised loss is masked using the dynamic thresholds, so that only predictions exceeding their class's current threshold contribute pseudo-labels to training.
  3. Experimental Results: Extensive experiments demonstrate FlexMatch's superior performance on several benchmark datasets, including CIFAR-10, CIFAR-100, SVHN, STL-10, and ImageNet. Notably, on the CIFAR-100 dataset with only 4 labels per class, FlexMatch achieves a relative error rate reduction of 13.96% over FixMatch. Moreover, FlexMatch significantly accelerates convergence speed, requiring only one-fifth the training time of FixMatch to achieve superior results. For instance, on the STL-10 dataset, FlexMatch outperforms FixMatch with an 18.96% relative error rate reduction when using only 40 labeled samples.
  4. Adaptability and Efficiency: The authors demonstrate that CPL can be readily applied to other SSL algorithms, yielding improved performance consistently. Importantly, FlexMatch maintains computational efficiency, as the dynamic threshold adjustment does not incur significant additional costs.
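The per-class threshold adjustment described above can be sketched in a few lines. This is an illustrative reimplementation based on the summary, not the authors' code (which is available in the TorchSSL repository); the function names and the simple linear mapping from learning status to threshold are assumptions, and details such as warm-up handling and non-linear mappings from the paper are omitted.

```python
import numpy as np

def curriculum_thresholds(probs, tau=0.95):
    """Compute per-class flexible thresholds in the spirit of CPL.

    probs: (N, C) array of model predictions on unlabeled data.
    tau:   fixed base confidence threshold (as in FixMatch).
    """
    preds = probs.argmax(axis=1)   # hard pseudo-labels
    conf = probs.max(axis=1)       # confidence of each prediction
    num_classes = probs.shape[1]
    # Learning status: how many unlabeled samples each class
    # claims with confidence above the fixed threshold.
    sigma = np.array([np.sum((preds == c) & (conf >= tau))
                      for c in range(num_classes)])
    # Normalize so the best-learned class keeps the full threshold tau;
    # less-learned classes get proportionally lower thresholds.
    beta = sigma / max(sigma.max(), 1)
    return beta * tau

def select_pseudo_labels(probs, thresholds):
    """Keep samples whose confidence exceeds their predicted class's threshold."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    mask = conf >= thresholds[preds]
    return preds[mask], mask
```

Because `sigma` is computed from predictions the model already makes during training, the adjustment adds no extra forward or backward passes, which is the efficiency property the paper emphasizes.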

Implications and Future Directions

The practical implications of FlexMatch are considerable. By improving the utilization of unlabeled data, FlexMatch can reduce the dependency on large labeled datasets, which are often expensive and time-consuming to obtain. This is particularly beneficial for applications in domains where annotated data is scarce.

From a theoretical perspective, the introduction of dynamically adjusted thresholds challenges the current paradigm in SSL, suggesting that a more nuanced approach to pseudo-labeling can yield substantial benefits. It opens avenues for further research into adaptive learning mechanisms and their integration into various semi-supervised and unsupervised learning frameworks.

Future research may focus on:

  - Robustness to Noisy Labels: Further studies could investigate the performance of FlexMatch under conditions where the labeled data contains noise.
  - Long-tail Scenarios: Adapting CPL to scenarios with highly imbalanced class distributions in the unlabeled dataset could be another fruitful direction.
  - Application to Other Modalities: Exploring the effectiveness of CPL-based approaches in other domains, such as natural language processing and speech recognition.

In conclusion, the integration of CPL into SSL algorithms, as presented in FlexMatch, represents a significant step forward in semi-supervised learning. It offers both practical enhancements in model performance and efficiency and introduces a new dimension to the theoretical understanding of how models can effectively leverage unlabeled data. The code for FlexMatch and CPL-enabled SSL algorithms has been open-sourced, promoting reproducibility and further exploration by the research community.
