
Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning (1908.02983v5)

Published 8 Aug 2019 in cs.CV

Abstract: Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role on relaxing human supervision. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100, SVHN, and Mini-ImageNet despite being much simpler than other methods. These results demonstrate that pseudo-labeling alone can outperform consistency regularization methods, while the opposite was supposed in previous work. Source code is available at https://git.io/fjQsC.

Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

The paper "Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning" by Arazo et al. investigates a semi-supervised learning (SSL) approach aimed at addressing the challenges inherent in image classification tasks. Unlike most contemporary methods that rely heavily on consistency regularization, the authors propose a strategy based on pseudo-labeling augmented with mixup techniques and specific regularization strategies to mitigate confirmation bias.

Key Contributions

  1. Pseudo-Labeling with Mixup Augmentation:
    • The central proposal involves generating soft pseudo-labels from network predictions for unlabeled data. This process is regularized using mixup augmentation, which linearly combines pairs of data samples and their corresponding labels.
    • Mixup serves the dual purpose of data augmentation and label smoothing, helping to regularize training and thus reduce overconfidence and the risk of confirmation bias (a minimal sketch of this step appears after this list).
  2. Mitigation of Confirmation Bias:
    • The naive pseudo-labeling approach can lead to confirmation bias, wherein the model's erroneous predictions on unlabeled data become reinforced over time.
    • The authors demonstrate that using mixup augmentation combined with a minimum number of labeled samples per mini-batch significantly reduces confirmation bias. This setup ensures better performance, especially with smaller amounts of labeled data.
  3. Extensive Experiments and Comparisons:
    • Experimental evaluation on datasets including CIFAR-10, CIFAR-100, SVHN, and Mini-ImageNet affirms the efficacy of the proposed pseudo-labeling approach.
    • Results show that the method achieves state-of-the-art performance across these datasets, notably outperforming several existing consistency regularization methods.
    • Further experiments with different network architectures (Wide ResNet, PreAct ResNet) validate the robustness and generalizability of the proposed approach.
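
To make points 1 and 2 concrete, the following is a minimal PyTorch-style sketch of the core training step: soft pseudo-labels come from the network's own predictions, and mixup linearly combines pairs of samples and their (pseudo-)labels before the loss is computed. The helper names (`mixup_soft_labels`, `soft_cross_entropy`) and the default `alpha` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_soft_labels(x, y_soft, alpha=1.0):
    """Mix a batch of inputs and their soft (pseudo-)labels.

    x:      (B, C, H, W) batch of images
    y_soft: (B, K) label distributions -- one-hot rows for labeled samples,
            the network's softmax predictions for unlabeled samples
    """
    lam = np.random.beta(alpha, alpha)                      # mixing coefficient
    perm = torch.randperm(x.size(0), device=x.device)       # random pairing
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_soft + (1 - lam) * y_soft[perm]
    return x_mix, y_mix

def soft_cross_entropy(logits, target_probs):
    """Cross-entropy between predicted logits and soft target distributions."""
    return -(target_probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# One training step (model, optimizer, x, y_soft assumed to exist):
#   x_mix, y_mix = mixup_soft_labels(x, y_soft)
#   loss = soft_cross_entropy(model(x_mix), y_mix)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Because the targets are soft distributions rather than hard class indices, the same cross-entropy term handles labeled samples (one-hot targets) and unlabeled samples (pseudo-label targets) uniformly.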

Numerical Results

The empirical results substantiate the effectiveness of the proposed pseudo-labeling method:

  • CIFAR-10: For 500 labeled samples, the proposed approach yields an error rate of 8.80%, significantly lower than the 12.36% achieved by the Π model and the 10.61% by the MT-LP approach.
  • CIFAR-100: With 10,000 labeled samples, the approach achieves a 32.15% error rate, outperforming the 34.10% of MT-fast-SWA and the 36.08% of MT.
  • SVHN: The method produces a 3.64% error rate with 500 labeled samples, better than the previous state-of-the-art ICT approach's 4.23%.

Practical and Theoretical Implications

From a practical standpoint, this paper highlights a simpler yet highly effective alternative to consistency regularization for SSL. By focusing on pseudo-labeling combined with mixup and strategic regularization, the approach is easier to implement and improves classification performance in real-world scenarios with limited labeled data.

Theoretically, the findings suggest that structured regularization (mixup) and ensuring a minimum number of labeled samples in mini-batches counteract the confirmation bias that hampers naive pseudo-labeling approaches. This insight opens avenues for further theoretical analyses and advancements in SSL methodologies, potentially integrating the benefits of both pseudo-labeling and consistency regularization.
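
As a companion to the earlier sketch, the snippet below illustrates the second regularizer: assembling each mini-batch so that it always contains a minimum quota of labeled samples, with labeled indices oversampled (drawn with replacement) alongside the unlabeled data. The function name `mixed_batches` and the default sizes are illustrative assumptions; the exact quota is a hyperparameter in the paper.

```python
import random

def mixed_batches(labeled_idx, unlabeled_idx, batch_size=100, min_labeled=16):
    """Yield mini-batches of dataset indices containing at least `min_labeled`
    labeled samples each; labeled indices are oversampled with replacement."""
    unlabeled = list(unlabeled_idx)
    random.shuffle(unlabeled)
    per_batch_unlabeled = batch_size - min_labeled
    for start in range(0, len(unlabeled), per_batch_unlabeled):
        u = unlabeled[start:start + per_batch_unlabeled]
        lab = random.choices(list(labeled_idx), k=min_labeled)  # with replacement
        yield lab + u
```

Each epoch would iterate over `mixed_batches(...)`, building the soft targets from ground-truth labels for the labeled indices and from the network's current predictions for the unlabeled ones.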

Future Directions

Despite these promising results, further research is necessary to explore the method's scalability and performance in more complex, large-scale, and class-imbalanced datasets. Investigating the synergy between pseudo-labeling and consistency regularization approaches could yield even more robust SSL frameworks. Additionally, evaluating the method's applicability and adaptability in other domains beyond image classification, such as natural language processing or time-series forecasting, could broaden its impact.

In conclusion, the paper by Arazo et al. makes a valuable contribution to the field of semi-supervised learning by advancing our understanding of pseudo-labeling and providing a practical solution to mitigate confirmation bias. Their findings offer a strong foundation for future innovations and optimizations in SSL.

Authors (5)
  1. Eric Arazo (18 papers)
  2. Diego Ortego (13 papers)
  3. Paul Albert (20 papers)
  4. Noel E. O'Connor (70 papers)
  5. Kevin McGuinness (76 papers)
Citations (752)