
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring (1911.09785v2)

Published 21 Nov 2019 in cs.LG, cs.CV, and stat.ML

Abstract: We improve the recently-proposed "MixMatch" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between $5\times$ and $16\times$ less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach $93.73\%$ accuracy (compared to MixMatch's accuracy of $93.58\%$ with $4{,}000$ examples) and a median accuracy of $84.92\%$ with just four labels per class. We make our code and data open-source at https://github.com/google-research/remixmatch.

Citations (623)

Summary

  • The paper introduces distribution alignment to ensure model predictions on unlabeled data mirror true class distributions, improving semi-supervised learning efficiency.
  • The paper employs augmentation anchoring via a control theory-based AutoAugment (CTAugment) to stabilize training with strong data augmentations.
  • The method achieves remarkable data efficiency, reaching 93.73% accuracy on CIFAR-10 with just 250 labeled examples, significantly lowering labeling requirements.

ReMixMatch: Advancements in Semi-Supervised Learning through Distribution Alignment and Augmentation Anchoring

ReMixMatch presents notable developments in the domain of semi-supervised learning (SSL) by refining the previously established MixMatch algorithm. The authors introduce two primary innovations: distribution alignment and augmentation anchoring, both aimed at enhancing the efficacy of SSL by leveraging unlabeled data more effectively.

Core Contributions

1. Distribution Alignment: This technique aligns the model's prediction distribution on unlabeled data with the marginal class distribution of the labeled data. The idea, known from earlier work but underused in contemporary methods, scales each unlabeled prediction by the ratio of the labeled class marginal to a running average of the model's predictions, then renormalizes. This ensures that class predictions on unlabeled data reflect the true class distribution, which matters when class imbalance could otherwise skew the pseudo-labels.
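The alignment step itself is a small computation. A minimal NumPy sketch (the function name and moving-average bookkeeping are illustrative, not taken from the released code):

```python
import numpy as np

def distribution_align(q, running_avg, labeled_marginal, eps=1e-6):
    """Rescale a prediction q on an unlabeled example by the ratio of the
    labeled class marginal to a running average of model predictions,
    then renormalize so the result is again a valid distribution."""
    q_tilde = q * (labeled_marginal / (running_avg + eps))
    return q_tilde / q_tilde.sum()

# Example: the model over-predicts class 0 (running average [0.9, 0.1])
# even though labeled data is balanced; alignment boosts class 1.
q = np.array([0.7, 0.3])
aligned = distribution_align(q, np.array([0.9, 0.1]), np.array([0.5, 0.5]))
```

The running average here would be maintained over recent batches of unlabeled predictions; the paper describes using a moving average rather than the full history.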

2. Augmentation Anchoring: By employing strong augmentations generated by a control-theory-based variant of AutoAugment (dubbed CTAugment), this method addresses a limitation of consistency regularization in the original MixMatch framework: naively enforcing consistency across strongly augmented images destabilizes training. Augmentation anchoring instead uses the prediction on a weakly augmented input as a fixed target for multiple strongly augmented versions of the same input.
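The anchoring step can be sketched in a few lines of NumPy. This is an illustrative sketch, not the released implementation; function names are made up, and in practice the anchor target is treated as a constant (no gradient flows through it):

```python
import numpy as np

def sharpen(p, T=0.5):
    """Temperature-sharpen the weak-augmentation prediction to form the anchor target."""
    p = p ** (1.0 / T)
    return p / p.sum()

def anchoring_loss(strong_preds, target, eps=1e-8):
    """Average cross-entropy of predictions on K strongly augmented copies
    against the single anchor target from the weakly augmented input."""
    return float(np.mean([-np.sum(target * np.log(p + eps)) for p in strong_preds]))

# Weak-augmentation prediction becomes a sharpened, fixed target ...
target = sharpen(np.array([0.6, 0.4]))
# ... and each strongly augmented copy is pulled toward it.
loss = anchoring_loss([np.array([0.7, 0.3]), np.array([0.5, 0.5])], target)
```

Sharpening pushes the target toward a confident distribution, so the strong-augmentation predictions are pulled toward a low-entropy anchor rather than toward each other.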

Empirical Performance

These enhancements make ReMixMatch significantly more data-efficient, requiring between 5x and 16x less labeled data than prior methods to reach the same accuracy. Notably, with only 250 labeled examples on CIFAR-10, ReMixMatch attains 93.73% accuracy, a level MixMatch previously reached only with 4,000 labeled examples (93.58%).

Implementation Insights

  • CTAugment: This component dynamically learns augmentation policies during training, circumventing the supervised learning requirements of traditional AutoAugment approaches. It maintains performance without predefined policies, which is essential in low-label scenarios.
  • Loss Functions and Regularization: The algorithm employs cross-entropy losses for both labeled and unlabeled data, supplemented by pre-mixup and rotation losses. These modifications aid in further stabilizing training and enhancing performance.
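The core of CTAugment can be illustrated with a toy sketch of its learned magnitude weights for a single transformation. This is a simplification under stated assumptions: the real CTAugment tracks a weight vector per (transformation, magnitude-bin) pair and applies multiple transformations per image, and the specific hyperparameter values below (bin count, decay, threshold) are illustrative rather than quoted from the paper:

```python
import numpy as np

class CTAugmentBins:
    """Toy sketch of CTAugment's learned magnitude weights for ONE transformation.
    Each magnitude bin carries a weight updated from how closely the model's
    prediction matches the label on an augmented *labeled* example; magnitudes
    that hurt the match decay below the threshold and stop being sampled."""

    def __init__(self, n_bins=17, decay=0.99, threshold=0.8):
        self.weights = np.ones(n_bins)
        self.decay = decay
        self.threshold = threshold

    def sample_bin(self, rng):
        masked = np.where(self.weights > self.threshold, self.weights, 0.0)
        if masked.sum() == 0:  # fall back to uniform if every bin has decayed
            masked = np.ones_like(self.weights)
        return int(rng.choice(len(masked), p=masked / masked.sum()))

    def update(self, bin_idx, match):
        # match in [0, 1], e.g. 1 - 0.5 * ||model_probs - one_hot_label||_1
        self.weights[bin_idx] = (
            self.decay * self.weights[bin_idx] + (1 - self.decay) * match
        )
```

Because the update signal comes from labeled examples the model already sees, no separate policy-search phase (as in the original AutoAugment) is needed.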

Theoretical and Practical Implications

The theoretical grounding of ReMixMatch rests on maximizing the mutual information between the model's inputs and outputs. The practical implications are substantial: improved prediction reliability with scant labeled data can markedly reduce the costs of data labeling.
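Concretely, the fairness motivation behind distribution alignment follows the standard decomposition of input-output mutual information (attributed in the paper to earlier work by Bridle et al.):

$$
\mathcal{I}(y; x) \;=\; \mathcal{H}\!\Big(\mathbb{E}_{x}\big[p_{\text{model}}(y \mid x)\big]\Big) \;-\; \mathbb{E}_{x}\Big[\mathcal{H}\big(p_{\text{model}}(y \mid x)\big)\Big]
$$

The first term rewards a high-entropy (class-balanced) average prediction, i.e. "fairness"; the second rewards confident individual predictions. Distribution alignment generalizes the first term by matching the average prediction to the labeled class marginal rather than to the uniform distribution.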

Future Directions

ReMixMatch opens several avenues for future research, particularly in applications requiring data-efficient learning solutions. Potential progressions include refining the connection between SSL and active learning frameworks and exploring extensions to other domains beyond image data, like text or time-series data.

By systematically refining SSL techniques, ReMixMatch contributes to a broader understanding and practical deployment of models that are robust even with limited labeled datasets, addressing one of the critical challenges in modern AI and machine learning applications.
