
Rethinking Re-Sampling in Imbalanced Semi-Supervised Learning (2106.00209v2)

Published 1 Jun 2021 in cs.CV

Abstract: Semi-Supervised Learning (SSL) has shown a strong ability to utilize unlabeled data when labeled data is scarce. However, most SSL algorithms work under the assumption that the class distributions are balanced in both the training and test sets. In this work, we consider the problem of SSL on class-imbalanced data, which better reflects real-world situations. In particular, we decouple the training of the representation and the classifier, and systematically investigate the effects of different data re-sampling techniques both when training the whole network, including the classifier, and when fine-tuning the feature extractor only. We find that data re-sampling is of critical importance for learning a good classifier, as it increases the accuracy of the pseudo-labels, in particular for the minority classes in the unlabeled data. Interestingly, we find that accurate pseudo-labels do not help when training the feature extractor; on the contrary, data re-sampling harms the training of the feature extractor. This finding goes against the general intuition that wrong pseudo-labels always harm model performance in SSL. Based on these findings, we suggest re-thinking the current paradigm of having a single data re-sampling strategy and develop a simple yet highly effective Bi-Sampling (BiS) strategy for SSL on class-imbalanced data. BiS implements two different re-sampling strategies for training the feature extractor and the classifier, and integrates this decoupled training into an end-to-end framework. In particular, BiS progressively changes the data distribution during training such that at the beginning the feature extractor is trained effectively, while towards the end of training the data is re-balanced so that the classifier is trained reliably. We benchmark our proposed bi-sampling strategy extensively on popular datasets and achieve state-of-the-art performance.
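
The progressive re-balancing described in the abstract can be illustrated with a short sketch. A common formulation of class re-sampling draws class c with probability proportional to n_c^q, where q = 1 reproduces the natural instance-balanced distribution (which the abstract suggests suits the feature extractor) and q = 0 gives a class-balanced distribution (which suits the classifier). Linearly decaying q over training then moves from one regime to the other. The function names and the linear schedule below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def class_sampling_probs(class_counts, q):
    """Per-class sampling probability proportional to n_c ** q.

    q = 1.0 -> instance-balanced (natural) distribution.
    q = 0.0 -> class-balanced (uniform) distribution.
    """
    weights = np.asarray(class_counts, dtype=np.float64) ** q
    return weights / weights.sum()

def bi_sampling_probs(class_counts, epoch, total_epochs):
    """Progressive bi-sampling schedule (a sketch, not the authors' code):
    start instance-balanced for the feature extractor, end class-balanced
    for the classifier, interpolating linearly over training."""
    q = 1.0 - epoch / max(total_epochs - 1, 1)
    return class_sampling_probs(class_counts, q)

# Example: three classes with 1000/100/10 labeled samples.
counts = [1000, 100, 10]
print(bi_sampling_probs(counts, epoch=0, total_epochs=100))   # ~natural distribution
print(bi_sampling_probs(counts, epoch=99, total_epochs=100))  # uniform over classes
```

In practice these probabilities would drive a weighted sampler over the labeled (and pseudo-labeled) data each epoch; the key design point from the paper is that a single fixed re-sampling strategy cannot serve both the feature extractor and the classifier well.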

Authors (7)
  1. Ju He
  2. Adam Kortylewski
  3. Shaokang Yang
  4. Shuai Liu
  5. Cheng Yang
  6. Changhu Wang
  7. Alan Yuille
Citations (23)
