
Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning (2007.08844v2)

Published 17 Jul 2020 in cs.LG and stat.ML

Abstract: While semi-supervised learning (SSL) has proven to be a promising way to leverage unlabeled data when labeled data is scarce, existing SSL algorithms typically assume that the training class distribution is balanced. When trained under imbalanced class distributions, these algorithms can suffer severely at test time under a balanced evaluation criterion, since the pseudo-labels they assign to unlabeled data are biased toward majority classes. To alleviate this issue, we formulate a convex optimization problem that softly refines the pseudo-labels generated by the biased model, and develop a simple algorithm, named Distribution Aligning Refinery of Pseudo-label (DARP), that solves it provably and efficiently. Under various class-imbalanced semi-supervised scenarios, we demonstrate the effectiveness of DARP and its compatibility with state-of-the-art SSL schemes.
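The core idea — refining biased pseudo-labels so their aggregate class marginal matches a target (e.g. balanced) distribution, while keeping each refined label close to the model's original prediction — can be sketched with a Sinkhorn-style alternating normalization. This is an illustrative sketch of the distribution-alignment idea, not the paper's exact DARP solver; the function name, iteration count, and synthetic data below are my own assumptions.

```python
import numpy as np

def refine_pseudo_labels(probs, target_dist, n_iters=50):
    """Alternately rescale a pseudo-label matrix so that (a) the summed
    class mass matches target_dist and (b) each row stays a valid
    probability vector. A Sinkhorn-style sketch, not the exact DARP solver.

    probs:       (N, K) array, each row a predicted class distribution
    target_dist: (K,) array summing to 1, the desired class marginal
    """
    refined = probs.copy()
    n = refined.shape[0]
    for _ in range(n_iters):
        # Column step: scale each class column toward its target total mass.
        col_sums = refined.sum(axis=0)
        refined *= (target_dist * n) / np.maximum(col_sums, 1e-12)
        # Row step: renormalize each row back to a probability distribution.
        refined /= refined.sum(axis=1, keepdims=True)
    return refined

# Toy example: pseudo-labels heavily biased toward majority class 0.
rng = np.random.default_rng(0)
probs = rng.dirichlet([8.0, 1.0, 1.0], size=200)  # skewed toward class 0
target = np.array([1 / 3, 1 / 3, 1 / 3])          # balanced target marginal
refined = refine_pseudo_labels(probs, target)
```

After refinement, the mean of the rows (the empirical class marginal of the pseudo-labels) is close to the balanced target, while each sample still receives a proper probability vector; the actual DARP algorithm achieves a similar alignment by provably solving a convex program.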

Authors (6)
  1. Jaehyung Kim (44 papers)
  2. Youngbum Hur (1 paper)
  3. Sejun Park (28 papers)
  4. Eunho Yang (89 papers)
  5. Sung Ju Hwang (178 papers)
  6. Jinwoo Shin (196 papers)
Citations (151)
