Imbalanced Deep Learning by Minority Class Incremental Rectification (1804.10851v1)

Published 28 Apr 2018 in cs.CV

Abstract: Model learning from class imbalanced training data is a long-standing and significant challenge for machine learning. In particular, existing deep learning methods consider mostly either class balanced data or moderately imbalanced data in model training, and ignore the challenge of learning from significantly imbalanced training data. To address this problem, we formulate a class imbalanced deep learning model based on batch-wise incremental minority (sparsely sampled) class rectification by hard sample mining in majority (frequently sampled) classes during model training. This model is designed to minimise the dominant effect of majority classes by discovering sparsely sampled boundaries of minority classes in an iterative batch-wise learning process. To that end, we introduce a Class Rectification Loss (CRL) function that can be deployed readily in deep network architectures. Extensive experimental evaluations are conducted on three imbalanced person attribute benchmark datasets (CelebA, X-Domain, DeepFashion) and one balanced object category benchmark dataset (CIFAR-100). These experimental results demonstrate the performance advantages and model scalability of the proposed batch-wise incremental minority class rectification model over the existing state-of-the-art models for addressing the problem of imbalanced data learning.

Authors (3)
  1. Qi Dong (21 papers)
  2. Shaogang Gong (94 papers)
  3. Xiatian Zhu (139 papers)
Citations (316)

Summary

Imbalanced Deep Learning by Minority Class Incremental Rectification

Imbalanced data distribution poses a significant challenge in deep learning, particularly when a dataset contains disproportionately many samples of certain classes relative to others. This paper by Qi Dong, Shaogang Gong, and Xiatian Zhu presents a novel approach to the problem through a method termed "Class Rectification Loss" (CRL), aimed at improving the recognition of minority classes within imbalanced datasets.

The primary focus of the paper is on making deep learning models perform better on severely imbalanced data, which is common in many real-world applications. Existing approaches typically rely on pre-processing techniques such as data re-sampling, or on algorithmic modifications such as cost-sensitive learning. Both have limitations: over-sampling risks overfitting to duplicated minority samples, while down-sampling discards valuable information from the majority classes.
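
For context, a common cost-sensitive baseline simply reweights the cross-entropy loss by inverse class frequency. Below is a minimal PyTorch sketch with hypothetical class counts; the weighting heuristic is illustrative, not the paper's method:

```python
import torch

# Hypothetical per-class training counts for a 3-class problem.
class_counts = torch.tensor([9500.0, 400.0, 100.0])
# Inverse-frequency weights: n_samples / (n_classes * count_c),
# so rarer classes contribute proportionally larger loss terms.
weights = class_counts.sum() / (len(class_counts) * class_counts)
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```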

The authors propose a batch-wise incremental approach in which minority class rectification occurs through hard sample mining: in each mini-batch during training, the hardest samples of the minority classes are selected. This incrementally reduces the model's learning bias towards majority classes by concentrating effort on the harder task of discovering and delineating minority class decision boundaries.
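
A minimal PyTorch sketch of what such batch-wise hard mining can look like; the centroid-based distance and top-k selection rule here are illustrative assumptions, not the paper's exact criterion:

```python
import torch

def mine_hard_examples(embeddings, labels, minority_classes, k=3):
    """Batch-wise hard mining sketch for minority classes.

    For each minority class present in the mini-batch, select the k
    hardest positives (same-class samples farthest from the class
    centroid) and the k hardest negatives (other-class samples closest
    to it).
    """
    hard_sets = {}
    for c in minority_classes:
        pos_mask = labels == c
        n_pos = int(pos_mask.sum())
        if n_pos == 0:
            continue  # class absent from this mini-batch
        centroid = embeddings[pos_mask].mean(dim=0)
        dist = (embeddings - centroid).norm(dim=1)
        # Hardest positives: same class, farthest from the centroid.
        pos_dist = dist.masked_fill(~pos_mask, float('-inf'))
        hard_pos = pos_dist.topk(min(k, n_pos)).indices
        # Hardest negatives: other classes, closest to the centroid
        # (negate so topk returns the smallest distances).
        neg_dist = dist.masked_fill(pos_mask, float('inf'))
        hard_neg = (-neg_dist).topk(min(k, len(labels) - n_pos)).indices
        hard_sets[int(c)] = (hard_pos, hard_neg)
    return hard_sets
```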

The authors introduce a Class Rectification Loss (CRL) function that can be integrated into existing deep network architectures. CRL works in tandem with the standard cross-entropy loss, pushing models to learn more informative representations of minority-class features. Crucially, it fits the batch-wise processing paradigm of modern deep networks, scaling to large datasets without global data pre-processing burdens or assumptions.
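
Reusing the mining helper sketched above, the combined objective might be wired up as follows; the hinge-over-hard-pairs form and the fixed weighting are assumptions for illustration, as the paper discusses several CRL formulations:

```python
import torch
import torch.nn.functional as F

def crl_batch_loss(logits, embeddings, labels, minority_classes,
                   alpha=0.01, margin=0.5, k=3):
    """Sketch of a combined objective: standard cross-entropy plus a
    margin-based rectification term over mined hard examples."""
    ce = F.cross_entropy(logits, labels)
    rect = embeddings.new_zeros(())
    n_terms = 0
    for c, (hard_pos, hard_neg) in mine_hard_examples(
            embeddings, labels, minority_classes, k).items():
        if hard_pos.numel() == 0 or hard_neg.numel() == 0:
            continue
        centroid = embeddings[labels == c].mean(dim=0)
        d_pos = (embeddings[hard_pos] - centroid).norm(dim=1)
        d_neg = (embeddings[hard_neg] - centroid).norm(dim=1)
        # Hinge over all hard positive/negative pairs: push negatives
        # at least `margin` farther from the class centroid than the
        # hard positives.
        rect = rect + F.relu(margin + d_pos.unsqueeze(1)
                             - d_neg.unsqueeze(0)).mean()
        n_terms += 1
    if n_terms > 0:
        rect = rect / n_terms
    return (1 - alpha) * ce + alpha * rect
```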

Three core contributions of the paper stand out:

  1. It presents a scalable solution for class imbalanced deep learning by leveraging batch-wise optimisation.
  2. It incorporates hard sample mining within mini-batch training to rectify model bias, making significant headway over existing re-sampling and cost-sensitive methods.
  3. It demonstrates CRL's applicability across datasets of differing nature and degree of imbalance, with significant gains on facial and clothing attribute recognition benchmarks.

The empirical evaluation spans multiple datasets, including significantly imbalanced person attribute datasets like CelebA and X-Domain, as well as a balanced dataset, CIFAR-100, to underline the versatility and effectiveness of their approach. The experiments show notable improvements in class-balanced accuracy, demonstrating the higher sensitivity and precision of CRL-equipped models for minority classes compared to traditional methods and state-of-the-art imbalanced learning techniques.
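
Class-balanced accuracy, the headline metric here, averages accuracy over classes so that minority classes count as much as majority ones. A minimal NumPy reference implementation, not the paper's evaluation code:

```python
import numpy as np

def class_balanced_accuracy(y_true, y_pred):
    """Mean per-class accuracy (macro recall): each class contributes
    equally regardless of how many samples it has."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c)
                 for c in np.unique(y_true)]
    return float(np.mean(per_class))
```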

The implications of these findings are both practical and theoretical. Practically, the proposed method can enhance the deployment of deep learning models in domains laden with naturally imbalanced data distributions, such as healthcare diagnostics or rare event detection. Theoretically, these results underscore the potential for better embedding learning that remains adaptive even under vastly skewed data conditions. CRL's design leveraging hard sample mining also presents an intriguing pathway for future research, particularly in the realms of online learning and active learning where data imbalance is inherently severe.

This paper provides foundational insights into the mechanics and potentials of addressing class imbalance through deep learning, emphasizing the importance of rectifying sample representation progressively and locally in model training. Future research directions illuminated by this work include extending CRL frameworks to integrate directly with online deep learning systems, enabling real-time adaptive learning with imbalanced streams of data.