
Balanced-MixUp for Highly Imbalanced Medical Image Classification (2109.09850v1)

Published 20 Sep 2021 in cs.CV

Abstract: Highly imbalanced datasets are ubiquitous in medical image classification problems. In such problems, rare classes associated with less prevalent diseases are often severely under-represented in labeled databases, typically resulting in poor performance of machine learning algorithms due to overfitting in the learning process. In this paper, we propose a novel mechanism for sampling training data based on the popular MixUp regularization technique, which we refer to as Balanced-MixUp. In short, Balanced-MixUp simultaneously performs regular (i.e., instance-based) and balanced (i.e., class-based) sampling of the training data. The resulting two sets of samples are then mixed up to create a more balanced training distribution from which a neural network can effectively learn without heavily under-fitting the minority classes. We experiment with a highly imbalanced dataset of retinal images (55K samples, 5 classes) and a long-tail dataset of gastro-intestinal video frames (10K images, 23 classes), using two CNNs of varying representation capabilities. Experimental results demonstrate that applying Balanced-MixUp outperforms other conventional sampling schemes and loss functions specifically designed to deal with imbalanced data. Code is released at https://github.com/agaldran/balanced_mixup .

Authors (3)
  1. Adrian Galdran (36 papers)
  2. Gustavo Carneiro (129 papers)
  3. Miguel A. González Ballester (18 papers)
Citations (94)

Summary

Balanced-MixUp for Highly Imbalanced Medical Image Classification: An Expert Overview

This paper presents an innovative approach to address the pervasive challenge of class imbalance in medical image classification: a method termed Balanced-MixUp. The core innovation lies in combining the MixUp regularization technique with a specialized data sampling strategy to enhance performance in learning scenarios with imbalanced datasets. The efficacy of this methodology is established through empirical evaluation on two distinct datasets with substantial imbalance: a retinal image dataset for Diabetic Retinopathy (DR) classification and a gastrointestinal image dataset.

Methodology and Contributions

In medical image classification, prevalent diseases are typically over-represented while rare diseases are under-represented, so models tend to fit the dominant classes and generalize poorly on the rare ones. The paper introduces Balanced-MixUp, a strategy that combines MixUp, a regularization technique that creates synthetic training samples by linearly mixing pairs of images and their labels, with a dual data sampling scheme. Each training step draws data points both from an instance-based distribution (mirroring the inherent data imbalance) and from a class-balanced distribution (emphasizing minority classes). Mixing samples from these two streams produces a more heterogeneous training distribution in which minority classes are better represented, as sketched below.
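To make the dual-sampling idea concrete, here is a minimal PyTorch sketch of the two streams. The toy dataset, class counts, and batch size are illustrative assumptions, not taken from the authors' released code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in for an imbalanced dataset: 1000 samples over 5 classes,
# with class 0 heavily over-represented (loosely mimicking DR grading).
labels = torch.tensor([0] * 700 + [1] * 150 + [2] * 100 + [3] * 40 + [4] * 10)
images = torch.randn(len(labels), 3, 32, 32)
train_dataset = TensorDataset(images, labels)

# Instance-based stream: plain shuffling reproduces the natural imbalance.
instance_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

# Class-balanced stream: each sample's weight is the inverse of its class
# frequency, so all classes are drawn equally often in expectation.
class_counts = torch.bincount(labels).float()
weights = 1.0 / class_counts[labels]
balanced_sampler = WeightedRandomSampler(weights, num_samples=len(labels))
balanced_loader = DataLoader(train_dataset, batch_size=16, sampler=balanced_sampler)
```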

The authors employ Balanced-MixUp on two convolutional neural networks (CNNs) of differing capacity: MobileNet V2 and ResNeXt50. A single hyperparameter α regulates the mixing ratio of instance-based to class-balanced samples; by tuning it, the method improves minority-class representation and generalization across both architectures, mitigating the overfitting endemic to under-represented classes. A sketch of one training step follows.
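The sketch below assumes the mixing coefficient is drawn as λ ~ Beta(α, 1), with the class-balanced sample entering the mix as the minor component, consistent with the idea of mildly inflating minority classes; the function and variable names are illustrative, and the released repository should be consulted for the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

ALPHA = 0.2  # illustrative value; the paper tunes alpha per dataset and network

def balanced_mixup_step(model, instance_batch, balanced_batch, num_classes):
    # One training step under Balanced-MixUp (sketch, not the reference code).
    x_i, y_i = instance_batch   # instance-based (imbalanced) samples
    x_b, y_b = balanced_batch   # class-balanced samples
    # Beta(alpha, 1) concentrates near 0 for small alpha, so the balanced
    # sample typically enters the mix as a mild perturbation.
    lam = torch.distributions.Beta(ALPHA, 1.0).sample().item()
    x_mix = lam * x_b + (1.0 - lam) * x_i
    y_mix = lam * F.one_hot(y_b, num_classes).float() \
        + (1.0 - lam) * F.one_hot(y_i, num_classes).float()
    logits = model(x_mix)
    # Cross-entropy against the soft (mixed) labels.
    loss = -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return loss
```

In practice, one batch would be drawn from each of the two loaders shown earlier at every optimization step.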

Numerical Results and Claims

The paper conducts experiments on a heavily imbalanced diabetic retinopathy grading dataset with five classes and a long-tailed gastrointestinal image dataset with 23 classes. Key performance metrics include the quadratic-weighted kappa (quad-κ), the Matthews correlation coefficient (MCC), and balanced accuracy. For the DR grading task, the results indicate that Balanced-MixUp consistently yields superior quad-κ and MCC values, outperforming both simple class- and instance-based sampling and specialized loss functions such as Focal Loss and Class-Balanced Loss. In the GI image classification task, Balanced-MixUp with a well-tuned α handles minority classes better, as evidenced by higher balanced accuracy and macro-F1 scores.
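All of these metrics (plus macro-F1) are standard and available in scikit-learn; for reference, a minimal computation on placeholder predictions, not the paper's results, looks like this:

```python
from sklearn.metrics import (cohen_kappa_score, matthews_corrcoef,
                             balanced_accuracy_score, f1_score)

# Placeholder labels and predictions for illustration only.
y_true = [0, 0, 1, 2, 2, 3, 4, 0, 1, 2]
y_pred = [0, 1, 1, 2, 2, 3, 3, 0, 1, 2]

quad_kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mcc = matthews_corrcoef(y_true, y_pred)
bal_acc = balanced_accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")
```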

Implications and Future Directions

Balanced-MixUp contributes to medical image analysis by offering a method that effectively combats class imbalance, a common barrier to deploying AI in clinical settings. The approach promises improved model generalization and reliability in rare disease detection, addressing a crucial gap in medical diagnostics. Moving forward, its application can be extended beyond image classification to other tasks with similarly skewed class distributions, such as segmentation. Further research could explore integration with architectures leveraging attention mechanisms or semi-supervised learning to extend its benefits.

In conclusion, Balanced-MixUp provides a significant methodological contribution to the field of imbalanced learning in medical image classification by strategically leveraging synthetic data generation through MixUp combined with adaptive sampling. Future advancements should focus on refining this approach for broader applications and incorporating it into more sophisticated architectures for robust performance enhancement across diverse medical AI scenarios.