PRCBL: Prior Regularized Class Balanced Loss
- The paper introduces PRCBL, a novel loss function that combines prior logit regularization and effective number-based weighting to address class imbalance in class-incremental learning.
- It employs a methodology that integrates empirical class priors with loss reweighting to mitigate overfitting on majority classes and reduce catastrophic forgetting.
- The framework, applied in EndoCIL, demonstrates improved accuracy and balanced performance on endoscopic benchmarks through coordinated buffer-based replay and gradient calibration.
Prior Regularized Class Balanced Loss (PRCBL) is a loss function designed to address class imbalance in class-incremental learning scenarios, notably for endoscopic image classification. It achieves robustness against both intra-phase and inter-phase class imbalance by combining prior probability adjustments to logits and class-balanced weighting based on the effective number of samples. PRCBL was introduced as a core component of the EndoCIL framework and leverages theoretical advances from long-tailed recognition literature to maintain learning stability and plasticity across evolving classification tasks (Liu et al., 20 Oct 2025).
1. Motivation: Class Imbalance in Class-Incremental Learning
Class-incremental learning (CIL) presents unique challenges in real-world medical imaging, where new classes (representing novel disease states or anatomical observations) are introduced sequentially. This causes:
- Inter-phase imbalance: Disparity in sample counts between previously learned classes and newly-introduced ones during each incremental phase.
- Intra-phase imbalance: Uneven distribution of samples among classes within the current task.
Standard cross-entropy loss functions, if applied naively, tend to overfit head classes and induce catastrophic forgetting for tail classes, especially in the presence of distribution shifts. PRCBL seeks to mitigate these biases by integrating class prior information and class-balanced reweighting directly into the training objective (Liu et al., 20 Oct 2025).
2. Prior Regularization of Logits
PRCBL introduces prior correction to logits based on empirical class distributions. Formally, for a training dataset with $C$ classes and per-class sample counts $n_1, \dots, n_C$, the prior probability of class $c$ is defined as:

$$\pi_c = \frac{n_c}{\sum_{j=1}^{C} n_j}, \qquad c = 1, \dots, C.$$

The logits $z_c$ output by the classifier are regularized by addition of log-prior terms:

$$\tilde{z}_c = z_c + \log \pi_c.$$
This modification shifts the softmax decision boundaries to reflect the observed class frequencies, aligning the model's predictions with the sampling distribution of the data and counteracting bias toward majority classes during incremental updates (Liu et al., 20 Oct 2025).
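A minimal PyTorch-style sketch of this logit adjustment is given below; the function name and the unscaled additive use of the log-prior are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def prior_regularized_logits(logits: torch.Tensor, class_counts: torch.Tensor) -> torch.Tensor:
    """Shift classifier logits by the log of the empirical class priors.

    logits:       (batch, C) raw classifier outputs z_c
    class_counts: (C,) per-class sample counts n_c from the current training set
    """
    priors = class_counts.float() / class_counts.sum()        # pi_c = n_c / sum_j n_j
    return logits + torch.log(priors + 1e-12).unsqueeze(0)    # z_c + log(pi_c), broadcast over the batch
```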
3. Class-Balanced Loss Weights via Effective Number of Samples
Building on the theoretical framework of data overlap and coverage (Cui et al., 2019), PRCBL employs class weights derived from the effective number of samples per class:

$$E_{n_c} = \frac{1 - \beta^{n_c}}{1 - \beta},$$

where $\beta \in [0, 1)$ is a hyperparameter governing the redundancy among samples. The associated weight for class $c$ is computed as:

$$w_c = \frac{1}{E_{n_c}} = \frac{1 - \beta}{1 - \beta^{n_c}}.$$

To preserve the overall loss scale, the weights are normalized so that they sum to the number of classes:

$$w_c \leftarrow \frac{C \, w_c}{\sum_{j=1}^{C} w_j}.$$
This approach ensures that classes with few samples exert proportionally greater influence in loss computation, while those with many samples are down-weighted in accordance with their diminished marginal information content (Liu et al., 20 Oct 2025, Cui et al., 2019).
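A short sketch of this effective-number weighting, following Cui et al. (2019), is shown below; the normalization to sum to $C$ and the default value of $\beta$ are common choices assumed here, not values reported in the EndoCIL paper.

```python
import torch

def class_balanced_weights(class_counts: torch.Tensor, beta: float = 0.999) -> torch.Tensor:
    """Compute per-class weights from the effective number of samples.

    class_counts: (C,) per-class sample counts n_c
    beta:         redundancy hyperparameter in [0, 1)
    """
    effective_num = (1.0 - beta ** class_counts.float()) / (1.0 - beta)  # E_{n_c}
    weights = 1.0 / effective_num                                        # w_c = 1 / E_{n_c}
    return weights * class_counts.numel() / weights.sum()                # rescale so the weights sum to C
```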
4. Loss Formulation and Task Adaptation
PRCBL is deployed in a multi-phase CIL framework. For the initial task ($t = 1$), standard cross-entropy loss is used over the prior-regularized logits:

$$\mathcal{L}_{\text{CE}} = -\log \frac{\exp(z_y + \log \pi_y)}{\sum_{c=1}^{C} \exp(z_c + \log \pi_c)}.$$

For subsequent tasks ($t > 1$), the class-balanced weights modulate the loss:

$$\mathcal{L}_{\text{PRCBL}} = -\,w_y \log \frac{\exp(z_y + \log \pi_y)}{\sum_{c=1}^{C} \exp(z_c + \log \pi_c)},$$

where $y$ denotes the ground-truth label. To preserve prior knowledge, a knowledge distillation (KD) term is incorporated:

$$\mathcal{L} = \mathcal{L}_{\text{PRCBL}} + \lambda \, \mathcal{L}_{\text{KD}},$$

where $\lambda$ is a tunable parameter controlling the plasticity-stability trade-off. This composite objective resists catastrophic forgetting and encourages learning fidelity for both former and novel classes (Liu et al., 20 Oct 2025).
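The composite objective can be sketched as follows; the specific KD term used here (a temperature-scaled soft-target divergence between old- and new-model logits over previously seen classes) and the exact coupling of the terms are standard assumptions for illustration, not the paper's verbatim formulation.

```python
import torch
import torch.nn.functional as F

def prcbl_loss(logits, targets, class_counts, weights=None, old_logits=None,
               lam: float = 1.0, temperature: float = 2.0) -> torch.Tensor:
    """Prior-regularized, class-balanced loss with an optional KD term.

    logits:       (B, C) current-model outputs
    targets:      (B,)   ground-truth labels
    class_counts: (C,)   per-class sample counts for the prior
    weights:      (C,)   normalized class-balanced weights w_c (None on the first task)
    old_logits:   (B, C_old) frozen previous-model outputs (None on the first task)
    lam:          plasticity-stability trade-off
    """
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + torch.log(priors + 1e-12)                  # prior-regularized logits
    ce = F.cross_entropy(adjusted, targets, weight=weights)        # plain CE when weights is None

    if old_logits is None:                                         # initial task: no distillation
        return ce

    c_old = old_logits.size(1)                                     # distill only over old classes
    kd = F.kl_div(F.log_softmax(logits[:, :c_old] / temperature, dim=1),
                  F.softmax(old_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return ce + lam * kd
```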
5. Empirical Performance and Framework Integration
Within the EndoCIL architecture, PRCBL is combined with buffer-based replay (Maximum Mean Discrepancy Based Replay, MDBR) and classifier gradient calibration (CFG). PRCBL is directly responsible for:
- Modifying decision boundaries via prior logit regularization.
- Re-balancing per-class contributions to the loss, thereby protecting rare old classes and boosting new minority classes.
- Maintaining overall class-wise fairness in incremental model adaptation.
Quantitative evaluations on four public endoscopic benchmarks demonstrate increased last-task accuracy, higher average incremental accuracy, and reduced average forgetting (AF) when PRCBL is utilized. The improved F1-scores and resilience to long-tail effects indicate that PRCBL is effective in mitigating both intra- and inter-phase imbalance, contributing to clinically relevant diagnostic performance (Liu et al., 20 Oct 2025).
6. Relationship to Related Approaches
PRCBL synthesizes prior modeling and class-balanced loss strategies:
- Prior2Posterior estimates an "effective prior" from the model's a posteriori predictions and applies post-hoc logit adjustment based on learned priors rather than empirical frequencies (Bhat et al., 21 Dec 2024).
- The class-balanced loss by Cui et al. employs effective number-based weighting, analogous to PRCBL's balancing mechanism, but does not necessarily incorporate prior logit regularization or integration with incremental learning frameworks (Cui et al., 2019).
In contrast to purely resampling or conventional loss reweighting, PRCBL's combination of prior adjustment and effective number-based weighting directly targets the two primary sources of class imbalance in CIL scenarios (Liu et al., 20 Oct 2025).
7. Significance and Prospective Implications
PRCBL enables scalable, balanced learning in environments characterized by frequent class distribution shifts and sparse labels, as in medical imaging workflows. Its principled integration of prior information and robust balancing strategies supports improved model stability, mitigates catastrophic forgetting, and enhances fairness across head and tail classes. A plausible implication is that the framework underlying PRCBL, when generalized, may be adaptable to other domains that suffer from distributional skew and require incremental adaptation, beyond clinical image analysis.
Empirical results in EndoCIL suggest that the adoption of PRCBL leads to measurable gains in both accuracy and robustness, and may offer a template for future research in lifelong class-incremental learning under severe imbalance conditions.