PRCBL: Prior Regularized Class Balanced Loss

Updated 27 October 2025
  • The paper introduces PRCBL, a novel loss function that combines prior logit regularization and effective number-based weighting to address class imbalance in class-incremental learning.
  • It employs a methodology that integrates empirical class priors with loss reweighting to mitigate overfitting on majority classes and reduce catastrophic forgetting.
  • The framework, applied in EndoCIL, demonstrates improved accuracy and balanced performance on endoscopic benchmarks through coordinated buffer-based replay and gradient calibration.

Prior Regularized Class Balanced Loss (PRCBL) is a loss function designed to address class imbalance in class-incremental learning scenarios, notably for endoscopic image classification. It achieves robustness against both intra-phase and inter-phase class imbalance by combining prior probability adjustments to logits and class-balanced weighting based on the effective number of samples. PRCBL was introduced as a core component of the EndoCIL framework and leverages theoretical advances from long-tailed recognition literature to maintain learning stability and plasticity across evolving classification tasks (Liu et al., 20 Oct 2025).

1. Motivation: Class Imbalance in Class-Incremental Learning

Class-incremental learning (CIL) presents unique challenges in real-world medical imaging, where new classes (representing novel disease states or anatomical observations) are introduced sequentially. This causes:

  • Inter-phase imbalance: Disparity in sample counts between previously learned classes and newly introduced ones during each incremental phase.
  • Intra-phase imbalance: Uneven distribution of samples among classes within the current task.

Standard cross-entropy loss functions, if applied naively, tend to overfit head classes and induce catastrophic forgetting for tail classes, especially in the presence of distribution shifts. PRCBL seeks to mitigate these biases by integrating class prior information and class-balanced reweighting directly into the training objective (Liu et al., 20 Oct 2025).

2. Prior Regularization of Logits

PRCBL introduces prior correction to logits based on empirical class distributions. Formally, for a training dataset with $C$ classes and per-class sample counts $n_i$, the prior probability vector $\mathbf{p}^{\text{prior}}$ is defined as:

$$p_i^{\text{prior}} = \frac{n_i}{\sum_{j=1}^{C} n_j}$$

The logits $z_i$ output by the classifier are regularized by addition of log-prior terms:

$$p_i = \frac{\exp(z_i + \log p_i^{\text{prior}})}{\sum_{j=1}^{C} \exp(z_j + \log p_j^{\text{prior}})}$$

This modification shifts the softmax decision boundaries to reflect the observed class frequencies, aligning the model's predictions with the sampling distribution and counteracting bias toward majority classes during incremental updates (Liu et al., 20 Oct 2025).
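
In code, the adjustment amounts to adding the log of the empirical class priors to the raw logits before the softmax. The following PyTorch sketch illustrates this step; the function name, tensor shapes, and the small epsilon guard are illustrative assumptions and not taken from the published EndoCIL implementation.

```python
import torch
import torch.nn.functional as F

def prior_regularized_log_probs(logits: torch.Tensor,
                                class_counts: torch.Tensor,
                                eps: float = 1e-12) -> torch.Tensor:
    """Add log empirical class priors to logits, then take log-softmax.

    logits:       (batch, C) raw classifier outputs z_i
    class_counts: (C,) per-class sample counts n_i
    """
    priors = class_counts.float() / class_counts.sum()    # p_i^prior
    adjusted = logits + torch.log(priors + eps)           # z_i + log p_i^prior
    return F.log_softmax(adjusted, dim=1)                 # log of the adjusted softmax p_i

# Hypothetical usage: 4 classes with strongly imbalanced counts
logits = torch.randn(8, 4)
counts = torch.tensor([500, 120, 30, 5])
log_p = prior_regularized_log_probs(logits, counts)
```

Because the prior enters only as an additive shift to the logits, the adjustment is inexpensive and can be recomputed whenever the class counts change between incremental phases.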

3. Class-Balanced Loss Weights via Effective Number of Samples

Building on the theoretical framework of data overlap and coverage (Cui et al., 2019), PRCBL employs class weights derived from the effective number of samples per class:

$$E_i = \frac{1 - \beta^{n_i}}{1 - \beta}$$

where $\beta \in [0, 1)$ is a hyperparameter governing the redundancy among samples. The associated weight for class $i$ is computed as:

$$\alpha_i = \frac{1 - \beta}{1 - \beta^{n_i}}$$
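
Two limiting cases, a standard observation from Cui et al. (2019), clarify the role of $\beta$:

$$\lim_{\beta \to 0} E_i = 1, \qquad \lim_{\beta \to 1^{-}} E_i = n_i$$

so $\beta$ interpolates between uniform class weighting ($\beta = 0$, every class counts as a single effective sample) and inverse-frequency weighting ($\beta \to 1$, weights proportional to $1/n_i$).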

To preserve total loss scale, weights are normalized:

$$W_i^{(\text{CB})} = \alpha_i \cdot \frac{C}{\sum_{j=1}^{C} \alpha_j}$$

This approach ensures that classes with few samples exert proportionally greater influence in loss computation, while those with many samples are down-weighted in accordance with their diminished marginal information content (Liu et al., 20 Oct 2025, Cui et al., 2019).
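
A minimal PyTorch sketch of the weighting scheme, assuming the formulas above, is shown below; the function name and the default value $\beta = 0.999$ (a common choice in the class-balanced loss literature) are illustrative assumptions.

```python
import torch

def class_balanced_weights(class_counts: torch.Tensor,
                           beta: float = 0.999) -> torch.Tensor:
    """Compute normalized class-balanced weights W_i^(CB).

    class_counts: (C,) per-class sample counts n_i
    beta:         redundancy hyperparameter in [0, 1)
    """
    counts = class_counts.float()
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)   # E_i
    alpha = 1.0 / effective_num                              # alpha_i = (1 - beta) / (1 - beta^{n_i})
    return alpha * counts.numel() / alpha.sum()              # rescale so the weights sum to C

# Hypothetical usage with a long-tailed count vector
w = class_balanced_weights(torch.tensor([500, 120, 30, 5]))
# rare classes receive proportionally larger weights
```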

4. Loss Formulation and Task Adaptation

PRCBL is deployed in a multi-phase CIL framework. For the initial task ($t = 0$), standard cross-entropy loss is used over the prior-regularized logits:

$$L_{\text{CE}}^{\text{prior}} = -\log p_y$$

For subsequent tasks ($t > 0$), class-balanced weights modulate the loss:

$$L_{\text{CE}}^{\text{prior}} = -W_y^{(\text{CB})} \log p_y$$

where $y$ denotes the ground-truth label. To preserve prior knowledge, a knowledge distillation (KD) term is incorporated:

$$L_t = L_{\text{CE}}^{\text{prior}} + \lambda L_{\text{KD}}$$

where $\lambda$ is a tunable parameter controlling the plasticity-stability trade-off. This composite objective resists catastrophic forgetting while maintaining learning fidelity for both former and novel classes (Liu et al., 20 Oct 2025).
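
Putting the pieces together, a phase-$t$ objective combines the prior-regularized, class-balanced cross-entropy with a distillation term against the previous-phase model. The sketch below is one plausible assembly under stated assumptions: the KD term is written as a temperature-scaled KL divergence over the old classes, which is a common instantiation but is not specified in detail above, and all names (old_logits, temperature, lam) are illustrative.

```python
import torch
import torch.nn.functional as F

def prcbl_loss(logits, targets, class_counts, weights=None,
               old_logits=None, lam=1.0, temperature=2.0):
    """Prior-regularized, class-balanced cross-entropy plus an optional KD term.

    logits:       (batch, C) current-model outputs
    targets:      (batch,) ground-truth labels y
    class_counts: (C,) per-class sample counts used for the prior
    weights:      (C,) class-balanced weights W_i^(CB); None for the initial task (t = 0)
    old_logits:   (batch, C_old) frozen previous-model outputs, or None
    """
    priors = class_counts.float() / class_counts.sum()
    log_p = F.log_softmax(logits + torch.log(priors + 1e-12), dim=1)
    ce = F.nll_loss(log_p, targets, weight=weights)        # -W_y^(CB) log p_y (plain CE if weights is None)

    kd = logits.new_zeros(())
    if old_logits is not None:
        n_old = old_logits.size(1)
        kd = F.kl_div(
            F.log_softmax(logits[:, :n_old] / temperature, dim=1),
            F.softmax(old_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2

    return ce + lam * kd                                    # L_t = L_CE^prior + lambda * L_KD
```

For the initial task, calling the function with weights=None and old_logits=None reduces the objective to the plain prior-regularized cross-entropy described above.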

5. Empirical Performance and Framework Integration

Within the EndoCIL architecture, PRCBL is combined with buffer-based replay (Maximum Mean Discrepancy Based Replay, MDBR) and classifier gradient calibration (CFG). PRCBL is directly responsible for:

  • Modifying decision boundaries via prior logit regularization.
  • Re-balancing per-class contributions to the loss, thereby protecting rare old classes and boosting new minority classes.
  • Maintaining overall class-wise fairness in incremental model adaptation.

Quantitative evaluations on four public endoscopic benchmarks demonstrate increased last-task accuracy ($\text{Acc}_{\text{last}}$), higher average accuracy ($\text{Acc}_{\text{avg}}$), and reduced average forgetting (AF) when PRCBL is utilized. The improved F1-scores and resilience to long-tail effects indicate that PRCBL is effective in mitigating both intra- and inter-phase imbalance, contributing to clinically relevant diagnostic performance (Liu et al., 20 Oct 2025).

6. Relation to Prior-Based and Class-Balanced Loss Methods

PRCBL synthesizes prior modeling and class-balanced loss strategies. Related approaches include:

  • Prior2Posterior estimates an "effective prior" from the model's a posteriori predictions and applies a post-hoc logit adjustment based on this learned prior rather than on empirical frequencies (Bhat et al., 21 Dec 2024).
  • The class-balanced loss of Cui et al. employs effective number-based weighting, analogous to PRCBL's balancing mechanism, but does not incorporate prior logit regularization or integrate with incremental learning frameworks (Cui et al., 2019).

In contrast to pure resampling or conventional loss reweighting, PRCBL's combination of prior adjustment and effective number-based weighting directly targets the two primary sources of class imbalance in CIL scenarios (Liu et al., 20 Oct 2025).

7. Significance and Prospective Implications

PRCBL enables scalable, balanced learning in environments characterized by frequent class distribution shifts and sparse labels, as in medical imaging workflows. Its principled integration of prior information and robust balancing strategies supports improved model stability, mitigates catastrophic forgetting, and enhances fairness across head and tail classes. A plausible implication is that the framework underlying PRCBL, when generalized, may be adaptable to other domains that suffer from distributional skew and require incremental adaptation, beyond clinical image analysis.

Empirical results in EndoCIL suggest that the adoption of PRCBL leads to measurable gains in both accuracy and robustness, and may offer a template for future research in lifelong class-incremental learning under severe imbalance conditions.
