
Exponentially Weighted IRFS Methods

Updated 27 March 2026
  • Exponentially Weighted IRFS comprises dual methodologies: one that amplifies rare-class sampling in long-tailed object detection and another that applies exponential forgetting in filtering and smoothing.
  • In object detection, the approach exponentially increases the sampling probability of rare classes, yielding significant mAP improvements and better performance on under-represented categories.
  • In state estimation, the method leverages exponential weighting to prioritize recent measurements, offering an efficient alternative to classical Kalman filters for time-correlated data.

Exponentially Weighted IRFS (E-IRFS) refers to two distinct, domain-specific methodologies that both leverage exponential weighting to address challenges of imbalanced information—one in the context of data sampling for long-tailed object detection, the other in recursive state estimation for filtering and smoothing. The first variant, Exponentially Weighted Instance-Aware Repeat Factor Sampling, amplifies the likelihood of rare-class image samples exponentially during deep learning training. The second, Exponentially Weighted Information Recursion for Filtering and Smoothing, employs an exponential weighting over time in state-space estimation, yielding an efficient alternative to classical Kalman filters for time-correlated or out-of-sequence measurements. Both approaches share the core principle of using exponential scaling to modulate the influence of rare or recent observations.

1. Exponentially Weighted Instance-Aware Repeat Factor Sampling (E-IRFS) for Long-Tailed Detection

Exponentially Weighted Instance-Aware Repeat Factor Sampling addresses severe class imbalance in object detection datasets, particularly relevant in UAV-based surveillance, where rare categories are frequently under-represented. E-IRFS generalizes previous sampling schemes—Repeat Factor Sampling (RFS) and Instance-Aware Repeat Factor Sampling (IRFS)—by replacing sublinear scaling with exponential scaling based on the geometric mean of image and instance class frequencies. This increases the sampling probability of images containing rare objects more aggressively than prior methods, making it particularly effective for long-tailed data distributions (Ahmed et al., 27 Mar 2025).

1.1. Mathematical Formulation

Let $f_{i,c}$ denote the fraction of training images containing at least one object of class $c$, and $f_{b,c}$ the fraction of all bounding boxes labeled as $c$. The geometric mean frequency is

$$g_c = \sqrt{f_{i,c} \cdot f_{b,c}}, \qquad g_c \in (0,1].$$

The exponentially weighted repeat factor for class $c$ is then

$$r_c = \max\left(1,\ \exp\left(\alpha\sqrt{\frac{t}{g_c}}\right)\right),$$

where $t$ is a small threshold parameter (e.g., $t = 10^{-4}$) and $\alpha > 0$ controls the aggressiveness of up-weighting (e.g., $\alpha = 2.0$).

For image $i$, the repeat factor is the maximum over its annotated classes,

$$r_i = \max_{c \in \mathrm{classes}(i)} r_c.$$

Sampling probabilities across images are normalized:

$$p_i = \frac{r_i}{\sum_{j=1}^{N} r_j}.$$

Thus, images with rare classes receive exponentially higher sampling rates.
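As a concrete illustration, the repeat factors above can be computed directly. The sketch below (function names are ours, not from the paper's code) contrasts the exponential E-IRFS factor with the sublinear RFS factor $r_c = \max(1, \sqrt{t/f_c})$ that it generalizes:

```python
import math

def rfs_factor(f_c, t=1e-4):
    """Sublinear RFS repeat factor from the image-level class frequency f_c."""
    return max(1.0, math.sqrt(t / f_c))

def eirfs_factor(g_c, t=1e-4, alpha=2.0):
    """Exponential E-IRFS repeat factor from the geometric-mean frequency g_c."""
    return max(1.0, math.exp(alpha * math.sqrt(t / g_c)))

# With t = 1e-4, a common class (frequency 0.5) is barely up-weighted,
# while a rare class (frequency 0.001) receives a much larger repeat
# factor under the exponential rule than under the sublinear one.
common_r = eirfs_factor(0.5)    # close to 1
rare_r = eirfs_factor(0.001)    # noticeably larger than 1
```

With these example frequencies, the sublinear RFS factor stays clamped at 1 for both classes, whereas the exponential rule already separates them, which is the inter-class separation effect discussed in Section 4.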

1.2. Integration and Algorithmic Workflow

The E-IRFS algorithm can be integrated within a typical deep learning training loop. Key steps involve:

  1. Computing per-class frequency statistics ($f_{i,c}$, $f_{b,c}$) and $g_c$ across the training dataset.
  2. Calculating $r_c$ for all classes, then determining $r_i$ for all images.
  3. Normalizing repeat factors to obtain $p_i$.
  4. Sampling batches during training according to $p_i$ at each step, so that images containing rare classes appear in batches in proportion to their exponential rarity weighting.

Pseudocode formalizing these steps is provided in (Ahmed et al., 27 Mar 2025).
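These four steps can be sketched end to end in plain Python. This is a minimal illustration under our own naming, not the authors' released code; in a real training pipeline the resulting probabilities $p_i$ would typically be handed to the data loader's weighted sampler.

```python
import math
import random
from collections import Counter

def eirfs_epoch_sampler(annotations, batch_size, t=1e-4, alpha=2.0, seed=0):
    """End-to-end E-IRFS workflow (steps 1-4 above), as a sketch.

    annotations: one class list per image, e.g. [["fire", "smoke"], ["lake"]].
    Yields batches of image indices drawn with probability p_i.
    """
    n = len(annotations)
    # Step 1: per-class frequency statistics f_{i,c}, f_{b,c} and g_c.
    img_count = Counter(c for classes in annotations for c in set(classes))
    box_count = Counter(c for classes in annotations for c in classes)
    n_boxes = sum(box_count.values())
    # Step 2: exponential repeat factors r_c, then per-image r_i.
    r_c = {c: max(1.0, math.exp(alpha * math.sqrt(
               t / math.sqrt((img_count[c] / n) * (box_count[c] / n_boxes)))))
           for c in img_count}
    r_i = [max(r_c[c] for c in classes) for classes in annotations]
    # Step 3: normalize repeat factors into sampling probabilities p_i.
    total = sum(r_i)
    p_i = [r / total for r in r_i]
    # Step 4: sample batches with replacement according to p_i.
    rng = random.Random(seed)
    for _ in range(n // batch_size):
        yield rng.choices(range(n), weights=p_i, k=batch_size)
```

Sampling with replacement means an epoch may contain a rare-class image several times, which is exactly the "repeat factor" effect the method is named for.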

2. Empirical Evaluation in UAV Emergency Detection

E-IRFS is validated on a merged UAV emergency-response benchmark built from five public datasets, targeting the classes Fire, Smoke, Human, and Lake. Ground-truth statistics show a strong long-tailed imbalance; e.g., the "Lake" class appears in only 8.6% of training images.

Model architectures under evaluation include the lightweight YOLOv11-Nano (2.6M parameters) and the larger YOLOv11-Large (25.3M). Experiments utilize standard Ultralytics YOLOv11 hyperparameters and augmentations and are executed on NVIDIA A30 hardware (Ahmed et al., 27 Mar 2025).

2.1. Quantitative Results

Results for YOLOv11-Nano, comparing sampling methods with and without augmentation, are summarized below:

| Method | mAP₅₀ (no aug) | Δ | mAP₅₀₋₉₅ (no aug) | Δ | mAP₅₀ (aug) | Δ | mAP₅₀₋₉₅ (aug) | Δ |
|---|---|---|---|---|---|---|---|---|
| Baseline | 0.45 | — | 0.22 | — | 0.49 | +8% | 0.23 | +4% |
| RFS | 0.49 | +8% | 0.22 | 0% | 0.50 | +11% | 0.24 | +9% |
| IRFS | 0.49 | +8% | 0.22 | 0% | 0.50 | +11% | 0.24 | +9% |
| E-IRFS (α=2) | 0.53 | +17% | 0.25 | +13% | 0.55 | +22% | 0.25 | +13% |

Class-wise improvements with augmentation (YOLOv11-Nano):

| Class | Baseline | Aug | Aug+RFS | Aug+IRFS | Aug+E-IRFS |
|---|---|---|---|---|---|
| Fire | 0.32 | 0.43 (+34%) | 0.44 (+37%) | 0.44 (+37%) | 0.53 (+65%) |
| Smoke | 0.80 | 0.81 (+1%) | 0.82 (+2%) | 0.82 (+2%) | 0.82 (+2%) |
| Human | 0.67 | 0.65 (–3%) | 0.67 (+0%) | 0.67 (+0%) | 0.78 (+16%) |
| Lake | 0.02 | 0.05 (+150%) | 0.06 (+200%) | 0.06 (+200%) | 0.09 (+350%) |

A high $\alpha$ (2.0) and small $t$ ($10^{-4}$) maximize the rare-class sampling effect, yielding a +22% improvement in mAP₅₀ over the non-rebalanced baseline. Lower $\alpha$ or larger $t$ temper the exponential reweighting (Ahmed et al., 27 Mar 2025).

3. Impact on Lightweight and Resource-Constrained Models

E-IRFS substantially benefits low-capacity detectors such as YOLOv11-Nano, where severe class imbalance exacerbates rare-class underfitting. As model size decreases, reliance on data-level balancing grows since smaller models lack the representational power to learn robust rare-class features from few examples. E-IRFS, by exponentially increasing rare-class image exposure, ensures rare-category gradients dominate a larger share of updates, facilitating better rare-object recognition without architectural changes. This property is advantageous when real-time constraints enforce model compactness, as in UAV-based deployments, where compute and energy efficiency are paramount (Ahmed et al., 27 Mar 2025).

4. Key Theoretical Insights, Limitations, and Generalizations

E-IRFS demonstrates how exponential transformations of instance-aware class rarity induce greater inter-class separation in sampling than sublinear approaches. Sensitivity analyses confirm that hyperparameter selection ($\alpha$, $t$) is essential: overly aggressive up-weighting can trigger rare-class overfitting, whereas insufficient up-weighting yields negligible gains.

Current evaluations restrict E-IRFS to object detection. Extending the paradigm to segmentation or multi-label tasks remains open. Hybridization with adaptive online reweighting, or coupling with targeted augmentations (such as context-aware copy-paste), are proposed for augmenting rare-class diversity and improving stability (Ahmed et al., 27 Mar 2025).

5. Exponentially Weighted Information Recursion for Filtering and Smoothing (E-IRFS)

In a separate context, exponentially weighted information filters and smoothers (sometimes also termed E-IRFS but unrelated to instance-aware sampling) refer to state estimation algorithms in deterministic linear dynamical systems where recency is emphasized through exponential weighting of measurements (Shulami et al., 2020). In these filters:

  • No process noise enters the dynamics; instead, measurement weights decay exponentially as $W_{\ell,k} = \exp\left(-\frac{t_k - t_\ell}{2\tau}\right)$, with $\tau > 0$ a time constant.
  • The recursive update rules in information form incorporate this decay via a forgetting parameter $\lambda = \exp\left(-\frac{t_k - t_{k-1}}{\tau}\right)$, which replaces additive process noise in the Kalman filter.

The resulting structure handles filtering, prediction, smoothing, and out-of-sequence measurements by means of a unified, optimal (in WLS sense) recursion with exponential forgetting. Under Gaussian noise, the solution attains the weighted Cramér–Rao lower bound and is algebraically equivalent to batch WLS for each step, provided the weighted telescoping condition holds.
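For intuition, a scalar special case of such an exponentially weighted information recursion can be sketched as follows. This is a minimal illustration under our own naming, assuming a deterministic constant-state model with scalar measurements and unit measurement-noise variance; the cited formulation is general and in matrix form.

```python
import math

def ew_information_filter(measurements, times, tau):
    """Scalar exponentially weighted information filter (sketch).

    Deterministic constant state x_k = x_{k-1}, measurements z_k = x_k + v_k
    with unit noise variance. Instead of adding process noise, accumulated
    information is discounted by lambda = exp(-(t_k - t_{k-1}) / tau).
    Returns the filtered estimates x_hat_k = y_k / Y_k.
    """
    Y, y = 0.0, 0.0          # information "matrix" and vector (scalars here)
    estimates = []
    t_prev = times[0]
    for z, t in zip(measurements, times):
        lam = math.exp(-(t - t_prev) / tau)   # exponential forgetting factor
        Y = lam * Y + 1.0                     # discount old info, add H'R^{-1}H = 1
        y = lam * y + z                       # discount old info, add H'R^{-1}z = z
        estimates.append(y / Y)
        t_prev = t
    return estimates
```

Unrolling the recursion shows each estimate equals the batch WLS solution with weights $\lambda^{k-\ell}$, i.e., an exponentially weighted average of all past measurements, consistent with the step-wise batch equivalence noted above.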

6. Summary and Outlook

Exponentially Weighted IRFS, in both object detection sampling and dynamical system filtering frameworks, provides a principled means of exponentially prioritizing rare or recent information. In deep learning for long-tailed detection, E-IRFS yields substantial gains for rare-category recognition, especially in resource-constrained real-time applications such as UAV surveillance, with empirical improvements up to +22% mAP₅₀ overall and +350% for the most under-represented class (Ahmed et al., 27 Mar 2025). In estimation theory, exponential weighting enables efficient, order-agnostic assimilation of temporally distributed observations and is operationally optimal under broad conditions (Shulami et al., 2020). Both methodologies highlight the utility of exponential weighting for manipulating the informational landscape—be it across classes or across time—for improved statistical performance. Future research will clarify the utility of exponential weighting in broader contexts, including segmentation, adaptive sampling, and task-specific data augmentation.
