
Adaptive and Aggressive Rejection (AAR)

Updated 3 December 2025
  • AAR is a dynamic rejection framework that filters anomalous or adversarial contributions using robust statistical thresholds and a Gaussian mixture model for soft rejection.
  • It integrates a warm-up phase with hard rejection and a main phase with ternary weighting to optimize data retention and improve performance metrics like AUROC.
  • In adaptive control, AAR employs disturbance observers and finite-time controllers to aggressively cancel perturbations, ensuring rapid convergence under uncertainty.

Adaptive and Aggressive Rejection (AAR) encompasses a family of algorithmic mechanisms for robustly filtering out undesirable or adversarial contributions during inference or learning. In anomaly detection, AAR refers to a dynamic, data-driven rejection framework that adaptively identifies and excludes contaminated samples by jointly leveraging robust statistical thresholds and probabilistic modeling. In nonlinear control, AAR describes the coordinated use of adaptive, experience-accelerated disturbance estimators and finite-time controllers to aggressively cancel exogenous perturbations. Across both contexts, AAR is characterized by its principled, multi-phase rejection logic and its capacity to dynamically optimize the trade-off between retention and exclusion under uncertainty.

1. Mathematical Foundations of AAR for Anomaly Detection

AAR for anomaly detection operates on a contaminated dataset $\mathcal{D} = \{x_i\}_{i=1}^N$ with anomaly scores $s_i = s(x_i)$. For reconstruction-based models, $s_i = \|x_i - f(x_i)\|_2^2$, where $f$ is typically an autoencoder. The framework dynamically rejects anomalies in each mini-batch via a tiered thresholding procedure:

  • Modified z-score (hard rejection): For batch size $B$, compute

$$\hat s = \mathrm{median}\{s_i\}, \quad \mathrm{MAD} = \mathrm{median}\,|s_i - \hat s|, \quad m_i = \frac{0.6745\,(s_i - \hat s)}{\mathrm{MAD}}.$$

Samples with $m_i > 3.5$ (equivalently $s_i > \tau_N$) are hard-rejected, with threshold

$$\tau_N = \hat s + \frac{3.5}{0.6745}\,\mathrm{MAD}.$$

  • GMM intersection (soft rejection): Fit a two-component Gaussian mixture model to the batch scores,

$$p(s) = \pi_1\,\mathcal{N}(s \mid \mu_1, \sigma_1^2) + \pi_2\,\mathcal{N}(s \mid \mu_2, \sigma_2^2), \quad \mu_1 < \mu_2.$$

The intersection threshold $\tau_I$ solves

$$\mathcal{N}(\tau_I \mid \mu_1, \sigma_1^2) = \mathcal{N}(\tau_I \mid \mu_2, \sigma_2^2).$$

Explicitly,

$$\tau_I = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$

with

$$a = \frac{1}{\sigma_1^2} - \frac{1}{\sigma_2^2}, \quad b = 2\left(\frac{\mu_2}{\sigma_2^2} - \frac{\mu_1}{\sigma_1^2}\right), \quad c = \frac{\mu_1^2}{\sigma_1^2} - \frac{\mu_2^2}{\sigma_2^2} - 2 \ln \frac{\sigma_2}{\sigma_1},$$

taking the root that lies between $\mu_1$ and $\mu_2$.

  • $z\sigma$ (stability guard): Compute

$$\tau_\sigma = \mu_n + z\,\sigma_n,$$

with $(\mu_n, \sigma_n)$ the mean and standard deviation of the “normal” GMM component and $z \in [2, 3]$.

The final soft rejection threshold is $\tau = \max\{\tau_I, \tau_\sigma\}$.
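Under the definitions above, all three thresholds have closed forms. The sketch below (a minimal NumPy illustration; function names are ours, and the GMM parameters are assumed to have been fitted elsewhere, e.g. by EM) computes the hard cutoff $\tau_N$, the intersection threshold $\tau_I$, and the stability guard $\tau_\sigma$:

```python
import numpy as np

def hard_threshold(scores, cutoff=3.5, k=0.6745):
    """Modified z-score hard-rejection threshold tau_N."""
    s_hat = np.median(scores)
    mad = np.median(np.abs(scores - s_hat))
    return s_hat + (cutoff / k) * mad

def gmm_intersection(mu1, sigma1, mu2, sigma2):
    """Intersection tau_I of two Gaussian densities (mu1 < mu2)."""
    a = 1 / sigma1**2 - 1 / sigma2**2
    b = 2 * (mu2 / sigma2**2 - mu1 / sigma1**2)
    c = (mu1 / sigma1)**2 - (mu2 / sigma2)**2 - 2 * np.log(sigma2 / sigma1)
    if abs(a) < 1e-12:          # equal variances: the equation is linear
        return -c / b
    roots = np.roots([a, b, c])
    roots = roots[np.isreal(roots)].real
    # keep the root lying between the two component means
    return roots[(roots > mu1) & (roots < mu2)][0]

def soft_threshold(mu_n, sigma_n, mu_a, sigma_a, z=2.5):
    """Final soft-rejection threshold tau = max(tau_I, tau_sigma)."""
    tau_I = gmm_intersection(mu_n, sigma_n, mu_a, sigma_a)
    tau_sigma = mu_n + z * sigma_n
    return max(tau_I, tau_sigma)
```

With equal component variances the quadratic degenerates, so the code falls back to the linear solution (the midpoint shifted by the mixture geometry).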

2. Integrated Hard and Soft Rejection Strategies

AAR integrates these thresholds into a phased rejection weighting scheme:

  • Warm-up (first $E$ epochs): Only the hard cutoff $\tau_N$ is active; $w_i = 0$ for $s_i > \tau_N$ and $w_i = 1$ otherwise.
  • Main phase ($e > E$): Use weights

$$w_i = \begin{cases} 0, & s_i > \tau_N \\ t_s, & \tau < s_i \leq \tau_N \\ 1, & s_i \leq \tau \end{cases}$$

with $t_s \in (0, 1)$ (typically $t_s = 0.1$).

This approach transforms sample selection from a binary keep/discard decision into a ternary regime $(1, t_s, 0)$, allowing ambiguous samples to influence training with attenuated impact. This aggressive rejection, which removes $5\text{–}10\%$ more samples than the nominal contamination rate, empirically yields heightened robustness and improved AUROC, particularly when the normal and anomaly score distributions overlap (Lee et al., 26 Nov 2025).
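The phased ternary weighting fits in a few lines; a minimal NumPy sketch (the function name is ours, not the paper's), where passing `tau=None` reproduces the warm-up behavior:

```python
import numpy as np

def aar_weights(scores, tau_N, tau=None, t_s=0.1):
    """Ternary AAR weights: 0 (hard reject), t_s (soft reject), 1 (keep).

    During warm-up, pass tau=None so only the hard cutoff tau_N applies.
    """
    w = np.ones_like(scores, dtype=float)
    w[scores > tau_N] = 0.0                        # hard rejection
    if tau is not None:                            # main phase: soft band
        w[(scores > tau) & (scores <= tau_N)] = t_s
    return w
```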

3. AAR Algorithm and Computational Complexity

The AAR training protocol proceeds as follows for each mini-batch, for epochs up to $T$:

  1. Compute anomaly scores $s_i$.
  2. Determine $\tau_N$ (active in all epochs).
  3. If $e > E$, fit the GMM and derive $\tau_I$, $\tau_\sigma$, and $\tau$.
  4. Assign weights $w_i$ according to the current phase.
  5. Compute the weighted loss,

$$L = \frac{1}{B} \sum_{i=1}^B w_i\,\|x_i - f(x_i)\|_2^2,$$

and update the model.

Computationally, for mini-batch size $B$ and feature dimensionality $d$, the per-step cost is dominated by the forward/backward pass, $\mathcal{O}(Bd)$; thresholding and EM-based GMM fitting add only $\mathcal{O}(B)$ overhead, rendering AAR scalable to large $N$ and $T$.
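Step 5 reduces to a weighted mean of per-sample squared reconstruction errors. A minimal NumPy sketch (function name ours), with `x_recon` standing in for the model output $f(x_i)$:

```python
import numpy as np

def weighted_recon_loss(x, x_recon, w):
    """Step 5: L = (1/B) * sum_i w_i * ||x_i - f(x_i)||_2^2."""
    per_sample = np.sum((x - x_recon) ** 2, axis=1)  # squared L2 error per sample
    return float(np.mean(w * per_sample))
```

Hard-rejected samples ($w_i = 0$) contribute nothing to the gradient, while soft-rejected ones contribute with attenuated weight $t_s$.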

4. AAR in Adaptive Control: Disturbance Rejection

In adaptive nonlinear control, AAR is exemplified by architectures that combine online disturbance identification with aggressive, finite-time error suppression (Li et al., 2020). Consider a nonlinear plant,

$$\dot{x} = f(x) + g(x)\,u + D\epsilon_T,$$

subject to an exosystem-generated disturbance $\dot{\epsilon}_T = S\epsilon_T$. The core components are:

  • Adaptive disturbance observer: State-derivative-free estimation using a filtered regressor, with adaptive update of $S$ via a Lyapunov-stable law,

$$\dot{\hat{S}}_{\mathrm{vec}} = \Gamma\,(\bar{\epsilon}^T F^T \otimes D)^T \tilde{e} + \kappa\,\Gamma \sum_i Y_i^T (\text{experience-replay residuals}),$$

where experience replay ($\kappa > 0$) accelerates convergence.

  • Aggressive (finite-time) controller: Integral-type terminal sliding mode with adaptive gain,

$$u = g^+(x)\left[-f(x) + \dot{x}_d - D\hat{\epsilon}_T - \operatorname{sign}(e_x) - k(t)\operatorname{sign}(\sigma)\right],$$

enforcing $e_x \to 0$ in finite time, provided certain rank/richness conditions are met.

The “adaptive” aspect derives from online parameter learning, while “aggressive rejection” is realized through high-bandwidth feed-forward cancellation and non-asymptotic convergence guarantees.
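To make the interplay concrete, here is a toy scalar simulation in the same spirit, with illustrative gains and a simplified finite-time term; this is our sketch, not the controller of Li et al. (2020). The plant $\dot{x} = u + d(t)$ is driven by an exosystem disturbance $d(t) = \theta^\top \phi(t)$ with regressor $\phi(t) = [\sin t, \cos t]$; an adaptive observer learns $\hat{\theta}$ online, and the controller feeds the estimate forward while a $\sqrt{|x|}$ term speeds up convergence near the origin:

```python
import numpy as np

# Toy scalar analogue of adaptive + aggressive rejection (illustrative only):
# plant x' = u + d(t), disturbance d(t) = theta^T phi(t) from the exosystem
# regressor phi(t) = [sin t, cos t]. theta_hat is adapted online; the
# controller cancels the estimate and adds a finite-time-style sqrt term.
dt, T = 1e-3, 60.0
theta = np.array([1.0, 0.5])       # true (unknown) disturbance parameters
theta_hat = np.zeros(2)            # observer estimate
gamma, k1, k2 = 10.0, 2.0, 1.5     # adaptation gain, feedback gains
x = 1.0                            # tracking error (x_d = 0)
for step in range(int(T / dt)):
    t = step * dt
    phi = np.array([np.sin(t), np.cos(t)])
    d = theta @ phi                                # exogenous disturbance
    u = -theta_hat @ phi - k1 * x - k2 * np.sign(x) * np.sqrt(abs(x))
    x += dt * (u + d)                              # Euler step of the plant
    theta_hat += dt * gamma * x * phi              # Lyapunov-based adaptation
```

Because the regressor is persistently exciting, both the tracking error and the parameter error shrink toward zero, mirroring the feed-forward cancellation plus fast-convergence structure described above.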

5. Empirical Evaluation and Performance

In anomaly detection benchmarks (Lee et al., 26 Nov 2025), AAR demonstrates:

  • On MNIST/Fashion-MNIST with up to $20\%$ synthetic contamination, AAR achieves average AUROC increases of $0.006$ (MNIST) and $0.016$ (Fashion-MNIST) over the prior best method, latent outlier exposure (LOE).
  • On $30$ UCI-type tabular datasets contaminated at $20\%$, AAR lifts average AUROC by $0.033$ across AE/MemAE/DSVDD backbones relative to robust statistics (MZ).
  • Overall, AAR’s average AUROC gain over all prior methods is $+0.041$.

Ablations confirm that slightly over-estimating contamination (by $5\text{–}10\%$) enhances robustness; increasing $z$ in the $z\sigma$ cutoff improves stability with negligible loss; and soft rejection with $t_s \approx 0.1$ optimizes the bias–variance trade-off.

In adaptive control (Li et al., 2020), experience replay reduces disturbance estimation time from roughly $8$ s to roughly $2$ s and ensures finite-time tracking in under $3$ s on nonlinear benchmarks, in contrast with the much slower convergence of experience-free observers.

6. Practical Tuning, Limitations, and Outlook

Tuning recommendations for anomaly detection include $E = 10\text{–}20$ warm-up epochs, $z = 2\text{–}3$ for the stability guard, and $t_s = 0.05\text{–}0.2$ for the soft rejection weight. For adaptive control, filter and adaptation gains are selected to balance estimation speed against sensitivity to noise, while the replay window is sized against memory and numerical-stability constraints.

Notable limitations:

  • The univariate GMM used in AAR assumes a bimodal, near-Gaussian score distribution, which can be violated in highly skewed or multi-modal cases.
  • Hyperparameters $(E, z, t_s)$ still require domain-specific tuning.

Open research directions involve meta-learning for automatic parameter adaptation, integrating limited anomaly labels (semi-supervised AAR), extending to high-dimensional or non-Gaussian score spaces, and adapting AAR for real-time data streams in cyber-physical systems and IoT scenarios (Lee et al., 26 Nov 2025, Li et al., 2020).
