Rescaled Influence Functions (RIF)
- Rescaled Influence Functions (RIF) are advanced techniques that modify classical influence functions to improve effect estimation in high-dimensional settings.
- They enhance leave-one-out approximations in machine learning and econometrics by using leverage scores and sensitivity curves for more accurate parameter adjustments.
- Applications include data poisoning detection, machine unlearning, and robust marginal effect estimation, offering improved computational efficiency and precision.
Rescaled Influence Functions (RIF) constitute a class of techniques for data attribution and robust statistical inference that modify classical influence functions to address their empirical and theoretical shortcomings in high-dimensional settings and for complex statistical functionals. The two major contexts for RIF methodology are (1) machine learning model parameter sensitivity in the overparameterized regime and (2) econometric analysis of functional outcomes beyond means, such as quantiles and indices of inequality. RIFs improve the fidelity with which the effects of individual observations, or groups of observations, are estimated in both analytic and empirical tasks.
1. Foundations of Influence Functions and their Rescaling
Classical influence functions (IF) quantify the infinitesimal effect of perturbing a dataset by introducing an outlier or removing an observation. For an estimator $\hat\theta$ obtained by minimizing the empirical loss $\sum_{j=1}^{n} \ell(x_j, y_j; \theta)$, the IF for observation $i$ is

$$\mathrm{IF}_i = -H^{-1} g_i,$$

where $H = \sum_{j=1}^{n} \nabla^2_\theta \ell(x_j, y_j; \hat\theta)$ is the empirical Hessian and $g_i = \nabla_\theta \ell(x_i, y_i; \hat\theta)$ the gradient for observation $i$. This first-order Taylor approximation is accurate in low-dimensional, regularized settings but degrades substantially in the high-dimensional regime (number of parameters comparable to the number of samples), where the curvature becomes sample-sensitive and IFs systematically underestimate removal effects (Rubinstein et al., 7 Jun 2025). For statistical functionals $T(F)$, such as quantiles or inequality indices, the classical IF is defined as the Gâteaux derivative at the point mass $\delta_y$,

$$\mathrm{IF}(y; T, F) = \lim_{\epsilon \downarrow 0} \frac{T\bigl((1-\epsilon)F + \epsilon\,\delta_y\bigr) - T(F)}{\epsilon},$$

with $F$ the distribution function of the outcome $Y$ (Alejo et al., 2021).
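For a concrete illustration (a standard example rather than one taken from the cited papers), let $T(F) = \mathbb{E}_F[Y]$ be the mean. Then

$$\mathrm{IF}(y; T, F) = y - \mathbb{E}_F[Y],$$

so each observation's influence is simply its deviation from the mean; quantiles and inequality indices follow the same recipe but involve additional features of $F$, such as its density at the quantile of interest.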
2. Mathematical Formulation and Variants of RIF
2.1 Machine Learning Data Attribution
Rescaled Influence Functions enhance data attribution in high-dimensional models, notably generalized linear models (GLMs). The RIF approximates the leave-one-out effect by replacing the full Hessian in the IF with its leave-one-out counterpart,

$$\mathrm{RIF}_i = -(H - H_i)^{-1} g_i,$$

with $H = \sum_{j=1}^{n} \nabla^2_\theta \ell(x_j, y_j; \hat\theta)$ and $H_i = \nabla^2_\theta \ell(x_i, y_i; \hat\theta)$. For GLMs, where $H_i = w_i\, x_i x_i^\top$ is rank one, the Sherman–Morrison formula yields the closed form

$$\mathrm{RIF}_i = \frac{\mathrm{IF}_i}{1 - h_i},$$

where $h_i = w_i\, x_i^\top H^{-1} x_i$ is the “leverage score”. For logistic regression, $w_i = \sigma_i(1 - \sigma_i)$ with $\sigma_i = \sigma(x_i^\top \hat\theta)$ (Rubinstein et al., 7 Jun 2025). The scaling factor $1/(1 - h_i)$ corrects the underestimation: for small $h_i$, $\mathrm{RIF}_i \approx \mathrm{IF}_i$; for large $h_i$, the correction can be considerable.
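The closed form follows from a one-line calculation (using the GLM structure, under which the per-point gradient is also proportional to $x_i$, i.e. $g_i = r_i x_i$ for a scalar residual $r_i$):

$$(H - w_i x_i x_i^\top)^{-1} g_i \;=\; \Bigl(H^{-1} + \frac{w_i\, H^{-1} x_i x_i^\top H^{-1}}{1 - w_i\, x_i^\top H^{-1} x_i}\Bigr) g_i \;=\; \frac{H^{-1} g_i}{1 - h_i},$$

so the leave-one-out Hessian correction collapses to a scalar rescaling of the classical IF.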
2.2 Econometric and Statistical Applications
In statistical inference, the recentered influence function (also abbreviated RIF in the econometrics tradition but conceptually distinct from the ML RIF) is defined as

$$\mathrm{RIF}(y; T, F) = T(F) + \mathrm{IF}(y; T, F),$$

ensuring $\mathbb{E}_F[\mathrm{RIF}(Y; T, F)] = T(F)$. RIF regression models the conditional expectation $\mathbb{E}[\mathrm{RIF}(Y; T, F) \mid X = x]$ to estimate the marginal effects of covariates on $T(F)$, applicable to quantiles, the Gini coefficient, polarization, and other complex indices (Alejo et al., 2021).
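As a concrete example (the well-known closed form for unconditional quantile effects, stated here for orientation rather than taken from the cited paper), the RIF of the $\tau$-th quantile $q_\tau$ is

$$\mathrm{RIF}(y; q_\tau, F) = q_\tau + \frac{\tau - \mathbf{1}\{y \le q_\tau\}}{f_Y(q_\tau)},$$

where $f_Y$ is the density of $Y$; regressing this quantity on covariates yields unconditional quantile partial effects. For the mean, $\mathrm{RIF}(y) = y$, so RIF regression nests ordinary least squares.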
3. Theoretical Properties and Accuracy Regimes
The accuracy of RIF in machine learning settings is established via comparison to the “single Newton step” (NS) leave-one-out approximation, which takes one Newton step on the leave-one-out objective from $\hat\theta$:

$$\hat\theta_{-i} - \hat\theta \;\approx\; (H - H_i)^{-1} g_i.$$

RIF is additive across removals and, under positive semidefiniteness, limited sample dominance, and incoherence conditions, achieves tight signal-to-noise (SNR) bounds in high dimensions for batch removals of size $k$ (Rubinstein et al., 7 Jun 2025). In contrast, standard IFs lose accuracy as the sample-to-parameter ratio decreases or regularization weakens.
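Concretely, additivity means the estimated shift from removing a group $S$ is the sum of the single-point estimates (a restatement of the property above, keeping the single-point sign convention):

$$\widehat{\Delta\theta}_S \;=\; \sum_{i \in S} \widehat{\Delta\theta}_i,$$

where $\widehat{\Delta\theta}_i$ is the RIF estimate of $\hat\theta_{-i} - \hat\theta$; the analogous NS estimate for the group instead requires forming and inverting the group leave-out Hessian $H - \sum_{i \in S} H_i$.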
For statistical functionals, sensitivity curve (SC) approximations converge in probability to the true IF under Fréchet differentiability, scale invariance, and a quadratic von Mises remainder condition. Thus, empirical SCs provide valid approximations for RIF regression estimation in large samples (Alejo et al., 2021).
4. Algorithmic Implementation and Computational Aspects
RIF for GLMs can be computed efficiently:
Input: {(x_i, y_i)}, loss ℓ, minimizer θ̂, Hessian inverse H⁻¹
For i = 1 to n:
    g_i ← ∇ℓ(x_i, y_i; θ̂)
    h_i ← x_i^T H⁻¹ x_i × weight factor (e.g., σ_i(1−σ_i) for logistic regression)
    IF_i ← −H⁻¹ g_i
    RIF_i ← IF_i / (1 − h_i)
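The loop above maps directly onto a vectorized implementation. The following is a minimal NumPy sketch for L2-regularized logistic regression; the function name, the ridge term reg, and the assumption that theta is the fitted minimizer are illustrative rather than taken from the cited paper.

import numpy as np

def rescaled_influences_logistic(X, y, theta, reg=0.0):
    """Per-point IF and RIF for logistic regression (illustrative sketch)."""
    n, d = X.shape
    sigma = 1.0 / (1.0 + np.exp(-X @ theta))        # predicted probabilities sigma_i
    G = (sigma - y)[:, None] * X                     # per-point gradients g_i (rows)
    W = sigma * (1.0 - sigma)                        # GLM weight factors w_i
    H = X.T @ (W[:, None] * X) + reg * np.eye(d)     # empirical Hessian (+ optional ridge)
    H_inv = np.linalg.inv(H)
    IF = -G @ H_inv                                  # rows are IF_i = -H^{-1} g_i
    h = W * np.einsum('ij,jk,ik->i', X, H_inv, X)    # leverage scores h_i
    RIF = IF / (1.0 - h)[:, None]                    # rescaling: RIF_i = IF_i / (1 - h_i)
    return IF, RIF

In a properly regularized fit the leverages typically satisfy h_i < 1; numerically, one may still wish to guard against denominators close to zero.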
In RIF regression for complex statistics, leave-one-out functionals are approximated using subsample-based sensitivity curve estimation and regression splines, reducing the cost from recomputing the functional for every one of the $n$ observations to recomputation over a subsample, with negligible loss in precision for large samples (Alejo et al., 2021).
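A minimal sketch of the subsample-based idea, using leave-one-out jackknife pseudo-values as the empirical sensitivity curve and plain OLS in place of the regression splines of Alejo et al. (2021); the Gini functional, the subsample size m, and the helper names are illustrative.

import numpy as np

def gini(y):
    """Gini coefficient of a nonnegative sample."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    return (2.0 * np.arange(1, n + 1) - n - 1.0) @ y / (n * y.sum())

def sc_rif_regression(y, X, stat=gini, m=500, seed=0):
    """RIF regression via a sensitivity-curve (jackknife pseudo-value) approximation."""
    rng = np.random.default_rng(seed)
    y, X = np.asarray(y, dtype=float), np.asarray(X, dtype=float)
    n = len(y)
    t_full = stat(y)
    idx = rng.choice(n, size=min(m, n), replace=False)   # random subsample of observations
    # Empirical sensitivity curve: RIF(y_i) ~ T_n + (n - 1) * (T_n - T_{n,-i})
    rif = np.array([t_full + (n - 1) * (t_full - stat(np.delete(y, i))) for i in idx])
    Z = np.column_stack([np.ones(len(idx)), X[idx]])      # intercept + covariates
    beta, *_ = np.linalg.lstsq(Z, rif, rcond=None)
    return beta                                           # marginal-effect estimates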
5. Empirical Findings Across Applications
5.1 High-Dimensional Data Attribution
Rescaled IFs provide accurate predictions of leave-group-out shifts in test loss, prediction probabilities, and self-loss across vision (e.g., ImageNet-derived binary tasks with ResNet or Inception embeddings), audio (ESC-50 with OpenL3), and text datasets (IMDB, Enron) when compared to ground-truth retraining. Conventional IFs systematically underpredict effect sizes when the parameter count is comparable to the sample size or regularization is weak, whereas RIF predictions stay close to the retraining diagonal (Rubinstein et al., 7 Jun 2025). Poisoning detection is significantly improved: IFs assign low influence to adversarially flipped instances, while RIFs identify them as having large removal effects.
5.2 RIF Regression with Sensitivity Curves
Empirically, RIF regression coefficients estimated with restricted spline-based sensitivity curves closely match those obtained from analytical RIFs for the variance and the Gini coefficient in simulations, and the method has been applied to large-scale wage data to estimate effects on the Duclos–Esteban–Ray polarization index. The approach remains accurate for functionals without a closed-form IF, and the computational gains are an order of magnitude for large samples (Alejo et al., 2021).
6. Methodological Extensions and Applications
Rescaled Influence Functions are leveraged for:
- Data Poisoning Detection: RIFs flag points whose removal produces disproportionately large parameter or prediction shifts, which is critical for identifying adversarial contamination (Rubinstein et al., 7 Jun 2025); see the sketch after this list.
- Machine Unlearning: Efficiently estimating parameter adjustment when removing user data from models.
- Dataset Auditing and Curation: Prioritizing instances for removal or review based on their corrected influence metric.
- Bias Detection and Debiasing: Detecting group effects and harmful biases that might be underestimated by classical IFs.
- Robust Marginal Effect Estimation: In econometrics, RIF regression enables marginal effect estimation for quantiles, Gini, and polarization indices (Alejo et al., 2021).
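As referenced in the data-poisoning item above, a minimal scoring rule is to rank training points by the predicted effect of their removal on a chosen test metric. The sketch below assumes per-point RIF vectors such as those returned by the earlier logistic-regression example; the top-fraction threshold is illustrative, not the detection rule of the cited paper.

import numpy as np

def flag_high_influence_points(RIF, metric_grad, top_frac=0.01):
    """Rank points by |RIF_i . metric_grad| and return indices of the largest scores."""
    scores = np.abs(RIF @ metric_grad)       # first-order predicted shift in the metric
    k = max(1, int(top_frac * len(scores)))
    return np.argsort(scores)[-k:][::-1]     # most influential points first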
A notable distinction is that RIF in machine learning primarily addresses parameter sensitivity in continuous parameter spaces, while RIF in statistical modeling addresses distributional functionals and marginal effect estimation.
7. Limitations and Interpretive Considerations
While rescaled influence functions improve pointwise removal estimates, their validity relies on sufficient regularity (positive semidefiniteness, limited sample dominance, and incoherence conditions), and the direct analogy between the ML RIF and the econometric RIF is limited, so terminological precision is critical. In both cases the approximations are asymptotic: for high-rank per-sample Hessians, or for nonlinear models without tractable Hessian access, performance may degrade. For complex statistics without an analytic IF, empirical SC-based estimation is justified asymptotically, but finite-sample behavior should be validated as in (Alejo et al., 2021).
Rescaled Influence Functions restore much of the higher-order accuracy of Newton-based methods without sacrificing additive structure or computational tractability, making them a robust methodological advance for attribution, unlearning, auditing, and effect estimation in both high-dimensional machine learning and semiparametric statistical inference (Rubinstein et al., 7 Jun 2025, Alejo et al., 2021).