Stability of Error Bounds

Updated 26 September 2025
  • Stability of error bounds is the property that guarantees an inequality controls the distance to a solution set even when data, functions, or operators are slightly perturbed.
  • It utilizes directional derivatives, minimax formulations, and stability moduli to provide quantitative criteria valid in both finite- and infinite-dimensional spaces.
  • This concept is crucial for applications in optimization algorithm convergence, sensitivity analysis, and error control in statistical learning.

The stability of error bounds addresses the persistence and robustness of inequalities that control the distance to a solution set under small perturbations of the underlying data, functions, or operators defining an optimization, feasibility, or statistical estimation problem. In the context of both finite- and infinite-dimensional spaces, and across deterministic and stochastic settings, recent research has established precise quantitative criteria—often involving directional derivatives, minimax problems, or stability moduli—that govern when error bound properties endure under perturbations. This topic is foundational in variational analysis, convex optimization, nonconvex analysis, and the theory of generalization for learning algorithms.

1. Fundamental Definitions and Characterizations

A classical error bound is an inequality of the form

$$c \cdot d(x, S_f) \leq [f(x)]_+,$$

where $S_f = \{ x \mid f(x) \leq 0 \}$, $d(x, S_f)$ is the distance from $x$ to the solution set, and $[f(x)]_+ = \max\{f(x), 0\}$. The constant $c > 0$ is referred to as the error bound modulus. Local and global error bounds are considered, depending on whether the inequality holds in a neighborhood of a reference point or globally in the ambient space.
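To make the definition concrete, the following minimal sketch (an assumed example, not code from the cited papers) estimates the modulus for $f(x) = \|x\|_\infty - 1$ on $\mathbb{R}^2$, whose solution set is the box $[-1, 1]^2$. The empirical infimum of $[f(x)]_+ / d(x, S_f)$ approaches the exact value $1/\sqrt{2}$, attained along the diagonal.

```python
import numpy as np

# Numerical check of the error bound  c * d(x, S_f) <= [f(x)]_+  for
# f(x) = ||x||_inf - 1 on R^2, whose solution set S_f = {f <= 0} is the
# box [-1, 1]^2.  (A minimal sketch with assumed data, not from the papers.)

rng = np.random.default_rng(0)

def f(x):
    return np.max(np.abs(x)) - 1.0

def dist_to_box(x):
    # Euclidean distance to [-1, 1]^2 via projection (clipping).
    return np.linalg.norm(x - np.clip(x, -1.0, 1.0))

ratios = [max(f(x), 0.0) / dist_to_box(x)
          for x in rng.uniform(-5, 5, size=(10_000, 2)) if f(x) > 0]

# The infimum of these ratios estimates the error bound modulus.
print(f"empirical modulus ~ {min(ratios):.4f}  (exact: {1/np.sqrt(2):.4f})")
```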

Stability of error bounds refers to the property that this inequality, together with a lower bound on the modulus $c$ itself, persists when the function $f$ is subject to admissible perturbations (e.g., additive, linear, or more general small changes). This stability is essential in practical situations where the system data may be uncertain or only known approximately.

Key characterizations include:

  • In Banach spaces, the modulus of a local error bound at $\bar{x}$ can be expressed as

$$\operatorname{Er}_f(\bar{x}) = \liminf_{x \to \bar{x},\, f(x)>0} \frac{f(x)}{d(x, S_f)}.$$

  • Primal, directional characterizations are central: for proper lower semicontinuous convex functions, the stability of the local error bound at $\bar{x}$ is equivalent to

$$\inf_{\|h\|=1} d^+ f(\bar{x}, h) \neq 0,$$

where $d^+ f(\bar{x}, h)$ is the (upper) directional derivative. For global stability, a uniform bound away from zero is required over all boundary points of the solution set (Wei et al., 20 Sep 2024, Wei et al., 2021, Wei et al., 2023, Kruger et al., 2015).

  • For finite families (such as linear inequalities), the error bound property is connected to the quantity

$$\min_{\|h\|=1} \max_{i\in J} a_i^\top h < 0 \quad \text{for all relevant index sets } J,$$

encoding the worst-case directional rate of increase over active constraints (Wei et al., 25 Sep 2025); a numerical sketch of this quantity follows the list.
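As a toy instance (hypothetical data; dense angular sampling in the plane stands in for an exact minimax solver), the sketch below evaluates the quantity for three half-plane constraints whose normals lie in a common half-space, so the value is strictly negative: all constraints can be decreased along one common direction.

```python
import numpy as np

# Sketch of  min_{||h||=1} max_{i in J} a_i^T h  for a finite family of
# linear inequalities a_i^T x <= b_i in R^2.  (Hypothetical normals; dense
# angular sampling stands in for an exact minimax solver.)

A = np.array([[1.0, 0.0],   # x1 <= b1
              [0.0, 1.0],   # x2 <= b2
              [1.0, 1.0]])  # x1 + x2 <= b3

thetas = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
H = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # unit vectors h

# Worst-case directional rate max_i a_i^T h for each h, then minimize over h.
val = np.min(np.max(H @ A.T, axis=1))
print(f"min_h max_i a_i^T h ~ {val:.4f}")  # ~ -0.7071 < 0: a common descent
                                           # direction exists, h = -(1,1)/sqrt(2)
```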

2. Stability Under Perturbations: Moduli, Radii, and Sensitivity

Stability analyses classify perturbations as arbitrary, convex, or linear. For a Banach-space function $f$, an $\varepsilon$-perturbation is a function $g$ satisfying

$$\limsup_{x \to \bar{x}} \frac{|g(x) - f(x)|}{\|x - \bar{x}\|} \leq \varepsilon.$$

The central concept in quantifying error bound stability is the "radius of error bounds." For the local error bound property, it is given by the boundary subdifferential slope $|\partial f|_{\mathrm{bd}}(\bar{x}) = d\big(0, \operatorname{bd} \partial f(\bar{x})\big)$, and the property is stable for all perturbations of size strictly less than this value (Kruger et al., 2015).
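For intuition, take $f(x) = |x|$ at $\bar{x} = 0$: here $\partial f(0) = [-1, 1]$, so $\operatorname{bd}\,\partial f(0) = \{-1, 1\}$ and the radius equals $d(0, \{-1, 1\}) = 1$. The sketch below (a one-dimensional example assumed for illustration) applies the linear perturbation $g(x) = |x| + \varepsilon x$, which satisfies the $\varepsilon$-perturbation bound above, and checks that the error bound modulus degrades to $1 - \varepsilon$ but survives for every $\varepsilon < 1$, consistent with the radius.

```python
import numpy as np

# f(x) = |x| at xbar = 0 has radius of error bounds d(0, {-1, 1}) = 1.
# Perturb by g(x) = |x| + eps * x (an eps-perturbation at 0) and estimate
# the surviving modulus on a grid.  (Assumed 1-D example, illustrative only.)

xs = np.arange(-2000, 2001) * 1e-3     # grid on [-2, 2] containing 0 exactly

for eps in (0.0, 0.3, 0.9, 0.99):
    g = np.abs(xs) + eps * xs
    feas = xs[g <= 0]                  # solution set S_g (here just {0})
    mask = g > 0
    dist = np.abs(xs[mask, None] - feas[None, :]).min(axis=1)
    modulus = (g[mask] / dist).min()
    print(f"eps = {eps:4.2f}:  modulus ~ {modulus:.3f}  (theory: {1 - eps:.2f})")
```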

In the global context, more refined local constants (often expressed via directional derivatives or subdifferentials) are needed. If the minimum of the directional derivatives is separated from zero throughout the boundary, the global error bound persists under sufficiently small perturbations (Wei et al., 20 Sep 2024, Wei et al., 2021, Wei et al., 2023).

When specialized to polyhedral or finite systems, e.g., linear inequalities, the stability with respect to perturbations of matrix coefficients or right-hand sides reduces to checking finiteness or positivity of minimax-type constants across active sets (Wei et al., 25 Sep 2025). Notably, the system's error bound is stable under small perturbations if and only if

$$\min_{\|h\|=1} \max_{i \in I} a_i^\top h \neq 0 \quad \text{for all } I \in \mathcal{I},$$

where $\mathcal{I}$ indexes faces of the feasible region.
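A brute-force check of this criterion is sketched below (hypothetical data; for simplicity the loop ranges over all nonempty index subsets, a conservative superset of the face-induced collection $\mathcal{I}$). The opposing normals $(1, 0)$ and $(-1, 0)$ make the minimax vanish for several subsets, flagging potential instability; the exponential subset loop is exactly the bottleneck behind the complexity results discussed in Section 4.

```python
import itertools
import numpy as np

# Brute-force check of  min_{||h||=1} max_{i in I} a_i^T h != 0  over index
# sets, with angular sampling in R^2.  (Hypothetical normals; all nonempty
# subsets are checked as a conservative superset of the face-induced sets.)

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0]])

thetas = np.linspace(0.0, 2.0 * np.pi, 50_000, endpoint=False)
H = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
G = H @ A.T                            # G[k, i] = a_i^T h_k

failing = []
for r in range(1, len(A) + 1):
    for I in itertools.combinations(range(len(A)), r):
        val = G[:, list(I)].max(axis=1).min()
        if abs(val) < 1e-3:            # "= 0" up to sampling tolerance
            failing.append((I, val))

for I, val in failing:
    print(f"index set {I}: minimax ~ {val:+.4f}  (criterion fails)")
print("stable" if not failing else "potentially unstable")
```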

3. Directional Derivative, Subdifferential, and Primal Criteria

The directional derivative approach unifies the theory in Banach and finite dimensions:

  • For any proper lower semicontinuous convex function, the local and global moduli admit the representations:

$$\operatorname{Er}(f, \bar{x}) = \liminf_{x \to \bar{x},\, f(x)>0} \Big\{ -\inf_{\|h\|=1} d^+ f(x, h) \Big\},$$

$$\operatorname{Er}(f) = \inf_{f(x)>0} \Big\{ -\inf_{\|h\|=1} d^+ f(x, h) \Big\}.$$

  • For polyhedral/affine systems, these criteria become

$$\min_{\|h\|=1} \max_{i\in I} a_i^\top h$$

over all nonempty faces $I$, yielding both computational and theoretical access to the error bound stability problem (Wei et al., 2023, Wei et al., 25 Sep 2025); a sampling-based estimate of the global modulus follows the list.
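As a sampling-based illustration (assumed polyhedral data; Monte-Carlo sampling of infeasible points replaces the exact face enumeration), the sketch below estimates $\operatorname{Er}(f)$ for $f(x) = \max_i (a_i^\top x - b_i)$, using the fact that $d^+ f(x, h) = \max_{i \in A(x)} a_i^\top h$, where $A(x)$ is the active index set of the max-function.

```python
import numpy as np

# Monte-Carlo estimate of  Er(f) = inf_{f(x)>0} { -min_{||h||=1} d+f(x, h) }
# for the polyhedral f(x) = max_i (a_i^T x - b_i) on R^2, where
# d+f(x, h) = max over active i of a_i^T h.  (Assumed data; a sampling
# sketch, not an exact face enumeration.)

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])

thetas = np.linspace(0.0, 2.0 * np.pi, 20_000, endpoint=False)
H = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

rng = np.random.default_rng(1)
best = np.inf
for _ in range(5_000):
    x = rng.uniform(-4.0, 4.0, size=2)
    vals = A @ x - b
    fx = vals.max()
    if fx <= 0:
        continue                        # only infeasible points enter the inf
    active = vals >= fx - 1e-9          # active indices at x
    best = min(best, -np.min(np.max(H @ A[active].T, axis=1)))

print(f"estimated Er(f) ~ {best:.4f}")  # ~ 1.0 = min_i ||a_i|| here
```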

For semi-infinite convex constraint systems, the stability of both local and global error bounds under small, uniform (directionally consistent) linear perturbations is also equivalent to strict separation (from zero) of the minimax directional derivative across all boundary points and active constraints (Wei et al., 2023).
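A discretized illustration (the grid below is an assumed stand-in; the cited work treats the continuum of constraints directly): the semi-infinite system $x_1 \cos t + x_2 \sin t \leq 1$ for $t \in [0, 2\pi)$ describes the unit disk, and at every boundary point the minimax directional derivative over the nearly active constraints stays uniformly close to $-1$, exhibiting the strict separation from zero that the stability test requires.

```python
import numpy as np

# Semi-infinite sketch: the circle system  x1*cos(t) + x2*sin(t) <= 1,
# t in [0, 2*pi), describes the unit disk.  At each boundary point xbar the
# minimax directional derivative over nearly active constraints is checked.
# (Assumed discretization of the index set; illustrative only.)

ts = np.linspace(0.0, 2.0 * np.pi, 2_000, endpoint=False)
A = np.stack([np.cos(ts), np.sin(ts)], axis=1)        # constraint gradients

hs = np.linspace(0.0, 2.0 * np.pi, 4_000, endpoint=False)
H = np.stack([np.cos(hs), np.sin(hs)], axis=1)        # unit directions h

worst = -np.inf
for phi in np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False):
    xbar = np.array([np.cos(phi), np.sin(phi)])       # boundary point
    active = A @ xbar >= 1.0 - 1e-3                   # nearly active t's
    worst = max(worst, np.min(np.max(H @ A[active].T, axis=1)))

print(f"sup over boundary points of the minimax ~ {worst:.4f}  (uniformly < 0)")
```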

4. Complexity and Algorithmic Issues

The task of verifying the stability of error bounds is, in general, computationally hard:

  • For systems of linear inequalities, the error bound and stability recognition problems are co-NP-complete in the general case, and hence not polynomially solvable unless P = NP. The intractability stems from the exponential number of active index sets to be checked; when the number of variables is fixed, however, the problem becomes polynomially solvable (Wei et al., 25 Sep 2025).
  • Effective pseudo-polynomial algorithms are available in practice by leveraging advances in convex hull computations and efficient enumeration of faces, provided the input size and dimension are moderate.

Implications are far-reaching: these complexity barriers delineate when symbolic or numeric computation of stability moduli is practical and when heuristic or relaxation-based (e.g., dual) methods must be invoked.

5. Applications: Optimization, Sensitivity, and Algorithmic Guarantees

Stability of error bounds underpins several applications:

  • Convergence analysis of optimization algorithms: Error bounds provide rates of decrease for residuals or distances in iterative schemes (projected gradient, cyclic projection, coordinate descent), and their stability implies robustness of convergence under data fluctuations or modeling errors (Kruger et al., 2015, Li et al., 2015); a toy projection example follows this list.
  • Sensitivity analysis: For linear or convex systems, understanding the stability of Hoffman's constant (quantifying the worst-case error bound modulus) under perturbations enables robust assessment of feasibility and solution accuracy (Wei et al., 2023, Wei et al., 20 Sep 2024).
  • Parameter perturbations in parametric and semi-infinite programming: For systems defined by parameterized families (e.g., in generalized semi-infinite polynomial optimization), stability conditions yield explicit Hölder exponents governing the rate at which solution sets or functionals change with respect to data (Li et al., 2015, Hà et al., 2019).
  • Error control in inverse and ill-posed problems: Error bounds inform reconstruction errors in image processing, signal processing, and PDE-constrained optimization, with stability paramount when data or discretization is noisy (Wei et al., 20 Sep 2024, Han et al., 2021).
  • Stability in statistical learning: The persistence of generalization bounds under perturbation (in data or training procedures) is tied to algorithmic stability properties, with analogous error control results transposed to statistical risk (Feldman et al., 2018, Banerjee et al., 2022).
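To illustrate the first item above (a minimal sketch with assumed data, not code from the cited works): alternating projections between two lines through the origin in $\mathbb{R}^2$ converge linearly, at rate $\cos^2\theta$ per cycle, where the angle $\theta$ between the lines plays the role of the error bound modulus for their intersection. A stable error bound means this rate degrades gracefully if the lines are slightly rotated.

```python
import numpy as np

# Alternating projections between two lines through the origin in R^2.
# The distance to the intersection {0} contracts by cos(theta)^2 per cycle;
# theta acts as the error bound modulus.  (Assumed toy data, illustrative.)

def proj_line(x, d):
    d = d / np.linalg.norm(d)
    return np.dot(x, d) * d            # orthogonal projection onto span{d}

theta = 0.3                            # angle between the two lines
d1 = np.array([1.0, 0.0])
d2 = np.array([np.cos(theta), np.sin(theta)])

x = np.array([3.0, 4.0])
for k in range(8):
    x = proj_line(proj_line(x, d1), d2)
    print(f"cycle {k}:  d(x, intersection) = {np.linalg.norm(x):.3e}")
# Successive distances shrink by the factor cos(theta)^2 ~ 0.913.
```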

6. Comparative and Structural Analysis

Recent works emphasize the duality between primal (directional derivative or subdifferential slope) and dual (metric regularity, calmness) approaches to error bound stability (Kruger et al., 2015, Wei et al., 2021, Wei et al., 20 Sep 2024). While classical metric regularity theory provides sufficient conditions for stability under infinitesimal perturbation, error bounds may require strictly positive lower bounds (e.g., minimal subgradient norms on the boundary).

The hierarchy of perturbations—arbitrary, convex, linear—yields radius theorems: the "radius" of stability is identified with the smallest boundary subdifferential norm, so any perturbation with modulus below this retains an error bound of comparable strength (Kruger et al., 2015).

In the context of systems defined by finitely many or parameterized constraints:

  • Polynomial, semi-algebraic, or analytic structure allows for explicit computation or estimation of exponents and stability radii, leveraging techniques from algebraic geometry and stratification theory (Li et al., 2015, Hà et al., 2019).
  • Applications to semi-infinite systems (both convex and linear) illustrate that stability persists under uniform perturbation directionality, with the directional derivative minimax criterion providing both a necessary and sufficient test (Wei et al., 2023, Wei et al., 2021).

7. Open Problems and Research Directions

Open questions stem from both computational and theoretical angles:

  • For the general (non-fixed dimension) case, is the stability recognition problem for error bounds NP-complete or even harder?
  • To what extent can primal (directional derivative-based) characterizations be extended to nonconvex or nonsmooth settings, or generalized variational inequalities?
  • Can further connections between metric regularity, calmness, and error bound stability yield unified variational frameworks applicable across optimization, game theory, and learning?
  • How do these stability properties extend to infinite-dimensional, nonreflexive, or nonseparable Banach spaces, or to more general classes of perturbations? This remains an active area.

Stability of error bounds provides both foundational insight and practical tools for guaranteeing robust solution behavior under data and model perturbations. It unifies methods from variational analysis, convex geometry, statistical learning theory, and computational complexity, yielding both sharp inequalities and a detailed landscape of tractability and algorithmic feasibility for a broad class of optimization and feasibility problems.
