Local Error-Bound Conditions
- Local error-bound conditions are defined by linking the distance to the solution set with the degree of constraint violation using quantitative regularity properties.
- They provide a framework for analyzing convergence rates and stability in optimization algorithms, including in nonconvex, nonsmooth, and degenerate contexts.
- These conditions enable explicit error estimates and rate guarantees, underpinning algorithmic performance in both theoretical and applied optimization scenarios.
A local error-bound condition is a quantitative regularity property that links the distance from a point to a reference set (typically a solution set of a system of equations, inequalities, or inclusions) with a function measuring the degree of constraint violation. In nonlinear, nonconvex, nonsmooth, or degenerate contexts, local error bounds provide structure critical for establishing rates of convergence, sharpness of solution geometry, stability under perturbations, and algorithmic guarantees.
1. Formal Definition and Scope
A local error bound for a function $f: X \to \mathbb{R} \cup \{+\infty\}$ on a metric or Banach space $X$ at a reference point $\bar{x}$ with $f(\bar{x}) = 0$ typically takes the form:

$$ d(x, S) \le c\,[f(x)]_+ \quad \text{for all } x \in B(\bar{x}, \delta), $$

where $S = \{x \in X : f(x) \le 0\}$, $d(x, S)$ is the distance to the solution set, and $[f(x)]_+ = \max\{f(x), 0\}$.
Generalizations include nonlinear (Hölder-type) bounds:

$$ d(x, S) \le \varphi([f(x)]_+), \qquad \text{e.g. } d(x, S) \le c\,[f(x)]_+^{q},\ q \in (0, 1], $$

for some modulus function $\varphi$, and set-valued, parametric, or variational system extensions. The property is called local as it is asserted in a neighborhood of a reference point; global error bounds hold on unbounded sets.
The local error bound modulus is defined as:

$$ \operatorname{Er} f(\bar{x}) := \liminf_{x \to \bar{x},\ f(x) > 0} \frac{f(x)}{d(x, S)}, $$

the supremum of constants $\tau > 0$ such that $\tau\, d(x, S) \le [f(x)]_+$ holds near $\bar{x}$.
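Two one-dimensional computations (worked here for illustration) make the modulus concrete:

```latex
% f(x) = |x|: S = {0} and d(x, S) = |x|, so the quotient is identically 1:
\operatorname{Er} |{\cdot}|(0) = \liminf_{x \to 0,\ x \neq 0} \frac{|x|}{|x|} = 1
% (a linear error bound holds with any constant c \ge 1).
% f(x) = x^2: the quotient x^2/|x| = |x| vanishes at the solution set:
\operatorname{Er} ({\cdot})^2(0) = \liminf_{x \to 0,\ x \neq 0} \frac{x^2}{|x|} = 0,
% so no linear error bound holds, but the Holder bound d(x,S) = [f(x)]_+^{1/2} does.
```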
2. Characterizations and Necessary/Sufficient Conditions
Slope and Subdifferential Criteria
A unified quantitative framework for local error bounds is established by slope and subdifferential conditions (Cuong et al., 2020, Li et al., 2016). The primary tools are:
- The strong slope $|\nabla f|(x) = \limsup_{u \to x,\ u \neq x} \dfrac{[f(x) - f(u)]_+}{d(x, u)}$,
- The nonlocal slope $|\nabla f|^{\diamond}(x) = \sup_{u \neq x} \dfrac{[f(x) - [f(u)]_+]_+}{d(x, u)}$,
- The subdifferential slope $|\partial f|(x) = d(0, \partial f(x)) = \inf\{\|x^*\| : x^* \in \partial f(x)\}$.
For $f$ lower semicontinuous and proper on a complete metric space, a sufficient condition (Theorem 2.2 of (Cuong et al., 2020)) is:

$$ \liminf_{x \to \bar{x},\ f(x) \downarrow 0} |\nabla f|(x) > 0. $$

For normed spaces and convex $f$, the dual condition $d(0, \partial f(x)) \ge \gamma$ for all $x$ near $\bar{x}$ with $0 < f(x) < \delta$ is both necessary and sufficient, yielding the error bound with constant $c = 1/\gamma$.
For nonsmooth, locally Lipschitz and regular $f$, sharp bounds for the local error bound modulus are provided by (Li et al., 2016):

$$ \operatorname{Er} f(\bar{x}) = d(0, \partial^{>} f(\bar{x})), $$

where $\partial^{>} f(\bar{x})$ is the outer limiting subdifferential, and $\operatorname{end}(\partial f(\bar{x}))$, the set of "maximal" directions in the subdifferential, provides computable estimates of this distance.
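These dual criteria can be checked numerically in a simple convex case. The sketch below is my own illustration, not taken from the cited papers: for $f(x) = \|Ax - b\|_2$ with a consistent linear system, the subdifferential slope off the solution set equals the smallest nonzero singular value $\sigma_{\min}^{+}(A)$, so $d(x, S) \le \sigma_{\min}^{+}(A)^{-1} f(x)$ holds (a Hoffman-type bound).

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient A so the solution set S = {x : Ax = b} is a nontrivial affine subspace.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # shape (5, 8), rank <= 3
x_star = rng.standard_normal(8)
b = A @ x_star                                  # consistent system

f = lambda x: np.linalg.norm(A @ x - b)         # constraint-violation measure

# Distance to S: the projection onto S is x - pinv(A) @ (A x - b).
A_pinv = np.linalg.pinv(A)
dist_S = lambda x: np.linalg.norm(A_pinv @ (A @ x - b))

# Smallest nonzero singular value = subdifferential slope of f away from S.
svals = np.linalg.svd(A, compute_uv=False)
sigma_min_plus = min(s for s in svals if s > 1e-10)

# Check d(x, S) <= (1/sigma_min_plus) * f(x) at random points near the solution set.
ok = all(
    dist_S(x) <= f(x) / sigma_min_plus + 1e-9
    for x in (x_star + 0.1 * rng.standard_normal(8) for _ in range(1000))
)
print(ok)  # the Hoffman-type error bound holds at every sampled point
```

The constant $1/\sigma_{\min}^{+}(A)$ is tight here, since $d(x,S) = \|A^{+}(Ax - b)\|$ and the residual always lies in the range of $A$.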
Directional Derivative and Geometric Criteria
For convex inequalities, (Wei et al., 2021) gives a primal reformulation: the local error bound at $\bar{x}$ is stable under small (linear) perturbations if and only if

$$ \inf_{\|h\| = 1} f'(\bar{x}; h) \neq 0, $$

where $f'(\bar{x}; h) = \lim_{t \downarrow 0} \frac{f(\bar{x} + th) - f(\bar{x})}{t}$ is the directional derivative. This covers semi-infinite systems by maximizing over active indices.
Nonconvex and Structured Cases
For semialgebraic, tame, or polynomial systems, the Kurdyka-Łojasiewicz (KŁ) inequality and its exponents underlie the existence of error bounds, both locally and globally, often of Hölder type (Nguyen, 2017, Li et al., 2015, Chen et al., 2 Oct 2025). For instance, in polynomial optimization and parametric systems (Li et al., 2015, Chen et al., 2 Oct 2025), explicit exponents are given in terms of degree and problem dimensions.
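A toy illustration of such Hölder exponents (my own example, not from the cited papers): for $f(x) = x^4$ with $S = \{0\}$, the Łojasiewicz inequality yields the error bound $d(x, S) \le f(x)^{1/4}$, and the exponent $1/4$ can be recovered empirically from a log-log regression of $d(x, S)$ against $f(x)$.

```python
import numpy as np

f = lambda x: x**4                # polynomial with solution set S = {0}
dist_S = lambda x: np.abs(x)      # distance to S

x = np.logspace(-3, -0.5, 50)     # sample points approaching the solution set
# Fit log d(x,S) ~ q * log f(x) + log c; the slope q is the Holder exponent.
q, log_c = np.polyfit(np.log(f(x)), np.log(dist_S(x)), 1)
print(round(q, 6))  # ~0.25, matching the exponent 1/4 = 1/deg(f)
```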
3. Role in Optimization Algorithms and Complexity
Local error-bound conditions are pivotal for algorithmic convergence analysis and complexity.
- Under a local error bound, fixed-point iterations of averaged operators (including gradient descent, proximal methods, ADMM, operator splitting) converge linearly to the solution set, often in the absence of strong convexity (Treek et al., 31 Oct 2025).
- The rate is explicit in terms of the error-bound constant $c$: for an $\alpha$-averaged operator $T$ with $d(x, \operatorname{Fix} T) \le c\,\|x - Tx\|$ locally,
  $$ d(x_{k+1}, \operatorname{Fix} T) \le \rho\, d(x_k, \operatorname{Fix} T), \qquad \rho = \sqrt{1 - \frac{1 - \alpha}{\alpha c^2}}, $$
  with $\rho$ depending on the averaging parameter $\alpha$ and $c$.
- For inertial forward-backward schemes (FISTA/IFB), local error bounds—typically of Luo-Tseng type—enable super-polynomial or even linear rates in composite (nonsmooth) convex minimization (2007.07432).
- In nonconvex optimization, error bounds are central for local linear rates of first-order methods (gradient descent), and for quadratic convergence of cubic-regularization/Newton-type methods even at non-isolated or degenerate minima, where classical assumptions like strong convexity or nondegeneracy fail (Yue et al., 2018, Chen et al., 16 Feb 2025).
- Distributed asynchronous methods over graphs also exploit local error bounds for linear convergence without global regularity (Cannelli et al., 2020).
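The first bullet can be sketched numerically (my own illustration, with assumed problem data): gradient descent on the rank-deficient quadratic $f(x) = \tfrac{1}{2}\|Ax\|^2$ is an averaged operator whose fixed-point set $\operatorname{null}(A)$ is an entire subspace, so strong convexity fails, yet $d(x_k, \operatorname{Fix} T)$ still decays at a uniform linear rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient quadratic: f(x) = 0.5 * ||A x||^2, minimizers = null(A), not a singleton.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))   # rank 2, 6 variables
H = A.T @ A
L = np.linalg.eigvalsh(H).max()                                 # Lipschitz constant of grad f

step = 1.0 / L
T = lambda x: x - step * (H @ x)    # gradient step: a (1/2)-averaged operator for step 1/L

# Distance to Fix T = null(A): norm of the projection onto the row space of A.
A_pinv = np.linalg.pinv(A)
dist_fix = lambda x: np.linalg.norm(A_pinv @ (A @ x))

x = rng.standard_normal(6)
dists = []
for _ in range(200):
    dists.append(dist_fix(x))
    x = T(x)

# Empirical linear rate: consecutive ratios stay strictly below 1.
ratios = [b / a for a, b in zip(dists, dists[1:]) if a > 1e-12]
print(max(ratios) < 1.0)  # linear convergence of the distance to the fixed-point set
```

Here the contraction factor is $1 - \lambda_{\min}^{+}(H)/L$, i.e. the error-bound constant enters through the smallest nonzero eigenvalue, exactly as the rate formula above predicts.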
4. Connections to Constraint Qualifications and Geometry
Constraint qualifications (CQ) characterize when error bounds hold in constrained problems:
- In convex, polyhedral or conic inclusion problems, error bounds are characterized by suitable CQs such as Abadie's CQ (ACQ), Mangasarian-Fromovitz CQ, or strict constraint qualifications. For smooth cones and smooth mappings, local error bounds are equivalent to the validity of ACQ near the reference point (Huy et al., 7 Feb 2025).
- In mathematical programs with vanishing constraints (MPVC), recently developed weak CQs such as MPVC-generalized quasinormality are sufficient for error bounds, extending their applicability to degenerate or nonpolyhedral systems (Khare et al., 2018).
- The relationship between EB conditions, quadratic growth, and stationarity (e.g., enhanced M-stationarity, Polyak-Łojasiewicz inequality) has been established for both smooth and structured problems (Chen et al., 16 Feb 2025, Yue et al., 2018).
5. Stability and Perturbation Analysis
Stability of the error-bound property with respect to data perturbations is characterized in Banach spaces by the boundary subdifferential slope; for convex $f$ this reduces to

$$ \operatorname{rad} \operatorname{Er} f(\bar{x}) = d(0, \operatorname{bd}\, \partial f(\bar{x})). $$

This quantity is the exact "radius of error bounds", that is, the supremum size of perturbation (arbitrary, convex, or linear) under which local error bounds are preserved (Kruger et al., 2015). For convex systems, stability is further equivalent to strictly positive lower bounds on certain directional derivatives (Wei et al., 2021).
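A worked one-dimensional instance of this radius (my own computation, consistent with the convex boundary-distance characterization): for $f(x) = |x|$ at $\bar{x} = 0$,

```latex
% Subdifferential data at the reference point:
\partial f(0) = [-1, 1], \qquad \operatorname{bd}\,\partial f(0) = \{-1, 1\},
\qquad d(0, \operatorname{bd}\,\partial f(0)) = 1.
% Linear perturbations of size a < 1 preserve the error bound:
% f(x) + a x has subdifferential [a - 1, a + 1], which still contains 0 in its interior.
% A perturbation of modulus exactly 1, e.g. p(x) = -|x| + x^2, destroys it:
(f + p)(x) = x^2 \quad \text{has} \quad \operatorname{Er}(f + p)(0) = 0.
```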
6. Quantitative and Explicit Error-Bound Estimates
In polynomial, tame, or definable settings, explicit Hölder-type exponents and constants can be computed:
- For rank-constrained affine feasibility, an explicit Hölder exponent is given in terms of dimension, derived via polynomial Łojasiewicz exponents (Chen et al., 2 Oct 2025).
- For parametric and semi-infinite polynomial systems, the exponent depends on all primal and auxiliary variable dimensions and the maximal degree (Li et al., 2015).
- For smooth, regular, or lower-C functions, the modulus of the error bound is exactly characterized by geometric data from the subdifferential and its end set (Li et al., 2016).
A summary of main error-bound formulas:
| Setting | Error Bound Formulation | Main Quantitative Condition |
|---|---|---|
| Convex inequality | $d(x, S) \le c\,[f(x)]_+$ | $d(0, \partial f(x)) \ge \gamma > 0$ near $\bar{x}$ |
| Polynomial/tame | $d(x, S) \le c\,[f(x)]_+^{q}$ | exponent $q$ via explicit Łojasiewicz exponents |
| Fixed-point iteration (FPI) | $d(x, \operatorname{Fix} T) \le c\,\|x - Tx\|$ | $c$ via relative Hoffman constant |
| System inclusion $F(x) \in C$ | $d(x, F^{-1}(C)) \le c\, d(F(x), C)$ | ACQ holds locally |
| Variational/PD system | $d(z, S) \le c\,\|r(z)\|$ for a residual map $r$ | SOSC + SRC holds |
7. Functional Equivalences and Broader Frameworks
Recent work unifies various error-bound and subdifferential properties:
- For prox-regular functions, the Kurdyka-Łojasiewicz property, level-set subdifferential error bounds, and Hölder error bounds are locally equivalent, with computable relationships among the exponents and constants (Wang et al., 2023).
- Moreau envelope and other regularization procedures preserve error-bound properties with controlled changes in constants/exponents.
- The equivalence between local error-bound and quadratic growth conditions has been established beyond convexity (Yue et al., 2018, Chen et al., 16 Feb 2025).
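A minimal numerical sketch of the error-bound/quadratic-growth equivalence (my own illustration, with assumed problem data): for the convex quadratic $f(x) = \tfrac{1}{2}\|Ax\|^2$ with rank-deficient $A$, the gradient error bound $d(x, S) \le c\,\|\nabla f(x)\|$ and quadratic growth $f(x) \ge \mu\, d(x, S)^2$ hold simultaneously, with both constants governed by the smallest nonzero eigenvalue of $A^{\top}A$, even though $f$ is not strongly convex.

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))   # rank 2: f not strongly convex
H = A.T @ A
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

# S = argmin f = null(A); distance via projection onto the row space of A.
A_pinv = np.linalg.pinv(A)
dist_S = lambda x: np.linalg.norm(A_pinv @ (A @ x))

# Smallest nonzero eigenvalue of H governs both constants.
eigs = np.linalg.eigvalsh(H)
mu = min(e for e in eigs if e > 1e-10)

pts = [rng.standard_normal(6) for _ in range(1000)]
# Gradient error bound: d(x, S) <= ||grad f(x)|| / mu.
eb_holds = all(dist_S(x) <= np.linalg.norm(grad(x)) / mu + 1e-9 for x in pts)
# Quadratic growth: f(x) >= (mu / 2) * d(x, S)^2.
qg_holds = all(f(x) >= 0.5 * mu * dist_S(x) ** 2 - 1e-9 for x in pts)
print(eb_holds and qg_holds)
```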
8. Significance, Applications, and Current Directions
Local error-bound conditions are now recognized as a central unifying framework in optimization theory:
- Dictating convergence and complexity of first- and second-order algorithms, including in degenerate, distributed, and nonconvex regimes;
- Underpinning the geometry of solution sets in semialgebraic, matrix, and feasible region constrained problems;
- Offering a foundation for stability and sensitivity analysis under perturbations;
- Providing explicit constants and exponents, which are crucial for algorithmic tuning and theoretical guarantees.
Open directions concern further sharpening of quantitative bounds (larger exponents, tighter constants), characterizations in infinite-dimensional/nonsmooth settings, constructive verification in complex or high-dimensional systems, and integration with nonsmooth, stochastic, or online optimization methodologies.