
RCPLD in Nonlinear Optimization

Updated 25 March 2026
  • RCPLD is a constraint qualification that weakens traditional requirements by ensuring that any positive-linear dependence among gradients at a reference point forces local linear dependence.
  • It guarantees M-stationarity, local error bounds, and supports exact penalization even in non-Lipschitz, nonsmooth, and bilevel optimization settings.
  • RCPLD sits at the bottom of a hierarchy of classical qualifications such as LICQ and MFCQ, each of which implies it, and extends to disjunctive and parametric programming.

Relaxed Constant Positive Linear Dependence (RCPLD) is a constraint qualification (CQ) foundational to advanced nonlinear optimization, variational analysis, and mathematical programs with equilibrium or disjunctive constraints. RCPLD is defined for systems of equalities and inequalities and provides a verifiable but significantly weaker alternative to classical constraint qualifications such as the Mangasarian-Fromovitz CQ (MFCQ) and Linear Independence CQ (LICQ). Its main theoretical impact lies in guaranteeing KKT-type (M-stationarity) optimality conditions and the validity of local error bounds, even when conventional CQs fail. RCPLD has been systematically developed and generalized to handle parametric, bilevel, disjunctive, and nonsmooth optimization—including non-Lipschitz objectives.

1. Formal Definition of RCPLD

Let $\Gamma : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be the set-valued mapping

$$\Gamma(x) = \{y \in \mathbb{R}^m \mid h_i(x, y) \le 0 \ \ (i \in I := \{1, \dots, \ell\}), \ \ h_j(x, y) = 0 \ \ (j \in J := \{\ell+1, \dots, p\})\}$$

where each $h_i$ ($i \in I \cup J$) is continuously differentiable in $y$ near $(\bar{x}, \bar{y})$.

Let $I(\bar{x}, \bar{y}) = \{i \in I \mid h_i(\bar{x}, \bar{y}) = 0\}$ denote the active-inequality index set at $(\bar{x}, \bar{y})$, and write $\nabla_y h_i(\bar{x}, \bar{y})$ for the gradient of $h_i$ with respect to $y$ at $(\bar{x}, \bar{y})$.

RCPLD holds at $(\bar{x}, \bar{y})$ if there exist a neighborhood $U$ of $(\bar{x}, \bar{y})$ and an index set $S \subseteq J$ such that:

  • (i) $\{\nabla_y h_j(\bar{x}, \bar{y}) \mid j \in S\}$ is a basis of $\operatorname{span}\{\nabla_y h_j(\bar{x}, \bar{y}) \mid j \in J\}$;
  • (ii) $\operatorname{rank}[\nabla_y h_j(x, y)]_{j \in S}$ is constant for all $(x, y) \in U$;
  • (iii) for every $K \subseteq I(\bar{x}, \bar{y})$: if $\{\nabla_y h_i(\bar{x}, \bar{y})\}_{i \in K} \cup \{\nabla_y h_j(\bar{x}, \bar{y})\}_{j \in S}$ is positive-linearly dependent, i.e. there exist multipliers $(\alpha, \beta) \neq 0$ with $\alpha_i \ge 0$ such that $$\sum_{i \in K} \alpha_i \nabla_y h_i(\bar{x}, \bar{y}) + \sum_{j \in S} \beta_j \nabla_y h_j(\bar{x}, \bar{y}) = 0,$$ then $\{\nabla_y h_i(x, y)\}_{i \in K \cup S}$ is linearly dependent for all $(x, y) \in U$.

This definition generalizes to more complex systems (e.g., with Lipschitz functions, complementarity constraints, and disjunctive sets) by extending the role of gradients to suitable subdifferentials and normal cones, but the core logic of persistence of linear dependencies remains essential (Mehlitz et al., 2020, Xu et al., 2019, Xu et al., 2022).
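Condition (iii) can be checked mechanically on small systems. The following sketch uses a standard toy example (not from the cited papers): the equality $y_1 = 0$ written as the two inequalities $y_1 \le 0$ and $-y_1 \le 0$. At the origin both are active and their gradients are positive-linearly dependent, and because the gradients are constant, the linear dependence persists on every neighborhood, so RCPLD holds.

```python
# Hypothetical toy system: {y in R^2 : y1 <= 0, -y1 <= 0} encodes y1 = 0 as
# two inequalities. At ybar = (0, 0) both constraints are active and
# 1*grad(h1) + 1*grad(h2) = 0 is a positive-linear dependence; condition (iii)
# asks that the gradients stay linearly dependent near ybar, which we verify
# with a 2x2 determinant (both gradients here are constant in y).

def grad_h1(y):  # gradient of h1(y) = y1
    return (1.0, 0.0)

def grad_h2(y):  # gradient of h2(y) = -y1
    return (-1.0, 0.0)

def linearly_dependent(u, v, tol=1e-12):
    # two vectors in R^2 are linearly dependent iff det[u v] = 0
    return abs(u[0] * v[1] - u[1] * v[0]) <= tol

ybar = (0.0, 0.0)
g1, g2 = grad_h1(ybar), grad_h2(ybar)
# positive-linear dependence at ybar with alpha = (1, 1) >= 0, nonzero
assert g1[0] + g2[0] == 0.0 and g1[1] + g2[1] == 0.0

# condition (iii): the dependence persists at sample points near ybar
for y in [(0.1, -0.2), (-0.05, 0.3), (0.0, 0.0)]:
    assert linearly_dependent(grad_h1(y), grad_h2(y))
print("RCPLD condition (iii) verified on sample points")
```

The determinant test is specific to $\mathbb{R}^2$; in higher dimensions one would test rank deficiency of the gradient matrix instead.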

2. Hierarchy of Constraint Qualifications

RCPLD is strictly weaker than classical CQs as shown in the following logical implication chain:

Stronger CQs imply weaker ones along the chain:

LICQ (linear independence of all active constraint gradients)
\Downarrow
MFCQ (equality gradients linearly independent; a feasible descent direction exists)
\Downarrow
CPLD (constant positive-linear dependence persists in a neighborhood)
\Downarrow
RCPLD (positive-linear dependence only needs to force local linear dependence)
  • LICQ requires strict linear independence of active gradients.
  • MFCQ (Mangasarian-Fromovitz) requires linear independence of the equality-constraint gradients together with the existence of a feasible descent direction for the active inequalities.
  • CPLD (Constant Positive Linear Dependence): If any combination of gradients is positively dependent at a point, this dependence persists locally.
  • RCPLD relaxes CPLD: only requires that any positive-linear dependence among gradients at the reference point forces local linear dependence—not necessarily constancy of the dependence (Xu et al., 2019, Mehlitz et al., 2020).

Neither MFCQ nor RCRCQ (relaxed constant rank) implies the other, but both imply RCPLD (Mehlitz et al., 2020).
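The strictness of the hierarchy can be seen on the doubled-constraint toy example from Section 1 (an illustrative example, not drawn from the cited papers): for $y_1 \le 0$, $-y_1 \le 0$ at the origin, MFCQ demands a direction $d$ with $\nabla h_i \cdot d < 0$ for all active inequalities, which is impossible for the opposite gradients $(1,0)$ and $(-1,0)$, while RCPLD still holds there.

```python
# Hypothetical illustration of the hierarchy gap: MFCQ fails for the doubled
# constraint y1 <= 0, -y1 <= 0 at (0, 0) because no direction makes both
# opposite gradients strictly decrease, while RCPLD holds (their linear
# dependence persists everywhere). We search unit directions by brute force.

import math

def mfcq_direction_exists(gradients, samples=360):
    # crude search over unit directions in R^2 for a common descent direction
    for k in range(samples):
        t = 2 * math.pi * k / samples
        d = (math.cos(t), math.sin(t))
        if all(g[0] * d[0] + g[1] * d[1] < -1e-9 for g in gradients):
            return True
    return False

print(mfcq_direction_exists([(1.0, 0.0), (-1.0, 0.0)]))  # False: MFCQ fails
print(mfcq_direction_exists([(1.0, 0.0)]))               # True: single constraint
```

The direction search is a finite sample, so it is only a numerical heuristic; for these two inputs the outcome matches the analytical answer.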

3. Main Theoretical Outcomes

M-Stationarity and First-Order Optimality

RCPLD is sufficient to guarantee Mordukhovich (M-)stationarity conditions for both smooth and certain nonsmooth (including non-Lipschitz) optimization systems. These conditions are generalized KKT criteria, involving limiting subdifferentials and (possibly nonunique) multipliers (Xu et al., 2019, Guo et al., 2017, Mehlitz et al., 2020).

Error Bounds (R-regularity)

RCPLD is a key sufficient condition for the so-called R-regularity of set-valued mappings, which is equivalent to local error bounds of the type

$$\operatorname{dist}(y, \Gamma(x)) \le \kappa \cdot \max\Big\{0, \ \max_{i \in I} h_i(x, y), \ \max_{j \in J} |h_j(x, y)|\Big\}$$

for all $(x, y)$ near $(\bar{x}, \bar{y})$ (Mehlitz et al., 2020).

The R-regularity (metric subregularity) of the feasible mapping under RCPLD ensures robust stability of the solution to perturbations and underpins further structural properties, such as the Lipschitz-like (Aubin) property of solution maps.
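For a concrete feel for the error bound, consider a fixed (parameter-free) toy set where both sides of the inequality are computable in closed form. The set, the residual, and the constant $\kappa \le \sqrt{2}$ below are illustrative assumptions, not taken from the cited papers.

```python
# Numerical sketch of the error bound for the hypothetical set
# Gamma = {y in R^2 : h1(y) = y1 <= 0, h2(y) = y2 = 0}. The distance and the
# constraint residual have closed forms, and their ratio stays below
# sqrt(2) ~ 1.414, an empirical estimate of the constant kappa.

import math
import random

def residual(y):
    # max{0, h1(y), |h2(y)|} from the error-bound right-hand side
    return max(0.0, y[0], abs(y[1]))

def dist_to_gamma(y):
    # projection onto {y1 <= 0, y2 = 0} is (min(y1, 0), 0)
    return math.hypot(max(y[0], 0.0), y[1])

random.seed(0)
worst_ratio = 0.0
for _ in range(1000):
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    r = residual(y)
    if r > 1e-12:
        worst_ratio = max(worst_ratio, dist_to_gamma(y) / r)
print(f"empirical kappa ~= {worst_ratio:.3f}")  # bounded by sqrt(2)
```

The worst ratio is attained near the diagonal $y_1 = |y_2| > 0$, where the distance is $\sqrt{2}\, y_1$ while the residual is $y_1$.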

Exact Penalization

In the presence of RCPLD, exact penalization principles can be invoked: a local minimizer of the original problem remains a local minimizer for sufficiently large penalty parameters on the constraint violation functions, even in non-Lipschitz settings (Guo et al., 2017, Xu et al., 2019).
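A minimal numeric sketch of the exactness phenomenon, on an assumed toy problem (not from the cited papers): minimize $f(y) = -y_1 + y_2^2$ subject to $y_1 \le 0$, whose constrained minimizer is $(0,0)$.

```python
# Hypothetical toy problem illustrating exact penalization: for
# f(y) = -y1 + y2**2 with constraint y1 <= 0, the penalized function
# F(y) = f(y) + rho * max(0, y1) has the same minimizer (0, 0) once
# rho > 1, while for rho < 1 it is unbounded below along y1 -> +inf.

def f(y):
    return -y[0] + y[1] ** 2

def penalized(y, rho):
    return f(y) + rho * max(0.0, y[0])

def grid_argmin(obj, lo=-1.0, hi=1.0, n=41):
    # brute-force minimization over an n x n grid containing the origin
    pts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(((y1, y2) for y1 in pts for y2 in pts), key=obj)

ystar = grid_argmin(lambda y: penalized(y, rho=2.0))
print(ystar)  # (0.0, 0.0): penalized minimizer matches the constrained one
```

The threshold $\rho > 1$ equals the Lipschitz constant of $f$ in the direction of constraint violation, which is the usual heuristic for choosing an exact penalty parameter.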

4. Extensions, Generalizations, and Disjunctive Systems

RCPLD extends beyond classical nonlinear programming to nonsmooth analysis, disjunctive systems, and mathematical programs with disjunctive (MPDC), complementarity (MPEC), vanishing (MPVC), and switching (MPSC) constraints (Xu et al., 2022). The generalization employs limiting and regular normal cones for nonconvex or nonregular sets.

  • Piecewise RCPLD: In general MPDCs, RCPLD alone may not suffice for a local error bound due to pathological polyhedral intersections. Piecewise RCPLD, which tests RCPLD on every polyhedral subsystem in a decomposition of the constraint sets, restores error bounds in the general nonconvex setting.
  • Ortho-disjunctive programs: For MPEC, MPVC, and MPSC, specialized versions (MPEC-RCPLD, MPVC-RCPLD) are the weakest known conditions guaranteeing error bounds and stationarity without strict complementarity (Xu et al., 2022).

In the case of cardinality-constrained optimization, RCPLD suffices for metric subregularity due to the regularity of unions of subspaces (MPDSC), unlike in general MPDCs where piecewise RCPLD is indispensable (Xiao et al., 2022).
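The piecewise viewpoint can be made concrete on the simplest disjunctive constraint, the complementarity condition $0 \le y_1 \perp y_2 \ge 0$ (a standard MPEC building block, used here as an illustrative example): the feasible set is a union of two polyhedral branches, and quantities like the distance decompose branch by branch.

```python
# Sketch of the piecewise decomposition for the complementarity constraint
# 0 <= y1 perp y2 >= 0: the feasible set is the union of the branches
# B1 = {y1 = 0, y2 >= 0} and B2 = {y2 = 0, y1 >= 0}. Piecewise RCPLD
# checks the CQ on each branch; the distance to the union is the minimum
# of the (closed-form) branch distances.

import math

def dist_branch1(y):  # projection onto {y1 = 0, y2 >= 0}
    return math.hypot(y[0], max(-y[1], 0.0))

def dist_branch2(y):  # projection onto {y2 = 0, y1 >= 0}
    return math.hypot(max(-y[0], 0.0), y[1])

def dist_complementarity(y):
    return min(dist_branch1(y), dist_branch2(y))

print(dist_complementarity((0.3, 0.4)))  # 0.3: closest branch is B1
```

Each branch is a polyhedron described by smooth (affine) constraints, so the gradient-based RCPLD machinery of Section 1 applies to it directly.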

5. Applications in Parametric and Bilevel Optimization

Parametric Optimization

For a parametric program

$$\min_y f(x, y) \quad \text{s.t. } y \in \Gamma(x),$$

RCPLD guarantees R-regularity of $\Gamma$ under convexity and affine assumptions, implying uniform error bounds and the Aubin property (local Lipschitz behavior of the solution map $S(x)$ and value function $\phi(x)$) (Mehlitz et al., 2020).
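The Lipschitz stability delivered by R-regularity can be seen on a one-dimensional toy program (a hypothetical example, not from the cited papers): $\min_y (y - x)^2$ subject to $y \ge 0$, whose solution map and value function are known in closed form.

```python
# Hypothetical parametric program min_y (y - x)**2 s.t. y >= 0: its solution
# map S(x) = max(x, 0) and value function phi(x) = min(x, 0)**2 are both
# locally Lipschitz, the kind of stability that RCPLD-based R-regularity
# yields. We estimate local Lipschitz constants by finite differences.

def S(x):
    return max(x, 0.0)

def phi(x):
    return min(x, 0.0) ** 2

xs = [(k - 100) / 100 for k in range(201)]  # grid on [-1, 1]
lip_S = max(abs(S(a) - S(b)) / abs(a - b) for a, b in zip(xs, xs[1:]))
lip_phi = max(abs(phi(a) - phi(b)) / abs(a - b) for a, b in zip(xs, xs[1:]))
print(lip_S, lip_phi)  # roughly 1.0 and 2.0 on this interval
```

Both maps have a kink at $x = 0$ (where the constraint switches between active and inactive) yet remain Lipschitz across it, which is exactly the behavior the Aubin property formalizes.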

Bilevel Programming

In bilevel programs, RCPLD is central in enabling:

  • Existence of solutions in pessimistic formulations (through lower semicontinuity of the solution map),
  • Partial calmness (a necessary property for the validity of exact penalization and KKT conditions) in optimistic/value-function formulations,
  • Derivation of strong first-order optimality conditions in the combined program representation, where RCPLD is the only checkable CQ weaker than both MFCQ and CPLD ensuring M-stationarity (Xu et al., 2019, Ye, 2019, Mehlitz et al., 2020).
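The value-function route above can be sketched on a toy bilevel problem (a hypothetical example, not from the cited papers). With lower level $\min_y (y - x)^2$ over $y \ge 0$, the optimal value is $\phi(x) = \min(x, 0)^2$, and the optimistic bilevel program becomes a single-level program with the constraint $(y - x)^2 - \phi(x) \le 0$; it is well known that standard CQs such as MFCQ fail for this value-function constraint at every feasible point, which is why weaker CQs like RCPLD matter here.

```python
# Toy value-function reformulation (hypothetical bilevel problem): the lower
# level min_y (y - x)**2 over y >= 0 has closed-form value phi(x); a pair
# (x, y) is feasible for the single-level reformulation iff y >= 0 and
# (y - x)**2 <= phi(x), i.e. y solves the lower-level problem.

def phi(x):  # lower-level optimal value, available in closed form here
    return min(x, 0.0) ** 2

def value_function_feasible(x, y, tol=1e-9):
    return y >= -tol and (y - x) ** 2 - phi(x) <= tol

# y = max(x, 0) solves the lower level, hence is feasible; y = x + 1 is not
print(value_function_feasible(0.5, 0.5), value_function_feasible(0.5, 1.5))
```

In general $\phi$ is not available in closed form and is merely Lipschitz, so the reformulated constraint is nonsmooth, which is where the subdifferential-based RCPLD variants of Section 6 come in.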

6. RCPLD for Non-Lipschitz and Nonsmooth Problems

RCPLD extends to non-Lipschitz settings via the Mordukhovich horizon subdifferential $\partial^\infty \varphi(x^*)$ of the possibly nonsmooth component $\varphi$ of the objective. The $a^\infty$-RCPLD variant replaces pointwise gradient dependence by a generalized inclusion involving subdifferentials and ensures KKT-type necessary conditions and error bounds for non-Lipschitz problems (Guo et al., 2017). In these settings, RCPLD remains checkable and effective in practice.
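To see why ordinary (Lipschitz) subdifferential calculus breaks down here, consider the standard non-Lipschitz example $\varphi(y) = |y|^{1/2}$ (an illustrative choice, not from the cited papers): its difference quotients at the origin are unbounded, which is precisely the situation the horizon subdifferential is designed to capture.

```python
# Minimal numeric illustration (assumed example) of non-Lipschitz behavior:
# phi(y) = |y| ** 0.5 has difference quotients at 0 that blow up like
# t ** (-1/2), so no finite Lipschitz constant exists near the origin.

def phi(y):
    return abs(y) ** 0.5

quotients = [(phi(t) - phi(0.0)) / t for t in (1e-2, 1e-4, 1e-6)]
print(quotients)  # roughly [10, 100, 1000]: unbounded as t -> 0
```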

7. Summary Table: RCPLD versus Other Constraint Qualifications

Constraint Qualification — Brief Definition

  • LICQ: all active constraint gradients are linearly independent.
  • CRCQ: every subset of active gradients maintains constant rank locally.
  • CPLD: any positive-linear dependence among active gradients persists locally.
  • RCPLD: any positive-linear dependence at the reference point forces local linear dependence; sufficient for the local error bound (R-regularity).

LICQ \Rightarrow CRCQ \Rightarrow CPLD \Rightarrow RCPLD (Xu et al., 2022, Xiao et al., 2022, Xu et al., 2019).

References

  • "R-regularity of set-valued mappings under the relaxed constant positive linear dependence constraint qualification with applications to parametric and bilevel optimization" (Mehlitz et al., 2020)
  • "Relaxed constant positive linear dependence constraint qualification for disjunctive programs" (Xu et al., 2022)
  • "Relaxed constant positive linear dependence constraint qualification and its application to bilevel programs" (Xu et al., 2019)
  • "Optimality conditions and constraint qualifications for cardinality constrained optimization problems" (Xiao et al., 2022)
  • "Constraint qualifications and optimality conditions in bilevel optimization" (Ye, 2019)
  • "Necessary Optimality Conditions and Exact Penalization for Non-Lipschitz Nonlinear Programs" (Guo et al., 2017)
