Feasibility-Preserving Newton Algorithms

Updated 27 January 2026
  • Feasibility-preserving Newton-type algorithms are iterative methods that modify classical Newton steps with barrier functions and projection-free techniques to maintain strict feasibility.
  • They integrate self-concordant barriers, adaptive Hessian inversion, and inexact subproblem solutions to balance convergence speed with computational efficiency.
  • These methods are applicable to convex/nonconvex optimization, online convex optimization, inverse optimization, and nonlinear PDE analysis, providing robust theoretical guarantees.

Feasibility-preserving Newton-type algorithms are iterative numerical methods that leverage Newton’s approach to optimization and nonlinear equation solving while maintaining strict feasibility with respect to problem constraints. These methods incorporate projection-free techniques, self-concordant barrier regularization, adaptive Hessian inversion strategies, and inexact subproblem solutions. They are foundational in convex and nonconvex constrained optimization, online convex optimization (OCO), inverse optimization, and nonlinear PDE analysis.

1. Principles of Feasibility Preservation

Feasibility preservation requires that each iterate generated by the algorithm remains in the feasible region defined by the problem’s constraints. Classical Newton or quasi-Newton methods do not guarantee this property, necessitating either projection steps (which may be computationally intensive) or constraint-aware mechanisms. Feasibility-preserving Newton-type algorithms circumvent explicit projections by:

  • Employing barrier functions that diverge at the boundary of the feasible set, thus restricting iterates to the interior by construction.
  • Structuring Newton or quasi-Newton steps so that feasibility is retained, either via functional analytic estimates, truncation, or adaptive update strategies.
  • Utilizing operator splitting, active-set partitioning, or tailored line searches coupled with projections only onto simple sets (such as the nonnegative orthant) (Gatmiry et al., 2023, Smee et al., 2024, Ramos et al., 2020).
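
As a minimal illustration of the first two mechanisms, the sketch below applies damped Newton steps to a log-barrier objective with a fraction-to-the-boundary rule, so every iterate stays strictly inside the nonnegative orthant. The quadratic program, the barrier weight `mu`, and the factor `tau` are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def barrier_newton(Q, c, x0, mu=1e-2, tau=0.95, iters=50):
    """Newton on F(x) = 0.5 x'Qx + c'x - mu * sum(log x), with a
    fraction-to-the-boundary rule so iterates stay strictly positive."""
    x = x0.copy()
    for _ in range(iters):
        grad = Q @ x + c - mu / x             # gradient of the barrier objective
        hess = Q + mu * np.diag(1.0 / x**2)   # Hessian of the barrier objective
        step = np.linalg.solve(hess, -grad)   # Newton direction
        t, neg = 1.0, step < 0
        if neg.any():                         # largest safe step toward the boundary
            t = min(1.0, tau * np.min(-x[neg] / step[neg]))
        x = x + t * step                      # x remains in the open orthant
    return x

Q = np.diag([2.0, 2.0])
c = np.array([-2.0, 1.0])
x = barrier_newton(Q, c, np.ones(2))
```

Because the barrier term $-\mu\sum_i \log x_i$ diverges as any coordinate approaches zero, the damped step never needs an explicit projection.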

2. Self-Concordant Barriers and Projection-Free Newton Methods

In online convex optimization over a compact convex set $\mathcal{K}\subset\mathbb{R}^d$, projection-free Newton-type methods use a twice-differentiable, $M_\phi$-self-concordant barrier function $\phi:\operatorname{int}\mathcal{K}\to\mathbb{R}$ with parameter $\nu$ as a regularizer. Such barriers satisfy

  • $\phi(x)\to+\infty$ as $x\to\partial\mathcal{K}$,
  • the third-derivative control

$$|\nabla^3\phi(x)[u,u,u]| \le 2 M_\phi \|u\|_{\nabla^2\phi(x)}^3,$$

  • and $\nabla\phi(x)^\top[\nabla^2\phi(x)]^{-1}\nabla\phi(x) \le \nu$,

and these properties together induce spectral stability of the Hessian and control of the Newton decrement.

For composite objectives $\Phi_t(x) = \phi(x) + \sum_{s=1}^{t-1} g_s^\top x$, the Newton step becomes

$$x_{t+1} = x_t - [\nabla^2\phi(x_t)]^{-1}\left(\nabla\phi(x_t) + g_t\right),$$

which remains in $\mathcal{K}$ because the barrier diverges at the boundary; no explicit projections onto $\mathcal{K}$ are needed (Gatmiry et al., 2023).

Self-concordance ensures that for small moves in the local norm, Hessians remain spectrally similar, which is critical to enabling amortization of the matrix-inversion cost (see Section 4).
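
A minimal sketch of this projection-free update, using the $\nu = 1$ self-concordant barrier $\phi(x) = -\log(1 - \|x\|^2)$ for the unit ball; the dimension, the loss scaling `eta`, and the random linear losses are illustrative assumptions:

```python
import numpy as np

def grad_phi(x):
    """Gradient of phi(x) = -log(1 - ||x||^2)."""
    return 2.0 * x / (1.0 - x @ x)

def hess_phi(x):
    """Hessian of phi: 2/(1-r^2) I + 4 x x^T / (1-r^2)^2."""
    r2 = x @ x
    return 2.0 / (1.0 - r2) * np.eye(len(x)) + 4.0 * np.outer(x, x) / (1.0 - r2) ** 2

rng = np.random.default_rng(0)
eta = 0.02                     # scaling of the (assumed) linear losses
x = np.zeros(3)                # start at the analytic center of the ball
G = np.zeros(3)                # running sum of observed loss gradients
for t in range(100):
    g = eta * rng.normal(size=3)                      # adversarial loss g_t
    G += g
    # one Newton step on Phi(x) = phi(x) + G.x from the current iterate
    x = x - np.linalg.solve(hess_phi(x), grad_phi(x) + G)
```

Every iterate stays strictly inside the ball even though no projection is ever computed, because a Newton step with small decrement stays inside the Dikin ellipsoid of the barrier.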

3. Inexact Newton Methods: Feasibility and Convergence

Feasibility-preserving inexact Newton methods extend the classical Newton iteration by allowing an additive error $r_k$ in each step,

$$x_{k+1} = x_k - [Df(x_k)]^{-1} f(x_k) + r_k,$$

with $\|r_k\|$ controlled. Under a set of minor hypotheses (invertibility of $Df(x_0)$, local Lipschitz continuity of the Jacobian, and a bound $d < 1/K$ on the error term), one can guarantee (semi-)local feasibility: all iterates remain within a prescribed ball around $x_0$, and $Df(x_k)$ remains invertible. A majorant equation $g_d(t)$ determines the maximal radius of this ball and the contraction guarantees; quadratic convergence is achieved if $\|r_k\| = O(\|x^* - x_k\|^2)$, while linear convergence is retained for merely vanishing $\|r_k\|$ (Ramos et al., 2020).

This approach undergirds computer-assisted existence proofs for nonlinear boundary value problems, as the machinery provides both rigorous enclosures on iterates and feasibility within the function space defined by the constraints.
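
The two convergence regimes can be seen on a toy scalar equation $f(x) = x^3 - 2 = 0$; the specific error models below are illustrative choices, not those of the cited analysis:

```python
def inexact_newton(x0, err, iters=30):
    """Iterate x_{k+1} = x_k - Df(x_k)^{-1} f(x_k) + r_k for f(x) = x^3 - 2,
    where err(x, k) supplies the additive error r_k."""
    x = x0
    for k in range(iters):
        x = x - (x**3 - 2.0) / (3.0 * x**2) + err(x, k)
    return x

root = 2.0 ** (1.0 / 3.0)

# r_k = O(|x* - x_k|^2): quadratic convergence is retained
x_quad = inexact_newton(1.0, lambda x, k: 0.1 * (root - x) ** 2)

# r_k merely vanishing (here geometrically): convergence degrades gracefully
x_lin = inexact_newton(1.0, lambda x, k: 0.01 * 0.5 ** k)
```

In the first run the error shrinks like the squared distance to the root, so the composite iteration keeps Newton's quadratic rate; in the second, the final accuracy is limited by how fast $r_k$ itself vanishes.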

4. Hessian-Inverse Reuse and Amortized Complexity

A significant computational cost in Newton-type methods is the formation and inversion of the Hessian $\nabla^2\phi(x)$. In high-dimensional settings, inverting the Hessian at every iteration is often prohibitive ($O(d^3)$ per inversion). Feasibility-preserving Newton-type OCO algorithms exploit the spectral stability guaranteed by the self-concordance of $\phi$:

  • Define a “landmark” point $u$ and reuse $H_u^{-1}$ for all iterates $x$ close enough in the local Hessian norm, $\|x - u\|_{H_u} < \delta$ for $\delta = O(1/M_\phi)$.
  • When the movement exceeds this threshold, recompute $H_u^{-1}$ at the new landmark.
  • The total number of full inversions over $T$ rounds is bounded by $O(M_\phi T \eta)$, which is $o(T)$ for suitable step-sizes $\eta$ (Gatmiry et al., 2023).

For large-scale optimization, this amortization is critical, reducing effective per-iteration complexity to that of a gradient step plus rare matrix inversions.
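
A minimal sketch of the landmark-reuse pattern; the class and function names, the threshold `delta`, and the toy objective are assumptions for illustration:

```python
import numpy as np

class LandmarkHessian:
    """Reuse an inverted Hessian while iterates stay within the local-norm
    ellipsoid ||x - u||_{H_u} < delta around the landmark u."""

    def __init__(self, hess_fn, delta=0.5):
        self.hess_fn = hess_fn
        self.delta = delta
        self.u = None
        self.refreshes = 0

    def solve(self, x, g):
        if self.u is None or self._local_dist(x) >= self.delta:
            self.H = self.hess_fn(x)            # recompute at the new landmark
            self.H_inv = np.linalg.inv(self.H)  # the expensive O(d^3) step
            self.u = x.copy()
            self.refreshes += 1
        return self.H_inv @ g                   # cheap O(d^2) apply otherwise

    def _local_dist(self, x):
        d = x - self.u
        return np.sqrt(d @ self.H @ d)          # local norm at the landmark

# toy strongly convex objective f(x) = sum(x_i^4)/4 + ||x||^2/2
def grad(x):
    return x**3 + x

def hess(x):
    return np.diag(3.0 * x**2) + np.eye(len(x))

cache = LandmarkHessian(hess, delta=0.5)
x = np.ones(4)
steps = 40
for _ in range(steps):
    x = x - cache.solve(x, grad(x))
```

Only a handful of the 40 Newton-type steps trigger a fresh inversion; the rest reuse the cached inverse, which is the amortization effect described above.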

5. Adaptive and Inexact Newton Subsolvers

In feasibility-preserving Newton-type frameworks for nonnegativity-constrained or more general set-constrained optimization (e.g., nonnegative least squares or nonnegative matrix factorization), the Hessian block corresponding to strictly positive variables is generally indefinite and possibly ill-conditioned.

Methods utilize Krylov-subspace solvers such as MINRES to find Newton directions inexactly:

  • For indices in the inactive set $I_k$ (where $x_k^i > \delta_k$), solve $H_k^I s^I = -g_k^I$ via MINRES up to a residual-norm tolerance, or detect nonpositive curvature via $r^\top H_k^I r \le 0$, allowing rapid escape from saddle regions.
  • For active coordinates, employ scaled projected gradient updates.

Projection onto the nonnegative orthant is trivial, so feasibility is enforced exactly even under inexact directions, and a two-metric Armijo-type line search ensures decrease of the objective function (Smee et al., 2024).

This approach extends readily to more general bound-constrained or simple set-constrained problems by choosing the projection $P$ accordingly.
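
As a concrete sketch for nonnegative least squares $\min_{x \ge 0} \tfrac12\|Ax - b\|^2$: the problem sizes, the active-set threshold `delta`, and the Armijo constant are illustrative assumptions, and the simplified step omits the scaled-gradient metric and the explicit curvature test of the full method.

```python
import numpy as np
from scipy.sparse.linalg import minres

def objective(A, b, x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def two_metric_step(A, b, x, delta=1e-6):
    g = A.T @ (A @ x - b)                   # gradient of 0.5 ||Ax - b||^2
    inactive = x > delta                    # strictly positive block I_k
    s = -g.copy()                           # gradient direction on the active set
    if inactive.any():
        AI = A[:, inactive]
        sI, info = minres(AI.T @ AI, -g[inactive])  # inexact Newton direction
        if info == 0 and sI @ g[inactive] < 0:      # accept only descent directions
            s[inactive] = sI
    # Armijo backtracking along the projected arc preserves monotone decrease
    t, f0 = 1.0, objective(A, b, x)
    while t > 1e-12:
        x_new = np.maximum(x + t * s, 0.0)  # trivial projection onto x >= 0
        if objective(A, b, x_new) <= f0 + 1e-4 * (g @ (x_new - x)):
            return x_new
        t *= 0.5
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
x = np.ones(5)
for _ in range(50):
    x = two_metric_step(A, b, x)
```

Feasibility is exact at every iterate, since the only projection ever applied is the coordinate-wise clamp onto the orthant.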

6. Newton-Type Algorithms for Discrete and Inverse Optimization

In inverse optimization with combinatorial feasible sets, Newton-type algorithms can be constructed to iteratively adjust cost functions so as to make a given candidate solution $F^*$ optimal, while minimizing the deviation according to measures such as the weighted span. In the unit-weight case, the feasible deviation vector has a special form, and Newton steps update the deviation via explicit combinatorial calculations involving the current minimal violator $F_i$.

A line search ensures that feasibility with respect to the box constraints $\ell \le p \le u$ is preserved at every step. The overall number of Newton steps is $O(n^2)$, each requiring a single call to an $\mathcal{F}$-oracle, yielding a strongly polynomial-time algorithm. For general rational weights, the complexity becomes pseudo-polynomial in the worst case, and the existence of a general combinatorial, strongly polynomial scheme remains open (Bérczi et al., 2023).

7. Convergence Guarantees and Applications

Feasibility-preserving Newton-type algorithms yield theoretical guarantees competitive with the best known projection-based and first-order methods:

  • In OCO with self-concordant barriers, regret is $O(RG\sqrt{T \log T})$ for losses with bounded gradients, comparable to Euclidean projection-based algorithms but without explicit projection operations (Gatmiry et al., 2023).
  • Inexact Newton methods for Banach-space nonlinear equations guarantee global feasibility and convergence under verifiable analytic conditions and are applicable to rigorous numerics in PDE and boundary value problems (Ramos et al., 2020).
  • Nonnegativity-constrained, nonconvex optimization with two-metric projection Newton-MR methods achieves $\varepsilon_g$-approximate first-order optimality in $O(\varepsilon_g^{-3/2})$ iterations under Lipschitz-Hessian assumptions, showing robust and rapid convergence in large-scale empirical tests (Smee et al., 2024).
  • Discrete inverse optimization Newton-type schemes are strongly polynomial for unit weights, readily extend to multi-cost functions, and provide a systematic prescription to maintain feasibility through coordinate-wise truncation (Bérczi et al., 2023).

Feasibility-preserving Newton-type algorithms thus combine advanced analytic control, adaptive computational routines, and compliance with constraints to address a broad class of problems across convex, nonconvex, continuous, and combinatorial optimization.
