Local Linearization Projection
- Local linearization-based projection approximates nonlinear objects by using first-order Taylor expansions to simplify complex projections.
- It is applied in nonconvex feasibility problems, optimization with nonlinear constraints, and manifold methods, enabling efficient computation with local convergence guarantees.
- The paradigm also underpins local smoothing in nonparametric statistics, linearized Bayesian inference, and structure-aware visualization, offering practical accuracy and scalability in high-dimensional settings.
Local linearization-based projection refers to a class of methods that approximate geometric objects (sets, functions, manifolds) or algorithmic steps (projection, gradient, inference) by their first-order (linear or affine) Taylor expansions at or near a current iterate, for the purpose of efficient computation or improved tractability. This paradigm is foundational across projection algorithms for nonconvex feasibility, optimization with nonlinear constraints, modern manifold methods in numerical analysis, local smoothing in nonparametric statistics, and local structure-preserving dimensionality reduction. The approach replaces computationally intractable or nonlinear projection operations with projections onto locally linearized or affine approximations, leading to sharply improved efficiency while preserving attractive convergence or estimation properties in the local regime.
1. General Mathematical Framework
Let $E$ be a finite-dimensional Euclidean space, and let $C \subset E$ be a set defined by nonlinear constraints, a nonlinear manifold, or the zero set of a nonlinear mapping $c$. The standard nearest-point projection onto $C$, $P_C(y) = \arg\min_{x \in C} \|x - y\|$, is in general nonconvex and computationally hard to compute. Local linearization-based projection approximates $C$ near a nominal point $x_0$ by its first-order Taylor expansion:
- If $C = \{x : c_i(x) \le 0,\ i = 1, \dots, m\}$ with smooth constraint data $c_i$, linearize the active constraints at $x_0$: $c_i(x_0) + \nabla c_i(x_0)^\top (x - x_0) \le 0$.
- For a manifold $\mathcal{M}$ parameterized by a smooth chart $F$, linearize the chart: $F(u) \approx F(u_0) + DF(u_0)(u - u_0)$.
The local projection is then defined by solving a convex quadratic program (QP) or least-squares problem, imposing only the linearized constraints or manifold tangency:

$$\widehat{P}_C(y; x_0) \in \arg\min_{x} \tfrac{1}{2}\|x - y\|^2 \quad \text{s.t.} \quad c_i(x_0) + \nabla c_i(x_0)^\top (x - x_0) \le 0, \ i \in \mathcal{A}(x_0).$$

This inexact projection operator agrees with the exact projection $P_C$ to second order in the distance of the base point from $C$ under standard regularity (e.g., the linear-independence constraint qualification, LICQ) (Drusvyatskiy et al., 2018).
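As a concrete illustration of this step, projecting a point $y$ onto the linearization at $x_0$ of an equality-constrained set $\{x : c(x) = 0\}$ reduces to a small least-squares solve with a closed-form answer. The following sketch is illustrative only; the helper name and the unit-circle test case are not taken from the cited works.

```python
import numpy as np

def linearized_projection(y, x0, c, jac):
    """Project y onto the linearization at x0 of the set {x : c(x) = 0}.

    The nonlinear constraint is replaced by its first-order model
    c(x0) + J(x0)(x - x0) = 0, and the resulting affine set is projected
    onto in closed form.
    """
    J = jac(x0)                                       # m x n Jacobian of c at x0
    r = c(x0) + J @ (y - x0)                          # residual of the linear model at y
    correction = J.T @ np.linalg.solve(J @ J.T, r)    # minimum-norm correction
    return y - correction

# Hypothetical test case: the unit circle c(x) = ||x||^2 - 1 in R^2.
c = lambda x: np.array([x @ x - 1.0])
jac = lambda x: 2.0 * x[None, :]
print(linearized_projection(np.array([1.3, 0.4]), np.array([0.9, 0.1]), c, jac))
```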
2. Alternating Projections with Local Linearization
Alternating projections seek a point in $A \cap B$ for closed sets $A, B \subset E$ by iteratively projecting onto each set. If $A$ is nonconvex, the exact projection $P_A$ is generally impractical. Linearization-based projection circumvents this by using the inexact operator $\widehat{P}_A$ from the previous section as a surrogate. The canonical scheme is:
- Start at $x_0$ near $A \cap B$.
- For $k = 0, 1, 2, \dots$ until convergence:
- $y_k = \widehat{P}_A(x_k; x_k)$ (local linearized projection toward $A$);
- $x_{k+1} = P_B(y_k)$.
Under prox-regularity of the sets, smoothness and LICQ for the constraints defining $A$, and transversality of $A$ and $B$ at the intersection, this algorithm converges locally linearly to a point in $A \cap B$, with rate controlled by the cosine of the minimal angle between the normal cones (Drusvyatskiy et al., 2018). The scheme is robust to the inexactness of the linearized projections, as the linearization error vanishes faster than the linear contraction.
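A minimal numerical sketch of the scheme, assuming a toy feasibility problem in which $A$ is the unit circle (nonconvex, handled by a projection linearized at the current iterate) and $B$ is a horizontal line (convex, projected onto exactly):

```python
import numpy as np

def lin_proj_circle(x):
    """Linearized projection of x toward A = {z : ||z||^2 = 1}, linearizing at x."""
    c = x @ x - 1.0                # constraint residual at x
    g = 2.0 * x                    # gradient of the constraint at x
    return x - (c / (g @ g)) * g   # minimum-norm step onto the tangent line

def proj_line(x):
    """Exact projection onto B = {z : z[1] = 0.5}."""
    return np.array([x[0], 0.5])

x = np.array([1.2, 0.9])
for k in range(50):
    y = lin_proj_circle(x)         # surrogate for the exact projection P_A
    x_next = proj_line(y)          # exact projection P_B
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
print(x, abs(x @ x - 1.0))         # near a point of A ∩ B
```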
In the special case where $A$ is a smooth manifold $\mathcal{M}$ parameterized by a chart $F$, the local tangent-space projection is realized by a least-squares step, optionally followed by a retraction back onto $\mathcal{M}$.
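A minimal sketch of this tangent-step-plus-retraction pattern, with the unit sphere and the normalization retraction standing in for a general chart and retraction:

```python
import numpy as np

def tangent_step_then_retract(x, v):
    """Move x on the unit sphere toward a target v: least-squares (orthogonal)
    projection of the displacement onto the tangent space, then retraction."""
    d = v - x
    t = d - (d @ x) * x / (x @ x)      # tangent component of the displacement
    y = x + t                          # step in the tangent space
    return y / np.linalg.norm(y)       # retraction: renormalize onto the sphere

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.8, 0.5, 0.2])
print(tangent_step_then_retract(x, v))
```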
3. Projected Gradient Methods with Constraint Linearization
When addressing nonlinear constrained optimization, e.g., minimizing $f(x)$ subject to nonlinear constraints $c(x) \le 0$, projections onto the true feasible set are nonconvex and expensive. The constraint-linearization method projects the updated iterate only onto the affine-linearized constraints at the current point. Specifically, at each step:
- Form the linearized constraint set $\mathcal{C}_k = \{x : c(x_k) + \nabla c(x_k)^\top (x - x_k) \le 0\}$.
- Take a projected gradient step, projecting onto $\mathcal{C}_k$, not the nonlinear feasible set: $x_{k+1} = P_{\mathcal{C}_k}\big(x_k - \alpha_k \nabla f(x_k)\big)$.
- Update the step size $\alpha_k$ via line search if necessary.
This is not classical projected gradient descent (since projection is onto an affine set that changes with the iterate) nor full SQP (as second-order information is omitted). Under regularity, the method converges globally to a KKT point, and locally linearly near a solution (Torrisi et al., 2016). For nonlinear model predictive control (NMPC), exploiting problem sparsity and introducing slacks for box constraints yield highly efficient implementations.
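The iteration can be sketched on a toy problem, minimizing a quadratic over the unit circle; the problem data, fixed step size, and equality-constrained variant used here are illustrative and not the NMPC formulation of the cited work:

```python
import numpy as np

def proj_affine(y, xk, c_val, g):
    """Project y onto the linearized constraint {x : c(x_k) + g^T (x - x_k) = 0}."""
    r = c_val + g @ (y - xk)
    return y - (r / (g @ g)) * g

# Toy problem: minimize ||x - target||^2 subject to ||x||^2 = 1.
target = np.array([2.0, 1.0])
x = np.array([0.0, 1.0])
alpha = 0.1                              # fixed step size (no line search here)
for k in range(200):
    c_val = x @ x - 1.0                  # nonlinear constraint value at x_k
    g = 2.0 * x                          # its gradient at x_k
    y = x - alpha * 2.0 * (x - target)   # gradient step on f
    x = proj_affine(y, x, c_val, g)      # project onto the linearized constraint
print(x, x @ x)                          # approaches target/||target||, with ||x||^2 ≈ 1
```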
4. Local Linearization in Statistical and Machine Learning Methods
Local linearization-based projection underpins several approaches in statistics and machine learning:
- In additive nonparametric regression, local linear smooth backfitting is recast as orthogonal projection of the response vector onto the additive subspace in a Hilbert space with an empirical semi-norm (Hiabu et al., 2022). Each iteration alternates local projections onto the component function spaces, with convergence rates matching the one-dimensional oracle case under standard assumptions.
- For function learning, the "linearization ML" paradigm projects the data onto a globally linear (affine) space via a fitted affine map $x \mapsto w^\top x + b$, then performs prediction by local consensus among nearest neighbors in the 1D output space of this linear projection (see the sketch below). This two-phase process can outperform both MLP and logistic regression on some LIBSVM datasets (Tueno, 2019). It differs from classical local linear regression by using a single global projection and only local adaptation in predictor space.
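A compact sketch of this two-phase procedure; the synthetic data are illustrative, and the consensus rule used here is a plain neighbor average rather than the exact rule of the cited work:

```python
import numpy as np

def fit_linear_projection(X, y):
    """Phase 1: fit a global affine map x -> w^T x + b by least squares."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]

def predict_local_consensus(X_train, y_train, w, b, x_query, k=5):
    """Phase 2: predict by consensus among neighbors in the projected 1D space."""
    z_train = X_train @ w + b
    z_query = x_query @ w + b
    nearest = np.argsort(np.abs(z_train - z_query))[:k]
    return np.mean(y_train[nearest])     # average (or majority vote) of the neighbors

# Illustrative synthetic data, not one of the LIBSVM datasets from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))
w, b = fit_linear_projection(X, y)
print(predict_local_consensus(X, y, w, b, rng.normal(size=3)))
```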
In Bayesian neural networks, the generalized Gauss-Newton (GGN) approximation is formalized as a local linearization of the network in parameter space, $f_\theta(x) \approx f_{\theta^*}(x) + J_{\theta^*}(x)(\theta - \theta^*)$, with $\theta^*$ the MAP point and $J_{\theta^*}(x)$ the Jacobian of the network output with respect to the parameters. Posterior inference proceeds in the resulting Bayesian GLM, and predictive uncertainty is propagated through this linearization, which stabilizes predictions and enhances out-of-distribution detection compared to naive nonlinear parameter sampling (Immer et al., 2020).
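A minimal sketch of propagating uncertainty through such a linearization, assuming a toy scalar model and a finite-difference Jacobian (in practice the Jacobian comes from automatic differentiation and the covariance from a Laplace/GGN posterior):

```python
import numpy as np

def linearized_predictive(f, theta_map, Sigma, x, eps=1e-6):
    """Predictive mean and variance under the linearized model
    f_theta(x) ≈ f_{theta*}(x) + J (theta - theta*), with theta ~ N(theta*, Sigma)."""
    f0 = f(theta_map, x)
    J = np.zeros(theta_map.size)
    for i in range(theta_map.size):        # finite-difference Jacobian at theta*
        t = theta_map.copy()
        t[i] += eps
        J[i] = (f(t, x) - f0) / eps
    return f0, J @ Sigma @ J               # mean at the MAP, variance through the linear map

# Toy scalar "network" (illustrative): f(theta, x) = theta0 * tanh(theta1 * x).
f = lambda theta, x: theta[0] * np.tanh(theta[1] * x)
theta_map = np.array([1.5, 0.8])
Sigma = 0.05 * np.eye(2)                   # stand-in for a Laplace posterior covariance
print(linearized_predictive(f, theta_map, Sigma, x=2.0))
```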
5. Applications in Multidimensional Projection and Visualization
For dimensionality reduction (DR) and data visualization, local linearization-based projection is instrumental in understanding and mapping the deformation of high-dimensional local subspaces under possibly nonlinear projections:
- Define local subspaces at each sample as ellipsoids obtained from PCA of the $k$ nearest neighbors.
- The projection is often defined implicitly as the solution to a local nonlinear optimization.
- The Jacobian of the projection is computed analytically via the implicit function theorem, exploiting the first-order optimality conditions of the local objective.
- Local subspace basis directions are then mapped through this Jacobian, producing a visualization glyph that encodes subspace stretching or rotation (Bian et al., 2020).
Empirical results demonstrate that this approach achieves high numerical accuracy (low mean angular error on synthetic planar data), and its glyph-based visualization reveals subtle global and local data structures unobservable in standard scatterplots.
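The basic pipeline can be sketched as follows, with a finite-difference Jacobian standing in for the analytic implicit-function-theorem computation and an illustrative nonlinear projection map:

```python
import numpy as np

def local_subspace_glyph(X, P, i, k=10, eps=1e-5):
    """Map the local PCA directions at sample X[i] through a numerical Jacobian of
    the projection map P: R^n -> R^2, giving the 2D axes of a glyph.

    Sketch only: the cited approach derives the Jacobian analytically via the
    implicit function theorem; finite differences are used here for brevity."""
    x = X[i]
    # Local subspace: leading principal directions of the k nearest neighbors.
    dists = np.linalg.norm(X - x, axis=1)
    nbrs = X[np.argsort(dists)[1:k + 1]]
    _, svals, Vt = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
    axes_hd = svals[:2, None] * Vt[:2]     # scaled top-2 local directions (2 x n)
    # Numerical Jacobian of the projection at x (2 x n).
    J = np.stack([(P(x + eps * e) - P(x)) / eps for e in np.eye(X.shape[1])], axis=1)
    return axes_hd @ J.T                   # images of the glyph axes in 2D

# Illustrative usage with a simple nonlinear projection map (not from the paper).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
P = lambda x: np.array([x[0] + 0.1 * x[1] ** 2, x[2] - 0.05 * x[3] ** 2])
print(local_subspace_glyph(X, P, i=0))
```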
6. Connections to Nonlinear Boundary Value Problems and Numerical Analysis
Local linearization-based projection generalizes to iterative methods for nonlinear boundary value problems (BVPs). For two-point BVPs, the shooting-projection iteration (SPI) method reformulates standard shooting by:
- Given a shooting trajectory $u^{[k]}$, construct a "projection" $\tilde{u}^{[k]}$ as the solution to a linearized BVP, with the linearization of the nonlinearity determined via Newton, Picard, or constant-slope schemes.
- The procedure is a projection in function space onto the affine subspace satisfying the two boundary conditions, and yields the familiar shooting method updates (including Newton and fixed-point shooting) (Faragó et al., 2020).
Convergence rates are quadratic (Newton) or linear (Picard/constant-slope), reflecting the underlying linearizations. The projection perspective offers a unifying explanation for the convergence and error-correction mechanisms of shooting and relaxation methods.
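The projection view recovers classical shooting updates; the following sketch implements plain Newton shooting for a toy linear BVP, using a finite-difference derivative of the shooting residual instead of the variational equation and explicit Euler integration for brevity (none of these choices are prescribed by the cited work):

```python
import numpy as np

def integrate_ivp(f, a, b, alpha, s, n=2000):
    """Integrate u'' = f(u) on [a, b] with u(a) = alpha, u'(a) = s (explicit Euler)."""
    h = (b - a) / n
    u, up = alpha, s
    for _ in range(n):
        u, up = u + h * up, up + h * f(u)
    return u                               # u(b; s)

def newton_shooting(f, a, b, alpha, beta, s0=0.0, tol=1e-8, ds=1e-7):
    """Newton iteration on the shooting residual F(s) = u(b; s) - beta."""
    s = s0
    for _ in range(50):
        F = integrate_ivp(f, a, b, alpha, s) - beta
        if abs(F) < tol:
            break
        # Finite-difference slope of the residual with respect to the initial slope s.
        dF = (integrate_ivp(f, a, b, alpha, s + ds) - integrate_ivp(f, a, b, alpha, s)) / ds
        s -= F / dF
    return s

# Toy BVP (illustrative): u'' = -u on [0, 1], u(0) = 0, u(1) = 0.5.
print(newton_shooting(lambda u: -u, 0.0, 1.0, 0.0, 0.5))   # ≈ 0.5 / sin(1)
```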
7. Theoretical Properties and Computational Aspects
The following table summarizes theoretical properties and complexity considerations across representative contexts:
| Domain | Linearized Projection Object | Convergence / Accuracy |
|---|---|---|
| Feasibility (AP) | Polyhedral/affine set (QP/LS step) | Local linear, rate set by the angle between normal cones (Drusvyatskiy et al., 2018) |
| Constrained Opt. (NLP) | Affine-linearized constraint set | Locally linear; globally convergent with augmented Lagrangian (Torrisi et al., 2016) |
| Nonparametric stat. | Additive subspace (empirical semi-norm) | Optimal (oracle) rates (Hiabu et al., 2022) |
| Dim. reduction (viz.) | Local subspace, Jacobian map | Two orders of magnitude more accurate glyphs (Bian et al., 2020) |
| Shooting for BVP | Linearized BVP operator | Quadratic (Newton), linear (Picard) (Faragó et al., 2020) |
Computational complexity is often dominated by small-scale QP or least-squares solves per iteration, leveraging only first derivatives. Line search or augmented Lagrangian terms ensure robustness in nonconvex contexts. In statistical applications, the block structure of projection operators facilitates scalable implementation.
Local linearization-based projection unifies a broad spectrum of algorithms in optimization, numerical analysis, statistical learning, and high-dimensional data analysis. By systematically replacing nonlinear or nonconvex projection operations with tractable linear or affine surrogates, these methods offer both practical efficiency and strong theoretical guarantees under local regularity and transversality conditions. Empirical and mathematical results across multiple domains confirm the versatility and foundational role of this approach (Drusvyatskiy et al., 2018, Torrisi et al., 2016, Tueno, 2019, Hiabu et al., 2022, Bian et al., 2020, Immer et al., 2020, Faragó et al., 2020).