Rank-One Update: Theory and Applications

Updated 11 December 2025
  • A rank-one update modifies a matrix by adding a scaled outer product of two vectors, enabling efficient analytic updates of inverses, determinants, and factorization structures.
  • This technique preserves critical properties such as symmetry and low-rank structure, making it integral to algorithms in quasi-Newton methods, matrix completion, and online learning.
  • Empirical results show that rank-one update methods like SR1, OR1MP, and CMA-ES deliver rapid convergence and superior performance in high-dimensional optimization and reinforcement learning tasks.

A rank-one update modifies a matrix, tensor, or linear operator by the addition (or subtraction) of a term formed as an outer product of two vectors. This structure allows for efficient analytic updates of matrix factorizations, inversion, determinant, and other properties, and is foundational in many algorithmic designs in numerical optimization, machine learning, signal processing, and control. Rank-one updates preserve or controllably change critical structural features and typically incur minimal incremental computational cost compared with recomputation from scratch.

1. Mathematical Foundation and Algebraic Properties

Given $A \in \mathbb{R}^{n\times m}$ and vectors $u \in \mathbb{R}^n$, $v \in \mathbb{R}^m$, a rank-one update takes the form $A_{\text{new}} = A + \alpha\, u v^\top$ with $\alpha \in \mathbb{R}$. If $A$ is symmetric and $u = v$, the update is symmetric and remains rank-one. This structure enables explicit formulas for important linear-algebraic operations:

  • Inverse update (Sherman–Morrison formula): Given invertible $A$ and $\gamma = 1 + v^\top A^{-1} u \ne 0$,

$$(A + u v^\top)^{-1} = A^{-1} - \frac{A^{-1} u\, v^\top A^{-1}}{\gamma}$$

This underlies efficient updates in optimization and invertible neural blocks (Krämer et al., 2020).

  • Determinant update (matrix-determinant lemma):

$$\det(A + u v^\top) = \det(A)\,\bigl(1 + v^\top A^{-1} u\bigr)$$

Both formulas extend to rank-one symmetric and non-symmetric updates with proper adaptation.
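
The following numpy sketch (generic dense matrices and illustrative variable names, not tied to any cited implementation) checks both identities against direct recomputation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # generic, well-conditioned, invertible
u = rng.standard_normal(n)
v = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
gamma = 1.0 + v @ A_inv @ u                      # must be nonzero for the update to exist

# Sherman–Morrison: O(n^2) inverse update instead of a fresh O(n^3) inversion.
A_new_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / gamma

# Matrix-determinant lemma: determinant update at the cost of one inner product.
det_new = np.linalg.det(A) * gamma

assert np.allclose(A_new_inv, np.linalg.inv(A + np.outer(u, v)))
assert np.isclose(det_new, np.linalg.det(A + np.outer(u, v)))
```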

The analytic tractability extends to eigenvalue, SVD, and QR decompositions, with secular equations governing the shifted spectrum after a rank-one perturbation. Rank-one algebraic techniques are central in quasi-Newton methods, matrix factorization algorithms, and streaming data analysis (Gandhi et al., 2017, Mitz et al., 2017).

2. Rank-One Updates in Optimization and Learning Algorithms

Quasi-Newton Methods and the SR1 Update

In unconstrained optimization, particularly large-scale nonconvex settings, the symmetric rank-one (SR1) update is a key method for building curvature (Hessian) approximations:

$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^\top}{(y_k - B_k s_k)^\top s_k}$$

where $B_k$ is the current Hessian approximation, $s_k$ is the step, and $y_k$ is the gradient difference (Arguillere, 2013, Ranganath et al., 17 Feb 2025). Distinct from L-BFGS, which enforces positive-definiteness, SR1 allows $B_{k+1}$ to become indefinite, efficiently capturing negative curvature directions in deep networks. Modern limited-memory SR1 schemes, augmented with adaptive cubic regularization, achieve state-of-the-art performance in deep learning and are especially effective for escaping saddle points. Empirical results show superiority over L-BFGS, Adam, and other first-order adaptive methods on a range of deep architectures (Ranganath et al., 17 Feb 2025).
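
A minimal sketch of a single SR1 step, assuming dense matrices and the standard skip rule when the denominator is numerically zero (function name and tolerance are illustrative, not taken from the cited papers):

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """One symmetric rank-one (SR1) update of the Hessian approximation B,
    given step s and gradient difference y."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                          # update undefined or unstable: keep B
    return B + np.outer(r, r) / denom     # may make B indefinite, by design
```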

Matrix Completion and Pursuit

Orthogonal Rank-One Matrix Pursuit (OR1MP) and its economic variant (EOR1MP) reconstruct low-rank matrices from incomplete observations by iteratively extracting and fitting rank-one atoms. Each step identifies the leading singular direction of the current residual (by power iteration) and performs least-squares weight correction. This approach converges linearly and at scale is orders of magnitude faster than SVD-based solvers, with only the rank as a hyperparameter to set (Wang et al., 2014).
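
A rough, self-contained sketch of the pursuit loop under simplifying assumptions (a full SVD of the residual stands in for power iteration; the name `or1mp` and the interface are illustrative, not the reference implementation of Wang et al., 2014):

```python
import numpy as np

def or1mp(Y, mask, rank):
    """Rank-one matrix pursuit on observed entries.
    Y: matrix of observations (values off-mask are ignored),
    mask: boolean array of observed entries, rank: number of rank-one atoms."""
    atoms, X = [], np.zeros_like(Y, dtype=float)
    for _ in range(rank):
        R = np.where(mask, Y - X, 0.0)                  # residual on observed entries
        U, _, Vt = np.linalg.svd(R, full_matrices=False)
        atoms.append(np.outer(U[:, 0], Vt[0]))          # leading rank-one atom
        # Least-squares weights over all atoms, restricted to observed entries.
        A = np.stack([M[mask] for M in atoms], axis=1)
        theta, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
        X = sum(t * M for t, M in zip(theta, atoms))
    return X
```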

Policy Evaluation and Reinforcement Learning

Recent advances in dynamic programming leverage low-cost rank-one approximate policy evaluation, exemplified by Rank-One Modified Value Iteration (R1-VI) and R1-QL. Here, the transition matrix in the Bellman operator is replaced by its best stochastic rank-one approximation (the outer product of the all-ones vector and the stationary distribution). The resulting update injects a global correction:

$$v_{k+1} = T(v_k) + \frac{\gamma}{1-\gamma}\,\bigl\langle d_k,\; T(v_k) - v_k \bigr\rangle\, \mathbf{1}$$

with $d_k$ estimated iteratively by the power method. This yields policy iteration–like acceleration at the computational cost of value iteration. R1-VI and R1-QL outperform Anderson-accelerated and Nesterov-accelerated variants, delivering rapid convergence, especially as $\gamma \to 1$ (Kolarijani et al., 3 May 2025).
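
A hedged numpy sketch of one such step, assuming a tabular MDP with transition tensor `P` and reward matrix `r` (the greedy-policy bookkeeping and the single power-method step for $d_k$ are illustrative, not the exact algorithm of Kolarijani et al., 3 May 2025):

```python
import numpy as np

def r1_vi_step(v, d, P, r, gamma):
    """One rank-one modified value-iteration step.
    v: (S,) value estimate, d: (S,) stationary-distribution estimate,
    P: (A, S, S) transition probabilities, r: (A, S) rewards, gamma: discount."""
    Q = r + gamma * (P @ v)                # (A, S) action values
    Tv = Q.max(axis=0)                     # Bellman optimality operator T(v)
    pi = Q.argmax(axis=0)                  # greedy policy
    P_pi = P[pi, np.arange(len(v)), :]     # (S, S) transitions under pi
    d = d @ P_pi                           # one power-method step toward the
    d /= d.sum()                           #   stationary distribution d_k
    v_next = Tv + gamma / (1.0 - gamma) * np.dot(d, Tv - v) * np.ones_like(v)
    return v_next, d
```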

3. Streaming, Factorization, and Spectral Algorithms

SVD and Eigenvalue Updates

Rank-one perturbations admit explicit secular equations for the updated singular values and eigenvalues. Fast multipole methods accelerate the computation of singular vectors post-perturbation by exploiting the Cauchy-matrix structure of the arising systems (Gandhi et al., 2017). For symmetric matrices, partial spectrum–aware secular approximations yield controlled error bounds for updating top eigenpairs, impacting large-scale graph spectral methods and dynamic learning scenarios (Mitz et al., 2017).
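
As a toy illustration of the secular-equation route (plain root finding with scipy, not the fast-multipole or partial-spectrum schemes cited above), the eigenvalues of $D + \rho z z^\top$ can be bracketed by interlacing and solved one interval at a time:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
n = 6
d = np.sort(rng.standard_normal(n))   # diagonal of D, assumed distinct
z = rng.standard_normal(n)            # update direction, assumed dense
rho = 0.7                             # positive scaling of the rank-one term

def secular(lam):
    return 1.0 + rho * np.sum(z**2 / (d - lam))

# Interlacing (rho > 0): the i-th updated eigenvalue lies in (d[i], d[i+1]),
# the largest in (d[-1], d[-1] + rho * ||z||^2).
upper = np.append(d[1:], d[-1] + rho * (z @ z))
eps = 1e-10
lams = np.array([
    brentq(secular, d[i] + eps, upper[i] - (eps if i < n - 1 else 0.0))
    for i in range(n)
])

assert np.allclose(lams, np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z)), atol=1e-6)
```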

Subspace Tracking and Geodesic Updates

Subspace updates after a rank-one modification can be expressed as movement along a Grassmannian geodesic. The closed-form expression for the updated orthonormal basis and factorization enables optimal subspace tracking in $O(np)$ time (versus $O(np^2)$ for classical SVD/QR updates), which is critical for online learning and active-set methods (Zimmermann, 2017). The principal-angle distance between subspaces after a rank-one change is computable without further decomposition.
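
For reference, the generic SVD-based principal-angle computation between two orthonormal bases is a few lines of numpy; this is the baseline that the closed-form rank-one result avoids, not the geodesic update itself:

```python
import numpy as np

def principal_angles(Q1, Q2):
    """Principal angles between the column spans of orthonormal bases Q1 and Q2."""
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s, 0.0, 1.0))
```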

Tensor Decomposition

Rank-one tensor updates play a central role in best rank-one CPD approximations and parallel all-at-once update algorithms. For small tensors, closed-form Levenberg–Marquardt or rotational polynomial solves are available; for higher-order tensors, parallelized updates (e.g., PARO) allow simultaneous best rank-one fitting, enabling highly efficient distributed CPD (Phan et al., 2017).
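
An illustrative alternating least-squares baseline for the best rank-one fit of a 3-way tensor (plain numpy, sequential updates; not the closed-form or parallel PARO schemes of Phan et al., 2017):

```python
import numpy as np

def best_rank_one(T, iters=100):
    """ALS for the best rank-one approximation of a 3-way tensor T;
    returns the weight lam and unit-norm factors a, b, c."""
    a = np.linalg.svd(T.reshape(T.shape[0], -1), full_matrices=False)[0][:, 0]
    b = np.ones(T.shape[1]) / np.sqrt(T.shape[1])
    c = np.ones(T.shape[2]) / np.sqrt(T.shape[2])
    for _ in range(iters):
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c
```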

4. Stochastic Search, Evolution Strategies, and Covariance Adaptation

Rank-one updates are foundational in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and its variants. The canonical CMA-ES update scheme mixes a rank-one evolution-path outer product with a rank-$\mu$ sample update:

$$C_{t+1} = (1 - c_1 - c_\mu)\,C_t + c_1\, p_{t+1} p_{t+1}^\top + c_\mu \sum_i w_i\, y_{i:\lambda} y_{i:\lambda}^\top$$

Recent theoretical work interprets the rank-one term as a natural gradient ascent step in the MAP-IGO framework, with prior distributions shaped to align the empirical search distribution with the accumulated evolution path. This demystifies the practical efficacy of the rank-one component, reveals a principled relationship between prior strength and adaptation aggressiveness, and exposes tunable "momentum" effects in the mean update (Hamano et al., 24 Jun 2024). Algorithmic variants, such as mutation-matrix adaptation ES (MMA-ES), replace explicit matrix decompositions by direct rank-one path-based updates, achieving $O(n^2)$ cost with strong invariance and empirical performance (Li et al., 2017).
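
A simplified sketch of that covariance update (the constants `c1` and `cmu` and the handling of the evolution path are illustrative; a complete CMA-ES also adapts the step size and the path itself):

```python
import numpy as np

def cma_cov_update(C, p, y, w, c1=0.1, cmu=0.2):
    """Mix the rank-one evolution-path term with the rank-mu sample term.
    C: (n, n) covariance, p: (n,) evolution path,
    y: (lam, n) normalized offspring steps, w: (lam,) recombination weights."""
    rank_mu = sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y))
    return (1.0 - c1 - cmu) * C + c1 * np.outer(p, p) + cmu * rank_mu
```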

5. Algorithmic Design: Efficiency and Structure Preservation

The essential driver of rank-one updates is computational efficiency in high-dimensional or dynamically evolving models:

  • Low storage: Only the perturbation vectors (or directions) need to be stored, avoiding full recomputation.
  • Online and distributed scenarios: Rank-one updates support streaming, mini-batch, or dynamic changes (e.g., addition/removal of constraints, new observations).
  • Property preservation: Many updates can be precisely controlled to maintain invertibility, symmetry, or positive-definiteness with known analytic formulas.
  • Compatibility with factorization: Algorithms such as active set SVM solvers exploit Cholesky or QR rank-one updates to efficiently track constraints without repeatedly refactoring the Hessian, achieving high-accuracy solutions inaccessible to SMO-type coordinate methods (Jarre, 13 Mar 2025).
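
As an example of the factorization point above, a textbook $O(n^2)$ Cholesky rank-one update fits in a few lines of numpy (a plain sketch, not the solver-specific routine of Jarre, 13 Mar 2025):

```python
import numpy as np

def chol_rank_one_update(L, x):
    """Given lower-triangular L with A = L L^T, return L' with L' L'^T = A + x x^T."""
    L, x = L.copy(), x.copy()
    n = len(x)
    for k in range(n):
        r = np.hypot(L[k, k], x[k])        # updated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```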

The modularity of rank-one methods enables their use as building blocks for higher-rank schemes, e.g., in quasi-Newton updates, low-rank matrix completion, or in the alternation of fixed-rank and rank-one updates in Riccati equation solvers (Mishra et al., 2013). In all these domains, the inexpensive yet structural nature of rank-one increments is paramount.

6. Empirical Performance and Applications

Comprehensive empirical investigations substantiate the theoretical benefits of rank-one updates. Notable findings include:

  • R1-VI reduces the iteration count (given $\gamma = 0.99$) by $40\times$ over VI and $10$–$20\times$ over Nesterov- or Anderson-accelerated variants (Kolarijani et al., 3 May 2025).
  • OR1MP/EOR1MP reaches MovieLens10M-scale performance (RMSE $\approx 0.86$) in tens of steps and minutes, while SVD-based methods require at least an order of magnitude longer (Wang et al., 2014).
  • SR1 with cubic regularization achieves faster decrease in training loss and higher test accuracy than all tested first-order and L-BFGS methods on standard deep learning benchmarks (Ranganath et al., 17 Feb 2025).
  • CMA-ES variants with simple path-based rank-one updates remain invariant to transformation and achieve robust adaptation even in badly conditioned or non-separable problems (Li et al., 2017).
  • Active-set SVM methods (CMU) secured dual optimality (KKT residual $< 10^{-11}$) in seconds where SMO stagnated at low accuracy, improving classification quality on ill-conditioned datasets (Jarre, 13 Mar 2025).

The empirical data demonstrate that appropriately crafted rank-one-update algorithms consistently outperform both classical and "accelerated" first-order methods in iteration complexity, wall-clock time, and solution quality, particularly in high condition number regimes or when structural adaptation is critical.

7. Broader Impact and Outlook

Rank-one updates are an indispensable toolset in numerical linear algebra, optimization, learning theory, and statistical modeling. Their analytical simplicity, structural tractability, and computational efficiency drive rapid convergence, enable online and distributed modeling, facilitate property-preserving optimization in deep learning, and support scalable, high-accuracy solutions in large-scale convex and nonconvex problems. Ongoing research focuses on:

  • Exploiting information-geometric and natural-gradient perspectives to further unify stochastic search and estimation methods.
  • Developing more effective schemes for online, streaming, and distributed environments.
  • Integrating rank-one paradigms into property-preserving deep network layers and more general manifold-constrained learning.

Recent advances continually reaffirm the centrality of rank-one update principles for both theoretical design and practical algorithmic performance across disciplines (Kolarijani et al., 3 May 2025, Ranganath et al., 17 Feb 2025, Wang et al., 2014, Hamano et al., 24 Jun 2024, Mitz et al., 2017, Jarre, 13 Mar 2025, Krämer et al., 2020).
