
Orthogonal Rank-One Matrix Pursuit (OR1MP)

  • The paper introduces OR1MP, a greedy algorithm that iteratively selects rank-one atoms and globally re-optimizes weights for low-rank matrix approximation.
  • It delivers provable linear convergence with robust performance in large-scale applications like collaborative filtering and image inpainting.
  • Economic OR1MP (EOR1MP) reduces storage and computation by updating incrementally with only the latest atom and the previous approximation.

Orthogonal Rank-One Matrix Pursuit (OR1MP) is a greedy, iterative framework for low-rank matrix completion, extending the core principles of Orthogonal Matching Pursuit (OMP) from sparse vector approximation to the matrix setting. The key idea is to build the target low-rank matrix as a sum of rank-one “atoms,” each selected to best explain the current residual, with global coefficient re-optimization in each iteration. The method offers efficiency, scalability, and a single tunable parameter—the target rank—while delivering provable linear convergence guarantees and robust empirical performance in large-scale matrix completion tasks such as collaborative filtering and image inpainting (Wang et al., 2014).

1. Problem Formulation and Theoretical Foundations

The matrix completion problem addressed by OR1MP considers a matrix $Y \in \mathbb{R}^{n \times m}$ that is only partially observed on an index set $\Omega \subset \{1, \dots, n\} \times \{1, \dots, m\}$. The objective is to find a low-rank matrix $X$ whose observed entries match those of $Y$:

$$\min_{X} \ \operatorname{rank}(X) \quad \text{subject to} \quad P_{\Omega}(X) = P_{\Omega}(Y),$$

where $P_\Omega$ denotes the projection operator that preserves entries on $\Omega$ and zeros elsewhere. OR1MP generalizes OMP—traditionally used for vector sparsity—by pursuing a greedy expansion of $X$ in terms of rank-one matrices $M = uv^\top$ (with $\|M\|_F = 1$) instead of scalar vector coordinates.
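To fix ideas, $P_\Omega$ is just entrywise masking. Here is a minimal NumPy sketch, assuming a boolean mask `omega` marking the observed entries (all names are illustrative):

```python
import numpy as np

def P_omega(X: np.ndarray, omega: np.ndarray) -> np.ndarray:
    """Projection onto the observed set: keep entries where omega is True, zero the rest."""
    return np.where(omega, X, 0.0)

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 4))
omega = rng.random((5, 4)) < 0.5   # roughly half the entries observed
assert np.all(P_omega(Y, omega)[~omega] == 0.0)
```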

2. Core Algorithm: Iterative Greedy Pursuit

OR1MP proceeds for $K$ iterations (or until a prescribed residual tolerance $\varepsilon$ is reached):

  • Initialization: $X_0 = 0$, $R_0 = P_\Omega(Y)$.
  • At iteration $k$ ($k = 1, \dots, K$):

1. Atom Selection: Identify the leading left/right singular vectors $(u_k, v_k)$ of the residual $R_{k-1}$ by solving:

$$(u_k, v_k) = \arg\max_{\|u\|_2 = \|v\|_2 = 1} \ u^\top R_{k-1} v,$$

yielding the rank-one atom $M_k = u_k v_k^\top$.

2. Global Weight Update: Solve the least-squares problem over weights $\theta = (\theta_1, \dots, \theta_k)$:

$$\theta^k = \arg\min_{\theta \in \mathbb{R}^k} \Big\| \sum_{i=1}^{k} \theta_i \, P_\Omega(M_i) - P_\Omega(Y) \Big\|_F^2,$$

with the closed-form solution $\theta^k = (\bar{M}_k^\top \bar{M}_k)^{-1} \bar{M}_k^\top \bar{y}$, where:

$$\bar{M}_k = \big[\operatorname{vec}(P_\Omega(M_1)), \dots, \operatorname{vec}(P_\Omega(M_k))\big],$$

$$\bar{y} = \operatorname{vec}(P_\Omega(Y)).$$

3. Approximation Update: Form

$$X_k = \sum_{i=1}^{k} \theta_i^k M_i, \qquad R_k = P_\Omega(Y) - P_\Omega(X_k).$$

The process continues until the target rank is reached or the residual norm falls below tolerance; a minimal code sketch follows.
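The loop above condenses into a short dense NumPy sketch. This is an illustrative implementation rather than the paper's code: it uses a full SVD for atom selection for clarity, whereas the large-scale variant uses power iterations on a sparse residual.

```python
import numpy as np

def or1mp(Y: np.ndarray, omega: np.ndarray, K: int) -> np.ndarray:
    """Greedy rank-one pursuit: rank-K estimate of Y from the entries on omega."""
    y_obs = np.where(omega, Y, 0.0)               # P_Omega(Y)
    R = y_obs.copy()                              # residual R_0
    atoms = []                                    # rank-one atoms M_1, ..., M_k
    X = np.zeros_like(Y)
    for _ in range(K):
        # 1. Atom selection: leading singular pair of the current residual.
        U, _, Vt = np.linalg.svd(R, full_matrices=False)
        atoms.append(np.outer(U[:, 0], Vt[0]))
        # 2. Global weight update: least squares over observed entries only.
        A = np.column_stack([np.where(omega, M, 0.0).ravel() for M in atoms])
        theta, *_ = np.linalg.lstsq(A, y_obs.ravel(), rcond=None)
        # 3. Approximation and residual update.
        X = sum(t * M for t, M in zip(theta, atoms))
        R = y_obs - np.where(omega, X, 0.0)
    return X

# Usage on a synthetic rank-3 completion problem.
rng = np.random.default_rng(1)
Y_true = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
mask = rng.random(Y_true.shape) < 0.6
X_hat = or1mp(Y_true, mask, K=3)
```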

3. Economic OR1MP (EOR1MP): Low-Storage Variant

To address the storage and computational bottlenecks of growing atom sets and weight systems, Economic OR1MP (EOR1MP) introduces an incremental update rule:

  • At each iteration, retain only the current estimate $X_{k-1}$ and the new atom $M_k$.
  • Solve a two-variable least-squares problem:

    $$(\alpha_k, \beta_k) = \arg\min_{\alpha, \beta} \big\| P_\Omega(Y) - \alpha\, P_\Omega(X_{k-1}) - \beta\, P_\Omega(M_k) \big\|_F^2.$$

  • Update:

    $$X_k = \alpha_k X_{k-1} + \beta_k M_k.$$

  • Previous weights are updated implicitly as $\theta_i^k = \alpha_k \theta_i^{k-1}$ for $i < k$, with $\theta_k^k = \beta_k$.

This scheme maintains storage at $O(|\Omega|)$ and reduces per-iteration complexity.
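A matching sketch of the economic update, under the same illustrative setup as the OR1MP sketch above; only the running estimate and the newest atom enter the solve, so no $k \times k$ system is ever formed.

```python
import numpy as np

def eor1mp(Y: np.ndarray, omega: np.ndarray, K: int) -> np.ndarray:
    """Economic variant: two-variable refit of (previous estimate, newest atom)."""
    y_obs = np.where(omega, Y, 0.0)
    X = np.zeros_like(Y)                          # X_0
    R = y_obs.copy()                              # R_0
    for _ in range(K):
        U, _, Vt = np.linalg.svd(R, full_matrices=False)
        M = np.outer(U[:, 0], Vt[0])              # newest rank-one atom M_k
        # Two-variable least squares; the zero first column at k = 1 makes the
        # system rank-deficient, and lstsq returns the minimum-norm solution.
        A = np.column_stack([np.where(omega, X, 0.0).ravel(),
                             np.where(omega, M, 0.0).ravel()])
        (alpha, beta), *_ = np.linalg.lstsq(A, y_obs.ravel(), rcond=None)
        X = alpha * X + beta * M                  # X_k = alpha_k X_{k-1} + beta_k M_k
        R = y_obs - np.where(omega, X, 0.0)
    return X
```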

4. Geometric Rank-One Updates and Subspace Interpretation

Each OR1MP iteration can be interpreted as an orthogonal rank-one update of a low-rank matrix factorization. Given $Y = QX$ with $Q \in \mathbb{R}^{n \times p}$ column-orthogonal and $X \in \mathbb{R}^{p \times p}$ invertible, the addition of a rank-one perturbation $ab^\top$ yields a new factorization $Y + ab^\top = \tilde{Q}\tilde{X}$, computable in closed form (Zimmermann, 2017):

  • Project and normalize $a$: $a_\perp = (I - QQ^\top)a$, $\hat{a} = a_\perp / \|a_\perp\|_2$.
  • Dual direction: $v = X^{-\top} b$, $\hat{v} = v / \|v\|_2$.
  • Scalars: $\sigma = 1 + (Q^\top a)^\top v$, $\mu = \|v\|_2\,\|a_\perp\|_2$, $\rho = \sqrt{\sigma^2 + \mu^2}$.
  • Parameters:

    $$\cos\theta = \frac{\sigma}{\rho}, \qquad \sin\theta = \frac{\mu}{\rho}.$$

  • Update:

    $$\tilde{Q} = Q + \big((\cos\theta - 1)\, Q\hat{v} + \sin\theta\, \hat{a}\big)\,\hat{v}^\top,$$

    $$\tilde{X} = \tilde{Q}^\top (Y + ab^\top),$$

    with

    $$\theta = \arccos\!\left(\frac{\sigma}{\rho}\right)$$

    the single principal angle between the old and updated column spaces.

This update corresponds to a geodesic move on the Grassmann manifold $\mathrm{Gr}(p, n)$, and the subspace distance is computable in closed form via the principal angle associated with the update.
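The following NumPy sketch instantiates an update of this form and checks its two defining properties: orthonormal columns and a preserved column space. The formulas mirror the reconstruction above rather than quoting Zimmermann (2017) verbatim, and every name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))     # column-orthogonal factor
Xf = rng.standard_normal((p, p))                     # invertible coefficient factor
a, b = rng.standard_normal(n), rng.standard_normal(p)

v = np.linalg.solve(Xf.T, b)                         # dual direction v = Xf^{-T} b
v_hat = v / np.linalg.norm(v)
a_perp = a - Q @ (Q.T @ a)                           # component of a outside ran(Q)
a_hat = a_perp / np.linalg.norm(a_perp)

sigma = 1.0 + (Q.T @ a) @ v
mu = np.linalg.norm(v) * np.linalg.norm(a_perp)
rho = np.hypot(sigma, mu)
c, s = sigma / rho, mu / rho                         # cos / sin of the principal angle

# Rank-one update of the orthogonal factor: a single-plane (geodesic) rotation.
Q_new = Q + np.outer((c - 1.0) * (Q @ v_hat) + s * a_hat, v_hat)
Y_new = Q @ Xf + np.outer(a, b)                      # the rank-one modified matrix

print(np.allclose(Q_new.T @ Q_new, np.eye(p)))       # True: columns stay orthonormal
print(np.allclose(Q_new @ (Q_new.T @ Y_new), Y_new)) # True: ran(Q_new) = ran(Y_new)
```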

5. Computational Complexity and Implementation

  • Atom selection: The leading singular-vector pair of the residual (supported on $\Omega$) is computed via power iterations in $O(|\Omega|)$ per sweep; see the sketch after this list.
  • Weight update in OR1MP: Solving the $k \times k$ normal-equation system requires $O(k|\Omega| + k^2)$ work per iteration and $O(k|\Omega|)$ storage for the atoms restricted to $\Omega$.
  • EOR1MP update: Only $O(|\Omega|)$ flops for the two-variable update, plus elementary scalar operations.
  • Full orthogonal update: The geometric rank-one factorization update has asymptotic cost $O(np)$ per iteration, never requiring SVDs or QR factorizations of large matrices (Zimmermann, 2017).
  • Tunable parameter: Only the rank $K$ or desired tolerance $\varepsilon$; no step-size or regularization parameter.
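As a sketch of the first bullet, power iteration on a SciPy sparse residual touches only the stored entries, so each sweep costs $O(|\Omega|)$; the function name and iteration count are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def top_singular_pair(R, iters: int = 30, seed: int = 0):
    """Approximate the leading singular triple (u, s, v) of a sparse residual."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(R.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = R @ v                          # one O(|Omega|) pass over stored entries
        u /= np.linalg.norm(u)
        v = R.T @ u                        # second O(|Omega|) pass
        s = np.linalg.norm(v)
        v /= s
    return u, s, v

R = sp.random(1000, 800, density=0.01, format="csr", random_state=0)
u, s, v = top_singular_pair(R)
```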

6. Theoretical Guarantees

OR1MP and EOR1MP admit provable linear convergence:

$$\|R_k\|_F \le \left(1 - \frac{1}{\min(m, n)}\right)^{k/2} \|P_\Omega(Y)\|_F.$$

This rate arises from three key properties: (a) after weight re-optimization the residual is orthogonal to all selected atoms, (b) the residual norm strictly decreases, and (c) the largest singular value of the residual lower-bounds the per-step decrease. Empirical convergence traces (log-residual vs. iteration) confirm this geometric decay rate (Wang et al., 2014).
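The chain behind this rate is short. Because the re-optimized weights make each step at least as good as a pure greedy update, the squared residual norm drops by at least $\sigma_1(R_{k-1})^2$, and since $\sigma_1(R)^2 \ge \|R\|_F^2 / \operatorname{rank}(R) \ge \|R\|_F^2 / \min(m,n)$,

$$\|R_k\|_F^2 \;\le\; \|R_{k-1}\|_F^2 - \sigma_1(R_{k-1})^2 \;\le\; \left(1 - \frac{1}{\min(m,n)}\right) \|R_{k-1}\|_F^2,$$

which telescopes to the geometric bound above.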

7. Empirical Performance and Applications

OR1MP and EOR1MP have been benchmarked on large-scale recommendation datasets (Netflix, MovieLens) and image inpainting tasks:

| Application | Dataset | Rank / Iterations | Time (s) | Performance |
| --- | --- | --- | --- | --- |
| Image inpainting | 512×512 images | few iterations | — | ≈ 28 dB PSNR |
| Collaborative filtering | Netflix | — | — | competitive RMSE |
| Scalability / robustness | large, sparse matrices | — | seconds | linear convergence |

In these and other settings, OR1MP/EOR1MP are significantly more efficient than competing methods such as SVT, SVP, SoftImpute, JS, GECO, and Boost, with comparable or superior accuracy and scalability. EOR1MP, in particular, offers orders-of-magnitude speedups while retaining accuracy and convergence guarantees (Wang et al., 2014).

8. Broader Context and Geometric Insights

Each OR1MP iteration traverses a geodesic on the Grassmann manifold of $k$-dimensional subspaces, corresponding to rank-one subspace augmentation. The closed-form update formulas enable principled incremental updates of any orthogonal matrix factorization under rank-one modifications. This geometric foundation ensures both computational efficiency and the interpretability of subspace evolution. The associated subspace distance between the current and updated models can be efficiently computed without extra SVD or QR operations (Zimmermann, 2017).

In summary, OR1MP and its economic variant leverage greedy top-SVD atom selection and efficient residual projections to deliver scalable, theoretically grounded, and empirically robust solutions for low-rank matrix completion—all driven by a single tunable rank parameter and grounded in geometric matrix analysis.
