Orthogonal Rank-One Matrix Pursuit (OR1MP)
- The paper introduces OR1MP, a greedy algorithm that iteratively selects rank-one atoms and globally re-optimizes weights for low-rank matrix approximation.
- It delivers provable linear convergence with robust performance in large-scale applications like collaborative filtering and image inpainting.
- Economic OR1MP optimizes storage and computation by incrementally updating with only the latest atom and previous approximation.
Orthogonal Rank-One Matrix Pursuit (OR1MP) is a greedy, iterative framework for low-rank matrix completion, extending the core principles of Orthogonal Matching Pursuit (OMP) from sparse vector approximation to the matrix setting. The key idea is to build the target low-rank matrix as a sum of rank-one “atoms,” each selected to best explain the current residual, with global coefficient re-optimization in each iteration. The method offers efficiency, scalability, and a single tunable parameter—the target rank—while delivering provable linear convergence guarantees and robust empirical performance in large-scale matrix completion tasks such as collaborative filtering and image inpainting (Wang et al., 2014).
1. Problem Formulation and Theoretical Foundations
The matrix completion problem addressed by OR1MP considers a matrix $Y \in \mathbb{R}^{m \times n}$ that is only partially observed on an index set $\Omega$. The objective is to find a low-rank matrix $X$ whose observed entries match those of $Y$:
$$\min_{X} \; \|P_\Omega(X) - P_\Omega(Y)\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(X) \le r,$$
where $P_\Omega$ denotes the projection operator that preserves entries on $\Omega$ and zeros elsewhere. OR1MP generalizes OMP—traditionally used for vector sparsity—by pursuing a greedy expansion of $X$ in terms of rank-one matrices $M_i = u_i v_i^\top$ (with $\|u_i\| = \|v_i\| = 1$) instead of scalar vector coordinates.
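As a concrete illustration, $P_\Omega$ amounts to elementwise masking. A minimal NumPy sketch, where `proj_omega` and the example mask are illustrative names rather than anything from the paper:

```python
import numpy as np

def proj_omega(X, mask):
    """P_Omega: keep entries where mask is True, zero everywhere else."""
    return np.where(mask, X, 0.0)

# Example: a 3x3 matrix observed on four entries.
Y = np.arange(9.0).reshape(3, 3)
mask = np.array([[True, False, False],
                 [False, True, True],
                 [False, False, True]])
P = proj_omega(Y, mask)
```

Only the masked entries of `Y` survive in `P`; the pursuit below operates entirely on such projected quantities.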
2. Core Algorithm: Iterative Greedy Pursuit
OR1MP proceeds for $r$ iterations (or until a prescribed residual tolerance $\varepsilon$ is reached):
- Initialization: $X_0 = 0$, $R_0 = P_\Omega(Y)$.
- At iteration $k$ ($k = 1, \dots, r$):
1. Atom Selection: Identify the leading left/right singular vectors $(u_k, v_k)$ of the residual $R_{k-1}$ by solving:
$$(u_k, v_k) = \arg\max_{\|u\| = \|v\| = 1} \; u^\top R_{k-1} v,$$
yielding the rank-one atom $M_k = u_k v_k^\top$.
2. Global Weight Update: Solve the least-squares problem over weights $\theta \in \mathbb{R}^k$:
$$\theta^k = \arg\min_{\theta} \Big\| \sum_{i=1}^{k} \theta_i \, P_\Omega(M_i) - P_\Omega(Y) \Big\|_F^2,$$
with the closed-form solution $\theta^k = (\bar{M}_k^\top \bar{M}_k)^{-1} \bar{M}_k^\top \bar{y}$, where:
$$\bar{M}_k = \big[\operatorname{vec}(P_\Omega(M_1)), \dots, \operatorname{vec}(P_\Omega(M_k))\big],$$
$$\bar{y} = \operatorname{vec}(P_\Omega(Y)).$$
3. Approximation Update: Form
$$X_k = \sum_{i=1}^{k} \theta_i^k M_i, \qquad R_k = P_\Omega(Y) - P_\Omega(X_k).$$
The process continues until the target rank is reached or the residual norm falls below tolerance.
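The loop above can be sketched end-to-end in NumPy. This is a readability-first sketch, not the authors' implementation: `or1mp` and its interface are assumed names, and a dense truncated SVD stands in for the power iterations used in practice on large sparse residuals.

```python
import numpy as np

def or1mp(Y, mask, rank, tol=1e-8):
    """Sketch of OR1MP: greedy rank-one pursuit with a global
    least-squares refit of all atom weights at every step."""
    y_obs = Y[mask]                       # vec(P_Omega(Y))
    atoms_obs = []                        # columns vec(P_Omega(M_i))
    atoms = []                            # rank-one atoms M_i = u v^T
    X = np.zeros_like(Y)
    theta = np.array([])
    residual = np.where(mask, Y, 0.0)     # R_0 = P_Omega(Y)
    for _ in range(rank):
        # Atom selection: leading singular pair of the residual.
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        Mk = np.outer(u[:, 0], vt[0])
        atoms.append(Mk)
        atoms_obs.append(Mk[mask])
        # Global weight update: refit all weights by least squares.
        A = np.column_stack(atoms_obs)
        theta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
        X = sum(t * M for t, M in zip(theta, atoms))
        residual = np.where(mask, Y - X, 0.0)
        if np.linalg.norm(residual) < tol:
            break
    return X, theta

# Demo on a synthetic rank-2 matrix observed at ~70% of its entries.
rng = np.random.default_rng(0)
Y = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(Y.shape) < 0.7
X, theta = or1mp(Y, mask, rank=5)
```

On this synthetic example the observed residual shrinks with every added atom, since each global refit can only improve the least-squares fit.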
3. Economic OR1MP (EOR1MP): Low-Storage Variant
To address the storage and computational bottlenecks of growing atom sets and weight systems, Economic OR1MP (EOR1MP) introduces an incremental update rule:
- At each iteration, retain only the current approximation $X_{k-1}$ and the new atom $M_k$.
- Solve a two-variable least-squares problem:
$$(\alpha_1, \alpha_2) = \arg\min_{\alpha_1, \alpha_2} \big\| \alpha_1 P_\Omega(X_{k-1}) + \alpha_2 P_\Omega(M_k) - P_\Omega(Y) \big\|_F^2.$$
- Update:
$$X_k = \alpha_1 X_{k-1} + \alpha_2 M_k.$$
- Previous weights are implicitly rescaled as $\theta_i^k = \alpha_1 \theta_i^{k-1}$ for $i < k$, with $\theta_k^k = \alpha_2$.
This scheme maintains storage at $O(|\Omega|)$ and reduces per-iteration complexity.
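A minimal NumPy sketch of this two-variable refit; the function name and interface are illustrative, and a dense SVD again stands in for the power iterations used on large problems.

```python
import numpy as np

def eor1mp(Y, mask, rank):
    """Sketch of Economic OR1MP: only the running approximation and
    the newest atom enter the least-squares refit (two unknowns)."""
    y_obs = Y[mask]
    X = np.zeros_like(Y)
    for _ in range(rank):
        residual = np.where(mask, Y - X, 0.0)
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        Mk = np.outer(u[:, 0], vt[0])
        # Two-column least squares:
        # minimize || alpha1 * P_Omega(X) + alpha2 * P_Omega(Mk) - P_Omega(Y) ||
        A = np.column_stack([X[mask], Mk[mask]])
        alpha, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
        X = alpha[0] * X + alpha[1] * Mk
    return X

# Demo on a synthetic rank-2 matrix observed at ~80% of its entries.
rng = np.random.default_rng(1)
Y = rng.standard_normal((15, 2)) @ rng.standard_normal((2, 12))
mask = rng.random(Y.shape) < 0.8
X = eor1mp(Y, mask, rank=4)
```

Because the refit optimizes over both the old approximation and the new atom, each step decreases the observed residual at least as much as a single-coefficient fit of the new atom would.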
4. Geometric Rank-One Updates and Subspace Interpretation
Each OR1MP iteration can be interpreted as an orthogonal rank-one update of a low-rank matrix factorization. Given $A = QB$ with $Q \in \mathbb{R}^{m \times k}$ column-orthogonal and $B \in \mathbb{R}^{k \times n}$, the addition of a rank-one perturbation $pq^\top$ yields a new factorization $A + pq^\top = Q_+ B_+$, computable in closed form (Zimmermann, 2017):
- Project and normalize $p$: $p_\parallel = Q^\top p$, $p_\perp = p - Q p_\parallel$, $\hat{p}_\perp = p_\perp / \|p_\perp\|$.
- Augmented factorization:
$$A + pq^\top = \begin{bmatrix} Q & \hat{p}_\perp \end{bmatrix} \begin{bmatrix} B + p_\parallel q^\top \\ \|p_\perp\|\, q^\top \end{bmatrix} =: Q_+ B_+.$$
- Restoring the desired orthogonal form of the factors then requires only a rotation in the two-dimensional plane spanned by one direction inside $\operatorname{span}(Q)$ and $\hat{p}_\perp$, with the rotation angle given by the single principal angle between the old and new subspaces.
This update corresponds to a geodesic move on the Grassmann manifold $\operatorname{Gr}(k, m)$, and the subspace distance is computable in closed form via the principal angle associated with the update.
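The augmentation step underlying such updates can be checked numerically. The sketch below verifies, with illustrative names, that splitting the perturbation direction into its component in $\operatorname{span}(Q)$ and an orthonormal remainder reproduces the perturbed matrix exactly while keeping the orthogonal factor orthonormal; the closed-form rotation coefficients of Zimmermann (2017) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 8, 6, 3

# Existing factorization A = Q @ B with Q column-orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((m, k)))
B = rng.standard_normal((k, n))
A = Q @ B

# Rank-one perturbation p q^T.
p = rng.standard_normal(m)
q = rng.standard_normal(n)

# Split p into its component in span(Q) and the orthogonal remainder.
p_par = Q.T @ p                # coordinates in span(Q)
p_perp = p - Q @ p_par         # component orthogonal to span(Q)
rho = np.linalg.norm(p_perp)
q_hat = p_perp / rho           # new orthonormal direction

# Augmented factorization:
# A + p q^T = [Q, q_hat] @ [[B + p_par q^T], [rho * q^T]]
Q_new = np.column_stack([Q, q_hat])
B_new = np.vstack([B + np.outer(p_par, q), rho * q[None, :]])

assert np.allclose(Q_new @ B_new, A + np.outer(p, q))
assert np.allclose(Q_new.T @ Q_new, np.eye(k + 1))
```

The two assertions confirm the factorization identity and the orthonormality of the augmented factor, without ever forming an SVD or QR of the full matrix.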
5. Computational Complexity and Implementation
- Atom selection: The leading singular-vector pair of the residual (supported on $\Omega$) is computed via power iterations, each costing $O(|\Omega|)$ flops (two sparse matrix–vector products).
- Weight update in OR1MP: Solving the $k \times k$ normal-equations system takes $O(k|\Omega|)$ work per iteration when the Gram matrix is updated incrementally, with $O(k|\Omega|)$ storage for the restricted atoms.
- EOR1MP update: Only $O(|\Omega|)$ flops for the two-variable least-squares update, plus elementary scalar operations.
- Full orthogonal update: The geometric rank-one factorization update has asymptotic cost linear in $m + n$ per iteration, never requiring SVDs of size $m \times n$ or QR factorizations of large matrices (Zimmermann, 2017).
- Tunable parameter: Only the target rank $r$ or the desired tolerance $\varepsilon$; no step size or regularization parameter.
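The $O(|\Omega|)$-per-step atom selection can be sketched with SciPy sparse matrices; `top_singular_pair` is an illustrative helper, and the fixed iteration count is a crude budget rather than a tuned stopping rule.

```python
import numpy as np
from scipy.sparse import random as sprandom

def top_singular_pair(R, iters=50):
    """Power iteration for the leading singular pair of a sparse
    residual. Each step costs two sparse mat-vecs, i.e. O(|Omega|)."""
    m, n = R.shape
    v = np.random.default_rng(3).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = R @ v
        u /= np.linalg.norm(u)
        v = R.T @ u
        v /= np.linalg.norm(v)
    sigma = float(u @ (R @ v))    # Rayleigh-quotient estimate of sigma_1
    return u, sigma, v

# A sparse stand-in for the residual supported on Omega.
R = sprandom(200, 150, density=0.05, random_state=7, format="csr")
u, sigma, v = top_singular_pair(R)
```

Only the nonzeros of `R` are ever touched, which is why atom selection scales with $|\Omega|$ rather than with $mn$.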
6. Theoretical Guarantees
OR1MP and EOR1MP admit provable linear convergence: the residual satisfies
$$\|R_k\|_F \le \gamma^{k} \, \|P_\Omega(Y)\|_F$$
for a constant $\gamma \in [0, 1)$ that depends only on the matrix dimensions. This rate arises from three key properties: (a) after weight re-optimization the residual is orthogonal to all selected atoms, (b) the residual norm strictly decreases at every iteration, and (c) the largest singular value of the residual lower-bounds the per-iteration decrease. Empirical convergence traces (log-residual vs. iteration) confirm this geometric decay rate (Wang et al., 2014).
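Property (a) is just the normal-equations condition of the least-squares refit, which a few lines of NumPy can verify on stand-in data (the matrices here are random placeholders for the vectorized atoms and observations):

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-ins: columns of A play vec(P_Omega(M_i)), y plays vec(P_Omega(Y)).
A = rng.standard_normal((50, 4))   # 4 selected atoms, 50 observed entries
y = rng.standard_normal(50)

theta, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ theta

# Normal equations: the refit residual is orthogonal to every atom.
assert np.allclose(A.T @ residual, 0.0, atol=1e-10)
```

This orthogonality is what prevents the pursuit from re-selecting directions it has already explained, driving the strict residual decrease in (b).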
7. Empirical Performance and Applications
OR1MP and EOR1MP have been benchmarked on large-scale recommendation datasets (Netflix, MovieLens) and image inpainting tasks:
| Application | Dataset | Rank / Iterations | Time | Performance |
|---|---|---|---|---|
| Image inpainting | 512×512 images | Low rank, few iterations | Seconds | High-PSNR reconstructions |
| Collaborative filtering | Netflix, MovieLens | Low target rank | Minutes on Netflix | Competitive RMSE |
| Scalability / robustness | Large sparse matrices | Up to the target rank | Seconds to minutes | Geometric residual decay |
In these and other settings, OR1MP/EOR1MP are significantly more efficient than competing methods such as SVT, SVP, SoftImpute, JS, GECO, and Boost, with comparable or superior accuracy and scalability. EOR1MP, in particular, offers orders-of-magnitude speedups while retaining accuracy and convergence guarantees (Wang et al., 2014).
8. Broader Context and Geometric Insights
Each OR1MP iteration traverses a geodesic on the Grassmann manifold of $k$-dimensional subspaces, corresponding to rank-one subspace augmentation. The closed-form update formulas enable principled incremental updates of any orthogonal matrix factorization under rank-one modifications. This geometric foundation ensures both computational efficiency and the interpretability of subspace evolution. The associated subspace distance between the current and updated models can be efficiently computed without extra SVD or QR operations (Zimmermann, 2017).
In summary, OR1MP and its economic variant leverage greedy top-SVD atom selection and efficient residual projections to deliver scalable, theoretically grounded, and empirically robust solutions for low-rank matrix completion—all driven by a single tunable rank parameter and grounded in geometric matrix analysis.