
Rank-One Matrix Completion

Updated 15 November 2025
  • Rank-One Matrix Completion (R1MC) is the problem of recovering a rank-one matrix from a subset of observed entries, pivotal in applications such as recommendation systems and channel estimation.
  • It employs techniques such as alternating minimization, greedy pursuits, and convex relaxations, balancing computational efficiency with robustness to noise and adversaries.
  • Theoretical guarantees based on combinatorial and algebraic criteria ensure sample complexity bounds and optimality, establishing R1MC as essential for low-rank inference.

Rank-One Matrix Completion (R1MC) is the problem of reconstructing a matrix of rank one from a subset of its entries, often under constraints or with the presence of noise, outliers, or adversarial perturbations. R1MC plays a foundational role in low-rank modeling, statistical independence testing, collaborative filtering, channel estimation, and crowdsourcing. Despite the apparent simplicity resulting from the rank-one structure, R1MC is computationally and theoretically rich, with connections to combinatorial optimization, algebraic geometry, robust statistics, and convex/nonconvex optimization.

1. Formal Problem Statement and Foundational Principles

Given a matrix $M \in \mathbb{R}^{m \times n}$ (or complex-valued for applications such as MIMO channel estimation), where a subset of entries indexed by $\Omega \subseteq [m] \times [n]$ is revealed, the R1MC task is to reconstruct $M$ with $\operatorname{rank}(M) = 1$ such that $M_{ij}$ matches the observed entries for $(i,j) \in \Omega$. The canonical rank-one form is $M = uv^T$ for $u \in \mathbb{R}^m$, $v \in \mathbb{R}^n$.

Key variants include:

  • Noiseless case: $M_{ij} = Y_{ij}$ for $(i,j) \in \Omega$, with $\operatorname{rank}(M) = 1$.
  • Noisy case: $\min_{u,v} \sum_{(i,j)\in\Omega} (u_i v_j - Y_{ij})^2$, often with regularization.
  • Independence model: for probability tables, $M$ lies in the simplex $\Delta^{mn-1}$ and must be nonnegative and sum to one (Kubjas et al., 2014).

The problem’s tractability is deeply influenced by the bipartite pattern of $\Omega$. Graph-theoretic conditions, polynomial constraints (vanishing of $2\times 2$ minors), and combinatorial properties determine whether rank-one completion is feasible.
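As a quick illustration of the minor condition, the following numpy sketch (synthetic data, all names illustrative) verifies that every $2\times 2$ minor of a rank-one matrix vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(4), rng.standard_normal(5)
M = np.outer(u, v)  # a rank-one matrix M = u v^T

# Every 2x2 minor M[i,j]*M[k,l] - M[i,l]*M[k,j] of a rank-one matrix is zero.
minors = [
    M[i, j] * M[k, l] - M[i, l] * M[k, j]
    for i in range(4) for k in range(i + 1, 4)
    for j in range(5) for l in range(j + 1, 5)
]
print(max(abs(d) for d in minors))  # ~0 up to floating-point error
```

Conversely, an observed pattern is consistent with some rank-one completion only if every fully observed $2\times 2$ submatrix satisfies this identity.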

2. Methodologies and Algorithmic Frameworks

Greedy and Alternating Minimization

Alternating minimization iteratively updates uu and vv by solving row- and column-wise least squares:

$$
\begin{aligned}
u_i^{(t+1)} &= \frac{\sum_{j:(i,j)\in\Omega} Y_{ij}\, v_j^{(t)}}{\sum_{j:(i,j)\in\Omega} \bigl(v_j^{(t)}\bigr)^2}, \\
v_j^{(t+1)} &= \frac{\sum_{i:(i,j)\in\Omega} Y_{ij}\, u_i^{(t+1)}}{\sum_{i:(i,j)\in\Omega} \bigl(u_i^{(t+1)}\bigr)^2}.
\end{aligned}
$$

Convergence is controlled by the spectral gap of a consensus Markov chain, with polynomial contraction rate $1 - \Theta(1/(n^2\Delta))$ (Liu et al., 2020).
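A minimal numpy sketch of these closed-form updates on synthetic data (the padding of rows/columns in the mask is only there to keep the observation graph connected):

```python
import numpy as np

def r1mc_altmin(Y, mask, iters=2000, seed=0):
    """Alternating least squares for rank-one completion: closed-form
    row-wise updates of u and column-wise updates of v on observed entries."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(m)
    v = rng.standard_normal(n)
    for _ in range(iters):
        for i in range(m):
            obs = mask[i]
            u[i] = (Y[i, obs] @ v[obs]) / (v[obs] @ v[obs])
        for j in range(n):
            obs = mask[:, j]
            v[j] = (Y[obs, j] @ u[obs]) / (u[obs] @ u[obs])
    return u, v

# Synthetic test: observe ~60% of a rank-one matrix.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(8), rng.standard_normal(6))
mask = rng.random(M.shape) < 0.6
mask[0, :] = True
mask[:, 0] = True   # keep every row/column observed and the pattern connected
u, v = r1mc_altmin(np.where(mask, M, 0.0), mask)
err = np.linalg.norm(np.outer(u, v) - M) / np.linalg.norm(M)
```

The factors $u, v$ are only identified up to a shared scaling, so accuracy is measured on the product $uv^T$ rather than on the factors themselves.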

Greedy Rank-One Pursuit

R1MC is often approached by greedy pursuit algorithms that add rank-one "atoms", selected as the top singular vectors of the current residual, followed by weight refinement and projection onto the observed entries (Wang et al., 2014, Yao et al., 2016). The residual is updated orthogonally at each step, ensuring linear convergence:

  • Standard R1MP/OR1MP: Full weight update over all atoms.
  • Economic variants: Update only the most recent atom and previous estimate.

Efficient implementations scale to $10^8$ observed entries and achieve state-of-the-art speed relative to iterative nuclear-norm schemes.
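The pursuit loop can be sketched as follows (a simplified full-weight variant in the spirit of R1MP, on synthetic data; for the residual-norm bound note that each atom has unit Frobenius norm and captures the top singular value of the residual):

```python
import numpy as np

def rank_one_pursuit(Y, mask, n_atoms=10):
    """Greedy pursuit: repeatedly fit the top singular pair of the observed
    residual and add it, with a least-squares weight, to the estimate."""
    X = np.zeros_like(Y)
    for _ in range(n_atoms):
        R = np.where(mask, Y - X, 0.0)        # residual, zero off Omega
        if not R.any():
            break                             # observed entries fit exactly
        U, s, Vt = np.linalg.svd(R)
        atom = np.outer(U[:, 0], Vt[0])       # unit-Frobenius rank-one atom
        a = atom[mask]
        w = (a @ R[mask]) / (a @ a)           # optimal weight on observed set
        X = X + w * atom
    return X

rng = np.random.default_rng(2)
Y = rng.standard_normal((9, 2)) @ rng.standard_normal((2, 6))  # rank-two target
mask = rng.random(Y.shape) < 0.7
X = rank_one_pursuit(Y, mask)
res0 = np.linalg.norm(Y[mask])
res = np.linalg.norm((Y - X)[mask])
```

Because each least-squares weight removes at least a $1/\operatorname{rank}(R)$ fraction of the squared observed residual, the residual norm decreases geometrically, matching the linear-convergence claim above.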

Convex and Semidefinite Programming Approaches

Simple nuclear norm relaxation fails for deterministic R1MC; improved recoverability is achieved via two rounds of semidefinite relaxation with trace minimization, which is Lipschitz-stable under input perturbations (Cosse et al., 2017). These relaxations fit within the Lasserre hierarchy, leveraging sum-of-squares certificates and moment tensor manipulations via hierarchical low-rank decompositions.

For certifiable optimality, R1MC can be reparametrized as a convex problem over projection matrices with semidefinite constraints, further tightened by enforcing vanishing $2\times 2$ minors via Shor-style PSD blocks. Disjunctive branch-and-bound explores violated inequalities to either certify a rank-one solution or drive the solution space toward optimality (Bertsimas et al., 2023).

Gradient Descent Dynamics

Nonconvex gradient descent on the R1MC loss
$$ f(x) = \tfrac{1}{2}\bigl\| \mathcal{P}_\Omega(x x^T - M^*) \bigr\|_F^2 $$
converges globally with vanilla random initialization, provided the starting vector has sufficiently small norm. Implicit regularization maintains incoherence, avoiding the need for explicit regularizers. Initial alignment and norm amplification occur in $O(\log n)$ iterations (Kim et al., 2022).
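A minimal sketch of these dynamics for the symmetric PSD case (synthetic data; for simplicity the step size is set from the known scale of $M^*$, which a practical solver would estimate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x_star = rng.standard_normal(n)
M = np.outer(x_star, x_star)                     # ground truth, rank-one PSD
mask = rng.random((n, n)) < 0.5
mask = mask | mask.T                             # symmetric sampling pattern

x = 1e-6 * rng.standard_normal(n)                # vanishingly small random init
eta = 0.1 / (x_star @ x_star)                    # step size from known scale
for _ in range(2000):
    E = np.where(mask, np.outer(x, x) - M, 0.0)  # P_Omega(x x^T - M*)
    x = x - eta * 2.0 * E @ x                    # gradient of 0.5*||E||_F^2
rel_err = np.linalg.norm(np.outer(x, x) - M) / np.linalg.norm(M)
```

The early iterations exhibit the alignment-and-amplification phase described above: $x$ first grows along the top eigendirection of $\mathcal{P}_\Omega(M^*)$ before entering a locally linear convergence regime.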

3. Theoretical Guarantees, Optimality, and Complexity

  • Algebraic-graphical criteria: rank-one completion feasibility is determined by vanishing $2\times 2$ minors and contraction of the observed pattern’s bipartite graph to block-diagonal form. In the simplex, existence is equivalent to $\sum_i \sqrt{b_i} \leq 1$ for contracted blocks (Kubjas et al., 2014).
  • Complexity: while general low-rank completion is NP-hard via reduction to tensor rank decision (Derksen, 2013), rank-one cases admit polynomial-time algorithms given a spanning tree of the observed entry graph.
  • Certifiable optimality: convex relaxations via projection matrices and minor Shor blocks achieve optimality gaps below $1\%$ for moderate dimension, outperforming heuristics by $20\%$–$50\%$ in test MSE (Bertsimas et al., 2023).
  • Sample complexity: for random patterns and mild incoherence, R1MC achieves recovery with $O(n \log n)$ samples (Kim et al., 2022, Jiang et al., 1 Nov 2025).
  • Robustness: filtering-based alternating minimization with exclusion of extremal entries achieves provable resilience to adversarial corruption. Exact recovery holds with $(2F+1)$-robust graphs, and thresholds for success are established for Erdős–Rényi patterns (Ma et al., 2020).
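The polynomial-time noiseless case can be sketched by propagating entries along a spanning tree of the bipartite observation graph (a minimal illustration, assuming a connected pattern and nonzero entries; here the first row and column are forced to be observed so connectivity holds):

```python
import numpy as np
from collections import deque

def complete_rank_one(Y, mask):
    """Noiseless rank-one completion by BFS propagation along a spanning tree
    of the bipartite graph of observed entries. Assumes the graph is connected
    and all observed entries are nonzero."""
    m, n = Y.shape
    u = np.full(m, np.nan)
    v = np.full(n, np.nan)
    u[0] = 1.0                       # fix the scale ambiguity of M = u v^T
    queue = deque([("row", 0)])
    while queue:
        kind, k = queue.popleft()
        if kind == "row":
            for j in range(n):
                if mask[k, j] and np.isnan(v[j]):
                    v[j] = Y[k, j] / u[k]
                    queue.append(("col", j))
        else:
            for i in range(m):
                if mask[i, k] and np.isnan(u[i]):
                    u[i] = Y[i, k] / v[k]
                    queue.append(("row", i))
    return np.outer(u, v)

rng = np.random.default_rng(3)
u_true = rng.uniform(0.5, 1.5, size=6) * rng.choice([-1, 1], size=6)
v_true = rng.uniform(0.5, 1.5, size=5) * rng.choice([-1, 1], size=5)
M = np.outer(u_true, v_true)         # all entries nonzero
mask = rng.random(M.shape) < 0.5
mask[0, :] = True
mask[:, 0] = True                    # guarantees a connected pattern
M_hat = complete_rank_one(np.where(mask, M, 0.0), mask)
```

Each observed entry pins down one new coordinate of $u$ or $v$ given the other, so a single pass over a spanning tree determines the completion up to the fixed scale.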

4. Robustness: Adversarial, Noisy, and Dynamic Rank Scenarios

Adversarial Crowdsourcing

R1MC augmented with local extreme-value filtering (removal of the $F$ largest and $F$ smallest residuals per neighborhood) provably recovers the rank-one structure under $F$-local adversarial perturbations, provided the observed graph is $(2F+1)$-robust (Ma et al., 2020). This method, termed M-MSR, achieves error under $0.2$ for up to $25\%$ adversaries, and significantly outperforms RPCA and variational Bayesian methods on crowdsourced datasets.
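A minimal sketch of the extreme-value filtering idea (not the full M-MSR protocol): each $u_i$ is re-estimated from candidate ratios over its observed neighborhood after discarding the $F$ largest and $F$ smallest candidates, so a single gross corruption per row is dropped before averaging.

```python
import numpy as np

def trimmed_update(Y, mask, v, F):
    """Estimate each u_i by averaging the candidate ratios Y[i,j]/v[j] over
    observed j, after discarding the F largest and F smallest candidates
    (extreme-value filtering against adversarial entries)."""
    m, _ = Y.shape
    u = np.zeros(m)
    for i in range(m):
        cand = np.sort(Y[i, mask[i]] / v[mask[i]])
        kept = cand[F:len(cand) - F]       # drop F extremes on each side
        u[i] = kept.mean()
    return u

rng = np.random.default_rng(4)
u_true = rng.standard_normal(7)
v = rng.uniform(0.5, 1.5, size=9)          # current column factor, nonzero
Y = np.outer(u_true, v)
Y[np.arange(7), np.arange(7) % 9] += 1e6   # one huge adversarial entry per row
mask = np.ones_like(Y, dtype=bool)
u_hat = trimmed_update(Y, mask, v, F=1)
```

With at least $2F+1$ observations per row, the trimmed mean ignores up to $F$ corrupted entries while the remaining clean ratios all agree on $u_i$.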

Dynamic Rank Estimation for Channel Estimation

In mmWave MIMO systems, robust block coordinate R1MC methods leverage autoregressive smoothing for online rank estimation across temporal frames. Lasso-type $\ell_1$ regularization on singular weights enables adaptation to abrupt rank changes and suppresses outlier-induced inflation. Completion and recovery are achieved in near-linear time per iteration, with exactness under standard RIP and sample complexity matching nuclear-norm minimization (Jiang et al., 1 Nov 2025, Jiang et al., 8 Nov 2025).
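The singular-weight shrinkage step can be illustrated in isolation (a sketch, not the cited block-coordinate method): soft-thresholding the singular values is the proximal operator of $\lambda\|\cdot\|_*$, and counting the survivors gives a rank estimate that small perturbations cannot inflate.

```python
import numpy as np

def soft_threshold_rank(X, lam):
    """Shrink singular values by lam (prox of lam * nuclear norm); the number
    of surviving singular values serves as a robust rank estimate."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return (U * s_shrunk) @ Vt, int(np.count_nonzero(s_shrunk))

rng = np.random.default_rng(5)
M = np.outer(rng.standard_normal(10), rng.standard_normal(10))
X_noisy = M + 0.01 * rng.standard_normal((10, 10))  # mild perturbation
_, rank_est = soft_threshold_rank(X_noisy, lam=0.5)
```

Noise-induced singular values fall below the threshold and are zeroed, while the dominant rank-one component survives.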

5. Geometric and Algebraic Aspects

R1MC in the standard simplex establishes connections between probability models and algebraic geometry. The feasible region for simplex completions is described by high-degree polynomials, including irreducible boundary polynomials of degree $2^{n-1}$. The completion set, while not generally convex, is a semialgebraic manifold whose dimension is determined by the graph structure (Kubjas et al., 2014). Tensor-based reductions equip R1MC with a natural lift to the NP-hard tensor rank decision problem, bridging computational complexity and low-rank modeling (Derksen, 2013).

6. Symbolic and Numerical Moment Matrix Completion

For polynomial systems arising in "unlabeled sensing," the unique solution can be recovered by rank-one moment matrix completion. Symbolic Groebner basis computation yields efficient solves for moderate $n$, while numeric SDP relaxation with nuclear norm minimization robustly returns rank-one moment matrices and successful recovery under high signal-to-noise regimes. These refinements outperform homotopy-EM methods when the system size exceeds factorial complexity (Liang et al., 26 May 2024).

7. Applications and Empirical Performance

| Application | Main R1MC Approach | Notable Result/Metric |
| --- | --- | --- |
| Recommendation systems | Greedy pursuit (OR1MP, EOR1MP) | RMSE ≈ 0.86 on Netflix with $10^8$ entries |
| Crowdsourcing (adversarial) | M-MSR filtering alternating minimization | Error $< 0.2$ under $25\%$ adversaries |
| mmWave channel estimation | Robust BCD $\ell_1$-regularized R1MC | NMSE $< -20$ dB with $6\%$ pilot overhead |
| Unlabeled sensing | Moment matrix SDP completion | $< 1\%$ relative error for SNR $> 50$ dB |
| Independence model testing | Combinatorial-algebraic graph reduction | Polynomial-time feasibility checks |

In each domain, R1MC algorithms are preferred when rank structure is dominant, sample patterns are favorable (e.g., random with sufficient connectivity), and interpretability or optimality certificates are required. Limitations include diminished robustness with high adversarial fractions, NP-hardness for associated tensor rank decisions, and increased complexity for large block-regularized or mixed-rank generalizations.


Rank-One Matrix Completion, though mathematically elementary in its factorization, is structurally and computationally intricate; it interfaces with robust and certifiable optimization, combinatorics, algebraic geometry, and engineering, and continues to motivate efficient, provable solvers for high-dimensional, real-world inference tasks.
