
Efficient Alternating Minimization with Applications to Weighted Low Rank Approximation (2306.04169v3)

Published 7 Jun 2023 in cs.LG and math.OC

Abstract: Weighted low rank approximation is a fundamental problem in numerical linear algebra, and it has many applications in machine learning. Given a matrix $M \in \mathbb{R}^{n \times n}$, a non-negative weight matrix $W \in \mathbb{R}_{\geq 0}^{n \times n}$, and a parameter $k$, the goal is to output two matrices $X, Y \in \mathbb{R}^{n \times k}$ such that $\| W \circ (M - X Y^\top) \|_F$ is minimized, where $\circ$ denotes the Hadamard product. It naturally generalizes the well-studied low rank matrix completion problem. Such a problem is known to be NP-hard and even hard to approximate assuming the Exponential Time Hypothesis [GG11, RSW16]. Meanwhile, alternating minimization is a good heuristic solution for weighted low rank approximation. In particular, [LLR16] shows that, under mild assumptions, alternating minimization does provide provable guarantees. In this work, we develop an efficient and robust framework for alternating minimization that allows the alternating updates to be computed approximately. For weighted low rank approximation, this improves the runtime of [LLR16] from $\|W\|_0 k^2$ to $\|W\|_0 k$, where $\|W\|_0$ denotes the number of nonzero entries of the weight matrix. At the heart of our framework is a high-accuracy multiple response regression solver together with a robust analysis of alternating minimization.
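The objective and the alternating-minimization heuristic described in the abstract can be sketched in a few lines of NumPy. This is a minimal dense baseline, not the paper's fast approximate solver: with $Y$ fixed, each row $x_i$ of $X$ solves an exact weighted least-squares problem $\min_{x_i} \sum_j W_{ij} (M_{ij} - x_i \cdot y_j)^2$, and the update for $Y$ is symmetric. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def weighted_alt_min(M, W, k, iters=30, seed=0):
    """Alternating minimization for min_{X,Y} ||W o (M - X Y^T)||_F.

    A dense exact-update baseline: each alternating step solves the
    weighted least-squares subproblem row by row via np.linalg.lstsq.
    (The paper's contribution is doing these updates approximately,
    cutting the cost from ||W||_0 k^2 to ||W||_0 k.)
    """
    n, m = M.shape
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((m, k))
    X = np.zeros((n, k))
    for _ in range(iters):
        # Fix Y; for each row i, scale by sqrt of that row's weights
        # and solve the resulting ordinary least-squares problem.
        for i in range(n):
            s = np.sqrt(W[i])
            X[i], *_ = np.linalg.lstsq(Y * s[:, None], M[i] * s, rcond=None)
        # Fix X; update each row of Y symmetrically over columns of M.
        for j in range(m):
            s = np.sqrt(W[:, j])
            Y[j], *_ = np.linalg.lstsq(X * s[:, None], M[:, j] * s, rcond=None)
    return X, Y
```

With uniform weights this reduces to ordinary alternating least squares, and on an exactly rank-$k$ input the weighted residual drops rapidly toward zero.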

Citations (5)
