Iterative Refinement and Oversampling for Low Rank Approximation (1906.04223v17)

Published 10 Jun 2019 in math.NA and cs.NA

Abstract: Iterative refinement is particularly popular for the numerical solution of linear systems of equations. We extend it to Low Rank Approximation of a matrix (LRA) and observe a close link between the resulting algorithm and the oversampling techniques commonly used in randomized LRA algorithms. We elaborate upon this link and revisit oversampling and some efficient randomized LRA algorithms. When applied with sparse sketch matrices, they run significantly faster and in particular yield Very Low Rank Approximation (VLRA) at sublinear cost, i.e., using far fewer scalars and flops than the input matrix has entries. This speed comes at the price of some deterioration in output accuracy, but according to our formal and empirical study, subsequent oversampling restores near-optimal accuracy under the spectral norm for a large subclass of matrices with fast-decaying singular value spectra.
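
For concreteness, here is a minimal Python sketch of oversampled randomized LRA in the spirit the abstract describes. It is not the paper's algorithm: the dense Gaussian sketch, the oversampling parameter `p`, and the refinement loop (a standard power-iteration pass standing in for the paper's iterative refinement) are all generic illustrative choices, and a dense sketch does not achieve the sublinear cost the paper obtains with sparse sketch matrices.

```python
import numpy as np

def randomized_lra(A, k, p=10, refine_steps=1, seed=0):
    """Rank-k approximation of A via a sketch of width k + p,
    where p is the oversampling parameter (illustrative defaults)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Dense Gaussian sketch; the paper instead studies sparse sketches,
    # which this toy version does not implement.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega                       # sample the range of A
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sample
    # Generic refinement: power-iteration passes that sharpen the basis
    # (a stand-in for the paper's iterative refinement, not its method).
    for _ in range(refine_steps):
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                         # project A onto the basis
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k, :]   # truncated rank-k factors

# Example: a 400 x 300 matrix with fast-decaying singular values,
# the regime where the abstract reports near-optimal accuracy.
rng = np.random.default_rng(1)
G = rng.standard_normal((400, 300))
A = G @ np.diag(0.5 ** np.arange(300)) @ rng.standard_normal((300, 300))

U, s, Vt = randomized_lra(A, k=10)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt, 2)   # spectral-norm error
print(f"spectral error of rank-10 approximation: {err:.3e}")
```

Raising `p` widens the sketch and, as the abstract notes for oversampling generally, tends to improve the accuracy of the computed approximation.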
