Low Rank Approximation at Sublinear Cost (1906.04327v7)

Published 11 Jun 2019 in math.NA and cs.NA

Abstract: Low Rank Approximation (LRA) of an m-by-n matrix is a hot research subject, fundamental for Matrix and Tensor Computations and Big Data Mining and Analysis. Computations with LRA can be performed at sublinear cost, that is, using far fewer than mn memory cells and arithmetic operations -- but can we compute an LRA at sublinear cost? Yes and no. No, because the spectral, Frobenius, and all other norms of the error matrix of an LRA output by any sublinear-cost deterministic or randomized algorithm exceed their minimal values for LRA by infinitely large factors for worst-case inputs, and even for inputs from the small families of our Appendix. Yes, because for about two decades Cross-Approximation (C-A) iterations, running at sublinear cost, have consistently been computing close LRA worldwide. We provide new insight into the coexistence of this "yes" and "no" by identifying C-A iterations as recursive sketching algorithms for LRA that use sampling test matrices and run at sublinear cost. As we prove, in good accordance with our numerical tests, already a single recursive step computes a close LRA, except for a narrow class of hard inputs, which tends to shrink in the recursive process. We also discuss enhancing the power of sketching by means of leverage scores.
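
To make the mechanism concrete, here is a minimal sketch of a single cross-approximation step that builds a CUR-type LRA from k sampled rows and columns, reading only O((m+n)k) of the mn entries, followed by one recursive re-pivoting step. This is an illustration under stated assumptions, not the paper's algorithm: the function names cross_approximation_step and refine_cols, the column-pivoted-QR refinement rule, and the demo parameters are all assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import qr

def cross_approximation_step(A, row_idx, col_idx):
    """One cross-approximation (C-A) step: a CUR-type LRA
    A ~ C @ pinv(G) @ R built from sampled rows and columns only.

    Only O((m + n) * k) entries of A are read (the columns C,
    the rows R, and the k-by-k core G), which is sublinear in m*n.
    """
    C = A[:, col_idx]                    # m x k sampled columns
    R = A[row_idx, :]                    # k x n sampled rows
    G = A[np.ix_(row_idx, col_idx)]      # k x k core ("generator") block
    return C, np.linalg.pinv(G), R       # factors of the rank-<=k LRA

def refine_cols(R, k):
    """Hypothetical refinement rule: re-pick column indices via
    column-pivoted QR on the sampled rows (a common C-A heuristic)."""
    _, _, piv = qr(R, pivoting=True)
    return np.sort(piv[:k])

# Demo on a synthetic exactly-rank-k input (illustrative only).
rng = np.random.default_rng(0)
m, n, k = 200, 150, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

I0 = rng.choice(m, size=k, replace=False)   # initial random row sample
J0 = rng.choice(n, size=k, replace=False)   # initial random column sample
C, U, R = cross_approximation_step(A, I0, J0)

# One recursive step: re-select columns from the sampled rows, rebuild.
J1 = refine_cols(A[I0, :], k)
C, U, R = cross_approximation_step(A, I0, J1)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))
```

For an exactly rank-k input with a nonsingular core block G, this CUR factorization reproduces A exactly; on hard inputs a random initial cross can miss the dominant directions, which is consistent with the abstract's "no" side, while the recursive re-pivoting step illustrates why the class of hard inputs tends to shrink.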

Citations (7)
