Lower Bounds on Adaptive Sensing for Matrix Recovery (2311.17281v2)
Abstract: We study lower bounds on adaptive sensing algorithms for recovering low-rank matrices using linear measurements. Given an $n \times n$ matrix $A$, a general linear measurement $S(A)$, for an $n \times n$ matrix $S$, is the inner product of $S$ and $A$, each treated as an $n^2$-dimensional vector. By performing as few linear measurements as possible on a rank-$r$ matrix $A$, we hope to construct a matrix $\hat{A}$ that satisfies $\|A - \hat{A}\|_F^2 \le c\|A\|_F^2$ for a small constant $c$. It is commonly assumed that when measuring $A$ with $S$, the response is corrupted by an independent Gaussian random variable of mean $0$ and variance $\sigma^2$. Candès and Plan study non-adaptive algorithms for low-rank matrix recovery using random linear measurements. At a certain noise level, it is known that their non-adaptive algorithms need to perform $\Omega(n^2)$ measurements, which amounts to reading the entire matrix. An important question is whether adaptivity helps decrease the overall number of measurements. We show that any adaptive algorithm that uses $k$ linear measurements in each round and outputs an approximation to the underlying matrix with probability $\ge 9/10$ must run for $t = \Omega(\log(n^2/k)/\log\log n)$ rounds. In particular, any adaptive algorithm that uses $n^{2-\beta}$ linear measurements in each round must run for $\Omega(\log n/\log\log n)$ rounds to compute a reconstruction with probability $\ge 9/10$. Hence any adaptive algorithm with $o(\log n/\log\log n)$ rounds must use $\Omega(n^2)$ linear measurements in total. Our techniques also readily extend to lower bounds on adaptive algorithms for tensor recovery, and to measurement-vs-rounds trade-offs for many sensing problems in numerical linear algebra, such as spectral norm low-rank approximation, Frobenius norm low-rank approximation, singular vector approximation, and more.
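To make the measurement model concrete, the following NumPy sketch simulates the setup from the abstract: a random rank-$r$ matrix $A$, a general linear measurement computed as the entrywise inner product of $S$ and $A$ (equivalently, the inner product of their $n^2$-dimensional vectorizations), and additive mean-$0$, variance-$\sigma^2$ Gaussian noise on each response. The dimensions, noise level, and helper names (`measure`, `is_good_recovery`) are illustrative assumptions, not from the paper.

```python
import numpy as np

# Illustrative parameters (hypothetical values, not from the paper).
n, r, sigma = 100, 5, 0.1  # matrix size, rank, noise standard deviation
rng = np.random.default_rng(0)

# A random rank-r matrix A = U V^T.
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
A = U @ V.T

def measure(S, A, sigma, rng):
    """One general linear measurement: the inner product <S, A> of the
    n^2-dimensional vectorizations, corrupted by independent N(0, sigma^2) noise."""
    return float(np.sum(S * A)) + sigma * rng.standard_normal()

def is_good_recovery(A_hat, A, c=0.1):
    """Recovery goal: ||A - A_hat||_F^2 <= c * ||A||_F^2 for a small constant c."""
    return np.linalg.norm(A - A_hat, "fro") ** 2 <= c * np.linalg.norm(A, "fro") ** 2

# A non-adaptive round: k measurement matrices fixed up front
# (e.g., random Gaussian), independent of the responses.
k = 20
responses = [measure(rng.standard_normal((n, n)), A, sigma, rng) for _ in range(k)]
```

In the adaptive model the paper studies, the algorithm instead proceeds in rounds, choosing each round's $k$ measurement matrices as a function of the noisy responses from earlier rounds; the lower bound says that with $n^{2-\beta}$ measurements per round, $\Omega(\log n/\log\log n)$ such rounds are necessary.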
- Krylov methods are (nearly) optimal for low-rank approximation. arXiv preprint arXiv:2304.03191, 2023.
- Low-rank approximation with $1/\varepsilon^{1/3}$ matrix-vector products. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2022), 2022.
- The gradient complexity of linear regression. In Conference on Learning Theory, pages 627–647. PMLR, 2020.
- How well can we estimate a sparse vector? Appl. Comput. Harmon. Anal., 34(2):317–323, 2013. ISSN 1063-5203. doi: 10.1016/j.acha.2012.08.010.
- Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):2342–2359, 2011.
- Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006. ISSN 0010-3640. doi: 10.1002/cpa.20124.
- On Bayes risk lower bounds. Journal of Machine Learning Research, 17(218):1–58, 2016.
- A mathematical introduction to compressive sensing. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, New York, 2013. ISBN 978-0-8176-4947-0; 978-0-8176-4948-7. doi: 10.1007/978-0-8176-4948-7.
- Approximate sparse recovery: optimizing time and measurements. SIAM J. Comput., 41(2):436–453, 2012. ISSN 0097-5397. doi: 10.1137/100816705.
- A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix. ACM Transactions on Mathematical Software (TOMS), 7(2):149–169, 1981.
- Iterative hard thresholding for low CP-rank tensor models. Linear and Multilinear Algebra, pages 1–17, 2021.
- Ming Gu. Subspace iteration randomization and singular value problems. SIAM Journal on Scientific Computing, 37(3):A1139–A1173, 2015.
- Robust PCA with compressed data. Advances in Neural Information Processing Systems, 28, 2015.
- On the power of adaptivity in sparse recovery. In IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011. IEEE Computer Society, 2011.
- Reduced-rank regression with operator norm error. In Conference on Learning Theory, pages 2679–2716. PMLR, 2021.
- Adaptive sparse recovery with limited adaptivity. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2729–2744. SIAM, Philadelphia, PA, 2019. doi: 10.1137/1.9781611975482.169.
- Adaptive estimation of a quadratic functional by model selection. Annals of Statistics, 28(5):1302–1338, 2000.
- Yi Li and David P. Woodruff. Tight bounds for sketching the operator norm, Schatten norms, and subspace embeddings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, volume 60 of LIPIcs. Leibniz Int. Proc. Inform., Art. No. 39, 11 pp. Schloss Dagstuhl, Leibniz-Zent. Inform., Wadern, 2016.
- On sketching matrix norms and the top singular vector. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1562–1581. SIAM, 2014.
- Randomized block Krylov methods for stronger and faster approximate singular value decomposition. Advances in Neural Information Processing Systems, 28, 2015.
- Improved algorithms for adaptive compressed sensing. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), volume 107 of LIPIcs. Leibniz Int. Proc. Inform., Art. No. 90, 14 pp. Schloss Dagstuhl, Leibniz-Zent. Inform., Wadern, 2018.
- $(1+\varepsilon)$-approximate sparse recovery. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS 2011), pages 295–304. IEEE Computer Society, Los Alamitos, CA, 2011. doi: 10.1109/FOCS.2011.92.
- Lower bounds for adaptive sparse recovery. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 652–663. SIAM, Philadelphia, PA, 2012.
- Vector-matrix-vector queries for solving linear algebra, statistics, and graph problems. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2020, volume 176 of LIPIcs, pages 26:1–26:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
- Low rank tensor recovery via iterative hard thresholding. Linear Algebra Appl., 523:220–262, 2017. ISSN 0024-3795. doi: 10.1016/j.laa.2017.02.028.
- Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62(12):1707–1739, 2009.
- Arvind K Saibaba. Randomized subspace iteration: Analysis of canonical angles and unitarily invariant norms. SIAM Journal on Matrix Analysis and Applications, 40(1):23–48, 2019.
- Tight query complexity lower bounds for PCA via finite sample deformed Wigner law. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1249–1259, 2018.
- Querying a matrix through matrix-vector products. ACM Trans. Algorithms, 17(4):31:1–31:19, 2021.
- Compressed sensing of low-rank plus sparse matrices. Applied and Computational Harmonic Analysis, 2023.
- Roman Vershynin. Concentration inequalities for random tensors. Bernoulli, 26(4):3139–3162, 2020.
- Optimal query complexity for estimating the trace of a matrix. In International Colloquium on Automata, Languages, and Programming, pages 1051–1062. Springer, 2014.
- David P Woodruff. Sketching as a tool for numerical linear algebra. arXiv preprint arXiv:1411.4357, 2014.
- Leveraging subspace information for low-rank matrix reconstruction. Signal Processing, 163:123–131, 2019.
- Efficient matrix sensing using rank-1 Gaussian measurements. In International Conference on Algorithmic Learning Theory, pages 3–18. Springer, 2015.