Nonconvex Deterministic Matrix Completion by Projected Gradient Descent Methods (2401.06592v1)

Published 12 Jan 2024 in math.OC, cs.IT, and math.IT

Abstract: We study the deterministic matrix completion problem, i.e., recovering a low-rank matrix from a few observed entries where the sampling set is chosen as the edge set of a Ramanujan graph. We first investigate projected gradient descent (PGD) applied to a Burer-Monteiro least-squares formulation and show that, under a benign initialization and sufficiently many samples, it converges linearly to the incoherent ground truth at a rate depending on the condition number κ of the ground truth. We next apply the scaled variant of PGD to handle the ill-conditioned case when κ is large, and we show that, under similar conditions, this algorithm converges at a linear rate independent of κ. Finally, we provide numerical experiments to corroborate our results.
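
The abstract describes two iterative schemes: PGD on a Burer-Monteiro factorization (a gradient step on each factor followed by a projection enforcing incoherence) and a scaled variant whose linear rate does not depend on the condition number κ. The sketch below is a minimal illustration of both, under assumptions that are not taken from the paper: a random sampling mask instead of the edge set of a Ramanujan graph, spectral initialization, a row-norm ball as the incoherence projection, and illustrative choices of the step size, projection radius, and the helper names pgd_matrix_completion and project_rows.

```python
import numpy as np


def project_rows(F, radius):
    """Project each row of F onto the Euclidean ball of the given radius
    (a simple way to enforce an incoherence-type row-norm constraint)."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return F * scale


def pgd_matrix_completion(M_obs, Omega, r, eta=0.5, radius=None, iters=500, scaled=False):
    """Complete a rank-r matrix from the entries observed on the mask Omega.

    M_obs : observed matrix with zeros off the mask.
    Omega : boolean sampling mask of the same shape.
    scaled: if True, precondition the gradients by inverse Gram matrices
            (a ScaledGD-style step whose rate does not degrade with kappa).
    """
    p = Omega.mean()                                  # observed fraction
    # Spectral initialization from the rescaled observations.
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r, :].T * np.sqrt(s[:r])
    if radius is None:                                # loose projection radius
        radius = 2.0 * max(np.linalg.norm(L, axis=1).max(),
                           np.linalg.norm(R, axis=1).max())
    for _ in range(iters):
        residual = Omega * (L @ R.T - M_obs)          # P_Omega(L R^T - M)
        grad_L = residual @ R / p
        grad_R = residual.T @ L / p
        if scaled:
            grad_L = grad_L @ np.linalg.inv(R.T @ R)
            grad_R = grad_R @ np.linalg.inv(L.T @ L)
        # Gradient step followed by projection onto the row-norm constraint set.
        L = project_rows(L - eta * grad_L, radius)
        R = project_rows(R - eta * grad_R, radius)
    return L @ R.T


# Toy usage: a rank-3 ground truth with roughly 30% of its entries observed.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))
Omega = rng.random(M_true.shape) < 0.3
M_hat = pgd_matrix_completion(M_true * Omega, Omega, r=3, scaled=True)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```

Setting scaled=False recovers plain PGD, whose practical rate degrades as the ground truth becomes ill-conditioned; preconditioning by the inverse Gram matrices R^T R and L^T L is what the abstract credits for the scaled variant's condition-number-independent rate.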
