Low-rank matrix estimation via nonconvex spectral regularized methods in errors-in-variables matrix regression
Abstract: High-dimensional matrix regression has been studied from various angles, including statistical properties, computational efficiency, and applications to specific instances such as multivariate regression, system identification, and matrix compressed sensing. Existing studies mainly consider the idealized case in which the covariate matrix is observed without noise, while the more realistic scenario in which the covariates are corrupted by noise or contain missing data has received little attention. We consider the general errors-in-variables matrix regression model and propose a unified framework for low-rank estimation based on nonconvex spectral regularization. On the statistical side, recovery bounds for any stationary point are established, yielding statistical consistency. On the computational side, the proximal gradient method is applied to solve the nonconvex optimization problem and is proved to converge in polynomial time. Consequences for specific matrix compressed sensing models with additive noise and missing data are obtained by verifying the corresponding regularity conditions. Finally, the performance of the proposed nonconvex estimation method is illustrated through numerical experiments.
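To make the estimation procedure concrete, below is a minimal sketch of a proximal gradient loop for nonconvex spectral regularized low-rank estimation. It is not the paper's implementation: the choice of the MCP as the spectral penalty, the firm-thresholding proximal rule, and the clean least-squares gradient in the demo are assumptions, and the names `mcp_prox`, `spectral_prox`, `prox_grad`, and `grad_loss` are illustrative. In the errors-in-variables setting, `grad_loss` would instead come from a bias-corrected surrogate loss built from the corrupted covariates.

```python
import numpy as np

def mcp_prox(s, lam, gamma, step):
    # Firm-thresholding rule: the proximal map of the scalar MCP penalty
    # with parameters (lam, gamma) and step size `step` (requires step < gamma).
    t = lam * step
    shrunk = np.maximum(s - t, 0.0) / (1.0 - step / gamma)
    return np.where(s <= gamma * lam, shrunk, s)

def spectral_prox(Z, lam, gamma, step):
    # Apply the scalar MCP prox to the singular values of Z; since the prox
    # rule is monotone, this gives the proximal map of the spectral penalty
    # sum_i rho_lam(sigma_i(Z)) (generalized singular value thresholding).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * mcp_prox(s, lam, gamma, step)) @ Vt

def prox_grad(grad_loss, Theta0, lam, gamma=3.0, step=0.1, n_iter=500):
    # Plain proximal gradient: gradient step on the (surrogate) loss,
    # then singular-value thresholding via the nonconvex spectral prox.
    Theta = Theta0.copy()
    for _ in range(n_iter):
        Theta = spectral_prox(Theta - step * grad_loss(Theta), lam, gamma, step)
    return Theta

# Toy demo: recover a rank-2 matrix from noisy trace-regression
# measurements y_k = <X_k, Theta*> + eps_k, using a clean least-squares
# gradient as a stand-in for a bias-corrected errors-in-variables loss.
rng = np.random.default_rng(0)
m, n, r, N = 20, 15, 2, 400
Theta_star = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = rng.standard_normal((N, m, n))
y = np.einsum('kij,ij->k', X, Theta_star) + 0.1 * rng.standard_normal(N)

def grad_loss(Theta):
    resid = np.einsum('kij,ij->k', X, Theta) - y
    return np.einsum('k,kij->ij', resid, X) / N

Theta_hat = prox_grad(grad_loss, np.zeros((m, n)), lam=0.05)
print(np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star))
```

The singular-value proximal step is where the nonconvexity enters: unlike the soft-thresholding used for nuclear-norm regularization, firm thresholding leaves large singular values unshrunk, which reduces the estimation bias that nuclear-norm penalties impose on strong signal components.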