
Tuning parameter selection for the adaptive nuclear norm regularized trace regression (2405.06889v1)

Published 11 May 2024 in stat.ME and math.OC

Abstract: Regularized models are widely used, particularly with high-dimensional data sets. Because the tuning parameter determines both the theoretical performance and the computational efficiency of a regularized model, tuning parameter selection is a fundamental issue. We consider tuning parameter selection for adaptive nuclear norm regularized trace regression, carried out via a Bayesian information criterion (BIC). The proposed BIC is constructed with the help of an unbiased estimator of the degrees of freedom. Under some regularity conditions, this BIC is shown to achieve rank consistency of the tuning parameter selection: the solution at the selected tuning parameter converges to the true solution and, with probability tending to one, has the same rank as the true solution. Numerical results are presented to evaluate the performance of the proposed BIC for tuning parameter selection.
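
As a rough illustration of the workflow the abstract describes, the sketch below fits a nuclear-norm regularized trace regression by proximal gradient descent (singular value thresholding) over a grid of tuning parameters and selects the value that minimizes a BIC-style criterion. This is a minimal sketch under stated assumptions, not the paper's algorithm: it uses a plain (non-adaptive) nuclear-norm penalty, a generic criterion n·log(RSS/n) + df·log(n), and the common surrogate df = r(p + q − r) for a rank-r coefficient matrix in place of the unbiased degrees-of-freedom estimator developed in the paper. The function names (`svt`, `fit_trace_regression`, `bic_select`) are hypothetical.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt, int(np.count_nonzero(s))

def fit_trace_regression(X, y, lam, n_iter=500):
    """Proximal gradient for (1/2n) * sum_i (y_i - <X_i, B>)^2 + lam * ||B||_*.

    X: (n, p, q) array of matrix covariates; y: length-n response vector.
    An adaptive penalty would reweight the singular values inside `svt`;
    this sketch keeps the plain nuclear norm for simplicity.
    """
    n, p, q = X.shape
    Xmat = X.reshape(n, -1)                          # each row is vec(X_i)
    step = 1.0 / (np.linalg.norm(Xmat, 2) ** 2 / n)  # 1 / Lipschitz constant of the gradient
    B = np.zeros((p, q))
    for _ in range(n_iter):
        grad = (Xmat.T @ (Xmat @ B.ravel() - y) / n).reshape(p, q)
        B, rank = svt(B - step * grad, step * lam)
    return B, rank

def bic_select(X, y, lambdas):
    """Choose lambda minimizing n*log(RSS/n) + df*log(n) over a grid."""
    n, p, q = X.shape
    best = (np.inf, None, None)
    for lam in lambdas:
        B, r = fit_trace_regression(X, y, lam)
        rss = np.sum((X.reshape(n, -1) @ B.ravel() - y) ** 2)
        df = r * (p + q - r)                         # crude df surrogate for a rank-r matrix
        bic = n * np.log(rss / n + 1e-12) + df * np.log(n)
        if bic < best[0]:
            best = (bic, lam, B)
    return best[1], best[2]
```

On simulated data, one would generate matrix covariates X_i, a low-rank coefficient matrix B, and responses y_i = <X_i, B> + noise, then call `bic_select(X, y, lambdas)`; with a rank-consistent criterion the selected tuning parameter should yield an estimate whose rank matches that of the true B with high probability.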

