
DNNLasso: Scalable Graph Learning for Matrix-Variate Data (2403.02608v1)

Published 5 Mar 2024 in cs.LG and math.OC

Abstract: We consider the problem of jointly learning the row-wise and column-wise dependencies of matrix-variate observations, which are modelled separately by two precision matrices. Because of the complicated structure of Kronecker-product precision matrices in the commonly used matrix-variate Gaussian graphical models, a sparser Kronecker-sum structure, based on the Cartesian product of graphs, was recently proposed. However, existing methods for estimating Kronecker-sum structured precision matrices do not scale well to large-scale datasets. In this paper, we introduce DNNLasso, a diagonally non-negative graphical lasso model for estimating the Kronecker-sum structured precision matrix, which outperforms the state-of-the-art methods by a large margin in both accuracy and computational time. Our code is available at https://github.com/YangjingZhang/DNNLasso.
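The sparsity advantage of the Kronecker-sum structure mentioned in the abstract can be illustrated numerically. The sketch below (an illustration of the general structure, not of DNNLasso's estimator itself; the matrix sizes and the `random_sparse_precision` helper are hypothetical) builds a row precision matrix Theta and a column precision matrix Psi, then compares the Kronecker sum Theta ⊗ I_q + I_p ⊗ Psi with the Kronecker product Theta ⊗ Psi:

```python
import numpy as np

p, q = 3, 4  # illustrative row/column dimensions of the matrix-variate data
rng = np.random.default_rng(0)

def random_sparse_precision(n, rng):
    """Hypothetical helper: a symmetric, diagonally dominant matrix,
    which is guaranteed to be positive definite."""
    A = np.tril(rng.uniform(-0.3, 0.3, (n, n)), k=-1)
    A = A + A.T
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)
    return A

Theta = random_sparse_precision(p, rng)  # row-wise dependencies (p x p)
Psi = random_sparse_precision(q, rng)    # column-wise dependencies (q x q)

# Kronecker SUM: the structure used by the Kronecker-sum graphical models
# (Cartesian product of the row and column graphs).
kron_sum = np.kron(Theta, np.eye(q)) + np.kron(np.eye(p), Psi)  # (pq x pq)

# Kronecker PRODUCT: the structure of the classical matrix-variate
# Gaussian graphical model.
kron_prod = np.kron(Theta, Psi)  # (pq x pq)

# The Kronecker sum is markedly sparser: each row couples an entry only to
# others in the same row or the same column of the data matrix, whereas the
# Kronecker product can be dense whenever Theta and Psi are.
print(np.count_nonzero(kron_sum), np.count_nonzero(kron_prod))
```

With these dense 3x3 and 4x4 factors, the Kronecker sum has 72 nonzeros versus 144 for the Kronecker product, and the gap widens rapidly with p and q, which is why scalable estimators for the Kronecker-sum structure are of interest.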

