Consistent Bayesian Sparsity Selection for High-dimensional Gaussian DAG Models with Multiplicative and Beta-mixture Priors (1903.03531v1)

Published 8 Mar 2019 in math.ST, stat.ME, and stat.TH

Abstract: Estimation of the covariance matrix for high-dimensional multivariate datasets is a challenging and important problem in modern statistics. In this paper, we focus on high-dimensional Gaussian DAG models where sparsity is induced on the Cholesky factor L of the inverse covariance matrix. In recent work ([Cao, Khare and Ghosh, 2019]), we established high-dimensional sparsity selection consistency for a hierarchical Bayesian DAG model, where an Erdős-Rényi prior is placed on the sparsity pattern in the Cholesky factor L, and a DAG-Wishart prior is placed on the resulting non-zero Cholesky entries. In this paper we significantly improve and extend this work by (a) considering more diverse and effective priors on the sparsity pattern in L, namely the beta-mixture prior and the multiplicative prior, and (b) establishing sparsity selection consistency under significantly relaxed conditions on p and the sparsity pattern of the true model. We demonstrate the validity of our theoretical results via numerical simulations, and use further simulations to demonstrate that our sparsity selection approach is competitive in various settings with existing state-of-the-art methods, both frequentist and Bayesian.
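
To make the parametrization in the abstract concrete, the sketch below illustrates how an Erdős-Rényi-style sparsity pattern on L induces a Gaussian DAG model. It assumes the common modified-Cholesky convention Omega = L D^{-1} L^T with L unit lower triangular and D diagonal, which the abstract does not state explicitly; all variable names and numerical values are illustrative and are not taken from the paper.

    # Minimal sketch (not the authors' implementation): sparse Cholesky
    # parametrization of a Gaussian DAG model, assuming Omega = L D^{-1} L^T.
    import numpy as np

    rng = np.random.default_rng(0)
    p, edge_prob = 6, 0.3   # dimension and edge-inclusion probability (illustrative)

    # Sparsity pattern of L: each strictly lower-triangular entry is a potential
    # DAG edge, included independently with probability edge_prob.
    pattern = np.tril(rng.random((p, p)) < edge_prob, k=-1)

    # Non-zero Cholesky entries; in the paper these would receive a DAG-Wishart
    # prior, here they are filled with Gaussian draws purely for illustration.
    L = np.eye(p) + pattern * rng.normal(size=(p, p))
    D = np.diag(rng.uniform(0.5, 2.0, size=p))   # conditional variances

    Omega = L @ np.linalg.inv(D) @ L.T           # sparse inverse covariance
    Sigma = np.linalg.inv(Omega)                 # implied covariance matrix

    # Simulate n observations from the resulting Gaussian DAG model.
    n = 100
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    print(np.round(Omega, 2))

In the hierarchical Bayesian formulation described in the abstract, the binary pattern itself would carry the beta-mixture or multiplicative prior and the non-zero entries of L and D a DAG-Wishart prior; the snippet only shows the deterministic map from (L, D) to Omega.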

Summary

We haven't generated a summary for this paper yet.

Follow-up Questions

We haven't generated follow-up questions for this paper yet.