Scalable Bayesian shrinkage and uncertainty quantification for high-dimensional regression (1509.03697v2)

Published 12 Sep 2015 in stat.ME

Abstract: Bayesian shrinkage methods have attracted considerable recent interest as tools for high-dimensional regression and model selection. These methods naturally facilitate tractable uncertainty quantification and the incorporation of prior information, which has led to their extensive use across diverse applications. A common feature of these models is that the corresponding priors on the regression coefficients can be expressed as scale mixtures of normals. While the three-step Gibbs sampler used to sample from the often intractable associated posterior density has been shown to be geometrically ergodic for several of these models, it has recently been demonstrated that convergence of this sampler can still be quite slow in modern high-dimensional settings, despite this apparent theoretical safeguard. We propose a new method to draw from the same posterior via a tractable two-step blocked Gibbs sampler. We demonstrate that the proposed two-step blocked sampler exhibits vastly superior convergence behavior compared to the original three-step sampler in high-dimensional regimes, on both real and simulated data. We also provide a detailed theoretical underpinning for the new method in the context of the Bayesian lasso. First, we prove that the proposed two-step sampler is geometrically ergodic and derive explicit upper bounds on its (geometric) rate of convergence. Furthermore, we demonstrate theoretically that while the original Bayesian lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and hence Hilbert-Schmidt). The trace class property implies that the corresponding Markov operator is compact and its (countably many) eigenvalues are summable. It also facilitates a rigorous comparison of the two-step blocked chain with "sandwich" algorithms, which aim to improve the performance of the two-step chain by inserting an inexpensive extra step.
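
To make the blocking idea concrete, below is a minimal sketch of a two-step blocked Gibbs sampler for the Bayesian lasso, using the standard conditional distributions from Park and Casella (2008): step one draws (sigma^2, beta) jointly given the local scales tau^2 (first sigma^2 with beta marginalized out, then beta), and step two updates tau^2 given (beta, sigma^2). The improper prior pi(sigma^2) proportional to 1/sigma^2, the fixed penalty lambda, and the assumption that y is centered are illustrative choices for this sketch, not details taken from the paper; treat it as an assumption-laden illustration of the technique rather than the authors' reference implementation.

```python
import numpy as np

def blocked_gibbs_bayesian_lasso(y, X, lam=1.0, n_iter=2000, seed=0):
    """Two-step blocked Gibbs sampler for the Bayesian lasso (sketch).

    Assumes y is centered and uses the improper prior
    pi(sigma^2) proportional to 1/sigma^2; lam is a fixed penalty.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    tau2 = np.ones(p)                     # local shrinkage scales tau_j^2
    beta_draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # --- Step 1: draw (sigma^2, beta) jointly given tau^2 ---
        A = XtX + np.diag(1.0 / tau2)
        A_inv = np.linalg.inv(A)
        # sigma^2 | tau^2, y (beta marginalized out) is inverse-gamma;
        # by Woodbury, y'(I + X D X')^{-1} y = y'y - y'X A^{-1} X'y.
        resid_ss = y @ y - Xty @ A_inv @ Xty
        sigma2 = 1.0 / rng.gamma(shape=(n - 1) / 2.0, scale=2.0 / resid_ss)
        # beta | sigma^2, tau^2, y ~ N(A^{-1} X'y, sigma^2 A^{-1})
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # --- Step 2: draw tau^2 | beta, sigma^2 ---
        # 1/tau_j^2 is inverse-Gaussian (numpy's "wald" distribution).
        mu = np.sqrt(lam**2 * sigma2 / beta**2)
        tau2 = 1.0 / rng.wald(mu, lam**2)
        beta_draws[t] = beta
    return beta_draws
```

The only change relative to the original three-step sampler is that sigma^2 is drawn with beta integrated out rather than conditional on it; this blocking removes the coupling between beta and sigma^2 that the abstract identifies as the source of slow convergence in high dimensions.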
