Convergence analysis of data augmentation algorithms in Bayesian lasso models with log-concave likelihoods (2512.20041v1)
Abstract: We study the convergence properties of a class of data augmentation algorithms targeting posterior distributions of Bayesian lasso models with log-concave likelihoods. Leveraging isoperimetric inequalities, we derive a generic convergence bound for this class of algorithms and apply it to Bayesian probit, logistic, and heteroskedastic Gaussian linear lasso models. Under feasible initializations, the mixing times for the probit and logistic models are of order $O[(p+n)^3 (p n^{1-c} + n)]$, up to logarithmic factors, where $n$ is the sample size, $p$ is the dimension of the regression coefficients, and $c \in [0,1]$ is determined by the lasso penalty parameter. The mixing time for the heteroskedastic Gaussian model is $O[n(n+p)^3 (p n^{1-c} + n)]$, up to logarithmic factors.
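The "data augmentation" algorithms studied here are Gibbs-type samplers that alternate between drawing latent variables given the parameters and drawing the parameters given the latents. A minimal sketch for the probit case, using the classic Albert–Chib augmentation, is below. Note one deliberate simplification: a Gaussian prior on the coefficients stands in for the Laplace (lasso) prior analyzed in the paper, which in a full Bayesian lasso sampler would itself be handled by a further scale-mixture augmentation step; the function name and prior variance `tau2` are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_sampler(X, y, n_iter=500, tau2=10.0, seed=0):
    """Albert-Chib data augmentation sampler for a Bayesian probit model.

    Simplification: a Gaussian N(0, tau2 * I) prior on beta replaces the
    Laplace (lasso) prior from the paper, so the beta update is a single
    multivariate-normal draw rather than a further augmented step.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    samples = np.empty((n_iter, p))
    prior_prec = np.eye(p) / tau2
    for t in range(n_iter):
        # Step 1: latent z_i | beta, y_i is N(x_i' beta, 1) truncated to
        # (0, inf) when y_i = 1 and to (-inf, 0) when y_i = 0.
        m = X @ beta
        lo = np.where(y == 1, -m, -np.inf)  # standardized lower bounds
        hi = np.where(y == 1, np.inf, -m)   # standardized upper bounds
        z = m + truncnorm.rvs(lo, hi, random_state=rng)
        # Step 2: beta | z is Gaussian with precision X'X + prior precision.
        prec = X.T @ X + prior_prec
        cov = np.linalg.inv(prec)
        mean = cov @ (X.T @ z)
        beta = rng.multivariate_normal(mean, cov)
        samples[t] = beta
    return samples
```

The mixing-time bounds in the abstract quantify how many such alternating iterations are needed (as a function of $n$, $p$, and the penalty parameter through $c$) before the chain's distribution is close to the target posterior.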