Asymptotic Minimaxity, Optimal Posterior Concentration and Asymptotic Bayes Optimality of Horseshoe-type Priors Under Sparsity (1510.01307v4)

Published 5 Oct 2015 in math.ST and stat.TH

Abstract: In this article, we investigate certain asymptotic optimality properties of a very broad class of one-group continuous shrinkage priors for simultaneous estimation and testing of a sparse normal mean vector. Asymptotic optimality of Bayes estimates and posterior concentration properties corresponding to this general class of one-group priors are studied when the data are assumed to be generated according to a multivariate normal distribution with a fixed unknown mean vector. Under the assumption that the number of non-zero means is known, we show that Bayes estimators arising out of this general class of shrinkage priors attain the minimax risk, up to some multiplicative constant, under the $l_2$ norm. In particular, it is shown that for horseshoe-type priors, such as the three-parameter beta normal mixtures with parameters $a=0.5$, $b>0$ and the generalized double Pareto prior with shape parameter $\alpha=1$, the corresponding Bayes estimates are asymptotically minimax. Moreover, posterior distributions arising out of this general class of one-group priors are shown to contract around the true mean vector at the minimax $l_2$ rate for a wide range of values of the global shrinkage parameter, depending on the proportion of non-zero components of the underlying mean vector. An important consequence of a key result used in proving the aforesaid minimaxity is that, within the asymptotic framework of Bogdan et al. (2011), the natural thresholding rules of Carvalho et al. (2010) based on horseshoe-type priors asymptotically attain the optimal Bayes risk with respect to a $0$-$1$ loss, up to the correct multiplicative constant, and are thus asymptotically Bayes optimal under sparsity (ABOS).
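To make the setup concrete, below is a minimal, self-contained sketch (not from the paper) of the sparse normal means model under the horseshoe prior, a representative member of the one-group class studied. The posterior shrinkage weight is computed by one-dimensional quadrature over the local scale, giving the Bayes estimate $E[\theta_i \mid y_i] = (1 - E[\kappa_i \mid y_i])\, y_i$, and the final lines apply a thresholding rule in the spirit of Carvalho et al. (2010). The dimension, signal size, quadrature grid, and the choice of global parameter `tau = q_n / n` are all illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse normal means model: y_i = theta_i + e_i, e_i ~ N(0, 1), most theta_i = 0.
n, q_n = 1000, 50                       # dimension and number of non-zero means
theta = np.zeros(n)
theta[:q_n] = 7.0                       # a few large signals
y = theta + rng.standard_normal(n)

# Global shrinkage parameter set to the proportion of signals (illustrative;
# the paper studies a range of choices tied to this proportion).
tau = q_n / n

# Horseshoe hierarchy: theta_i | lam_i ~ N(0, lam_i^2 tau^2), lam_i ~ C+(0, 1).
# With kappa(lam) = 1 / (1 + lam^2 tau^2), the Bayes estimate is
#   E[theta_i | y_i] = (1 - E[kappa | y_i]) * y_i.
lam = np.linspace(1e-4, 300.0, 30000)           # uniform grid for the local scale
kappa = 1.0 / (1.0 + (lam * tau) ** 2)
half_cauchy = 2.0 / (np.pi * (1.0 + lam ** 2))  # C+(0, 1) density

def posterior_shrinkage(y_i):
    # E[kappa | y_i] by Riemann sum: marginally y_i | lam ~ N(0, 1 / kappa(lam)),
    # and the common grid spacing cancels in the ratio.
    weight = np.sqrt(kappa) * np.exp(-0.5 * kappa * y_i ** 2) * half_cauchy
    return np.sum(kappa * weight) / np.sum(weight)

post_kappa = np.array([posterior_shrinkage(v) for v in y])
theta_hat = (1.0 - post_kappa) * y              # one-group Bayes estimate

# Thresholding rule in the spirit of Carvalho et al. (2010): flag theta_i as a
# signal when the posterior "non-shrinkage" weight 1 - E[kappa | y_i] exceeds 1/2.
signals = (1.0 - post_kappa) > 0.5
print("squared-error loss:", np.sum((theta_hat - theta) ** 2))
print("true positives:", int(signals[:q_n].sum()),
      "false positives:", int(signals[q_n:].sum()))
```

In this sketch the posterior shrinkage weight plays both roles discussed in the abstract: it yields the shrinkage estimate whose $l_2$ risk is analyzed, and its complement drives the 0-1 thresholding decision whose Bayes risk underlies the ABOS property.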
