
Risk quantification for the thresholding rule for multiple testing using Gaussian scale mixtures (1711.08705v1)

Published 23 Nov 2017 in math.ST and stat.TH

Abstract: In this paper we study the asymptotic properties of Bayesian multiple testing procedures for a large class of Gaussian scale mixture priors. We study two types of multiple testing risks: a Bayesian risk proposed in Bogdan et al. (2011), where the data are assumed to come from a mixture of normals, and a frequentist risk similar to the one proposed by Arias-Castro and Chen (2017). Following the work of van der Pas et al. (2016), we give general conditions on the prior such that both risks can be bounded. For the Bayesian risk, the bound is almost sharp. This result shows that, under these conditions, the considered class of continuous priors can be competitive with the usual two-group model (e.g. spike and slab priors). We also show that if the non-zero components of the parameter are large enough, the minimax risk can be made asymptotically null. The separation rates obtained are consistent with those suggested by the existing literature (see van der Pas et al., 2017b). For both problems, we then give conditions under which an adaptive version of the result can be obtained.
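For orientation, here is a brief sketch of the setup these risks typically refer to in this literature; the notation is ours, assumes the standard sparse normal means model, and may differ from the exact definitions used in the paper. Observations are modeled as

$$
X_i = \theta_i + \varepsilon_i, \qquad \varepsilon_i \overset{iid}{\sim} \mathcal{N}(0,1), \qquad i = 1,\dots,n,
$$

with a Gaussian scale mixture prior $\theta_i \mid \sigma_i^2 \sim \mathcal{N}(0,\sigma_i^2)$, $\sigma_i^2 \sim \pi$. A thresholding rule of the kind studied in van der Pas et al. (2017b) declares a discovery when the posterior shrinks the observation only mildly, for instance

$$
\text{reject } H_{0,i}: \theta_i = 0 \quad \text{when} \quad \frac{\lvert \mathbb{E}[\theta_i \mid X_i] \rvert}{\lvert X_i \rvert} > \tfrac{1}{2},
$$

and a misclassification-type multiple testing risk, in the spirit of Bogdan et al. (2011), counts expected false positives plus false negatives:

$$
\mathcal{R} = \sum_{i=1}^{n} \Big( \Pr\big(\text{reject } H_{0,i},\, \theta_i = 0\big) + \Pr\big(\text{accept } H_{0,i},\, \theta_i \neq 0\big) \Big).
$$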
