Mixture Data-Dependent Priors (1708.00099v1)
Abstract: We propose a two-component mixture of a noninformative (diffuse) and an informative prior distribution, weighted through the data in such a way as to favor the first component when a prior-data conflict arises. The data-driven approach to computing the mixture weights makes this class data-dependent. Although rarely given a theoretical motivation, data-dependent priors are often used in practice, and their use has been widely debated over recent decades. Our approach, however, is justified in terms of Bayesian inference, both as an approximation of a hierarchical model and as conditioning on a data statistic. This class of priors turns out to provide less information than the informative prior alone, so it may be a suitable option for avoiding domination of the inference in the presence of small samples. Initial evidence from simulation studies shows that this class may also be a good proposal for reducing mean squared errors.
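As a rough illustration of the mechanism the abstract describes, here is a minimal Python sketch for a normal mean with known variance. The data-driven weight used here, based on the prior predictive density of the sample mean under each component, is one standard choice with the stated behavior (the diffuse component takes over under prior-data conflict); it is an assumption for illustration, not necessarily the exact weight construction of the paper, and all function names and parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mixture_posterior(y, sigma, mu_info, tau_info, mu_diff, tau_diff):
    """Posterior under a two-component mixture prior for a normal mean.

    Model: y_i | mu ~ N(mu, sigma^2), sigma known.
    Prior: informative N(mu_info, tau_info^2) mixed with a
    diffuse N(mu_diff, tau_diff^2), tau_diff large.
    Illustrative weight rule (an assumption, not the paper's exact one):
    each component is weighted by the prior predictive density of the
    sample mean, so the diffuse component dominates when the data
    conflict with the informative prior.
    """
    n = len(y)
    ybar = np.mean(y)
    se2 = sigma**2 / n  # sampling variance of the sample mean

    # Prior predictive density of ybar under each component:
    # ybar ~ N(mu_k, tau_k^2 + sigma^2 / n).
    m_info = norm.pdf(ybar, mu_info, np.sqrt(tau_info**2 + se2))
    m_diff = norm.pdf(ybar, mu_diff, np.sqrt(tau_diff**2 + se2))

    # Data-dependent weight on the informative component.
    w = m_info / (m_info + m_diff)

    # Conjugate posterior for each component (precision weighting).
    def component_posterior(mu0, tau0):
        post_var = 1.0 / (1.0 / tau0**2 + 1.0 / se2)
        post_mean = post_var * (mu0 / tau0**2 + ybar / se2)
        return post_mean, post_var

    mean_i, _ = component_posterior(mu_info, tau_info)
    mean_d, _ = component_posterior(mu_diff, tau_diff)

    # Mixture posterior mean combines the two component posteriors.
    post_mean = w * mean_i + (1 - w) * mean_d
    return w, post_mean

# Example: data far from the informative prior mean, so the diffuse
# component should receive nearly all the weight.
rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=1.0, size=10)
w, post_mean = mixture_posterior(y, sigma=1.0,
                                 mu_info=0.0, tau_info=0.5,
                                 mu_diff=0.0, tau_diff=100.0)
print(f"weight on informative prior: {w:.3f}, posterior mean: {post_mean:.3f}")
```

With data centered near 5 and an informative prior tightly concentrated at 0, the weight on the informative component collapses toward zero and the mixture posterior mean tracks the sample mean, which is the conflict-avoiding behavior the abstract describes.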