Selection of inverse gamma and half-t priors for hierarchical models: sensitivity and recommendations

Published 26 Aug 2021 in stat.ME (arXiv:2108.12045v1)

Abstract: While the importance of prior selection is well understood, establishing guidelines for selecting priors in hierarchical models has remained an active, and sometimes contentious, area of Bayesian methodology research. Choices of hyperparameters for individual families of priors are often discussed in the literature, but rarely are different families of priors compared under similar models and hyperparameters. Using simulated data, we evaluate the performance of inverse-gamma and half-$t$ priors for estimating the standard deviation of random effects in three hierarchical models: the 8-schools model, a random intercepts longitudinal model, and a simple multiple outcomes model. We compare the performance of the two prior families using a range of prior hyperparameters, some of which have been suggested in the literature, and others that allow for a direct comparison of pairs of half-$t$ and inverse-gamma priors. Estimation of very small values of the random effect standard deviation led to convergence issues, especially for the half-$t$ priors. For most settings, we found that the posterior distribution of the standard deviation had smaller bias under half-$t$ priors than under their inverse-gamma counterparts. Inverse-gamma priors generally gave similar coverage but had smaller interval lengths than their half-$t$ prior counterparts. Our results for these two prior families will inform prior specification for hierarchical models, allowing practitioners to better align their priors with their respective models and goals.
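The two prior families the abstract compares can be made concrete with their densities on the random-effect standard deviation $\sigma$: the half-$t$ is placed directly on $\sigma$, while the inverse-gamma is conventionally placed on the variance $\sigma^2$, so a change of variables is needed to compare them on the same scale. The pure-Python sketch below implements both densities; the specific hyperparameters (`nu=3`, `A=1`, `alpha=1`, `beta=1`) are illustrative assumptions for this example, not settings taken from the paper.

```python
import math

def half_t_pdf(sigma, nu=3.0, A=1.0):
    """Half-t(nu, A) density on the standard deviation sigma >= 0.

    For nu=1 this reduces to the half-Cauchy density 2 / (pi * (1 + sigma^2)).
    """
    if sigma < 0:
        return 0.0
    c = 2.0 * math.gamma((nu + 1.0) / 2.0) / (
        math.gamma(nu / 2.0) * math.sqrt(nu * math.pi) * A)
    return c * (1.0 + sigma**2 / (nu * A**2)) ** (-(nu + 1.0) / 2.0)

def inv_gamma_sd_pdf(sigma, alpha=1.0, beta=1.0):
    """Density on sigma implied by an inverse-gamma(alpha, beta) prior
    on the variance v = sigma^2 (change of variables: Jacobian 2*sigma)."""
    if sigma <= 0:
        return 0.0
    v = sigma**2
    log_p = (alpha * math.log(beta) - math.lgamma(alpha)
             - (alpha + 1.0) * math.log(v) - beta / v)
    return math.exp(log_p) * 2.0 * sigma

def integrate(f, a, b, n=200000):
    """Plain trapezoid rule, adequate for checking normalization."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h
```

A quick sanity check is that both densities integrate to (approximately) 1 over a wide grid, which also shows how much heavier the half-$t$ tail is than the inverse-gamma's for comparable scales.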
