Marginal inferential models: prior-free probabilistic inference on interest parameters
Abstract: The inferential models (IM) framework provides prior-free, frequency-calibrated, posterior probabilistic inference. The key is the use of random sets to predict unobservable auxiliary variables connected to the observable data and unknown parameters. When nuisance parameters are present, a marginalization step can reduce the dimension of the auxiliary variable which, in turn, leads to more efficient inference. For regular problems, exact marginalization can be achieved, and we give conditions for marginal IM validity. We show that our approach provides exact and efficient marginal inference in several challenging problems, including a many-normal-means problem. In non-regular problems, we propose a generalized marginalization technique and prove its validity. Details are given for two benchmark examples, namely, the Behrens--Fisher and gamma mean problems.
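To make the abstract's core idea concrete — predicting an unobservable auxiliary variable with a random set and reading off a plausibility function — the snippet below sketches the basic single-parameter IM construction for a normal mean. This is an illustrative toy case, not the paper's marginal IM: the model X ~ N(theta, 1), the "default" predictive random set, and the names `plausibility`, `plausibility_interval`, and `x_obs` are assumptions chosen for the example.

```python
# Minimal sketch of a basic (single-parameter) IM for a normal mean.
# Assumed setup: X ~ N(theta, 1), association X = theta + Phi^{-1}(U),
# with U ~ Unif(0, 1) the unobservable auxiliary variable.
# The default predictive random set S = {u : |u - 1/2| <= |U' - 1/2|}
# gives the plausibility function pl_x(theta) = 1 - |2*Phi(x - theta) - 1|.

import numpy as np
from scipy.stats import norm


def plausibility(theta, x):
    """Plausibility of theta given observed x, under the default random set."""
    u = norm.cdf(x - theta)              # auxiliary-variable value implied by (x, theta)
    return 1.0 - np.abs(2.0 * u - 1.0)   # P{S contains u} for the default random set


def plausibility_interval(x, alpha=0.05):
    """100(1 - alpha)% plausibility region {theta : pl_x(theta) > alpha};
    in this toy case it coincides with the classical z-interval x +/- z_{1-alpha/2}."""
    z = norm.ppf(1.0 - alpha / 2.0)
    return x - z, x + z


if __name__ == "__main__":
    x_obs = 1.3                          # hypothetical single observation
    for t in np.linspace(x_obs - 3, x_obs + 3, 7):
        print(f"theta = {t:+.2f}   pl = {plausibility(t, x_obs):.3f}")
    print("95% plausibility interval:", plausibility_interval(x_obs))
```

In the nuisance-parameter settings the paper studies, the marginalization step would first reduce the dimension of the auxiliary variable before a predictive random set of this kind is introduced for the interest parameter alone.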