Calibrating generalized predictive distributions (2107.01688v1)
Abstract: In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated -- that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine if the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation where a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
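The abstract only describes the idea at a high level: raise the likelihood in the generalized Bayes posterior to a power (the learning rate) and tune that power until the predictive distribution's upper quantiles attain nominal frequentist coverage. The sketch below is a minimal, hypothetical illustration of that tuning loop, not the paper's actual GPrC algorithm. It assumes a normal working model with known variance and a flat prior (so the generalized posterior and predictive have closed forms), uses a bootstrap resample paired with an empirical "next observation" as a rough coverage proxy, and applies a multiplicative stochastic-approximation update to the learning rate; all function names and step sizes are illustrative choices, not from the paper.

```python
import numpy as np
from scipy import stats

def predictive_upper_limit(x, eta, alpha, sigma=1.0):
    """(1 - alpha) upper prediction limit from a generalized Bayes
    predictive under a normal working model with known sigma and a flat
    prior: theta | x ~ N(xbar, sigma^2 / (eta * n)), so the predictive
    for a new observation is N(xbar, sigma^2 * (1 + 1 / (eta * n)))."""
    n = len(x)
    scale = sigma * np.sqrt(1.0 + 1.0 / (eta * n))
    return np.mean(x) + stats.norm.ppf(1 - alpha) * scale

def bootstrap_coverage(x, eta, alpha, B=500, rng=None):
    """Rough bootstrap estimate of the frequentist coverage of the upper
    limit: recompute the limit on each bootstrap resample and check it
    against a point drawn from the empirical distribution, standing in
    for the next observation."""
    rng = np.random.default_rng(rng)
    n = len(x)
    hits = 0
    for _ in range(B):
        xb = rng.choice(x, size=n, replace=True)  # bootstrap resample
        x_new = rng.choice(x)                     # proxy for a future point
        hits += x_new <= predictive_upper_limit(xb, eta, alpha)
    return hits / B

def tune_learning_rate(x, alpha=0.1, eta0=1.0, steps=30, B=500, seed=0):
    """Stochastic-approximation search for a learning rate eta whose
    estimated predictive coverage matches the nominal level 1 - alpha."""
    eta = eta0
    for t in range(1, steps + 1):
        cov = bootstrap_coverage(x, eta, alpha, B=B, rng=seed + t)
        # Coverage below nominal means the predictive is too narrow, so
        # shrink eta (widening the predictive); enlarge eta otherwise.
        # The 1/sqrt(t) factor damps the steps as the search proceeds.
        eta *= np.exp((cov - (1 - alpha)) / np.sqrt(t))
    return eta

# Example usage on data whose spread the working model understates.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    data = rng.standard_t(df=3, size=100)  # heavier-tailed than the model
    eta_hat = tune_learning_rate(data, alpha=0.1)
    print(f"tuned learning rate: {eta_hat:.3f}")
    print(f"90% upper limit: {predictive_upper_limit(data, eta_hat, 0.1):.3f}")
```

In this toy setup the tuned learning rate typically lands below one, which widens the predictive distribution to compensate for the misspecified (too light-tailed) working model; the paper's GPrC algorithm formalizes and generalizes this kind of data-driven adjustment.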