Pooling information in likelihood-free inference
Abstract: Likelihood-free inference (LFI) methods, such as approximate Bayesian computation, have become commonplace for conducting inference in complex models. Many approaches are based on summary statistics or discrepancies derived from synthetic data. However, determining which summary statistics or discrepancies to use for constructing the posterior remains a challenging question, both practically and theoretically. Instead of relying on a single vector of summaries for inference, we propose a new pooled posterior that optimally combines inferences from multiple LFI posteriors. This pooled approach eliminates the need to select a single vector of summaries or even a specific LFI algorithm. Our approach is straightforward to implement and avoids performing a high-dimensional LFI analysis involving all summary statistics. We give theoretical guarantees for the improved performance of the pooled posterior mean in terms of asymptotic frequentist risk and demonstrate the effectiveness of the approach in a number of benchmark examples.
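The pooling idea can be illustrated with a toy rejection-ABC experiment. The sketch below is not the paper's method: the summary statistics (mean and median), the uniform prior, the tolerance, and in particular the precision-weighted combination of posterior means are all illustrative assumptions standing in for the paper's risk-optimal weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ Normal(theta, 1); theta_true generates the "observed" data.
theta_true = 2.0
n_obs = 100
y_obs = rng.normal(theta_true, 1.0, n_obs)

# Two candidate summary statistics (hypothetical choices for illustration).
summaries = [np.mean, np.median]

def abc_rejection(summary, n_sims=20000, eps=0.05):
    """Rejection ABC: keep prior draws whose simulated summary lies within eps of the observed one."""
    s_obs = summary(y_obs)
    thetas = rng.uniform(-5.0, 5.0, n_sims)              # flat prior on theta
    sims = rng.normal(thetas[:, None], 1.0, (n_sims, n_obs))
    s_sim = np.array([summary(row) for row in sims])
    return thetas[np.abs(s_sim - s_obs) < eps]

# One approximate (LFI) posterior sample per summary statistic.
posteriors = [abc_rejection(s) for s in summaries]
means = np.array([p.mean() for p in posteriors])
vars_ = np.array([p.var() for p in posteriors])

# Illustrative pooling rule: precision-weighted average of the posterior means.
# (A stand-in for the asymptotically optimal weights derived in the paper.)
w = (1.0 / vars_) / np.sum(1.0 / vars_)
pooled_mean = float(w @ means)
print(pooled_mean)
```

With a small tolerance, each single-summary posterior concentrates near the true value, and the pooled point estimate combines them without requiring a high-dimensional ABC run on the stacked summary vector.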