
GATSBI: Generative Adversarial Training for Simulation-Based Inference (2203.06481v1)

Published 12 Mar 2022 in stat.ML, cs.LG, and stat.ME

Abstract: Simulation-based inference (SBI) refers to statistical inference on stochastic models for which we can generate samples, but not compute likelihoods. Like SBI algorithms, generative adversarial networks (GANs) do not require explicit likelihoods. We study the relationship between SBI and GANs, and introduce GATSBI, an adversarial approach to SBI. GATSBI reformulates the variational objective in an adversarial setting to learn implicit posterior distributions. Inference with GATSBI is amortised across observations, works in high-dimensional posterior spaces and supports implicit priors. We evaluate GATSBI on two SBI benchmark problems and on two high-dimensional simulators. On a model for wave propagation on the surface of a shallow water body, we show that GATSBI can return well-calibrated posterior estimates even in high dimensions. On a model of camera optics, it infers a high-dimensional posterior given an implicit prior, and performs better than a state-of-the-art SBI approach. We also show how GATSBI can be extended to perform sequential posterior estimation to focus on individual observations. Overall, GATSBI opens up opportunities for leveraging advances in GANs to perform Bayesian inference on high-dimensional simulation-based models.
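The core idea — pairing a generator that proposes posterior samples with a discriminator that compares them against prior-simulator pairs — can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the Gaussian simulator, the linear "generator" and logistic "discriminator" with fixed placeholder weights, and the parameter names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): prior theta ~ N(0, 1),
# simulator x ~ N(theta, 1). Likelihood is known here, but the
# adversarial objective below never uses it.
def sample_prior(n):
    return rng.normal(size=(n, 1))

def simulate(theta):
    return theta + rng.normal(size=theta.shape)

# Hypothetical amortised generator: maps (x, noise z) to a posterior
# sample of theta. In GATSBI this is a neural network; here a fixed
# linear form stands in.
def generator(x, z, w=0.5, b=0.7):
    return w * x + b * z

# Hypothetical discriminator: logistic score on (theta, x) pairs,
# trained (in the real method) to tell joint prior-simulator samples
# from generator samples. Weights here are placeholders.
def discriminator(theta, x, a=1.0, c=-1.0):
    logits = a * theta * x + c * theta**2
    return 1.0 / (1.0 + np.exp(-logits))

# One evaluation of the adversarial cross-entropy objective:
# "real" pairs (theta, x) come from prior + simulator; "fake" pairs
# reuse the same x but replace theta with a generator sample.
n = 256
theta_real = sample_prior(n)
x = simulate(theta_real)
z = rng.normal(size=(n, 1))
theta_fake = generator(x, z)

d_real = discriminator(theta_real, x)
d_fake = discriminator(theta_fake, x)
loss_d = -np.mean(np.log(d_real + 1e-12) + np.log(1.0 - d_fake + 1e-12))
loss_g = -np.mean(np.log(d_fake + 1e-12))  # non-saturating generator loss
print(f"discriminator loss: {loss_d:.3f}, generator loss: {loss_g:.3f}")
```

Training would alternate gradient steps on these two losses; because the generator conditions on `x`, the learned posterior is amortised across observations, as the abstract describes.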

Authors (7)
  1. Poornima Ramesh (1 paper)
  2. Jan-Matthis Lueckmann (8 papers)
  3. Jan Boelts (5 papers)
  4. David S. Greenberg (9 papers)
  5. Pedro J. Gonçalves (7 papers)
  6. Jakob H. Macke (39 papers)
  7. Álvaro Tejero-Cantero (5 papers)
Citations (29)
