Maximum Likelihood Learning of Unnormalized Models for Simulation-Based Inference (2210.14756v2)

Published 26 Oct 2022 in cs.LG and stat.ML

Abstract: We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available. Both methods learn a conditional energy-based model (EBM) of the likelihood using synthetic data generated by the simulator, conditioned on parameters drawn from a proposal distribution. The learned likelihood can then be combined with any prior to obtain a posterior estimate, from which samples can be drawn using MCMC. Our methods uniquely combine a flexible EBM with the minimization of a KL loss: this is in contrast to other synthetic likelihood methods, which either rely on normalizing flows or minimize score-based objectives, choices that come with known pitfalls. We demonstrate the properties of both methods on a range of synthetic datasets, and apply them to a neuroscience model of the pyloric network in the crab, where our method outperforms prior art at a fraction of the simulation budget.
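
The abstract compresses a full pipeline: draw parameters from a proposal, run the simulator on them, fit a conditional EBM of the likelihood by maximum likelihood, then combine the learned likelihood with a prior for posterior inference. The sketch below is a minimal illustration of that recipe, not the paper's algorithm: the Gaussian toy simulator, network sizes, and the Normal(0, 2) distribution serving as both proposal and prior are illustrative assumptions, and the 1-D setting lets the normalizer Z(theta) be computed by explicit grid integration, which is only feasible in toy problems; the paper's KL-based objectives target settings where such explicit normalization is out of reach.

```python
# Minimal sketch of the pipeline described in the abstract (not the paper's
# method): fit an unnormalized conditional model exp(-E_phi(x, theta)) of a
# toy simulator's likelihood by maximum likelihood, then combine it with a
# prior. All modeling choices here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulator(theta):
    # Toy stand-in for a high-fidelity simulator: x ~ N(theta, 0.5^2).
    return theta + 0.5 * torch.randn_like(theta)

# Conditional energy network E_phi(x, theta): input (x, theta), scalar energy out.
energy = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

x_grid = torch.linspace(-6.0, 6.0, 241)  # 1-D integration grid for the normalizer
dx = x_grid[1] - x_grid[0]

def log_Z(theta):
    # log Z_phi(theta) = log \int exp(-E_phi(x, theta)) dx, via a Riemann sum.
    B, G = theta.shape[0], x_grid.shape[0]
    xs = x_grid.view(1, G, 1).expand(B, G, 1)
    ts = theta.view(B, 1, 1).expand(B, G, 1)
    E = energy(torch.cat([xs, ts], dim=-1)).squeeze(-1)  # shape (B, G)
    return torch.logsumexp(-E, dim=1) + torch.log(dx)

prior = torch.distributions.Normal(0.0, 2.0)  # doubles as the proposal here
opt = torch.optim.Adam(energy.parameters(), lr=1e-3)

for step in range(2000):
    theta = prior.sample((256, 1))   # parameters from the proposal
    x = simulator(theta)             # synthetic data from the simulator
    E = energy(torch.cat([x, theta], dim=-1)).squeeze(-1)
    loss = (E + log_Z(theta)).mean()  # negative log-likelihood of the EBM
    opt.zero_grad()
    loss.backward()
    opt.step()

# Combine the learned likelihood with the prior and normalize the posterior on
# a parameter grid (a stand-in for the MCMC step described in the abstract).
x_obs = torch.tensor([[1.5]])
t_grid = torch.linspace(-6.0, 6.0, 241).view(-1, 1)
with torch.no_grad():
    E = energy(torch.cat([x_obs.expand(t_grid.shape[0], 1), t_grid], dim=-1)).squeeze(-1)
    log_post = prior.log_prob(t_grid.squeeze(-1)) - E - log_Z(t_grid)
    log_post -= torch.logsumexp(log_post, dim=0)
print("approximate posterior mode:", t_grid[log_post.argmax()].item())
```

In higher dimensions the grid normalization and grid posterior above would be replaced by the KL-based training and MCMC sampling the abstract describes.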

Authors (5)
  1. Pierre Glaser (4 papers)
  2. Michael Arbel (29 papers)
  3. Samo Hromadka (4 papers)
  4. Arnaud Doucet (161 papers)
  5. Arthur Gretton (127 papers)
Citations (1)
