
Efficient State-space Exploration in Massively Parallel Simulation Based Inference (2106.15508v1)

Published 29 Jun 2021 in cs.DC

Abstract: Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in HPC clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new algorithm that builds on an existing SBI method, Approximate Bayesian Computation with Sequential Monte Carlo (ABC-SMC). The new algorithm is designed to utilize the parallelism not only for performance gains but also for qualitative benefits in the learnt parameters. The key idea is to replace the single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to explore the parameter state space more efficiently. We test the effectiveness of the proposed algorithm by learning parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we obtain similar-quality results in $\sim 100$x fewer simulations and observe $\sim 80$x lower run-to-run variance across 10 independent trials.
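
The step-size idea lends itself to a short sketch. The following is a minimal illustration, not the paper's implementation: it assumes a Gaussian perturbation kernel, and the names `alpha`, `beta`, and `scale` are hypothetical stand-ins for the tuned Beta parameters. It shows how per-particle step sizes drawn from a Beta distribution would replace a single fixed step size in an ABC-SMC perturbation step.

```python
import numpy as np

def perturb_particles(particles, alpha, beta, scale=1.0, rng=None):
    """Perturb ABC-SMC particles using per-particle step sizes drawn
    from a tuned Beta(alpha, beta) distribution, instead of one fixed
    'step-size' hyperparameter shared by all particles.

    NOTE: illustrative sketch only; the paper's exact kernel and
    Beta-tuning procedure may differ.
    """
    rng = rng or np.random.default_rng()
    n, d = particles.shape
    # One step size per particle: small draws exploit locally, while
    # larger draws explore the parameter state space more broadly.
    step_sizes = scale * rng.beta(alpha, beta, size=(n, 1))
    # Assumed Gaussian perturbation kernel, scaled per particle.
    return particles + step_sizes * rng.standard_normal((n, d))
```

Because every particle draws its own step size, a single SMC generation mixes local refinement with broader jumps, which is one way the sampled-step-size scheme can use massive parallelism for exploration quality rather than throughput alone.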

Authors (2)
  1. Sourabh Kulkarni (3 papers)
  2. Csaba Andras Moritz (9 papers)
