
Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale (1907.03382v2)

Published 8 Jul 2019 in cs.LG, cs.PF, and stat.ML

Abstract: Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticability of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN--LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k: achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use-case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.

Citations (53)

Summary

  • The paper introduces Etalumis, a novel framework that integrates probabilistic programming with scientific simulators to enable scalable Bayesian inference without rewriting the simulators.
  • It employs a probabilistic execution protocol (PPX) and combines MCMC with an LSTM-based inference compilation approach to handle complex models with thousands of latent variables.
  • Experimental results on particle physics simulations demonstrate its capability to achieve 450 Tflop/s on 1,024 supercomputer nodes, setting a new benchmark for scalability in PPLs.

Probabilistic Programming at Scale with Etalumis

The paper, Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale, presents a significant effort to bring the machinery of probabilistic programming languages (PPLs) to existing, complex scientific simulators. It identifies three obstacles that have limited probabilistic programming in science: the impracticality of rewriting complex simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. The authors introduce Etalumis, a novel probabilistic programming framework that overcomes these impediments by interfacing directly with existing scientific simulators and leveraging HPC resources for scalable inference.

Overview

Etalumis employs a probabilistic execution protocol (PPX), a cross-platform API that gives an inference engine probabilistic control over a stochastic simulator. Through PPX, existing simulators can be executed without alteration, enabling them to function as probabilistic models under external inference engines. The framework provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines that address the computational challenges of Bayesian inference in complex models.
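The key inversion in this design can be sketched in-process: the simulator routes every random draw and every likelihood contribution through a controller owned by the inference engine, which records an execution trace. This is a minimal illustrative sketch, not the actual PPX protocol (which is a cross-platform messaging interface); the `InferenceController` class, its method names, and the toy likelihood term are all hypothetical.

```python
import random

class InferenceController:
    """Toy stand-in for a PPX-style controller: the inference engine, not the
    simulator, records (and could override) every random draw the simulator makes."""
    def __init__(self):
        self.trace = []        # (address, value) pairs, one per latent draw
        self.log_weight = 0.0  # accumulated observation log-likelihood

    def sample(self, address, sampler):
        # Draw from the prior here; a real engine could substitute a proposal draw.
        value = sampler()
        self.trace.append((address, value))
        return value

    def observe(self, address, log_likelihood):
        # The simulator reports how well the current trace explains the data.
        self.log_weight += log_likelihood

def simulator(ctl):
    # An existing simulator needs only its stochastic calls routed through ctl.
    x = ctl.sample("x", lambda: random.gauss(0.0, 1.0))
    ctl.observe("y", -0.5 * (x - 0.8) ** 2)  # stand-in likelihood term
    return x

ctl = InferenceController()
simulator(ctl)
print(len(ctl.trace))  # 1 latent draw recorded in the execution trace
```

Because the controller owns every draw, the same unmodified simulator can serve prior sampling, MCMC, or IC, depending on what `sample` returns.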

Training dynamic neural networks is pivotal to Etalumis's IC approach: a dynamic 3DCNN--LSTM architecture is trained to propose distributions over the simulator's latent variables conditioned on observed data, enabling fast amortized inference. Once trained, the network makes repeated inference on new observations tractable.
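The amortization idea can be illustrated without a neural network: build a map from observations to proposal distributions once, reuse it for every new observation, and correct any residual mismatch with importance weights. The sketch below is a hypothetical stand-in, not the paper's architecture; for a conjugate toy model (x ~ N(0,1), y | x ~ N(x, 0.5)) the analytically known posterior plays the role of the "compiled" proposal.

```python
import math
import random

def proposal_params(y):
    # Stand-in for a trained proposal network: observation in, Gaussian params out.
    var = 1.0 / (1.0 + 1.0 / 0.25)   # posterior variance: prior var 1, likelihood var 0.25
    return var * y / 0.25, math.sqrt(var)

def log_gauss(v, mu, sd):
    # Log density up to an additive constant (constants cancel in the weights)
    return -0.5 * ((v - mu) / sd) ** 2 - math.log(sd)

def posterior_mean_is(y, n=10000):
    # Self-normalized importance sampling using the amortized proposal
    mu_q, sd_q = proposal_params(y)
    total_w = total_wx = 0.0
    for _ in range(n):
        x = random.gauss(mu_q, sd_q)
        logw = (log_gauss(x, 0.0, 1.0)      # prior
                + log_gauss(y, x, 0.5)       # likelihood
                - log_gauss(x, mu_q, sd_q))  # proposal
        w = math.exp(logw)
        total_w += w
        total_wx += w * x
    return total_wx / total_w

random.seed(0)
est = posterior_mean_is(1.0)   # true posterior mean for y = 1.0 is 0.8
```

The better the proposal matches the posterior, the flatter the importance weights, which is exactly what IC training optimizes for.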

Experimental Results

Etalumis demonstrates its capabilities with a practical application in particle physics, using the C++ Sherpa simulator to model particle interactions at the Large Hadron Collider (LHC). The authors report the largest-scale posterior inference in a Turing-complete PPL, training the dynamic 3DCNN--LSTM architecture across 1,024 32-core CPU nodes of the Cori supercomputer at Lawrence Berkeley National Laboratory with a global minibatch size of 128k, attaining 450 Tflop/s.

Significant optimization work, including enhancements to PyTorch and its MPI-based communication, enabled the large-scale distributed training needed to handle the Sherpa model's complexity, which involves approximately 25,000 latent variables.
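The synchronous data-parallel pattern underlying such MPI-based training can be shown in miniature: each "node" computes a gradient on its own data shard, the gradients are averaged across nodes (the role MPI_Allreduce plays), and every node applies the identical update, keeping replicas in sync. This is a single-process toy with hypothetical function names, not the paper's PyTorch-MPI implementation.

```python
def grad(w, shard):
    # Gradient of mean squared error 0.5*(w*x - y)^2 over one node's shard
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for an MPI allreduce (sum) followed by division by world size
    return sum(values) / len(values)

def train(shards, w=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        local_grads = [grad(w, s) for s in shards]  # one gradient per node
        g = allreduce_mean(local_grads)             # synchronize gradients
        w -= lr * g                                 # identical update everywhere
    return w

# Data generated from y = 2x, split evenly across 4 "nodes"
data = [(x / 10, 2 * x / 10) for x in range(1, 41)]
shards = [data[i::4] for i in range(4)]
w = train(shards)
print(round(w, 3))  # converges to 2.0
```

With equal-sized shards, averaging per-node gradients reproduces the global-batch gradient exactly, which is why scaling out nodes lets the global minibatch grow without changing the optimization trajectory.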

Implications

The implications of this research are profound for domains where stochastic simulator-based models are prevalent. By enabling Bayesian inference in models defined by existing complex simulators, Etalumis opens the door to better interpretability and faster inference across various scientific domains, including those using sophisticated models in cosmology, climate science, and epidemiology.

Speculative Outlook

Looking forward, the framework offers promising prospects for deploying PPLs in high-performance, high-stakes scientific computation. Integrating techniques like inference compilation into probabilistic programming within HPC environments could enable new approaches to real-time, large-scale data interpretation across scientific exploration.

Further developments could explore broader applications across different simulator environments, refining the interface protocol to encompass additional languages and simulators. There is also potential for pushing boundaries in machine learning scalability, aligning further HPC advancements with neural network optimizations to improve inference speed and accuracy while expanding Etalumis's applicability.

In conclusion, this paper represents an essential step in bridging machine learning frameworks and traditional scientific computational models, effectively enhancing the depth and breadth of probabilistic programming's applications in scientific research.
