GABI: Geometric Autoencoders for Bayesian Inversion

Updated 25 September 2025
  • The paper introduces a two-stage 'learn first, observe later' framework that uses geometry-aware autoencoders to capture latent representations of physical fields.
  • It achieves scalable Bayesian inversion by mapping high-dimensional problems to low-dimensional latent spaces with geometry-conditioned generative priors.
  • Applications to heat transfer, airfoil flows, and 3D Helmholtz resonance problems demonstrate competitive accuracy and robust uncertainty quantification.

Geometric Autoencoders for Bayesian Inversion (GABI) constitute a framework for uncertainty quantification and inference in high-dimensional inverse problems, specifically targeting scenarios with variable and complex geometries. The method leverages geometry-aware autoencoder architectures to parameterize the space of possible physical responses and then fuses this geometry-adapted prior with a likelihood informed by noisy, sparse observations to obtain Bayesian posterior estimates. GABI enacts a “learn first, observe later” paradigm, separating large-scale unsupervised model learning from downstream adaptation to specific inference tasks (Vadeboncoeur et al., 24 Sep 2025).

1. Framework Overview

GABI is formulated as a two-stage process exploiting geometric deep learning and generative modeling:

  • Learning Stage (“learn first”): A geometry-conditioned autoencoder is trained on a dataset of full-field responses, each associated with distinct system geometries, boundary conditions, and possibly different physical parameters. The encoder $E_n^\theta(u_n) = E^\theta(u_n; \mathcal{G}_n)$ maps a physical field $u_n$ (e.g., temperature, pressure, vibration amplitude) defined on the discretized geometry $\mathcal{G}_n$ to a latent vector $z_n \in \mathbb{R}^{d_z}$. The decoder $D_n^\psi(z) = D^\psi(z; \mathcal{G}_n)$ inverts this mapping, reconstructing the original field on the same geometry.
  • Inference Stage (“observe later”): Upon receiving a new geometry $\mathcal{G}_o$ and observations $y_o$ modeled as $y_o = H_o u_o + \xi_o$, with $u_o = D_o^\psi(z)$ and noise $\xi_o \sim \mathcal{N}(0, \sigma^2 I)$, the autoencoder’s latent prior is combined with the likelihood of the observed data in the Bayesian framework. The key result (Lemma 2.1) is that, under the pushforward prior $u \sim D^\psi_{o,\#}\, q_z$, the posterior for $u_o$ is the pushforward of the latent-space posterior: $p(u_o \mid y_o) = D^\psi_{o,\#}\, p(z \mid y_o)$.

This structure enables the training of a foundation model that is fully geometry-aware and independent of the specific observation process, supporting broad generalization across observation types and spatial domains (Vadeboncoeur et al., 24 Sep 2025).
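For intuition, consider the special case of a fixed linear decoder, for which the latent posterior is Gaussian in closed form and the pushforward relation of Lemma 2.1 can be computed directly. The minimal NumPy sketch below uses illustrative dimensions, a random stand-in decoder matrix, and a sparse sensor-selection observation operator, none of which are taken from the paper:

```python
# Toy illustration of the pushforward posterior (Lemma 2.1) when the decoder
# is linear, so p(z | y_o) is Gaussian in closed form. Dimensions, the decoder
# matrix D, and the observation operator H are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_z, d_u, d_y = 8, 200, 10            # latent, field, and observation dimensions
sigma = 0.05                          # observation noise standard deviation

D = rng.standard_normal((d_u, d_z))   # stand-in for a geometry-conditioned decoder
H = np.zeros((d_y, d_u))              # sparse "sensor" observation operator
H[np.arange(d_y), rng.choice(d_u, d_y, replace=False)] = 1.0

# Synthetic ground truth and noisy observations y = H D z + xi
z_true = rng.standard_normal(d_z)
y = H @ D @ z_true + sigma * rng.standard_normal(d_y)

# Conjugate Gaussian latent posterior: prior q_z = N(0, I), likelihood N(H D z, sigma^2 I)
A = H @ D
cov_z = np.linalg.inv(np.eye(d_z) + A.T @ A / sigma**2)
mean_z = cov_z @ A.T @ y / sigma**2

# Pushforward: decode latent posterior samples into posterior field samples
z_samples = rng.multivariate_normal(mean_z, cov_z, size=2000)
u_samples = z_samples @ D.T           # each row is one posterior field draw

print("posterior field mean at first 5 nodes:", u_samples.mean(axis=0)[:5])
print("posterior field std  at first 5 nodes:", u_samples.std(axis=0)[:5])
```

In the full framework the decoder is nonlinear and geometry-conditioned, so the latent posterior is no longer Gaussian and must be sampled (Sections 3 and 4), but the same decode-the-samples logic applies.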

2. Geometry-Conditioned Generative Priors

The central technical innovation in GABI is the use of autoencoders whose encoder and decoder architectures explicitly respect and operate on variable mesh, graph, or discretization data structures representing the underlying physical geometries. The encoder and decoder are parameterized to accept geometric context as input, ensuring that the latent variables $z$ encode information relevant to the geometry $\mathcal{G}$. The prior on each field is formulated as the pushforward of a simple latent distribution (typically $q_z = \mathcal{N}(0, I)$) through the geometry-conditioned decoder:

$$p(u) = D^\psi_{\mathcal{G},\#}\, q_z$$
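As a concrete illustration of sampling from such a pushforward prior, the sketch below decodes standard-normal latent codes onto the nodes of one geometry. The coordinate-conditioned MLP decoder is an illustrative stand-in for a geometry-conditioned decoder, not the architecture used in the paper:

```python
# Minimal sketch of sampling the pushforward prior p(u) = D^psi_{G,#} q_z:
# draw z ~ N(0, I) and decode it onto the nodes of a given geometry.
import torch
import torch.nn as nn

class CoordinateDecoder(nn.Module):
    """Maps (latent code, node coordinates) -> field value at each node."""
    def __init__(self, d_z: int, d_coord: int = 2, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_z + d_coord, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, 1),
        )

    def forward(self, z: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # z: (d_z,), coords: (n_nodes, d_coord) -> field u: (n_nodes,)
        z_tiled = z.expand(coords.shape[0], -1)
        return self.net(torch.cat([z_tiled, coords], dim=-1)).squeeze(-1)

d_z = 16
decoder = CoordinateDecoder(d_z)
coords = torch.rand(500, 2)                    # nodes of one example geometry
prior_fields = torch.stack([decoder(torch.randn(d_z), coords) for _ in range(8)])
print(prior_fields.shape)                      # (8, 500): eight prior draws on this mesh
```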

This design trains the latent prior to capture not only low-dimensional structures of the response fields but also regularities and dependencies induced by the geometric variability of the training data. Notably, the autoencoder is optimized by minimizing:

$$\min_{\theta,\psi} \; \mathbb{E}_{\mathcal{D}} \left\| u_n - D_n^\psi\!\left(E_n^\theta(u_n)\right) \right\|^2 + \delta\!\left(p_z^\theta, q_z\right)$$

where $\delta$ is a chosen divergence (e.g., maximum mean discrepancy) enforcing alignment between the empirical distribution of latent codes $p_z^\theta$ and the fixed prior $q_z$ (Vadeboncoeur et al., 24 Sep 2025).
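A hedged sketch of this objective is given below, with the divergence instantiated as an RBF-kernel maximum mean discrepancy between a mini-batch of latent codes and draws from $q_z$; the kernel bandwidths, the weighting, and the `encoder`/`decoder` call signatures are illustrative assumptions rather than the paper's choices:

```python
# Sketch of a GABI-style training objective: reconstruction error plus a
# divergence delta(p_z, q_z), here an RBF-kernel maximum mean discrepancy (MMD).
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, bandwidths=(0.5, 1.0, 2.0)) -> torch.Tensor:
    """Biased squared MMD between sample sets x: (n, d) and y: (m, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2 * s**2)) for s in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def gabi_loss(encoder, decoder, fields, geoms, mmd_weight=1.0):
    """Loss over a mini-batch of (field, geometry) pairs with possibly different mesh sizes."""
    recon, z_codes = 0.0, []
    for u_n, g_n in zip(fields, geoms):
        z_n = encoder(u_n, g_n)                  # geometry-conditioned encoding
        recon = recon + (u_n - decoder(z_n, g_n)).pow(2).mean()
        z_codes.append(z_n)
    z_batch = torch.stack(z_codes)               # (batch, d_z)
    z_prior = torch.randn_like(z_batch)          # samples from q_z = N(0, I)
    return recon / len(fields) + mmd_weight * rbf_mmd2(z_batch, z_prior)
```

Any geometry-conditioned modules exposing these call signatures (for example, graph-network encoders of the kind discussed in Section 4) can be dropped into the `encoder` and `decoder` slots.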

3. Bayesian Inversion in the Latent Space

Once trained, GABI enables Bayesian inversion by translating inference from the ambient field space to the fixed, low-dimensional latent space. Given observations $y_o$ on a new geometry $\mathcal{G}_o$:

  1. The observation model is expressed as $y_o = H_o D_o^\psi(z) + \xi_o$, leading to the likelihood $p(y_o \mid z) = \mathcal{N}(H_o D_o^\psi(z), \sigma^2 I)$.
  2. The latent posterior is computed as $p(z \mid y_o) \propto p(y_o \mid z)\, q_z(z)$.
  3. Posterior samples $z^{(i)}$ from $p(z \mid y_o)$ are propagated through the decoder to yield posterior field samples $u^{(i)} = D_o^\psi(z^{(i)})$.

This approach permits efficient sampling-based inference in $\mathbb{R}^{d_z}$ (often $d_z$ is on the order of $10$–$100$) as opposed to the much higher-dimensional field spaces, rendering the inversion computationally tractable even in the presence of complex geometric context and high observational noise (Vadeboncoeur et al., 24 Sep 2025).
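The sketch below illustrates the structure of this latent-space target with a simple random-walk Metropolis sampler; the paper itself uses the No-U-Turn Sampler (see Section 4), and `decode_obs` (a callable returning the predicted observations $H_o D_o^\psi(z)$), the noise level, and the step size are assumed inputs:

```python
# Sampling p(z | y_o) ∝ N(y_o; H_o D_o(z), sigma^2 I) N(z; 0, I) with a simple
# random-walk Metropolis sampler; intended only to show that the target lives
# in the low-dimensional latent space, not to reproduce the paper's NUTS setup.
import torch

def log_post(z, y_o, decode_obs, sigma):
    log_lik = -0.5 * ((y_o - decode_obs(z)) ** 2).sum() / sigma**2
    log_prior = -0.5 * (z ** 2).sum()            # standard-normal latent prior q_z
    return log_lik + log_prior

def rw_metropolis(y_o, decode_obs, sigma, d_z, n_steps=5000, step=0.1):
    z = torch.zeros(d_z)
    lp = log_post(z, y_o, decode_obs, sigma)
    samples = []
    for _ in range(n_steps):
        z_prop = z + step * torch.randn(d_z)
        lp_prop = log_post(z_prop, y_o, decode_obs, sigma)
        if torch.rand(()) < torch.exp(lp_prop - lp):   # Metropolis accept/reject
            z, lp = z_prop, lp_prop
        samples.append(z.clone())
    return torch.stack(samples)                  # (n_steps, d_z)
```

Posterior field samples then follow by decoding each accepted code on the new geometry, $u^{(i)} = D_o^\psi(z^{(i)})$.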

4. Implementation: Sampling and Scalability

GABI employs both Markov Chain Monte Carlo (using the No-U-Turn Sampler) and Approximate Bayesian Computation (ABC) as posterior sampling methodologies in the latent space. ABC, in particular, capitalizes on extensive GPU acceleration by proposing a large ensemble of latent codes, decoding them in parallel, and selecting samples whose predicted observations are within a user-specified residual threshold. This leverages modern computational hardware, enabling practical uncertainty quantification at scale for large engineering domains and variable geometries.
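A minimal sketch of this batched ABC rejection step is shown below; `batch_decode_obs` (mapping a batch of latent codes to predicted observations $H_o D_o^\psi(z)$), the number of proposals, and the residual tolerance are user-supplied assumptions rather than values from the paper:

```python
# Batched ABC rejection: propose latent codes from q_z, decode them in parallel
# on the GPU, and keep codes whose predicted observations lie within a residual
# tolerance of the measured data y_o.
import torch

@torch.no_grad()
def abc_rejection(y_o, batch_decode_obs, d_z, n_proposals=200_000, tol=0.1,
                  device="cuda" if torch.cuda.is_available() else "cpu"):
    z = torch.randn(n_proposals, d_z, device=device)   # proposals from q_z = N(0, I)
    y_pred = batch_decode_obs(z)                       # (n_proposals, d_y), decoded in parallel
    resid = (y_pred - y_o.to(device)).norm(dim=-1)     # residual per proposal
    return z[resid <= tol]                             # accepted latent posterior samples
```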

GABI’s encoder-decoder implementations draw on geometric deep learning: graph convolutional networks and transformer blocks are commonly used to process non-Euclidean discretizations, supporting variable mesh sizes and allowing for flexible mesh input/output shapes (Vadeboncoeur et al., 24 Sep 2025).
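As an illustration of the kind of geometry-aware module involved, the sketch below uses PyTorch Geometric to build a graph-convolutional encoder that pools node features (field values concatenated with node coordinates) into a fixed-size latent code; it is a generic example of such an architecture, not the one used in the paper:

```python
# Illustrative geometry-aware encoder: graph convolutions over the mesh graph
# followed by mean pooling to a fixed-size latent code.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphEncoder(nn.Module):
    def __init__(self, d_node: int, d_z: int, width: int = 64):
        super().__init__()
        self.conv1 = GCNConv(d_node, width)
        self.conv2 = GCNConv(width, width)
        self.out = nn.Linear(width, d_z)

    def forward(self, x, edge_index, batch):
        # x: (n_nodes, d_node) node features; edge_index: (2, n_edges) mesh connectivity
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.out(global_mean_pool(h, batch))    # (n_graphs, d_z)

# Usage on a single mesh: concatenating field values with coordinates lets the
# latent code see both the response and the geometry it lives on.
n_nodes, d_z = 300, 16
x = torch.rand(n_nodes, 3)                             # [u, x-coord, y-coord] per node
edge_index = torch.randint(0, n_nodes, (2, 1000))      # placeholder connectivity
batch = torch.zeros(n_nodes, dtype=torch.long)         # all nodes belong to graph 0
z = GraphEncoder(d_node=3, d_z=d_z)(x, edge_index, batch)
print(z.shape)                                         # torch.Size([1, 16])
```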

5. Applications and Empirical Validation

The framework has been validated on multiple large-scale engineering and physics problems characterized by both ill-posedness and substantial geometric variabilities:

  • Steady-State Heat Transfer: GABI is trained on $1,000$ distinct rectangular domains with varying boundary conditions. Posterior predictive fields on new domains, inferred from a handful of noisy measurements, achieve accuracy competitive with supervised regression baselines, while uncertainty quantification is robust and well calibrated.
  • RANS Flow Around Airfoils: Using graph-based autoencoder architectures, GABI reconstructs the full pressure/velocity fields on airfoils of previously unseen shape, conditioned on sparse observations. The learned latent prior captures geometry-conditioned variability, enabling rapid and accurate Bayesian inversion without explicit knowledge of the PDEs.
  • 3D Car Body Helmholtz Resonance and Source Localization: The posterior over field responses, as well as source localization given partial data, is recovered efficiently via ABC sampling on the geometry-conditioned latent space.
  • Complex Terrain Flow: Large-scale multi-GPU training and inference are demonstrated for RANS airflow fields over real-world terrains.

In each case, GABI’s approach yields predictive accuracy comparable to deterministic supervised learning methods in settings where such approaches are feasible, while providing rigorous uncertainty quantification and the flexibility to adapt to arbitrary observation processes (Vadeboncoeur et al., 24 Sep 2025).

6. Comparison to Existing Approaches and Architectural Considerations

GABI’s principal distinction is the decoupling of model learning and inference: prior information is distilled from data irrespective of the future observation process (“train once, use anywhere”). The geometry-aware design contrasts with classical Gaussian process (GP) latent variable models, which generally assume a fixed geometry and may not scale to complex, variable domains. Unlike supervised learning, GABI delivers well-calibrated Bayesian uncertainty quantification (UQ) independent of the training task, which is critical for engineering problems with inherently unpredictable observation structures.

From an architectural perspective, the framework is agnostic: any geometric autoencoder architecture capable of processing (and reconstructing) fields on variable discretizations or graphs is compatible. The method’s reliance on empirical latent space alignment (via divergence matching) and reconstruction error minimization ensures the pushforward prior remains informative and geometry-adaptive (Vadeboncoeur et al., 24 Sep 2025).

7. Significance and Broader Impact

GABI provides a flexible, geometry-adaptive solution to the pervasive challenges of Bayesian inversion in physically complex and data-scarce engineering domains. The “learn first, observe later” separation, geometry-aware generative priors, and sampling-based posterior inference enable principled uncertainty quantification on arbitrary geometries, independent of governing PDEs. The capacity for rapid, robust field reconstruction and UQ underpins potential application in control, monitoring, and diagnostics of physical systems, especially where traditional supervised or analytical inversion methods are intractable or unreliable due to geometric variability and scarce data (Vadeboncoeur et al., 24 Sep 2025).

References

1. Vadeboncoeur et al., “GABI: Geometric Autoencoders for Bayesian Inversion,” 24 September 2025.
