Conditional Generative INADE Model
- The model unifies forward simulation and inverse inference in a single invertible architecture using two triangular normalizing flows.
- It leverages lower-triangular and upper-triangular flows to achieve tractable Jacobian computation and robust conditioning in Bayesian settings.
- Experimental evaluations on Gaussian, nonlinear, and inpainting tasks highlight its stable performance and practical versatility.
The Conditional Generative INADE Model (Invertible Normalizing-flow–based Amortized Dual Encoder) is a conditional generative modeling framework that unifies forward simulation and inverse inference within a single invertible architecture. It is designed for Bayesian inversion tasks, where efficient sampling from both the likelihood (forward or simulation problems) and the posterior (inverse inference problems) is required. By composing two triangular normalizing flows—one lower-triangular (“likelihood” flow) and one upper-triangular (“posterior” flow)—the INADE model achieves analytical invertibility, tractable Jacobian computation, and robust conditioning, offering a principled approach to amortized generative and inference modeling (Leeuwen et al., 4 Sep 2025).
1. Mathematical Construction and Triangular Flow Architecture
Let $u \in \mathbb{R}^n$ be the unknown (prior) variable, $f \in \mathbb{R}^m$ the observed (data) variable, and $(x, y)$ independent latent variables distributed according to $\rho = \mathcal{N}(0, I_{n+m})$, with $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$.
The INADE model centers on the construction of a single invertible map $T: \mathbb{R}^{n+m} \to \mathbb{R}^{n+m}$ implementing both directions:
- Evaluating $T$ with input $(u, y)$, $y \sim \mathcal{N}(0, I_m)$, yields the stochastic simulation $f = F_{\mathrm{like}}(y; u) \sim p(f \mid u)$.
- Evaluating $T^{-1}$ with input $(x, f)$, $x \sim \mathcal{N}(0, I_n)$, returns the inference $u = F_{\mathrm{post}}(x; f) \sim p(u \mid f)$.
$T$ is constructed as the composition of two triangular flows: a lower-triangular "likelihood" flow acting as $(u, y) \mapsto \bigl(u,\, F_{\mathrm{like}}(y; u)\bigr)$ and an upper-triangular "posterior" flow acting as $(x, f) \mapsto \bigl(F_{\mathrm{post}}(x; f),\, f\bigr)$. The combined map explicitly yields
$$T(u, y) = \bigl(F_{\mathrm{post}}^{-1}(u; f),\; f\bigr), \qquad f = F_{\mathrm{like}}(y; u).$$
Invertibility follows from the triangular structure:
$$T^{-1}(x, f) = \bigl(u,\; F_{\mathrm{like}}^{-1}(f; u)\bigr), \qquad u = F_{\mathrm{post}}(x; f).$$
In practice, $F_{\mathrm{like}}$ and $F_{\mathrm{post}}$ are each parameterized as neural network coupling flows or other triangular normalizing flows, ensuring computational tractability (Leeuwen et al., 4 Sep 2025).
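The construction can be sketched in a few lines of NumPy, assuming simple conditional affine maps as stand-ins for the coupling flows (class and variable names are illustrative, not taken from the paper):

```python
import numpy as np

class ConditionalAffineFlow:
    """Toy conditional flow z -> shift(c) + exp(log_scale(c)) * z.

    Stands in for the coupling/triangular flows used for F_like and F_post;
    the conditioner here is a fixed random linear map, for illustration only.
    """

    def __init__(self, dim, cond_dim, seed):
        rng = np.random.default_rng(seed)
        self.W_shift = 0.5 * rng.standard_normal((dim, cond_dim))
        self.W_scale = 0.1 * rng.standard_normal((dim, cond_dim))

    def forward(self, z, c):
        """Map latent z to output, conditioned on c (invertible in z)."""
        log_scale = self.W_scale @ c
        return self.W_shift @ c + np.exp(log_scale) * z

    def inverse(self, out, c):
        """Recover the latent z from the output, conditioned on c."""
        log_scale = self.W_scale @ c
        return (out - self.W_shift @ c) * np.exp(-log_scale)


n, m = 3, 2                                   # dims of unknown u and data f
F_like = ConditionalAffineFlow(m, n, seed=0)  # f = F_like(y; u)
F_post = ConditionalAffineFlow(n, m, seed=1)  # u = F_post(x; f)

def T(u, y):
    """Combined map (u, y) -> (x, f): simulate f, then encode u given f."""
    f = F_like.forward(y, u)
    x = F_post.inverse(u, f)
    return x, f

def T_inv(x, f):
    """Inverse map (x, f) -> (u, y): infer u, then encode f given u."""
    u = F_post.forward(x, f)
    y = F_like.inverse(f, u)
    return u, y

# Round-trip check of analytical invertibility.
u0, y0 = np.ones(n), np.zeros(m) + 0.3
x0, f0 = T(u0, y0)
u1, y1 = T_inv(x0, f0)
assert np.allclose(u0, u1) and np.allclose(y0, y1)
```

The round-trip check at the end reflects the analytical invertibility discussed in Section 4.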
2. Bayesian Objective and Variational Training Loss
Given samples from the true joint $\pi(u, f)$, the goal is to train $T$ such that the pushforward of the Gaussian reference matches $\pi(u, f)$. This is formulated as minimizing the Kullback–Leibler divergence between $\pi$ and the distribution generated by $T$. For $\rho = \mathcal{N}(0, I_{n+m})$, this reduces to an expected negative log-likelihood over samples from $\pi$ (up to an additive constant). With the triangular decomposition, the objective splits into two terms:
$$\mathcal{L} = -\,\mathbb{E}_{(u,f)\sim\pi}\bigl[\log p_{\mathrm{like}}(f \mid u)\bigr] \;-\; \mathbb{E}_{(u,f)\sim\pi}\bigl[\log p_{\mathrm{post}}(u \mid f)\bigr],$$
where $p_{\mathrm{like}}(\cdot \mid u)$ is the pushforward of $\mathcal{N}(0, I_m)$ under $F_{\mathrm{like}}(\,\cdot\,; u)$ and $p_{\mathrm{post}}(\cdot \mid f)$ is the pushforward of $\mathcal{N}(0, I_n)$ under $F_{\mathrm{post}}(\,\cdot\,; f)$. Each term is a standard normalizing flow loss for pushing a standard Gaussian to either the conditional posterior or the likelihood (Leeuwen et al., 4 Sep 2025).
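A minimal sketch of this two-term loss, assuming toy conditional affine flows whose parameters would in practice be neural-network conditioners trained by stochastic gradient descent (all names and the stand-in "true" joint are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# Toy conditional affine "flows": fixed random matrices stand in for
# trainable conditioners.
A_like, b_like = 0.1 * rng.standard_normal((m, n)), 0.5 * rng.standard_normal((m, n))
A_post, b_post = 0.1 * rng.standard_normal((n, m)), 0.5 * rng.standard_normal((n, m))

def pullback_logdensity(v, c, A, b):
    """log density of v under the pushforward of N(0, I) through
    z -> b @ c + exp(A @ c) * z, via the Gaussian change of variables."""
    log_scale = A @ c
    z = (v - b @ c) * np.exp(-log_scale)   # pull back to latent space
    log_det = -np.sum(log_scale)           # log |det dz/dv|
    log_ref = -0.5 * np.sum(z**2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_ref + log_det

def inade_loss(u_batch, f_batch):
    """Two-term negative log-likelihood: likelihood flow + posterior flow."""
    nll_like = -np.mean([pullback_logdensity(f, u, A_like, b_like)
                         for u, f in zip(u_batch, f_batch)])
    nll_post = -np.mean([pullback_logdensity(u, f, A_post, b_post)
                         for u, f in zip(u_batch, f_batch)])
    return nll_like + nll_post

# Joint samples (u, f) from a stand-in joint distribution, for illustration.
u_batch = rng.standard_normal((8, n))
f_batch = u_batch[:, :m] + 0.1 * rng.standard_normal((8, m))
print(inade_loss(u_batch, f_batch))
```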
3. Conditional Sampling Procedures: Forward and Inverse Operation
After training, the model generates samples in two modes:
Forward (Simulation) Mode:
Given $u \in \mathbb{R}^n$, draw $y \sim \mathcal{N}(0, I_m)$ and generate $f = F_{\mathrm{like}}(y; u)$, providing a sample $f \sim p(f \mid u)$.
```
Input: u ∈ R^n
y ← Normal(0, I_m)
f ← F_like(y; u)
return f
```
Inverse (Inference) Mode:
Given $f \in \mathbb{R}^m$, draw $x \sim \mathcal{N}(0, I_n)$ and generate $u = F_{\mathrm{post}}(x; f)$, providing a sample $u \sim p(u \mid f)$.
```
Input: f ∈ R^m
x ← Normal(0, I_n)
u ← F_post(x; f)
return u
```
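A compact Python rendering of the two modes, assuming trained flows with the call signatures used in the pseudocode (the affine placeholders below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# Placeholder "trained" flows with the pseudocode's call signatures.
def F_like(y, u):            # f = F_like(y; u)
    return u[:m] + 0.1 * y

def F_post(x, f):            # u = F_post(x; f)
    return np.concatenate([f, np.zeros(n - m)]) + 0.1 * x

def simulate(u, num_samples=5):
    """Forward (simulation) mode: draw f ~ p(f | u)."""
    return np.stack([F_like(rng.standard_normal(m), u) for _ in range(num_samples)])

def infer(f, num_samples=5):
    """Inverse (inference) mode: draw u ~ p(u | f)."""
    return np.stack([F_post(rng.standard_normal(n), f) for _ in range(num_samples)])

u_true = np.array([1.0, -0.5, 0.2])
f_obs = simulate(u_true, num_samples=1)[0]   # one simulated observation
posterior_draws = infer(f_obs)               # amortized posterior samples
print(posterior_draws.shape)                 # (5, 3)
```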
4. Invertibility, Jacobians, and Computational Properties
The INADE model is analytically invertible by construction; $T$ is bijective, with $T^{-1}$ available in closed form from the component flows. The block-wise triangular Jacobians enable tractable determinant computations: the log-determinant of $\nabla T$ decomposes into the log-determinants of the conditional blocks of $F_{\mathrm{like}}$ and $F_{\mathrm{post}}$. Each term is efficiently computable in $O(n)$ or $O(m)$, depending on the dimension of the corresponding block. Efficient evaluation of the joint density is achieved by pulling back to the Gaussian reference $\rho$. The architecture ensures stable conditioning, even as likelihood variances approach zero, mitigating the ill-conditioning encountered in standard joint transport approaches (Leeuwen et al., 4 Sep 2025).
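Concretely, the conditional densities follow from the standard change-of-variables formula applied blockwise (a sketch in the notation used above):
$$p(f \mid u) = \rho_m\bigl(F_{\mathrm{like}}^{-1}(f; u)\bigr)\,\bigl|\det \nabla_f F_{\mathrm{like}}^{-1}(f; u)\bigr|, \qquad p(u \mid f) = \rho_n\bigl(F_{\mathrm{post}}^{-1}(u; f)\bigr)\,\bigl|\det \nabla_u F_{\mathrm{post}}^{-1}(u; f)\bigr|,$$
where $\rho_m$ and $\rho_n$ denote standard Gaussian densities on $\mathbb{R}^m$ and $\mathbb{R}^n$; for triangular or coupling flows, each Jacobian determinant reduces to a product of diagonal terms.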
5. Experimental Results and Numerical Demonstrations
Empirical evaluation is provided in three settings:
- Gaussian–linear toy model: The combined map $T$ remains well-conditioned even as the likelihood variance goes to zero (see the worked equations after this list).
- Nonlinear (sign-function) benchmark: A two-dimensional benchmark with MParT coupling flows demonstrates accurate push-forward and conditional sampling, with $T$ exhibiting conditioning intermediate between the two triangular flows.
- Inpainting (MNIST): An affine instantiation of $T$ applies to pixel "removal" (simulation) and "inpainting" (inference), yielding realistic uncertainty maps and multiple posterior samples.
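For reference, in the Gaussian–linear setting (stated here under the illustrative assumptions $f = Au + \varepsilon$ with prior $u \sim \mathcal{N}(0, \Sigma)$ and noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$), the posterior is Gaussian with
$$\operatorname{Cov}(u \mid f) = \bigl(\Sigma^{-1} + \sigma^{-2} A^{\top} A\bigr)^{-1}, \qquad \mathbb{E}[u \mid f] = \operatorname{Cov}(u \mid f)\,\sigma^{-2} A^{\top} f,$$
so the likelihood becomes degenerate as $\sigma \to 0$; this is the regime in which the conditioning of the combined map is probed.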
These experiments highlight the ability of the INADE model to produce high-quality conditional samples and stable conditioning across both linear and nonlinear inverse problems (Leeuwen et al., 4 Sep 2025).
6. Relation to Broader Conditional Generative Modeling
The INADE model provides a unified, invertible framework for conditional generative tasks, addressing both simulation and inference, in contrast to standard conditional normalizing flows that typically address a single direction. The carefully constructed triangular structure facilitates both tractable training and efficient evaluation, offering theoretical and practical advantages—especially in cases of near-deterministic or ill-conditioned likelihood functions. Empirical results demonstrate its utility in diverse domains, suggesting applicability to a broad range of inverse and generative modeling problems (Leeuwen et al., 4 Sep 2025).