Conditional Generative INADE Model

Updated 9 December 2025
  • The model unifies forward simulation and inverse inference in a single invertible architecture using two triangular normalizing flows.
  • It leverages lower-triangular and upper-triangular flows to achieve tractable Jacobian computation and robust conditioning in Bayesian settings.
  • Experimental evaluations on Gaussian, nonlinear, and inpainting tasks highlight its stable performance and practical versatility.

The Conditional Generative INADE Model (Invertible Normalizing-flow-based Amortized Dual Encoder) is a conditional generative modeling framework that unifies forward simulation and inverse inference within a single invertible architecture. It is designed for Bayesian inversion tasks, where efficient sampling is required from both the likelihood $\mu_{F\mid U=u}$ (forward or simulation problems) and the posterior $\mu_{U\mid F=f}$ (inverse inference problems). By composing two triangular normalizing flows, one lower-triangular (the "likelihood" flow) and one upper-triangular (the "posterior" flow), the INADE model achieves analytical invertibility, tractable Jacobian computation, and robust conditioning, offering a principled approach to amortized generative and inference modeling (Leeuwen et al., 4 Sep 2025).

1. Mathematical Construction and Triangular Flow Architecture

Let $u \in \mathbb{R}^n$ be the unknown (prior) variable, $f \in \mathbb{R}^m$ the observed (data) variable, and $(x, y)$ independent latent variables distributed according to $\mu_{X,Y} = \mu_X \otimes \mu_Y$, with $X \sim \mathcal{N}(0, I_n)$ and $Y \sim \mathcal{N}(0, I_m)$.

The INADE model centers on the construction of a single invertible map
$$S : \mathbb{R}^n \times \mathbb{R}^m \longrightarrow \mathbb{R}^n \times \mathbb{R}^m, \qquad S^{-1} = R,$$
implementing both directions:

  • Running $S$ "forward" with input $(u, y)$ yields stochastic simulation $f \sim \mu_{F\mid U=u}$.
  • Running $R = S^{-1}$ "forward" with input $(x, f)$ returns inference $u \sim \mu_{U\mid F=f}$.

$S$ is constructed as the composition of two triangular flows:
$$\begin{align*} L &: (u, y) \mapsto \bigl(u,\ F_{\text{like}}(y; u)\bigr) \quad \text{(lower-triangular: likelihood)}, \\ U &: (u', f) \mapsto \bigl(F_{\text{post}}^{-1}(u'; f),\ f\bigr) \quad \text{(upper-triangular: posterior)}. \end{align*}$$
Here $F_{\text{like}}(\,\cdot\,; u)$ pushes the latent $y$ to the data $f$, and $F_{\text{post}}(\,\cdot\,; f)$ pushes the latent $x$ to the unknown $u$; the posterior block of $S$ therefore applies the inverse flow $F_{\text{post}}^{-1}$.

The combined map $S = U \circ L$ explicitly yields
$$S_1(u, y) = F_{\text{post}}^{-1}\bigl(u;\, F_{\text{like}}(y; u)\bigr), \qquad S_2(u, y) = F_{\text{like}}(y; u).$$
Invertibility follows from the triangular structure:
$$R(x, f) = S^{-1}(x, f) = \Bigl( F_{\text{post}}(x; f),\ F_{\text{like}}^{-1}\bigl(f;\, F_{\text{post}}(x; f)\bigr) \Bigr).$$
In practice, $F_{\text{like}}$ and $F_{\text{post}}$ are each parameterized as neural-network coupling flows or other triangular normalizing flows, ensuring computational tractability (Leeuwen et al., 4 Sep 2025).
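To make the composition concrete, the following sketch instantiates $S = U \circ L$ with scalar affine flows and verifies that $R = S^{-1}$ recovers $(u, y)$ exactly. The scale constants `SIG_F`, `SIG_U` and the conditioners `sin`/`tanh` are illustrative assumptions, not the paper's coupling-flow architecture.

```python
import numpy as np

# Minimal scalar instantiation (n = m = 1) of S = U ∘ L. The scale
# constants SIG_F, SIG_U and the conditioners sin/tanh are illustrative
# assumptions, not the paper's coupling-flow parameterization.
SIG_F, SIG_U = 0.5, 2.0

def F_like(y, u):            # likelihood flow: latent y -> data f, given u
    return SIG_F * y + np.sin(u)

def F_like_inv(f, u):
    return (f - np.sin(u)) / SIG_F

def F_post(x, f):            # posterior flow: latent x -> unknown u, given f
    return SIG_U * x + np.tanh(f)

def F_post_inv(u, f):
    return (u - np.tanh(f)) / SIG_U

def S(u, y):                 # S = U ∘ L: (u, y) -> (x, f)
    f = F_like(y, u)         # L: lower-triangular block fills in f
    x = F_post_inv(u, f)     # U: upper-triangular block replaces u by x
    return x, f

def R(x, f):                 # analytic inverse R = S^{-1}: (x, f) -> (u, y)
    u = F_post(x, f)         # recover u first...
    y = F_like_inv(f, u)     # ...then y, conditioned on the recovered u
    return u, y

u, y = 0.7, -1.3
x, f = S(u, y)
u2, y2 = R(x, f)
print(np.allclose([u2, y2], [u, y]))  # round trip recovers (u, y) exactly
```

Because each block is triangular in its own variable, inversion never requires solving a coupled system: $u$ is recovered first, then $y$ follows directly.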

2. Bayesian Objective and Variational Training Loss

Given samples $\{(u_i, f_i)\}$ from the true joint $\mu_{U,F}$, the goal is to train $(F_{\text{post}}, F_{\text{like}})$ such that the pushforward $F_\#\mu_{X,Y}$ matches $\mu_{U,F}$. This is formulated as minimizing the Kullback–Leibler divergence
$$\mathrm{KL}\bigl(\mu_{U,F} \;\|\; F_\#\mu_{X,Y}\bigr) = \mathbb{E}_{(u,f)\sim\mu_{U,F}}\Bigl[-\log\bigl(F_\#\mu_{X,Y}\bigr)(u, f)\Bigr] + \mathrm{const.}$$
For $\mu_{X,Y} = \mathcal{N}(0, I)$, the change-of-variables formula reduces this to
$$\mathbb{E}_{(u,f)\sim\mu_{U,F}}\left[\tfrac12\bigl\|F^{-1}(u, f)\bigr\|^2 - \log\bigl|\det\nabla F^{-1}(u, f)\bigr|\right] + \mathrm{const.}$$
With the triangular decomposition, the objective splits into two terms:
$$J(F_{\text{post}}, F_{\text{like}}) = \underbrace{\mathbb{E}\left[\tfrac12\bigl\|F_{\text{post}}^{-1}(u; f)\bigr\|^2 - \log\bigl|\det\nabla_u F_{\text{post}}^{-1}(u; f)\bigr|\right]}_{J_{\text{post}}} + \underbrace{\mathbb{E}\left[\tfrac12\bigl\|F_{\text{like}}^{-1}(f; u)\bigr\|^2 - \log\bigl|\det\nabla_f F_{\text{like}}^{-1}(f; u)\bigr|\right]}_{J_{\text{like}}}$$
Each term is a standard normalizing-flow loss for pushing a standard Gaussian onto the corresponding conditional, the posterior or the likelihood respectively (Leeuwen et al., 4 Sep 2025).
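The split objective can be estimated by Monte Carlo directly from joint samples. The sketch below does so for a toy scalar model; the affine flow forms $F_{\text{post}}(x; f) = \sigma_u x + \tanh f$ and $F_{\text{like}}(y; u) = \sigma_f y + \sin u$ are assumptions made for illustration, chosen so that the log-determinant terms are constants.

```python
import numpy as np

# Monte Carlo estimate of J = J_post + J_like for a toy scalar model with
# affine flows F_post(x; f) = SIG_U*x + tanh(f) and
# F_like(y; u) = SIG_F*y + sin(u). These forms are illustrative assumptions;
# their log-determinant terms reduce to the constants log(SIG_U), log(SIG_F).
SIG_F, SIG_U = 0.5, 2.0

def F_post_inv(u, f):                 # pull u back to the latent x, given f
    return (u - np.tanh(f)) / SIG_U

def F_like_inv(f, u):                 # pull f back to the latent y, given u
    return (f - np.sin(u)) / SIG_F

def loss(u, f):
    x = F_post_inv(u, f)
    y = F_like_inv(f, u)
    # -log|det ∇_u F_post^{-1}| = log(SIG_U); likewise log(SIG_F) for F_like
    j_post = np.mean(0.5 * x**2) + np.log(SIG_U)
    j_like = np.mean(0.5 * y**2) + np.log(SIG_F)
    return j_post + j_like

rng = np.random.default_rng(0)
u = rng.normal(size=1000)                       # synthetic prior samples
f = SIG_F * rng.normal(size=1000) + np.sin(u)   # synthetic joint samples
print(loss(u, f))                               # scalar training objective
```

In a learned model the same estimator would be minimized over the flow parameters by stochastic gradient descent; here the flows are fixed, so the loss is just evaluated once.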

3. Conditional Sampling Procedures: Forward and Inverse Operation

After training, the model generates samples in two modes:

Forward (Simulation) Mode:

Given $u \in \mathbb{R}^n$, draw $y \sim \mathcal{N}(0, I_m)$ and generate $f = F_{\text{like}}(y; u)$, providing $f \sim \mu_{F\mid U=u}$.

    Input: u ∈ R^n
    y ← Normal(0, I_m)
    f ← F_like(y; u)
    return f

Inverse (Inference) Mode:

Given $f \in \mathbb{R}^m$, draw $x \sim \mathcal{N}(0, I_n)$ and generate $u = F_{\text{post}}(x; f)$, providing $u \sim \mu_{U\mid F=f}$.

    Input: f ∈ R^m
    x ← Normal(0, I_n)
    u ← F_post(x; f)
    return u

The full $S$ or $R$ maps allow retrieval of the auxiliary latent outputs if needed (Leeuwen et al., 4 Sep 2025).
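Both modes become executable in a conjugate Gaussian-linear toy model, $f = a u + s y$ with $u \sim \mathcal{N}(0, 1)$, where the exact posterior is Gaussian and the posterior flow can be written in closed form. The constants $a$ and $s$ and the unit prior variance are illustrative assumptions:

```python
import numpy as np

# Both sampling modes for a conjugate Gaussian-linear toy model f = a*u + s*y,
# u ~ N(0, 1). The constants a, s and the closed-form posterior flow are
# illustrative assumptions; in general F_post would be learned.
a, s, prior_var = 1.5, 0.3, 1.0
post_var = 1.0 / (1.0 / prior_var + a**2 / s**2)   # exact posterior variance

def forward_mode(u, n, rng):
    y = rng.normal(size=n)              # y ~ N(0, I_m)
    return a * u + s * y                # f = F_like(y; u)

def inverse_mode(f, n, rng):
    x = rng.normal(size=n)              # x ~ N(0, I_n)
    mu = post_var * (a / s**2) * f      # exact posterior mean
    return np.sqrt(post_var) * x + mu   # u = F_post(x; f)

rng = np.random.default_rng(1)
f_samples = forward_mode(0.8, 100_000, rng)   # simulate data given u = 0.8
u_samples = inverse_mode(1.2, 100_000, rng)   # infer u given f = 1.2
print(f_samples.mean(), u_samples.std())      # ≈ a*0.8 and ≈ sqrt(post_var)
```

In the general nonlinear case $F_{\text{post}}$ is learned by minimizing $J_{\text{post}}$ rather than derived analytically; the sampling mechanics are identical.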

4. Invertibility, Jacobians, and Computational Properties

The INADE model is analytically invertible by construction: $S = U \circ L$ is bijective with $R = S^{-1}$. The block-wise triangular Jacobians enable tractable determinant computations:
$$\det\nabla S = \det\nabla U \cdot \det\nabla L = \bigl(\det \partial_{u'} F_{\text{post}}^{-1}\bigr) \cdot \bigl(\det \partial_y F_{\text{like}}\bigr)$$
Each factor is efficiently computable in $O(n)$ or $O(m)$ operations, depending on the dimension. Efficient evaluation of the joint density $p(u, f)$ is achieved by pulling back to $(x, y)$. The architecture ensures stable conditioning even as likelihood variances approach zero, mitigating the ill-conditioning encountered in standard joint transport approaches (Leeuwen et al., 4 Sep 2025).
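A quick numeric check illustrates the factorization: the Jacobian determinant of the composed map $S$ equals the product of the two diagonal-block terms, because the cross-terms contributed by the conditioners cancel in the determinant. The scalar affine instantiation below (constants `SIG_F`, `SIG_U`, conditioners `sin`/`tanh`) is an assumption for illustration:

```python
import numpy as np

# Numeric check of det ∇S = (det ∂_{u'} F_post^{-1}) * (det ∂_y F_like):
# the conditioner cross-terms cancel in the determinant. Scalar affine
# flows with constants SIG_F, SIG_U are an illustrative assumption.
SIG_F, SIG_U = 0.5, 2.0

def S(u, y):
    f = SIG_F * y + np.sin(u)        # F_like(y; u)
    x = (u - np.tanh(f)) / SIG_U     # F_post^{-1}(u; f)
    return np.array([x, f])

def jacobian(fn, z, eps=1e-6):       # central finite differences
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2)
        dz[j] = eps
        J[:, j] = (fn(*(z + dz)) - fn(*(z - dz))) / (2 * eps)
    return J

J = jacobian(S, np.array([0.7, -1.3]))
analytic = (1.0 / SIG_U) * SIG_F     # product of the diagonal-block dets
print(np.isclose(np.linalg.det(J), analytic, atol=1e-5))
```

Note that the result is independent of the conditioners `sin` and `tanh`: for affine flows only the diagonal scale factors survive in the determinant.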

5. Experimental Results and Numerical Demonstrations

Empirical evaluation is provided in three settings:

  • Gaussian–linear toy model: the combined map $S$ remains well-conditioned even as the likelihood variance goes to zero.
  • Nonlinear (sign-function) benchmark: a two-dimensional benchmark with MParT coupling flows demonstrates accurate push-forward and conditional sampling, with $S$ exhibiting conditioning intermediate between the two triangular flows.
  • Inpainting (MNIST): an affine instantiation of $S$ applies to pixel "removal" (simulation) and "inpainting" (inference), yielding realistic uncertainty maps and multiple posterior samples.

These experiments highlight the ability of the INADE model to produce high-quality conditional samples and stable conditioning across both linear and nonlinear inverse problems (Leeuwen et al., 4 Sep 2025).

6. Relation to Broader Conditional Generative Modeling

The INADE model provides a unified, invertible framework for conditional generative tasks, addressing both simulation and inference, in contrast to standard conditional normalizing flows that typically address a single direction. The carefully constructed triangular structure facilitates both tractable training and efficient evaluation, offering theoretical and practical advantages, especially for near-deterministic or ill-conditioned likelihood functions. Empirical results demonstrate its utility in diverse domains, suggesting applicability to a broad range of inverse and generative modeling problems (Leeuwen et al., 4 Sep 2025).
