
Physics-Informed Normalizing Flows

Updated 29 November 2025
  • Physics-Informed Normalizing Flows (PINF) are generative models that integrate physical laws, such as PDEs and conservation principles, directly into flow architectures.
  • They combine continuous and discrete normalizing flows with physics-informed loss functions to accurately model high-dimensional stochastic systems.
  • PINF enables mesh-free, uncertainty-aware data-driven simulations with applications in PDE solutions, inverse problems, and Bayesian inference.

Physics-Informed Normalizing Flows (PINF) are a class of generative models that extend normalizing flows by embedding physical constraints—often expressed as partial differential equations (PDEs), stochastic differential equations, or conservation laws—directly into the invertible network architecture or loss function. This integration enables data-driven solutions that respect complex physical priors, normalization, and structural invariances. PINF approaches have demonstrated mesh-free, causality-free, and uncertainty-aware learning and inference across high-dimensional stochastic systems, physical simulations, inverse problems, dynamical systems identification, and density modeling.

1. Mathematical Foundations and Core Flow Architectures

PINF models build upon the invertible mapping property of normalizing flows, most commonly using continuous normalizing flow (CNF) or discrete coupling-layer constructions. In a CNF, a latent trajectory $z(t) \in \mathbb{R}^d$ evolves under a neural ODE:

$$\frac{dz(t)}{dt} = f_\theta(z(t), t)$$

The instantaneous change-of-variables formula (Liouville's theorem) governs the log-density along the trajectory:

$$\frac{d}{dt} \log p(z(t), t) = -\nabla_z \cdot f_\theta(z(t), t)$$

Discrete flows (e.g., RealNVP, MAF, planar, and Sylvester flows) instead express an invertible mapping $x = g_\theta(z)$ via a sequence of elementary transformations, each with a tractable Jacobian determinant.
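
As a concrete illustration, the following minimal PyTorch sketch integrates the state and log-density jointly with explicit Euler steps, computing the exact divergence via automatic differentiation. The `VectorField` module, step count, and step size are illustrative choices, not a prescribed architecture.

```python
import math
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Time-conditioned vector field f_theta(z, t) for a CNF (illustrative)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z, t):
        # Concatenate time as an extra input feature.
        return self.net(torch.cat([z, t.expand(z.shape[0], 1)], dim=1))

def divergence(f, z):
    """Exact divergence (Jacobian trace) of f w.r.t. z via autograd."""
    div = torch.zeros(z.shape[0])
    for i in range(z.shape[1]):
        div = div + torch.autograd.grad(f[:, i].sum(), z, create_graph=True)[0][:, i]
    return div

def integrate(field, z0, logp0, t0=0.0, t1=1.0, steps=50):
    """Euler integration of dz/dt = f and d log p/dt = -div f."""
    z, logp = z0.clone().requires_grad_(True), logp0
    dt = (t1 - t0) / steps
    for k in range(steps):
        t = torch.full((1, 1), t0 + k * dt)
        f = field(z, t)
        logp = logp - divergence(f, z) * dt  # instantaneous change of variables
        z = z + f * dt                       # explicit Euler step for the state
    return z, logp

# Usage: push base Gaussian samples (and their log-density) through the flow.
d = 2
field = VectorField(d)
z0 = torch.randn(128, d)
logp0 = -0.5 * (z0 ** 2).sum(dim=1) - 0.5 * d * math.log(2 * math.pi)
zT, logpT = integrate(field, z0, logp0)
```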

Physics-informed constraints are incorporated in PINF by matching the evolution of the log-density or sample trajectories to the PDE/dynamical-system solution, rather than solely enforcing pointwise residuals. For Fokker-Planck dynamics (diffusion processes), PINF introduces an augmented drift $f^*$ and integrates along characteristics:

$$\frac{dx}{dt} = f^*(x,t), \qquad f^*(x,t) = b(x,t) - D(x,t)\nabla\log p(x,t) - \nabla\cdot D(x,t)$$

The log-density then evolves as $d\log p/dt = -\nabla\cdot f^*(x,t)$, directly parameterized by neural networks (Liu et al., 2023).
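
A short sketch of this construction, under the simplifying assumption of a constant scalar diffusion $D$ (so the $\nabla\cdot D$ term vanishes); `b` and `log_p` are placeholders for the problem's drift and a neural log-density approximation:

```python
import torch

def augmented_drift(b, log_p, x, t, D=0.1):
    """f*(x,t) = b(x,t) - D * grad_x log p(x,t); constant scalar D => div D = 0."""
    x = x.clone().requires_grad_(True)  # assumes x does not already track grads
    score = torch.autograd.grad(log_p(x, t).sum(), x, create_graph=True)[0]
    return b(x, t) - D * score
```

Integrating $dx/dt = f^*$ jointly with $d\log p/dt = -\nabla\cdot f^*$ then reuses exactly the CNF machinery sketched above.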

Hamiltonian-informed flows embed symplectic structure for kinetic PDEs, mapping $(\mathbf{q}, \mathbf{p}) \mapsto (\mathbf{q}', \mathbf{p}')$ with leapfrog integrators for a learned neural Hamiltonian $H_\theta(\mathbf{q},\mathbf{p}) = K_\theta(\mathbf{p}) + V_\theta(\mathbf{q})$ (Souveton et al., 7 May 2025).
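
A minimal leapfrog sketch for such a separable learned Hamiltonian; `K_net` and `V_net` are assumed to be scalar-valued networks, and the update is symplectic (volume-preserving) by construction:

```python
import torch

def dnet(net, x):
    """Gradient of scalar net(x) w.r.t. x, keeping the graph for training."""
    return torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]

def leapfrog(q, p, K_net, V_net, dt=0.05, steps=10):
    """Symplectic leapfrog flow for H(q, p) = K_theta(p) + V_theta(q)."""
    q = q.clone().requires_grad_(True)  # assume inputs do not already track grads
    p = p.clone().requires_grad_(True)
    for _ in range(steps):
        p = p - 0.5 * dt * dnet(V_net, q)  # half kick:  dp/dt = -dH/dq
        q = q + dt * dnet(K_net, p)        # full drift: dq/dt = +dH/dp
        p = p - 0.5 * dt * dnet(V_net, q)  # half kick
    return q, p
```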

PI-NFF models random fields via a bijective mapping $f_\theta:(z,x)\mapsto k$ between KL-expanded Gaussian fields $z(x,\omega)$ and physical fields $k(x,\omega)$, trained with maximum likelihood and a physics-informed loss (Guo et al., 2021).
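
A sketch of the two ingredients, with hypothetical eigenpairs and a simple location-conditioned affine bijection standing in for the full PI-NFF architecture:

```python
import torch
import torch.nn as nn

def kl_gaussian_field(x, eigvals, eigfuncs, xi):
    """Truncated KL expansion: z(x, omega) = sum_i sqrt(lambda_i) xi_i phi_i(x)."""
    basis = torch.stack([phi(x) for phi in eigfuncs], dim=-1)  # (n_pts, n_modes)
    return basis @ (eigvals.sqrt() * xi)                       # (n_pts,), xi ~ N(0, I)

class ConditionalAffine(nn.Module):
    """k = exp(s(x)) * z + t(x): invertible in z at every spatial location x."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, z, x):
        s, t = self.net(x.unsqueeze(-1)).unbind(dim=-1)
        return torch.exp(s) * z + t  # physical field k(x, omega)

    def log_abs_det(self, x):
        s, _ = self.net(x.unsqueeze(-1)).unbind(dim=-1)
        return s  # per-point log |dk/dz|, used in the maximum-likelihood term
```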

2. Physics-Informed Losses and Constraint Imposition

PINF frameworks replace grid-based, explicit enforcement of physics with "pushforward" loss terms that guarantee normalization and respect system conservation by construction. The physics-informed loss typically matches the neural approximation to the solution induced by the ODE or PDE characteristics:

$$\mathcal{L}(\theta) = \frac{1}{N}\sum_k \left|\log p_\text{ode}(x_k,t_k) - \phi_\theta(x_k,t_k)\right|^2$$

In multi-field contexts (PI-NFF), data-driven likelihoods are combined with PDE and boundary-condition losses:

$$\mathcal{L}_\text{equ} = \mathbb{E}_{x,\xi,\epsilon}\,\big\|\mathcal{N}_x[u(x,\xi_u,\epsilon_u);\, k(x,\xi_k,\epsilon_k)] - f(x,\xi_f,\epsilon_f)\big\|^2$$

where neural flows for $u$, $k$, and $f$ are evaluated at randomly sampled collocation points.
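
For concreteness, a hedged sketch of the first (characteristics-matching) objective; `phi_theta` is the neural log-density and `ode_logp` a routine returning reference samples and log-densities along the characteristics (e.g., the CNF integrator sketched in Section 1). This variant detaches the ODE reference, one of several self-consistent training choices:

```python
import torch

def characteristics_loss(phi_theta, ode_logp, x0, logp0, t_grid):
    """L(theta) = mean_k | log p_ode(x_k, t_k) - phi_theta(x_k, t_k) |^2."""
    loss = torch.tensor(0.0)
    for t in t_grid:
        x_t, logp_t = ode_logp(x0, logp0, t)              # reference along characteristics
        r = logp_t.detach() - phi_theta(x_t.detach(), t)  # pointwise log-density residual
        loss = loss + (r ** 2).mean()
    return loss / len(t_grid)
```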

Dynamical system PINF introduces a blockwise loss (one term is sketched after this list):

  • Negative log-likelihood
  • Modal uncorrelation (statistical independence of latent modes)
  • Evolution consistency and prediction in state/physical space
  • Velocity consistency (latent velocity matches displacement derivative)
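
As a concrete instance of the second term, a minimal decorrelation penalty on the latent modes; the other terms compose analogously as a weighted sum, with all weights being problem-dependent hyperparameters:

```python
import torch

def modal_decorrelation(z):
    """Penalize off-diagonal latent covariance (statistical independence of modes)."""
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return (off_diag ** 2).sum()

# Full objective (schematic): loss = w_nll * nll + w_dec * modal_decorrelation(z)
#                                  + w_evo * evolution_term + w_vel * velocity_term
```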

Variational PINF in Bayesian inverse problems uses ELBO-type objectives with “soft” residual penalties, seamlessly integrating physical laws via automatic differentiation (Meng, 2023).

3. Representative Applications

High-dimensional PDE Solution: PINF solves both time-dependent and steady-state Fokker-Planck equations in up to 50D, remains mesh-free, normalizes density exactly, and achieves sub-percent relative error (Liu et al., 2023).

Bayesian Inference for Spectroscopy: Sylvester PINF achieves calibrated posterior distributions in magnetic resonance spectroscopy, accurately quantifies metabolite-concentration uncertainty, and exposes parameter correlations and bimodality not captured by the Cramér-Rao lower bound (CRLB) (Merkofer et al., 6 May 2025).

Modal System Identification: PINF applies to nonlinear dynamical systems by mapping measurements to statistically independent latent coordinates through invertible normalizing flows, facilitating nonlinear modal decomposition and accurate long-term prediction (Rostamijavanani et al., 23 Jan 2025).

Field Reconstruction in Rare-event Detectors: Continuous PINF architectures enable differentiable electric-field reconstruction in noble-element time-projection chambers, enforcing the conservative (curl-free) field structure required by Maxwell's equations and reducing calibration-data requirements by an order of magnitude (Li et al., 29 Oct 2025).

Efficient Physical Simulation: Hamiltonian PINF delivers volume-preserving fast solvers for Vlasov-Poisson equations, recovers explicit Hamiltonians with physically meaningful parameters, and generalizes to intermediate states (Souveton et al., 7 May 2025).

General Inverse Problems: PINF for transcranial ultrasound enables fast amortized Bayesian image reconstruction by fusing conditional flows and adjoint-based summary statistics, with robust uncertainty quantification (Orozco et al., 2023).

4. Network Parameterization, Computational Properties, and Training

PINF architectures utilize deep coupling layers or neural ODE blocks with time-dependent conditioning, spectral-invariant layers, or permutation/translation-invariant neural potentials.

Losses are minimized with Adam using problem-dependent learning rates and batch sizes and, in inverse settings, with stochastic Monte Carlo estimates over the relevant latent/posterior variables. PINF supports mini-batch training and natural integration of automatic differentiation for PDE residuals, and does not require explicit grid discretization.
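
A generic training skeleton under these conventions; `model`, `data_nll`, and `pde_residual` are placeholders for the problem-specific flow, likelihood, and autodiff residual, and `model.dim` is an assumed attribute:

```python
import torch

def train(model, data_nll, pde_residual, n_iters=10_000, batch=256, lam=1.0, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iters):
        # Mesh-free: draw fresh random collocation points each iteration.
        x = torch.rand(batch, model.dim, requires_grad=True)
        t = torch.rand(batch, 1, requires_grad=True)
        loss = data_nll(model) + lam * (pde_residual(model, x, t) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```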

5. Empirical Results, Quantitative Benchmarks, and Uncertainty Quantification

| Application | PINF Metric (Best/Median) | Baseline Comparison |
| --- | --- | --- |
| Fokker-Planck (Liu et al., 2023) | MAPE < 1%, rel. error < 0.2% | Matches analytic solution visually and quantitatively |
| MRS (Merkofer et al., 6 May 2025) | SNF MAE 0.797, best ELBO | LCModel MAE 1.34; VAE higher KL |
| TPC field (Li et al., 29 Oct 2025) | MSE 6 cm² (6e5 events) | Histogram FDC (5e6 events), similar accuracy |
| Modal ID (Rostamijavanani et al., 23 Jan 2025) | Reconstruction MSE 1e-6 | POD MSE 2.9e-6 vs 7.1e-3 |
| Hamiltonian PDE (Souveton et al., 7 May 2025) | W1(q) ≈ 0.007–0.057 | Baseline MLP W1(q) ≈ 0.138 |
| Ultrasound (Orozco et al., 2023) | PSNR 38.67 dB, SSIM 0.9646 | FWI 33.25 dB, U-Net 35.63 dB [slices] |

Posterior calibration curves, moment matching, and uncertainty contraction with increased measurement data confirm robust Bayesian behavior. Effective sample size in PI–SNF remains ≈0.3 even at low string tension, outperforming deterministic flows in lattice field theory sampling (Caselle et al., 2023).
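
For reference, the effective sample size quoted above is the standard importance-weight statistic, computable from log importance weights $\log(p/q)$ as in this sketch:

```python
import torch

def ess_per_sample(log_w):
    """ESS/N = (sum w)^2 / (N * sum w^2) from log importance weights."""
    w = torch.exp(log_w - log_w.max())  # shift for stability; cancels in the ratio
    return (w.sum() ** 2 / ((w ** 2).sum() * len(w))).item()
```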

6. Limitations and Extensions

PINF inherits complexity from normalizing flows (scaling, memory) and may require careful handling of loss weighting, regularizer choice, and latent parameterization for very high-dimensional problems. Specific limitations include:

  • Sensitivity to learning-rate and architecture scaling (Rostamijavanani et al., 23 Jan 2025)
  • Need for correct physical residual libraries in model discovery (Both et al., 2021)
  • No built-in guarantee of invertibility in some amortized PINN-inspired flows (Prasha et al., 14 Sep 2025)
  • Difficulty for nonlinear or highly resonant multimode systems, addressed in extensions such as spectral-submanifold theory or graph-coupled flows.

Developments in symplectic and operator-informed flows, integration with PINNs/DeepONets, and expansion to scientific simulation (e.g., calorimetry, multiphysics Monte Carlo) offer broadening roles for PINF approaches.

7. Relation to Other Physics-Informed Machine Learning Paradigms

PINF generalizes the expressivity of PINNs and DeepONets by constructing generative models with exact density evaluation, inherent uncertainty quantification, and physics-based structural constraints. Unlike classical PINNs, PINF flows do not rely on grid-based residuals; normalization is guaranteed by the invertible flow and probabilistic matching. The connection to Bayesian parametric inference is leveraged through variational objectives, spectral priors (GAN/KL expansion), and stochastic flow mappings across latent spaces (Meng, 2023, Both et al., 2021).

By sampling characteristic flows, enforcing conservation laws, and decoding through analytic physical models, PINF approaches unify data-driven learning and physical law adherence, establishing a rigorous foundation for the scientific use of normalizing flows as interpretable, uncertainty-aware, and physics-compatible models.
