Generalised Flow Maps: Unified Transport Modeling
- Generalised Flow Maps form a unified mathematical framework that extends classical flow maps by incorporating stochastic dynamics and geometric constraints on manifolds.
- They employ two-parameter families and self-distillation training strategies to efficiently model, simulate, and learn transport phenomena in varied spaces.
- Empirical benchmarks demonstrate state-of-the-art performance in applications like protein torsion synthesis and wireless channel mapping with significantly reduced computational steps.
Generalised Flow Maps (GFMs) provide a unified mathematical and algorithmic formalism for modeling, simulating, and learning transport phenomena and probabilistic flows in both Euclidean and non-Euclidean (Riemannian and Lie group) spaces. The concept extends classical flow maps—deterministic solutions of ODEs describing transport of mass, probability, or state—by incorporating stochasticity, geometric constraints, and few-step generative modeling tools suitable for modern applications such as geometric deep learning, generative modeling, turbulence, and wireless channel map synthesis.
1. Mathematical Foundations and Definitions
Generalised Flow Maps are two-parameter families of maps $X_{s,t}: \mathcal{M} \to \mathcal{M}$ on a manifold $\mathcal{M}$, corresponding to the transport of points or particles from time $s$ to time $t$ under a generative probability flow. Let $p_0, p_1$ be probability densities on $\mathcal{M}$. The map $X_{s,t}$ is uniquely defined such that, for every solution curve $(x_t)_{t \in [0,1]}$ of the Riemannian probability-flow ODE
$$\dot{x}_t = v_t(x_t),$$
with $x_0 \sim p_0$ and $p_1$ the target, one has $X_{s,t}(x_s) = x_t$ for all $s, t \in [0,1]$ (Davis et al., 24 Oct 2025).
On a Riemannian manifold, the interpolating curves between endpoints $x_0, x_1 \in \mathcal{M}$ are geodesics: $x_t = \exp_{x_0}\!\big(t \log_{x_0}(x_1)\big)$, $t \in [0,1]$. In Euclidean space, this reduces to the linear segment $x_t = (1-t)\,x_0 + t\,x_1$. Parametric representations use the exponential map: $X^\theta_{s,t}(x) = \exp_x\!\big((t-s)\, v_\theta(x, s, t)\big)$, where $v_\theta$ is a learnable vector field.
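As a concrete instance of the geodesic interpolant above, here is a minimal numpy sketch of the exponential and logarithm maps on the unit sphere $S^2$ (the helper names and tolerances are illustrative, not from the cited papers):

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from x along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    """Logarithm map on the unit sphere: tangent vector at x pointing toward y."""
    d = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    if d < 1e-12:
        return np.zeros_like(x)
    u = y - np.dot(x, y) * x          # project y onto the tangent space at x
    return d * u / np.linalg.norm(u)

def geodesic(x0, x1, t):
    """Geodesic interpolant x_t = exp_{x0}(t * log_{x0}(x1))."""
    return sphere_exp(x0, t * sphere_log(x0, x1))

x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 0.0])
xm = geodesic(x0, x1, 0.5)   # geodesic midpoint, still on the sphere
```

In Euclidean space the same `geodesic` signature would reduce to `(1 - t) * x0 + t * x1`, which is the sense in which the manifold construction generalises the linear interpolant.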
In the context of turbulence, the notion of generalised flows refers to probability measures on sets of Lagrangian trajectories, extending Arnold's deterministic Lagrangian flow to a probabilistic setting (Thalabard et al., 2018).
2. Theoretical Properties and Equivalent Characterisations
Generalised Flow Maps possess several equivalent characterisations on a manifold $\mathcal{M}$:
- Generalised Lagrangian: $\partial_t X_{s,t}(x) = v_t\big(X_{s,t}(x)\big)$.
- Generalised Eulerian: $\partial_s X_{s,t}(x) + \mathrm{d}_x X_{s,t}(x)\,[v_s(x)] = 0$, replacing ambient-space gradients by manifold differentials.
- Semigroup: $X_{u,t} \circ X_{s,u} = X_{s,t}$ for $s \le u \le t$, reflecting compositional consistency.
- Tangent Condition: $\partial_t X_{s,t}(x)\big|_{t=s} = v_s(x)$, ensuring correspondence with the instantaneous vector field on the diagonal.
These equivalences ensure that GFMs unify a broad class of transport and generative modeling frameworks, including consistency models, shortcut models, and mean flows, with the Riemannian manifold setting generalising addition to the exponential map and gradients to differentials (Davis et al., 24 Oct 2025).
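The semigroup and tangent conditions can be checked numerically whenever the flow has a closed form. A minimal sketch, using the toy Euclidean ODE $\dot{x} = -x$, whose flow map is $X_{s,t}(x) = e^{-(t-s)}x$:

```python
import math

def flow_map(s, t, x):
    """Closed-form flow map of dx/dt = -x: X_{s,t}(x) = exp(-(t - s)) * x."""
    return math.exp(-(t - s)) * x

x, s, u, t = 2.0, 0.1, 0.4, 0.9

# Semigroup: composing s -> u and u -> t equals transporting s -> t directly.
lhs = flow_map(u, t, flow_map(s, u, x))
rhs = flow_map(s, t, x)

# Tangent condition: d/dt X_{s,t}(x) at t = s recovers the vector field v_s(x) = -x.
h = 1e-6
tangent = (flow_map(s, s + h, x) - x) / h   # finite-difference approximation
```

Self-distillation training enforces exactly these identities on a learned $X^\theta_{s,t}$, for which no closed form is available.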
3. Training Methodologies for Generative Modeling
Generalised Flow Maps are deployed as few-step generative models using self-distillation training strategies that enforce one of the GFM characterisations off-diagonal (i.e., for $s \neq t$), alongside Riemannian flow matching along the diagonal $s = t$:
- Generalised Lagrangian Self-Distillation (G-LSD): Enforces pathwise ODE consistency in time.
- Generalised Eulerian Self-Distillation (G-ESD): Enforces stationarity with respect to the flow's starting position, yielding a Riemannian generalisation of mean flows and consistency training.
- Generalised Progressive Self-Distillation (G-PSD): Enforces the semigroup property, recovering shortcut model analogues.
The overall loss combines a Riemannian flow-matching loss on the diagonal with the relevant self-distillation penalty. Practical training utilises batched sampling, forward passes through neural parametrisations of $X^\theta_{s,t}$, and stepwise stochastic gradient descent (Davis et al., 24 Oct 2025).
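A schematic of how such a combined objective might look, in a deliberately simplified Euclidean setting with a linear stand-in for the neural parametrisation (the function names, loss weighting, and time grid are illustrative, not the papers' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x, s, t):
    """Toy flow-map parametrisation X_{s,t}(x) = x + (t - s) * v_theta(x),
    with a linear stand-in 'network' v_theta(x) = theta * x."""
    return x + (t - s) * theta * x

def gfm_loss(theta, x0, x1):
    """Diagonal flow-matching term plus an off-diagonal semigroup (G-PSD-style) penalty."""
    s, u, t = 0.0, 0.5, 1.0
    xt = (1 - u) * x0 + u * x1        # linear interpolant at time u
    target_v = x1 - x0                # conditional velocity for straight paths
    h = 1e-4
    model_v = (model(theta, xt, u, u + h) - xt) / h   # tangent on the diagonal
    fm = np.mean((model_v - target_v) ** 2)
    # Semigroup self-distillation: X_{s,t} should equal X_{u,t} after X_{s,u}.
    direct = model(theta, x0, s, t)
    composed = model(theta, model(theta, x0, s, u), u, t)
    sd = np.mean((direct - composed) ** 2)
    return fm + sd

x0 = rng.standard_normal(128)         # prior batch
x1 = rng.standard_normal(128) + 2.0   # target batch
loss = gfm_loss(0.3, x0, x1)
```

In practice `theta` would be the weights of a neural network, the times $(s, u, t)$ would be sampled per batch, and the composed branch is typically evaluated under a stop-gradient to stabilise the distillation target.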
In physics-informed inverse problems such as channel knowledge map (CKM) construction, a variant called linear transport guided flow matching (LT-GFM) models the process as a deterministic ODE along optimal transport paths between prior and target, with supervised alignment of the velocity field to the minimum transport direction (Huang et al., 6 Jan 2026).
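The deterministic-ODE sampling step of such linear-transport formulations can be illustrated with few-step Euler integration. A toy sketch, assuming a closed-form constant velocity field rather than a learned CKM model:

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity(x, t):
    """Velocity along straight transport paths x_t = x_0 + t * shift:
    for the pairing x1 = x0 + shift, the velocity is the constant shift
    (known in closed form here; in LT-GFM it is a learned network)."""
    return 2.0

def euler_sample(x0, n_steps=10):
    """Few-step Euler integration of the transport ODE dx/dt = v(x, t)."""
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

x0 = rng.standard_normal(10_000)      # prior samples
x1 = euler_sample(x0, n_steps=10)     # target samples after 10 Euler steps
```

Because the paths are straight, a handful of Euler steps already integrates the ODE essentially exactly, which is the intuition behind the order-of-magnitude step reduction over ancestral diffusion sampling reported below.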
4. Domain-Specific Extensions: Lie Groups and Turbulence
The GFM formalism generalises to non-Euclidean settings:
- Lie Groups: On a Lie group $G$ with exponential map $\exp_G : \mathfrak{g} \to G$, the interpolation between $g_0, g_1 \in G$ is the exponential curve $g_t = g_0 \exp_G\!\big(t \log_G(g_0^{-1} g_1)\big)$. The conditional vector field for flow matching becomes $u_t(g_t \mid g_0, g_1) = \log_G(g_0^{-1} g_1)$, expressed in the Lie algebra via left translation and naturally implemented via group and algebra operations. Neural networks parametrise vector fields by operating in coordinate charts or via group actions, and training minimises the squared-norm difference to the analytic conditional field (Sherry et al., 1 Apr 2025).
- Turbulence: Brenier's generalised least-action principle reframes ideal fluid mechanics as a variational problem over probability measures on flow maps, yielding "generalised flows" that describe turbulent Lagrangian statistics. The GFM in this context seeks measures minimising expected action, subject to incompressibility and prescribed initial/final couplings. Monte Carlo methods on finite permutation flows enable numerical solution, capturing classical solutions for short lags and coarse-graining turbulent statistics for longer times (Thalabard et al., 2018).
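The Lie-group interpolant above can be made concrete on SO(3) using the Rodrigues formula for the exponential and logarithm maps (a standard construction; the helper names are illustrative):

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (an element of so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

def so3_log(R):
    """Logarithm map from SO(3) to a rotation vector (away from th = pi)."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    return th / (2.0 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def group_interp(g0, g1, t):
    """Exponential curve g_t = g0 exp(t log(g0^{-1} g1)) on SO(3)."""
    return g0 @ so3_exp(t * so3_log(g0.T @ g1))

g0 = np.eye(3)
g1 = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z
gm = group_interp(g0, g1, 0.5)                  # halfway: 45-degree rotation
```

The same pattern extends to other matrix Lie groups by swapping in the appropriate exponential and logarithm.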
5. Empirical Benchmarks and Comparative Performance
GFMs and their variants have demonstrated empirical advantages in geometric generative modeling:
- Riemannian Manifolds: On tasks including protein torsion angle synthesis (tori), geospatial catastrophe generation (sphere), rotations (SO(3)), and hyperbolic manifold data, GFM variants achieve state-of-the-art sample quality for single- and few-step inference. For example, on 2D protein tori, one neural-function-evaluation (NFE) yields MMD 0.02–0.05 versus 0.45 for Riemannian flow matching (RFM), with up to 20× fewer steps required to match sample quality. GFM variants also achieve superior or competitive negative log-likelihoods relative to diffusion or flow-based baselines (Davis et al., 24 Oct 2025).
- Wireless Channel Maps (CKMs): LT-GFM enables real-time, high-fidelity CKM generation with an order-of-magnitude reduction in inference steps (10 Euler steps vs. 1000 for DDPM), a 43% reduction in Fréchet Inception Distance (FID), and a factor-of-25 speedup. In both channel gain map (CGM) and spatial correlation map (SCM) synthesis, physics-informed conditioning (e.g., building masks, edge maps) and Hermitian symmetry are enforced for physical validity (Huang et al., 6 Jan 2026).
| Domain | GFM Variant | Key Metric Improvement | Reference |
|---|---|---|---|
| Protein Tori | GFM (Riemannian) | 20× MMD reduction at 1 NFE | (Davis et al., 24 Oct 2025) |
| SO(3) Rotations | GFM | Matched NLL, better low-step MMD | (Davis et al., 24 Oct 2025) |
| Wireless CKMs | LT-GFM | 43% FID reduction, 25× faster inference | (Huang et al., 6 Jan 2026) |
For turbulence, GFM matches classical Eulerian flows below a critical time, coarse-grains multi-scale structures, and enables explicit statistical characterization of turbulent dispersion (Thalabard et al., 2018).
6. Physical, Geometric, and Algorithmic Constraints
GFMs incorporate domain-specific constraints for enhanced physical and statistical fidelity:
- Physics-Informed Priors: Integration of edge maps and building masks to preserve environmental discontinuities; enforcement of Hermitian symmetry in complex-valued matrices for SCM validity (Huang et al., 6 Jan 2026).
- Geometric Consistency: Exponential-map parametrisation and differential-geometric treatment ensure consistent behavior on manifolds, Lie groups, and homogeneous spaces, with explicit extension from Euclidean flow maps (Davis et al., 24 Oct 2025, Sherry et al., 1 Apr 2025).
- Probabilistic Coarse-Graining: In turbulence, GFMs yield probabilistic descriptions that average fine-scale fluctuations while preserving large-scale coherence, effectively bridging deterministic and stochastic modeling regimes (Thalabard et al., 2018).
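The Hermitian-symmetry constraint on spatial correlation maps amounts to a simple projection that can be applied to raw network outputs. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def hermitize(A):
    """Project a complex matrix onto the Hermitian subspace: A -> (A + A^H) / 2.
    A valid spatial correlation matrix must satisfy A = A^H."""
    return 0.5 * (A + A.conj().T)

rng = np.random.default_rng(2)
raw = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # raw output
scm = hermitize(raw)   # Hermitian, hence real diagonal and real eigenvalues
```

A full physical-validity pipeline would additionally project onto the positive semidefinite cone, but Hermitian symmetry is the constraint the text singles out.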
A key caveat is the necessity to restrict certain variational formulations (e.g., in turbulence) to time intervals shorter than the intrinsic turnover time for physical relevance. Beyond this window, non-deterministic behavior can emerge, potentially reducing physical interpretability (Thalabard et al., 2018).
7. Theoretical and Practical Unification
Generalised Flow Maps unify a range of generative modeling paradigms:
- When instantiated with appropriate interpolants, exponential maps, and training losses, GFMs encompass consistency models, shortcut models, and mean flows as special cases in both Euclidean and Riemannian geometry.
- They provide a framework for closed-form, simulation-free generative modeling on matrix Lie groups, enabling pose, rotation, and transformation-aware synthesis (Sherry et al., 1 Apr 2025).
- The self-distillation-based training schemes enable few-step sampling with high sample quality, providing computational advantages over classical diffusion and flow-matching methods (Davis et al., 24 Oct 2025, Huang et al., 6 Jan 2026).
GFMs thus establish a rigorous, efficient, and extendable foundation for probabilistic transport modeling across a broad range of scientific and engineering domains, from fluid dynamics to wireless communications and geometric machine learning.