Mirror Flow Matching for Constrained Models
- The paper introduces Mirror Flow Matching, a unified geometric framework that leverages mirror maps and regularized velocity fields for constrained generative modeling.
- It integrates optimal transport, mirror descent, and heavy-tailed prior matching to ensure sample feasibility and robust convergence in convex domains.
- Empirical results on synthetic and real data demonstrate lower MMD and improved FID scores compared to baseline methods, underscoring its effectiveness in constraint satisfaction.
Mirror Flow Matching encompasses a class of generative modeling and symmetry estimation approaches that leverage mirror maps, reflection geometry, and velocity field matching—often targeting convex or constrained domains and handling heavy-tailed distributions. The methodology integrates advances from optimal transport, flow matching, and mirror descent, providing a unified geometric framework for sampling, constraint satisfaction, and stability in generative models.
1. Geometric Foundation of Mirror Flow Matching
Mirror Flow Matching is formulated for generative modeling on convex domains, where the aim is to construct deterministic or stochastic flows that transform a tractable prior (often Gaussian or heavy-tailed) into a target distribution supported on a convex set (Guan et al., 10 Oct 2025). Standard flow matching techniques encounter issues when directly applied to constrained domains: unconstrained flows may generate samples outside the constraint set, and naive coupling to Gaussian priors can destabilize matching when targets or their dual representations exhibit heavy tails.
To address these challenges, Mirror Flow Matching uses a mirror map $\nabla\phi$, the gradient of a strictly convex potential $\phi$, which geometrically lifts the constrained domain into an unconstrained dual space. For a point $x$ in the domain, the mapping $y = \nabla\phi(x)$ encodes the data in mirror (dual) coordinates. In the dual space, linear interpolation is well-posed, and samples are transported using a learned velocity field $v_\theta(y, t)$. The inverse mirror map $\nabla\phi^{*}$, the gradient of the convex conjugate of $\phi$, converts dual samples back into the primal space, ensuring that generated points respect physical or domain constraints.
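As a concrete illustration (not the paper's construction), the sketch below uses the entropic mirror map on the open unit box $(0,1)^d$: its gradient is the component-wise logit and its inverse is the sigmoid, so every dual point maps back to a feasible primal point. The function names and the choice of domain are illustrative assumptions.

```python
import numpy as np

def mirror_map_grad(x, eps=1e-12):
    """Dual lift y = grad(phi)(x) for the entropic mirror map
    phi(x) = sum_i [x_i log x_i + (1 - x_i) log(1 - x_i)]
    on the open unit box (0, 1)^d; component-wise this is the logit."""
    x = np.clip(x, eps, 1.0 - eps)
    return np.log(x) - np.log1p(-x)

def mirror_map_inv(y):
    """Inverse map x = grad(phi*)(y), the sigmoid: every dual point
    y in R^d returns to a feasible primal point in (0, 1)^d."""
    return 1.0 / (1.0 + np.exp(-y))

# Any dual-space point, however far out, maps back inside the constraint set.
y = 10.0 * np.random.standard_normal(5)
x = mirror_map_inv(y)
assert np.all((x > 0.0) & (x < 1.0))
```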
2. Regularized Mirror Maps and Heavy-Tailed Priors
The framework identifies two principal obstacles for stable flow matching in convex domains:
- Log-barrier mirror maps (e.g., the standard logarithmic barrier of a polytope) can induce dual distributions with infinite moments; such heavy tails render the flow ODEs ill-posed and invalidate error bounds.
- Gaussian priors, when mismatched to heavy-tailed dual targets, can amplify mode shifting and gradient explosion in the conditional velocity field estimation.
Mirror Flow Matching rectifies these issues through a regularized mirror map that tempers the barrier so that its dual image has only polynomially decaying tails and adds a strongly convex quadratic term, with a regularization parameter controlling the tail decay (Guan et al., 10 Oct 2025). The polynomial decay of the barrier component ensures that all required dual moments are finite provided the data mass near the boundary decays at a matching polynomial rate. The strong convexity contributed by the quadratic term enforces a metric equivalence between the primal and dual Euclidean spaces, enabling reliable transfer of error bounds between them.
Further stabilization is achieved by coupling the dual interpolation with a Student-$t$ prior, which better matches the heavy-tailed dual target and brings the velocities and conditional expectations under analytic control. This design avoids the instability and mode displacement encountered when Gaussian priors are used for dual targets arising from data concentrated near the boundary.
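The exact regularized potential is given in Guan et al. (10 Oct 2025); the schematic below only illustrates the two ingredients described above, a barrier-type gradient augmented with a strongly convex quadratic term, paired with a heavy-tailed Student-$t$ dual prior. The unit-box barrier, the weight `lam`, and the degrees of freedom `nu` are illustrative assumptions rather than the paper's choices.

```python
import torch
from torch.distributions import StudentT

def regularized_mirror_grad(x, lam=1.0, eps=1e-6):
    """Schematic regularized mirror-map gradient: a barrier-type gradient
    plus the gradient lam * x of the strongly convex quadratic (lam/2)||x||^2,
    which provides metric equivalence between primal and dual spaces."""
    x = x.clamp(eps, 1.0 - eps)                      # illustrative unit-box domain (0, 1)^d
    barrier_grad = torch.log(x) - torch.log1p(-x)    # logit barrier gradient
    return barrier_grad + lam * x

# Heavy-tailed dual prior: a Student-t with nu degrees of freedom has
# finite moments only up to order < nu, mirroring heavy-tailed dual targets.
nu, d = 3.0, 16
prior = StudentT(df=nu, loc=torch.zeros(d), scale=torch.ones(d))
y0 = prior.sample((1024,))                           # dual-space noise samples, shape (1024, d)
```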
3. Flow Construction and Velocity Field Matching
Mirror Flow Matching constructs trajectories and velocity fields as follows:
- Draw a data sample $x_1$ from the target distribution and a noise sample $y_0$ from the prior.
- Form a linear interpolation in dual coordinates: $y_t = (1 - t)\, y_0 + t\, \nabla\phi(x_1)$, $t \in [0, 1]$.
- Recover primal trajectories via the inverse mirror map, $x_t = \nabla\phi^{*}(y_t)$.
- The primal velocity is deduced from the chain rule, $\dot{x}_t = \nabla^{2}\phi^{*}(y_t)\,\dot{y}_t$ with dual velocity $\dot{y}_t = \nabla\phi(x_1) - y_0$, where the velocity field $v_\theta$ is fitted by minimizing a regression loss on the flow matching condition (Guan et al., 10 Oct 2025), as sketched in the training step below.
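A minimal conditional flow matching training step in dual coordinates, sketched under the assumptions of the earlier snippets (illustrative logit lift on the unit box, Student-$t$ dual prior, small MLP velocity network); the paper's architecture, mirror map, and loss weighting may differ.

```python
import torch
import torch.nn as nn
from torch.distributions import StudentT

d = 16
velocity_net = nn.Sequential(nn.Linear(d + 1, 256), nn.SiLU(),
                             nn.Linear(256, 256), nn.SiLU(),
                             nn.Linear(256, d))
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)
prior = StudentT(df=3.0, loc=torch.zeros(d), scale=torch.ones(d))

def dual_lift(x, eps=1e-6):
    # Illustrative mirror-map gradient (logit barrier on (0, 1)^d);
    # the paper's regularized mirror map would be used here instead.
    x = x.clamp(eps, 1.0 - eps)
    return torch.log(x) - torch.log1p(-x)

def cfm_step(x1):
    """One flow-matching regression step on the dual-space interpolant."""
    y1 = dual_lift(x1)                         # dual lift of a data batch, shape (B, d)
    y0 = prior.sample((x1.shape[0],))          # heavy-tailed dual prior sample
    t = torch.rand(x1.shape[0], 1)
    yt = (1.0 - t) * y0 + t * y1               # linear interpolation in dual coordinates
    target = y1 - y0                           # conditional dual velocity dy_t/dt
    pred = velocity_net(torch.cat([yt, t], dim=1))
    loss = ((pred - target) ** 2).mean()       # regression onto the target velocity
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```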
This dual-to-primal procedure ensures that generated samples lie within the convex domain and that the constraint geometry is respected at all times. The selection of mirror map and prior determines both the expressivity and the regularity of the velocity field, impacting stability and theoretical convergence.
4. Theoretical Guarantees and Error Analysis
Explicit bounds for Wasserstein convergence and feasibility arise from the regularized mirror map and Student-$t$ prior:
- Spatial Lipschitzness: given the prior tail decay and the polynomial barrier, the velocity field is spatially Lipschitz and temporally regular (Guan et al., 10 Oct 2025).
- Wasserstein convergence rate: the dual-space Wasserstein error of the Euler-discretized learned flow is bounded explicitly in terms of the Lipschitz constant $L$ of the velocity field, the Euler ODE discretization step $h$, the velocity field approximation error $\varepsilon$, and the integration horizon $T$ (Guan et al., 10 Oct 2025). An analogous primal space bound holds by leveraging strong convexity of the regularized mirror map.
- Sample feasibility: by design, primal samples satisfy the domain constraints exactly (a 100% feasibility rate) when the co-designed mirror map and prior are used; see the sampling sketch below.
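For concreteness, a minimal Euler sampler under the same illustrative assumptions (a trained dual velocity network, a Student-$t$ prior, and the logit/sigmoid mirror pair from the earlier sketches): the step size $h = 1/\text{steps}$ plays the role of the discretization parameter in the bound, and the final inverse map guarantees feasibility by construction.

```python
import torch

@torch.no_grad()
def sample(velocity_net, prior, n=256, steps=100):
    """Euler integration of the learned dual-space ODE, followed by the
    inverse mirror map (sigmoid for the illustrative logit lift), so every
    returned sample lies strictly inside the constraint set."""
    h = 1.0 / steps                              # Euler discretization step
    y = prior.sample((n,))                       # start from the heavy-tailed dual prior
    for k in range(steps):
        t = torch.full((n, 1), k * h)
        y = y + h * velocity_net(torch.cat([y, t], dim=1))   # Euler step in dual space
    return torch.sigmoid(y)                      # primal samples in (0, 1)^d
```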
5. Empirical Performance on Synthetic and Real Data
Experiments on synthetic and real-world tasks validate the effectiveness of Mirror Flow Matching (Guan et al., 10 Oct 2025):
- On high-dimensional polytopes and L₂ balls equipped with mixture-of-Gaussians targets, Mirror t-Flow (regularized mirror map + Student-$t$ prior) achieves lower Maximum Mean Discrepancy (MMD; see the estimator sketch after this list) and KL divergence than the Gauge Flow Matching and Reflected Flow Matching baselines.
- The approach guarantees all generated samples lie strictly within the constraints.
- In watermark-constrained image generation (AFHQv2), Mirror Flow Matching matches or improves on the FID and CMMD scores of Mirror Diffusion Models while requiring less training time.
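For reference, a minimal unbiased MMD$^2$ estimator with an RBF kernel, of the kind commonly used for such comparisons; the bandwidth `sigma` is an illustrative choice, not the paper's evaluation setting.

```python
import torch

def mmd2_rbf(x, y, sigma=1.0):
    """Unbiased MMD^2 estimate with RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    Lower values mean generated samples x are closer in distribution to reference samples y."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    n, m = x.shape[0], y.shape[0]
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    term_x = (kxx.sum() - kxx.diagonal().sum()) / (n * (n - 1))   # off-diagonal mean over x pairs
    term_y = (kyy.sum() - kyy.diagonal().sum()) / (m * (m - 1))   # off-diagonal mean over y pairs
    return term_x + term_y - 2.0 * kxy.mean()
```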
These results suggest that the geometric regularization and prior matching intrinsic to Mirror Flow Matching yield advantages in both sample quality and training speed, particularly in domains where boundary violation is unacceptable.
6. Significance and Implications for Constrained Generative Modeling
Mirror Flow Matching provides:
- A principled approach for generative modeling on non-Euclidean or constrained convex domains, extending the applicability of flow matching beyond unconstrained Euclidean target distributions.
- Theoretical tools for handling heavy-tailed target distributions, ensuring moment control and flow regularity by appropriate mirror map and prior selection.
- Practical algorithmic benefits in feasible sample generation, scalability to high-dimensional or complex constraint settings, and favorable empirical performance.
A plausible implication is that further refinements of the mirror map or prior—using data-adaptive potentials or more general heavy-tailed distributions—could expand applicability to broader classes of constraint sets and target measures.
7. Context within Mirror-Based Generative Modeling
Mirror Flow Matching is part of a larger research stream where reflection, symmetry or mirror maps enable stable modeling in geometrically involved domains. Related approaches include Reflected Flow Matching for constrained CNFs (Xie et al., 26 May 2024), Wasserstein mirror gradient flows for optimal transport (Deb et al., 2023), and mirror descent-based strategies in both generative and reinforcement learning models (Chen et al., 31 Jul 2025). The central methodological innovation lies in leveraging mirror geometry—via regularized potentials and tailored priors—to transform the dynamics and matching conditions into settings where traditional unconstrained approaches fail.
In summary, Mirror Flow Matching equips generative modeling on convex and constrained domains with geometric constraint satisfaction, regularization for heavy-tailed measures, and theoretical guarantees for stable velocity field learning and sample feasibility. The integration of regularized mirror maps and heavy-tailed priors establishes both a robust mathematical foundation and an empirically competitive framework for constrained generative learning.