
Rectified MeanFlow: Fast, Efficient One-Step Generation

Updated 3 December 2025
  • Rectified MeanFlow is a generative modeling framework that fuses trajectory straightening with one-step mean velocity estimation to produce high-quality samples.
  • It employs a single reflow iteration and a loss truncation heuristic to reduce training instability and computational cost.
  • Empirical results demonstrate state-of-the-art performance with improved FID scores and faster convergence across various image resolutions.

Rectified MeanFlow is a generative modeling framework that integrates the trajectory-straightening mechanism of Rectified Flow with the efficient one-step sampling paradigm of MeanFlow. By learning the mean velocity field along rectified, nearly straight probability paths using only a single reflow iteration and employing a loss truncation heuristic, Rectified MeanFlow enables fast, high-quality sample generation with significantly reduced computational cost. The approach has demonstrated state-of-the-art empirical performance across multiple image resolutions and task domains, establishing its position among leading “fastforward” generative modeling strategies (Zhang et al., 28 Nov 2025).

1. Background: Probability Flow ODEs and Sampling Challenges

Generative modeling via continuous-time probability flow ODEs involves transporting a simple prior distribution, typically Gaussian $p(\mathbf z)=\mathcal N(0,I)$, to a complex data distribution $p(\mathbf x)$ over a unit interval $t \in [0,1]$, governed by

d\mathbf x_t = v(\mathbf x_t, t)\,dt.

The instantaneous velocity field $v(\mathbf x,t)$ can vary abruptly, necessitating numerous small integration steps for accurate path tracking via numerical solvers (Euler, Runge–Kutta, etc.), each incurring costly network evaluations and potential Jacobian–vector products. This often results in tens to hundreds of neural network calls per sample, adversely affecting efficiency (Zhang et al., 28 Nov 2025).
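To make the cost concrete, the following toy sketch integrates a probability flow ODE with the explicit Euler method. The 1-D velocity field here is a hypothetical stand-in for a trained network, not the paper's model; each step costs one velocity evaluation, and a coarse step count visibly drifts from a fine one.

```python
import math

def v(x, t):
    # hypothetical curved 1-D velocity field standing in for a network
    return -x * math.cos(4.0 * t)

def euler_sample(x1, num_steps):
    """Integrate from t=1 back to t=0; returns (x0, number of v-calls)."""
    x, t = x1, 1.0
    dt = 1.0 / num_steps
    calls = 0
    for _ in range(num_steps):
        x -= v(x, t) * dt   # one network evaluation per step
        calls += 1
        t -= dt
    return x, calls

coarse, n_coarse = euler_sample(1.0, 10)
fine, n_fine = euler_sample(1.0, 1000)
print(n_coarse, n_fine)    # 10 vs. 1000 velocity evaluations
print(abs(coarse - fine))  # the coarse solution drifts from the fine one
```

Accurate sampling along a curved field forces the fine regime, which is exactly the per-sample cost that one-step methods aim to remove.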

2. Trajectory Straightening: Rectified Flow Principle

Rectified Flow straightens sample paths by iteratively “reflowing” couplings between data and prior. Starting from a learned flow $v^k_\theta$ and its associated couplings $(\mathbf x,\mathbf z)\sim p^{k-1}_{xz}$, backward ODE sampling produces new pairings for retraining. With each reflow iteration, the transport paths become increasingly straight, formally measured by the curvature

\kappa(t) = \|\partial_t^2 \mathbf z_t\| = \bigl\|\partial_t v(\mathbf z_t, t)\bigr\|,

where $\mathbf z_t = (1-t)\mathbf x + t\mathbf z$. Ideal straightness ($\kappa(t)\equiv 0$) is approached by re-sampling and training on progressively less curved couplings (Zhang et al., 28 Nov 2025).
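The straightness criterion can be checked numerically. A minimal sketch, assuming a toy 1-D setting with hand-picked endpoints: the central second difference of the linear interpolation $\mathbf z_t = (1-t)\mathbf x + t\mathbf z$ vanishes, while a deliberately bent path has nonzero curvature.

```python
# Numerical check of the straightness criterion kappa(t) = ||d^2 z_t/dt^2||
# via a central second difference (toy 1-D example, stdlib only).
def second_derivative(path, t, h=1e-3):
    return (path(t + h) - 2.0 * path(t) + path(t - h)) / h**2

x, z = 2.0, -1.0
straight = lambda t: (1.0 - t) * x + t * z                 # rectified path
curved = lambda t: (1.0 - t) * x + t * z + t * (1.0 - t)   # bent path

print(abs(second_derivative(straight, 0.5)))  # ~0: kappa vanishes
print(abs(second_derivative(curved, 0.5)))    # ~2: nonzero curvature
```

The bent path adds a $t(1-t)$ bump, whose second time derivative is $-2$; the linear interpolation cancels exactly.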

3. MeanFlow: Fastforward Generation via Mean Velocity Fields

MeanFlow achieves one-step generative sampling by directly learning the time-averaged velocity,

\bar v(\mathbf x) = \mathbb{E}_{t \sim U[0,1]}[v(\mathbf x, t)].

Given $\bar v$, one can deterministically generate samples by

\mathbf x_0 = \mathbf z - \bar v(\mathbf z),

bypassing all ODE integration. The mean velocity is approximated by a neural network, $s_\theta(\mathbf x) \approx \bar v(\mathbf x)$, trained by regressing onto Monte Carlo–averaged instantaneous velocities. However, learning $\bar v$ directly on highly curved flow paths induces unstable gradients and slow convergence (Zhang et al., 28 Nov 2025).
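The one-step recipe above can be sketched in a few lines. This toy assumes a closed-form 1-D velocity field in place of the network $s_\theta$ and estimates the time average by Monte Carlo over $t \sim U[0,1]$.

```python
import random

def v(x, t):
    # hypothetical instantaneous velocity field (stands in for the flow)
    return x * (1.0 + 0.5 * t)

def mean_velocity(x, num_samples=50_000, seed=0):
    # Monte Carlo estimate of bar_v(x) = E_{t~U[0,1]}[v(x, t)]
    rng = random.Random(seed)
    return sum(v(x, rng.random()) for _ in range(num_samples)) / num_samples

z = 2.0
v_bar = mean_velocity(z)   # here E_t[v(z, t)] = 1.25 * z = 2.5
x0 = z - v_bar             # one-step generation, no ODE solver
print(v_bar, x0)
```

In practice $\bar v$ is a trained network evaluated once per sample; the Monte Carlo average here only illustrates the quantity being learned.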

4. Rectified MeanFlow: Unified Framework and Training

Rectified MeanFlow overcomes MeanFlow’s noisy supervision by applying it on couplings sampled from a single reflow step, resulting in trajectories with substantially reduced curvature variance. The training sequence is:

  1. Pretrain the 1-rectified flow $v^1_\theta$ on independent couplings $(\mathbf x, \mathbf z)$.
  2. Generate rectified couplings via backward ODE under $v^1$.
  3. Apply the truncation heuristic: discard the top $k\%$ of pairs by $\|\mathbf x - \mathbf z\|$ to remove residual high-curvature cases.
  4. Train a MeanFlow network $u_\theta(\mathbf x, r, t)$ using the simplified objective:

L(\theta) = \mathbb{E}_{(\mathbf x, \mathbf z),\, r<t}\Big\|\,u_\theta(\mathbf z_t, r, t) - \big[v^1(\mathbf z_t, t) - (t-r)\tfrac{d}{dt}u_\theta(\mathbf z_t, r, t)\big]\Big\|_2^2,

with the path parameterization $\mathbf z_t = (1-t)\mathbf x + t\mathbf z$ and the time derivative computed via a Jacobian–vector product (JVP).

Typically, one reflow suffices to stabilize training. The truncation step further reduces loss variance by eliminating the worst-case, high-curvature pairs; $k=10\%$ was found optimal (Zhang et al., 28 Nov 2025).
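The $\tfrac{d}{dt}$ term in the objective is a total derivative along the path, which autodiff frameworks obtain with a single JVP in the tangent direction $(v, 0, 1)$. A stdlib stand-in, using finite differences on a hypothetical closed-form $u$, shows what that JVP computes.

```python
# Total time derivative d/dt u(z_t, r, t) used in the MeanFlow target:
# since z_t moves with velocity v, it equals v * du/dz + du/dt.
# Autodiff frameworks compute this with one JVP; here we use a
# central finite difference on a toy closed-form u for illustration.
def u(z, r, t):
    return (t - r) * z            # toy stand-in for the network u_theta

def ddt_u(z, r, t, v, h=1e-5):
    # perturb z and t jointly in the direction (v, 1)
    return (u(z + v * h, r, t + h) - u(z - v * h, r, t - h)) / (2.0 * h)

z_t, r, t, v = 0.7, 0.2, 0.9, 1.3
total = ddt_u(z_t, r, t, v)
# analytic check: d/dt [(t - r) z_t] = z_t + (t - r) v = 1.61
print(total)
```

For the toy $u$ the finite difference matches the analytic total derivative exactly; with a real network, one `jvp` call replaces the two extra forward passes.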

5. Theoretical Rationale and Interplay of Components

By straightening paths in advance (rectification), the outputs $v^1(\mathbf z_t, t)$ and their time-averaged counterparts become closely aligned and low in variance, which supports efficient MeanFlow training. The combination of (i) straightened couplings and (ii) mean velocity field modeling circumvents the requirement for perfect trajectory linearity, as MeanFlow remains robust once the overall curvature is reduced. Thus, Rectified MeanFlow creates a synergy between geometric simplification (rectification) and efficient field estimation (MeanFlow), yielding improved sample quality and faster convergence (Zhang et al., 28 Nov 2025).

6. Algorithmic Workflow

Training Procedure

# Stage 1: pretrain the 1-rectified flow v_theta on independent couplings
for i in range(T_flow):
    x = sample_data()
    z = sample_prior()
    t = uniform(0, 1)
    z_t = (1 - t) * x + t * z
    target = z - x                      # straight-path velocity target
    theta -= eta * grad(norm(v_theta(z_t, t) - target)**2)

# Stage 2: generate rectified couplings, then truncate the top-k% by distance
couplings = []
for i in range(N):
    x = sample_data()
    z = solve_backward_ODE(x, v1)       # v1: the pretrained 1-rectified flow
    d = norm(x - z)
    couplings.append((x, z, d))
q = percentile([d for (_, _, d) in couplings], 100 - k)
RectifiedSet = [(x, z, d) for (x, z, d) in couplings if d <= q]

# Stage 3: train the MeanFlow network u_phi on the rectified couplings
for j in range(T_MF):
    x, z, _ = sample(RectifiedSet)
    r, t = sample_times()               # r < t
    z_t = (1 - t) * x + t * z
    u_tgt = v1(z_t, t) - (t - r) * JVP_time(u_phi(z_t, r, t))
    phi -= eta * grad(norm(u_phi(z_t, r, t) - stopgrad(u_tgt))**2)

One-Step Sampling

z = sample_prior()
x_hat = z - u_phi(z, r=0, t=1)
return x_hat

7. Empirical Performance

Rectified MeanFlow exhibits superior sample quality and training efficiency relative to previous one-step and rectified flow distillation methods, as evidenced by class-conditional FID scores on ImageNet:

| Resolution | Backbone (CFG/Autoguidance) | 2-rectified flow++ FID | Prior best one-step FID | Re-MeanFlow FID |
|---|---|---|---|---|
| 64×64 | EDM2-S, Autoguidance | 4.31 | ≈2.88 | 2.87 |
| 256×256 | SiT-XL, CFG | — | 3.43 | 3.41 |
| 512×512 | EDM2-S, Autoguidance | — | 3.32 (AYF) | 3.03 |

Training efficiency (ImageNet-64): GPU-hours reduced by 2.9× vs. AYF and 26.6× vs. 2-rectified flow++. FLOPs remain the lowest among comparably accurate one-step models (Zhang et al., 28 Nov 2025).

8. Extensions, Practical Applications, and Broader Impact

Rectified MeanFlow principles have been adapted to image enhancement scenarios, notably in FlowIE (Zhu et al., 1 Jun 2024). There, rectified flows establish linear many-to-one mappings from noise to high-quality images, conditioned via adapters leveraging coarse restorations. Second-order explicit midpoint integration reduces inference to 4–5 steps, resulting in over 10× speedup compared to diffusion-based methods, while maintaining or improving visual fidelity across tasks such as blind super-resolution, colorization, inpainting, and dehazing. Ablations confirm that the mean-value update imparts measurable improvements over Euler schemes, underscoring the generality and practical utility of the rectified meanflow approach in real-world enhancement pipelines.
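The second-order explicit midpoint update mentioned for FlowIE can be illustrated on a toy 1-D field (hypothetical, not the paper's model): one extra velocity evaluation per step yields a markedly more accurate solution than Euler at the same step count.

```python
import math

def v(x, t):
    # hypothetical 1-D velocity field standing in for a rectified flow
    return -x * math.sin(3.0 * t)

def midpoint_step(x, t, dt):
    x_mid = x + 0.5 * dt * v(x, t)          # half Euler step
    return x + dt * v(x_mid, t + 0.5 * dt)  # full step with midpoint slope

def integrate(step, x0, num_steps):
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        x = step(x, i * dt, dt)
    return x

euler_step = lambda x, t, dt: x + dt * v(x, t)
ref = integrate(midpoint_step, 1.0, 10_000)         # fine reference
print(abs(integrate(euler_step, 1.0, 5) - ref))     # Euler, 5 steps
print(abs(integrate(midpoint_step, 1.0, 5) - ref))  # midpoint, 5 steps: closer
```

The midpoint rule's $O(\Delta t^2)$ local accuracy is what lets FlowIE-style pipelines run in 4–5 steps rather than dozens.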

Recent developments explore the decomposition and optimization dynamics of MeanFlow, as in AlphaFlow (Zhang et al., 23 Oct 2025), which addresses the gradient conflict between trajectory flow matching and consistency, leading to curriculum-based convex objectives that further improve convergence and overall sample quality. Improved MeanFlow (iMF) (Geng et al., 1 Dec 2025) reformulates the training loss for network-independence and introduces flexible explicit guidance, achieving even higher single-step FID on ImageNet by combining multi-token conditioning and streamlined architectures.


Rectified MeanFlow establishes a bridge between geometric trajectory straightening and fastforward velocity-field modeling, facilitating efficient, high-fidelity one-step generation with broad empirical and applied success (Zhang et al., 28 Nov 2025, Zhu et al., 1 Jun 2024).
