
Interval-Averaged Velocity

Updated 3 December 2025
  • Interval-averaged velocity is defined as the mean instantaneous velocity field over a finite interval, aggregating dynamic evolution into a computable vector.
  • It is applied in generative modeling to achieve one-step or few-step mappings between high-dimensional distributions and to enable parameter-efficient network compression.
  • Its theoretical properties, such as path-additivity and differential identities, provide stable training and reduced computational overhead in modern deep neural architectures.

Interval-averaged velocity, also referred to as "MeanFlow," denotes the mean instantaneous velocity field along a continuous path over a finite time interval. Originating in the discretization of ordinary differential equations (ODEs) and modern generative modeling, this concept provides a computationally tractable summary of dynamical evolution, facilitating one-step or few-step mappings between high-dimensional distributions. Recent research formalizes the use of interval-averaged velocity as the foundation for scalable, efficient, and robust generative models and for parameter-efficient compression in deep neural architectures such as ResNet.

1. Formal Definition and Foundational Identities

For a dynamical system with state $x \in \mathbb{R}^d$ evolving under an instantaneous velocity field $v_t(x): \mathbb{R}^d \times [0,1] \to \mathbb{R}^d$, the interval-averaged velocity over $[r,t]$ is defined as

$$u(x_t, r, t) := \frac{1}{t-r} \int_{r}^{t} v(x_\tau, \tau)\, d\tau,$$

where $x_\tau$ is the path state at time $\tau$ and $(t-r) > 0$. This operation aggregates the instantaneous velocities across a finite span, yielding an $\mathbb{R}^d$-valued vector that encodes the net rate of change over $[r,t]$. For base time $0$ and endpoint $T$, the special case reads $\bar v_T(x) = \frac{1}{T} \int_{0}^{T} v_t(x)\, dt$. This integral formulation enables algebraic and differential identities central to consistency-based and flow-matching generative models:

  • Path-additivity / Interval Splitting Consistency: For any $r < s < t$,

$$(t-r)\, u(x_t, r, t) = (s-r)\, u(x_s, r, s) + (t-s)\, u(x_t, s, t),$$

enabling recursive computation and algebraic parameterization across nested intervals (Guo et al., 22 Jul 2025).

  • MeanFlow Differential Identity: Differentiating $(t-r)\, u(x_t, r, t)$ with respect to $t$, and assuming sufficient regularity,

$$v(x_t, t) = u(x_t, r, t) + (t-r)\left[ \partial_t u(x_t, r, t) + \nabla_x u(x_t, r, t)\, v(x_t, t) \right],$$

establishing a direct relationship between instantaneous and averaged fields (You et al., 24 Aug 2025).

These properties allow interval-averaged velocity to serve as a regression target, yielding tractable learning objectives that obviate the need for full trajectory optimization.
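
These identities can be checked directly by numerical quadrature. The NumPy sketch below is purely illustrative and does not come from the cited papers: it picks an arbitrary toy flow $v(x,t) = a\,x$ with a closed-form trajectory, approximates $u(x_t, r, t)$ with the trapezoidal rule, and verifies both the defining relation $(t-r)\,u(x_t, r, t) = x_t - x_r$ and the path-additivity identity.

```python
import numpy as np

# Toy 1-D flow v(x, t) = a * x; the trajectory started at x_r at time r has
# the closed form x_tau = x_r * exp(a * (tau - r)).
a, x_r = 0.7, 1.5
r, s, t = 0.2, 0.5, 0.9

def traj(tau):
    """Closed-form path state x_tau of the toy flow."""
    return x_r * np.exp(a * (tau - r))

def avg_velocity(t0, t1, n=20_000):
    """Trapezoidal approximation of u(x_{t1}, t0, t1) =
    (1 / (t1 - t0)) * integral_{t0}^{t1} v(x_tau, tau) dtau."""
    taus = np.linspace(t0, t1, n)
    v = a * traj(taus)                      # instantaneous velocity along the path
    dtau = taus[1] - taus[0]
    integral = 0.5 * np.sum(v[:-1] + v[1:]) * dtau
    return integral / (t1 - t0)

u_rt, u_rs, u_st = avg_velocity(r, t), avg_velocity(r, s), avg_velocity(s, t)

# Definition: (t - r) * u(x_t, r, t) equals the net displacement x_t - x_r.
print((t - r) * u_rt, traj(t) - traj(r))
# Path-additivity: (t-r) u(., r, t) = (s-r) u(., r, s) + (t-s) u(., s, t).
print((t - r) * u_rt, (s - r) * u_rs + (t - s) * u_st)
```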

2. Methodological Frameworks: From Differential to Algebraic MeanFlow

Two principal methodological tracks exploit interval-averaged velocity in generative learning: Modular MeanFlow (MMF) and its algebraic extension, SplitMeanFlow.

  • Modular MeanFlow (MMF): Constructs a regression loss by plugging linear interpolation surrogates into the MeanFlow identity. The resulting loss,

$$\mathcal{L}_\lambda = \mathbb{E}_{x_0, x_1, r<t} \left\| u_\theta(x_t, r, t) + (t-r)\,\mathrm{SG}_\lambda\!\left[ \partial_t u_\theta + \nabla_x u_\theta \cdot \frac{x_1 - x_0}{t-r} \right] - \frac{x_1 - x_0}{t-r} \right\|^2,$$

allows interpolation via the gradient-modulation parameter $\lambda \in [0,1]$: $\lambda = 1$ yields full backpropagation through the Jacobian terms (expressive but potentially unstable), while $\lambda = 0$ stops gradients for maximal stability. Intermediate $\lambda$ enables a bias-variance tradeoff and a continuation scheme for curriculum learning (You et al., 24 Aug 2025).

  • SplitMeanFlow: Derives a purely algebraic self-consistency law from the additivity of definite integrals (Interval Splitting Consistency). The central identity is

$$u(z_t, r, t) = (1-\lambda)\, u(z_s, r, s) + \lambda\, u(z_t, s, t)$$

with $\lambda = \frac{t-s}{t-r}$, permitting training and sampling without differential operators or Jacobian–vector products (JVPs). The limiting case $s \to t$ recovers the MeanFlow differential identity, establishing full theoretical generality (Guo et al., 22 Jul 2025).

These frameworks enable robust learning of averaged velocity fields, supporting stable training across modalities and architectural types; a schematic rendering of both training objectives is sketched below.
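
The following PyTorch sketch is a minimal, schematic transcription of the two displayed objectives, not the authors' reference code. It assumes a hypothetical network `u_theta(x, r, t)` whose time arguments `r` and `t` are tensors broadcastable against `x` (e.g. shape `(B, 1)`), uses the straight-line interpolant $x_t = (1-t)x_0 + t x_1$ as the path surrogate, and implements $\mathrm{SG}_\lambda$ as a convex blend of a term and its detached copy.

```python
import torch
from torch.func import jvp

def sg_lambda(y, lam):
    """SG_lambda: lam = 1 keeps full gradients through y, lam = 0 detaches it."""
    return lam * y + (1.0 - lam) * y.detach()

def mmf_loss(u_theta, x0, x1, r, t, lam):
    """Modular MeanFlow regression loss, transcribed from the display above."""
    x_t = (1.0 - t) * x0 + t * x1                  # linear-interpolation surrogate path
    v_sur = (x1 - x0) / (t - r)                    # surrogate velocity term in the loss
    # JVP of u_theta at (x_t, r, t) in directions (v_sur, 0, 1) yields
    # d/dt u_theta = \partial_t u + \nabla_x u . v_sur, with r held fixed.
    u, du = jvp(u_theta, (x_t, r, t),
                (v_sur, torch.zeros_like(r), torch.ones_like(t)))
    residual = u + (t - r) * sg_lambda(du, lam) - v_sur
    return residual.pow(2).mean()

def split_meanflow_loss(u_theta, z_t, z_s, r, s, t):
    """Algebraic Interval Splitting Consistency loss: match u(z_t, r, t) to a
    stop-gradient convex combination of u over the two subintervals."""
    lam = (t - s) / (t - r)
    target = (1.0 - lam) * u_theta(z_s, r, s) + lam * u_theta(z_t, s, t)
    return (u_theta(z_t, r, t) - target.detach()).pow(2).mean()
```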

3. Implementation, Training Objectives, and Sampling

Interval-averaged velocity is operationalized via neural architectures (typically U-Net backbones or small MLPs) that ingest the current state and time parameters and output the average velocity vector.

  • MMF: Employs a curriculum-style warmup, progressively increasing $\lambda$ to transition from coarse, stable supervision to fully expressive, differentiable training. The schedule is linear:

$$\lambda(i) = \min\{\, 1,\ i / T_{\mathrm{warmup}} \,\}$$

for training step $i$ and warmup length $T_{\mathrm{warmup}}$. Sampling is performed with a single forward pass:

$$x_0 = x_1 - u_\theta(x_1, r=0, t=1),$$

corresponding to one-step data generation (You et al., 24 Aug 2025).

  • SplitMeanFlow: Trains the interval-averaged velocity network by enforcing consistency over randomly split intervals, with a loss formed from the difference between $u_\theta(z_t, r, t)$ and a convex combination (with stop-gradient) of $u_\theta$ evaluated on the intermediate subintervals. A boundary consistency term $u_\theta(z, t, t) = v(z, t)$ is enforced on a proportion $p$ of minibatch examples. Sampling with $k$ steps decomposes $[0,1]$ into $k$ intervals and recursively applies the split update (Guo et al., 22 Jul 2025).
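
The sampling and scheduling rules above can be written out compactly. The sketch below is illustrative and keeps the earlier assumptions (a hypothetical `u_theta(x, r, t)` with `(B, 1)`-shaped time arguments, prior samples at $t = 1$, data at $t = 0$); the $k$-step loop is one natural reading of "recursively applies the split update", not a verbatim reproduction of the papers' samplers.

```python
import torch

def lambda_schedule(i, t_warmup):
    """Linear curriculum warmup for the gradient-modulation parameter."""
    return min(1.0, i / t_warmup)

@torch.no_grad()
def sample_one_step(u_theta, x1):
    """One-step generation: x0 = x1 - u_theta(x1, r=0, t=1)."""
    b = x1.shape[0]
    r = torch.zeros(b, 1, device=x1.device)
    t = torch.ones(b, 1, device=x1.device)
    return x1 - u_theta(x1, r, t)

@torch.no_grad()
def sample_k_steps(u_theta, x1, k):
    """k-step sampler: split [0, 1] into k subintervals and repeatedly apply
    z_prev = z - (t_cur - t_prev) * u_theta(z, t_prev, t_cur)."""
    b = x1.shape[0]
    times = torch.linspace(1.0, 0.0, k + 1)        # 1 = t_k > ... > t_0 = 0
    z = x1
    for t_cur, t_prev in zip(times[:-1], times[1:]):
        t = torch.full((b, 1), float(t_cur), device=x1.device)
        r = torch.full((b, 1), float(t_prev), device=x1.device)
        z = z - (t - r) * u_theta(z, r, t)
    return z
```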

4. Applications in Generative Modeling and Deep Network Compression

Interval-averaged velocity is foundational to recent advances in efficient generative modeling:

  • One-step and Few-step Generative Modeling: Modular MeanFlow (MMF) and SplitMeanFlow enable high-fidelity generation with only one or a few function evaluations, bypassing the computational bottlenecks of traditional diffusion or flow-matching approaches. Empirical results demonstrate competitive, and sometimes superior, sample quality and deterministic reconstruction on datasets such as CIFAR-10 and ImageNet-64. Notably, in speech synthesis, SplitMeanFlow achieves up to 20× speedup with parity in word error rate and perceptual metrics vs. ten-step baselines (Guo et al., 22 Jul 2025).
  • Parameter-efficient Deep Networks: In MeanFlow-Incubated ResNet (MFI-ResNet), entire stages of ResNet are compressed by replacing $K$ residual blocks with one or two MeanFlow modules that directly implement the interval-averaged velocity transformation for the feature evolution. This yields a 46% reduction in parameter count and computational complexity on CIFAR-10/100, with maintained or improved accuracy. MeanFlow modules emulate the cumulative effect of multiple residual connections as a single meta-mapping, substantially reducing architectural redundancy (Sun et al., 16 Nov 2025).
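
To make the idea of a stage-level replacement concrete, the following is a purely conceptual PyTorch sketch, not the MFI-ResNet architecture itself: the module's internal layout (channel widths, normalization, whether and how interval endpoints are injected) is an assumption made only for illustration. It shows a single module playing the role of a stack of $K$ residual blocks by predicting the stage's interval-averaged feature velocity and applying it in one step.

```python
import torch
import torch.nn as nn

class MeanFlowStageSketch(nn.Module):
    """Conceptual stand-in for a stage of K residual blocks: one module that
    predicts the interval-averaged velocity of the feature trajectory over
    the whole stage and applies it as h <- h + u(h). Illustrative only."""

    def __init__(self, channels):
        super().__init__()
        self.u = nn.Sequential(               # small predictor of the averaged velocity
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, h):
        # One-step average-velocity update over the unit interval, emulating
        # the cumulative effect of the residual blocks it replaces.
        return h + self.u(h)
```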

5. Computational and Theoretical Properties

Interval-averaged velocity learning confers substantial computational and theoretical advantages:

  • JVP-free or Controllable Overhead: SplitMeanFlow eliminates Jacobian–vector products entirely by virtue of its algebraic objective. MMF confines JVPs to a weighted, curriculum-learned region, often only incurring a 15–20% runtime overhead per sample when fully active (You et al., 24 Aug 2025, Guo et al., 22 Jul 2025).
  • Stability/Expressiveness Tradeoff: By modulating the degree of differentiability (via $\lambda$ in MMF or the mixing ratio $p$ in SplitMeanFlow), training can be tuned between stability (favoring underfitting, smooth solutions) and full expressiveness (risking overfitting or gradient explosion). Curriculum approaches deliver stable early-phase convergence with late-phase accuracy, yielding global convergence guarantees under mild regularity conditions (e.g., Lipschitz continuity) (You et al., 24 Aug 2025).
  • Unification of Consistency-based, Flow-matching, and MeanFlow Methods: MMF and SplitMeanFlow recover classical consistency models ($\lambda = 0$, fixed time), instantaneous flow-matching (infinitesimal interval limit), and other prominent generative learning objectives as special cases (You et al., 24 Aug 2025).

6. Empirical Results, Ablations, and Robustness

Extensive experiments across image synthesis, trajectory modeling, and large-scale speech synthesis reveal that interval-averaged velocity models exhibit:

  • Lower FID and MSE in one-step and few-step image generation versus flow-matching and consistency-based baselines.
  • Consistent performance under low-data and out-of-distribution settings, with curriculum MMF achieving FID ≤ 5 even when trained on only 1% of CIFAR-10 data, where fixed-λ models overfit severely.
  • Increased stability, evidenced by monotonic decrease in training loss for curriculum-based MMF and by robust optimization in algebraic SplitMeanFlow models (You et al., 24 Aug 2025, Guo et al., 22 Jul 2025).
  • Parameter compression and architectural generality in deep discriminative models, as MFI-ResNet demonstrates, with transferable methodology to a broad range of modern backbones (Sun et al., 16 Nov 2025).

7. Extensions and Architectural Integration

By abstracting the feature transformation process in neural networks as continuous flows, interval-averaged velocity modules provide a meta-architectural tool for compressing, analyzing, and extending deep learning models. Stages in VGG, DenseNet, and MobileNet architectures can be systematically replaced with one-step or few-step MeanFlow (or SplitMeanFlow) modules, followed by selective incubation to recover or exceed baseline expressive power (Sun et al., 16 Nov 2025). This approach offers a plug-and-play recipe for scalable, modular, and parameter-efficient deep learning.

