
Overlap-and-Blend Temporal Co-Denoising

Updated 10 November 2025
  • The paper presents an overlap-and-blend approach that partitions long videos into overlapping windows and applies independent, Heun-based denoising with weighted fusion for seamless output.
  • It leverages precise mathematical formulations in both pixel and latent spaces, using cosine and Hamming window blending to ensure temporal consistency and high fidelity.
  • Overlap-and-blend temporal co-denoising supports multi-text and spatial conditionings, enabling scalable video inpainting, outpainting, and robust long-range video editing.

Overlap-and-blend temporal co-denoising is a class of techniques designed to enable temporally consistent generation, editing, or inpainting of long videos with diffusion models, extending the effective length and controllability of outputs beyond the domain of monolithic short-clip models. It achieves this by partitioning the video into overlapping temporal segments ("windows"), independently denoising each window, and then fusing the partial results in overlap regions using smooth weighting schemes. The paradigm has been substantially formalized and empirically validated in both long video generation ("Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" (Wang et al., 2023)) and long video inpainting/outpainting ("Unified Long Video Inpainting and Outpainting via Overlapping High-Order Co-Denoising" (Lyu et al., 5 Nov 2025)), where it enables seamless, scalable, and high-fidelity video synthesis over hundreds of frames, with support for multi-text or spatial conditionings.

1. Mathematical Formulation and General Principles

Overlap-and-blend temporal co-denoising operates in the latent (or pixel) space of a video modeled as a sequence of $L$ (or $T$) frames. The diffusion trajectory at time $t$ can be formalized as $\{\mathbf{x}_t\}_{t=0}^T$ for the whole video, or $X_t \in \mathbb{R}^{T \times d}$, where $d$ is the per-frame latent dimension in encoded space. For inpainting/outpainting (Lyu et al., 5 Nov 2025), masked or zero-padded latent buffers are used for the target video region.
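
For illustration, such a buffer might be initialized as follows (a minimal sketch; the mask convention with 1 = known content and 0 = region to synthesize is an assumption, not an interface from the papers):

import numpy as np

def init_masked_buffer(latents, mask):
    # latents: (T, d) encoded frames; mask: (T, 1), 1 = known, 0 = target.
    # Known content is kept; target regions are zero-padded, per the text above.
    return latents * mask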

At each diffusion timestep, the system works with overlapping windows $\{x_t^{(i)}\}$ extracted from the full buffer. For window length $W$ (or clip length $M$) and overlap $O$ (stride $S = M - O$, or specified directly), the $i$-th window covers frames $s_i : s_i + W - 1$ with $s_i = 1 + (i-1)(W - O)$ and $i = 1, \dots, N$. Adjacent windows necessarily overlap in $O = W - S$ (or $M - S$) frames, which is central to the recombination strategy.
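
As a concrete illustration of the indexing, the window starts can be computed as follows (a minimal sketch in 0-indexed Python, whereas the formulas above are 1-indexed):

def window_starts(T, W, O):
    # Start index of each window over a T-frame buffer.
    stride = W - O
    return list(range(0, T - W + 1, stride))

starts = window_starts(T=100, W=40, O=20)
# starts == [0, 20, 40, 60]; consecutive windows share O = 20 frames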

Each window $x_t^{(i)}$ is independently denoised using either a pre-trained short-clip noise predictor $\bm{\epsilon}_\theta$ or a fine-tuned high-capacity "score model" $f$. The reverse diffusion step may be modeled using first-order DDPM/DDIM stepping or, more effectively, with a second-order Heun (improved Euler) method for enhanced stability and quality. The generation or reconstruction for the whole video is then formulated as the solution to a least-squares blending problem or as a weighted sum in the overlap regions.

2. Clip Extraction, Denoising, and Reverse Diffusion

A key step is the extraction of temporally overlapping sub-clips or windows:

$$F_i(\mathbf{x}_t) = \left[\mathbf{x}_{t,Si}, \dots, \mathbf{x}_{t,Si+M-1}\right]$$

or, equivalently,

$$x_t^{(i)} = X_t[s_i : s_i + W - 1]$$

where $S$ is the stride and $M$ (or $W$) the window length.

Forward Process (Training):

$$q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}\!\left(\mathbf{x}_t;\ \sqrt{\alpha_t}\,\mathbf{x}_{t-1},\ \beta_t I\right)$$

with the standard noise schedule $\alpha_t = 1 - \beta_t$.
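
For reference, one step of this forward process can be sampled as follows (a sketch; beta_t is a scalar drawn from an assumed noise schedule):

import numpy as np

def forward_step(x_prev, beta_t, rng=np.random.default_rng()):
    # Sample x_t ~ N(sqrt(alpha_t) * x_{t-1}, beta_t * I), alpha_t = 1 - beta_t.
    alpha_t = 1.0 - beta_t
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(alpha_t) * x_prev + np.sqrt(beta_t) * noise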

Reverse Step (Inference):

Each short window is denoised independently via the reverse diffusion model:

$$p^s\!\left(\mathbf{x}_{t-1}^i \mid \mathbf{x}_t^i, \bm{\phi}_i\right) = \mathcal{N}\!\left(\mathbf{x}_{t-1}^i;\ \tilde\mu_t(\mathbf{x}_t^i, \bm{\phi}_i),\ \tilde\beta_t I\right)$$

or, for high-order solvers (Heun's method) (Lyu et al., 5 Nov 2025):

$$\begin{aligned} k_1^{(i)} &= f\left(x_t^{(i)}, t\right) \\ \tilde{x}^{(i)}_{t-\Delta t/2} &= x^{(i)}_t + \frac{\Delta t}{2} k_1^{(i)} \\ k_2^{(i)} &= f\!\left(\tilde{x}^{(i)}_{t-\Delta t/2},\ t - \frac{\Delta t}{2}\right) \\ x_{t-\Delta t}^{(i)} &= x_t^{(i)} + \frac{\Delta t}{2}\left(k_1^{(i)} + k_2^{(i)}\right) \end{aligned}$$

Here, $f$ is the model's predicted score/noise, and $\Delta t$ the diffusion step.
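
A minimal sketch of this update for a single window, assuming f(x, t) is a callable score/noise model:

def heun_step(x, t, dt, f):
    # Second-order update from time t to t - dt, per the equations above.
    k1 = f(x, t)                      # slope at the current point
    x_mid = x + 0.5 * dt * k1         # midpoint predictor
    k2 = f(x_mid, t - 0.5 * dt)       # slope at the midpoint
    return x + 0.5 * dt * (k1 + k2)   # averaged corrector step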

3. Overlap Identification and Blending Strategies

Overlap regions between adjacent windows $i$ and $i+1$ are precisely:

$$\mathcal{O}_i = \left\{\, j \mid j \in [S(i+1),\ Si + M - 1] \,\right\}, \qquad |\mathcal{O}_i| = M - S$$

To merge the independently denoised window outputs into a full-length frame sequence, a blending function $w_i(p)$ assigns per-frame weights (within the window) for aggregation:

  • Linear ramp or cosine schedule over overlap region (Wang et al., 2023)
  • Hamming window of length $W$ (Lyu et al., 5 Nov 2025): $w_j = \alpha - \beta \cos\!\left(\frac{2\pi (j-1)}{W-1}\right), \quad j = 1, \dots, W; \quad \alpha = 0.54,\ \beta = 0.46$

The fused latent at index $k$ is:

$$X_{t-\Delta t}[k] = \frac{\sum_{i:\, k \in [s_i,\ s_i + W - 1]} w_{k - s_i + 1} \cdot x_{t-\Delta t}^{(i)}[k - s_i + 1]}{\sum_{i:\, k \in [s_i,\ s_i + W - 1]} w_{k - s_i + 1}}$$

This construction guarantees smoothness across window boundaries and optimality in the least-squares sense; a small sketch of the weights and fusion appears below.
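
A minimal NumPy sketch of the Hamming weights and this normalized fusion, assuming 0-indexed window starts and per-window outputs of shape (W, d):

import numpy as np

def hamming_weights(W):
    # w_j = 0.54 - 0.46 * cos(2*pi*(j-1)/(W-1)) for j = 1..W
    j = np.arange(W)
    return 0.54 - 0.46 * np.cos(2 * np.pi * j / (W - 1))

def fuse_windows(window_outputs, starts, T):
    # Normalized weighted sum of overlapping window outputs into a (T, d) buffer.
    W, d = window_outputs[0].shape
    w = hamming_weights(W)[:, None]
    acc, denom = np.zeros((T, d)), np.zeros((T, 1))
    for x, s in zip(window_outputs, starts):
        acc[s:s + W] += w * x
        denom[s:s + W] += w
    return acc / denom   # assumes every frame is covered by at least one window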

4. Algorithmic Implementation and Pseudocode

A unified implementation skeleton for overlap-and-blend temporal co-denoising, written here as a runnable Python sketch (the score model f, timestep schedule, and blend weights are supplied by the caller):

import numpy as np

def co_denoise(X, f, timesteps, W, O, weights):
    # Overlap-and-blend reverse diffusion over a (T, d) latent buffer X.
    # f(x, t): score/noise model; timesteps: decreasing schedule t_0 > ... > t_N;
    # weights: length-W blend window (e.g., Hamming).
    T = X.shape[0]
    starts = list(range(0, T - W + 1, W - O))   # assumes full frame coverage
    w = weights[:, None]
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        dt = t - t_next
        acc = np.zeros_like(X)
        denom = np.zeros((T, 1))
        for s in starts:
            x = X[s:s + W]                       # extract window
            # Heun update:
            k1 = f(x, t)
            x_mid = x + 0.5 * dt * k1
            k2 = f(x_mid, t - 0.5 * dt)
            x_new = x + 0.5 * dt * (k1 + k2)
            # Blend into the shared buffer:
            acc[s:s + W] += w * x_new
            denom[s:s + W] += w
        X = acc / denom                          # normalize overlap regions
    return X                                     # X_0; decode to frames (e.g., VAE)
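
With a stand-in score model (purely for illustration, not the papers' network), the loop can be exercised end to end:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x, t: -x                       # stand-in score model
T, d, W, O = 100, 8, 40, 20
weights = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(W) / (W - 1))
X_T = rng.standard_normal((T, d))         # start from pure noise
timesteps = np.linspace(1.0, 0.0, 51)     # 50 reverse steps
X_0 = co_denoise(X_T, f, timesteps, W, O, weights)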

This pipeline is compatible with both pixel-space and VAE-based (latent-space) diffusion models. For multi-text conditioned generation, each window carries its own conditioning embedding, and windows spanning semantic boundaries receive a convex combination of the adjacent embeddings.

5. Empirical Validations and Ablation Results

Comprehensive ablation studies demonstrate the criticality of various components:

  • Heun’s Second-Order Solver vs. First-Order Euler: Second-order integration systematically improves temporal quality. For video inpainting (Lyu et al., 5 Nov 2025), replacing Euler with Heun increased PSNR from 14.78 dB to 15.74 dB (+6.5%), SSIM from 0.515 to 0.603 (+17.1%), and reduced LPIPS from 0.613 to 0.529 (−13.7%).
  • Blending Schedule: Hamming window blending eliminates hard seams and "ghosting" at window boundaries, outperforming both hard (no blend) and uniform (mean) schemes.
  • Window Length: With $W = 80$–$100$ and 50% overlap, the system achieves artifact-free long-range coherence without excessive memory use. Too-small windows ($W < 30$) cause global drift, while too-large windows exceed GPU memory capacity.
  • Scalability: The sliding-window overlap lets the system support arbitrarily long videos, as demonstrated by successful tests on hundreds of frames on a single 80 GB H100 GPU. Competing baselines (e.g., VACE, Alibaba Wan 2.1) cap out at 81–245 frames before out-of-memory errors.
  • Editing Consistency: The overlap strategy enables precise editing (e.g., object addition or removal over hundreds of frames) without visible seams or drift, as required for high-fidelity video inpainting and outpainting.

6. Applications, Conditioning, and Extensions

Overlap-and-blend temporal co-denoising supports:

  • Multi-condition generation: Assigning unique text, semantic, or spatial conditionings per window (or per clip) enables compositional generation and fine-grained control.
  • Semantic transitions: Convex linear interpolation of conditioning embeddings across windows produces perceptually smooth transitions for scene changes or prompt switching (Wang et al., 2023); see the sketch after this list.
  • Bidirectional temporal attention: For further temporal consistency, within-window denoising may use attention mechanisms anchored in window centers to propagate context forward and backward (ensuring matching content on both ends of overlaps).
  • Practical long-range video editing: Robust spatially controllable inpainting and outpainting at scale.
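
As an illustration of the convex interpolation in the second bullet, a sketch with hypothetical per-window prompt embeddings e_a and e_b:

import numpy as np

def blend_embeddings(e_a, e_b, n_transition):
    # Convex combination across n_transition windows spanning a semantic boundary.
    lam = np.linspace(0.0, 1.0, n_transition)[:, None]
    return (1.0 - lam) * e_a + lam * e_b   # one conditioning vector per window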

A plausible implication is that, by leveraging pretrained short-clip models together with a task-agnostic blending strategy, this paradigm obviates the need to train bespoke long-video models, significantly improving efficiency and flexibility.

7. Limitations and Future Directions

The memory and compute efficiency of overlap-and-blend depends on judicious window sizing and overlap, as excessive overlap increases computational redundancy. While the Hamming window blend is highly effective, further refinements (e.g., window shape adaptation or cross-window residual updates) may further reduce residual artifacts. Adoption of higher-order solvers (beyond Heun) remains an open area, although diminishing returns have been empirically observed (Lyu et al., 5 Nov 2025). Integration with advanced text or multimodal conditionings may drive further advances in controllability and sample diversity.

In summary, overlap-and-blend temporal co-denoising constitutes a rigorously formalized, empirically validated approach to scalable, high-fidelity, and consistent long video synthesis and editing, directly leveraging short-clip diffusion models and principled multi-window optimization (Wang et al., 2023, Lyu et al., 5 Nov 2025).
