
Dual-Clock Denoising in Video Diffusion

Updated 20 February 2026
  • Dual-clock denoising is a method that uses two distinct noise schedules over spatial regions to balance strict motion control with free background evolution.
  • It integrates with standard DDPM/DDIM samplers by regionally overriding the denoising process to lock in user-specified motion while preserving overall scene appearance.
  • Empirical evaluations demonstrate that this approach improves motion adherence and dynamic realism without requiring additional training or model modifications.

Dual-clock denoising is a region-dependent inference modification for image-to-video (I2V) diffusion models designed to achieve precise, user-controlled motion in video generation while preserving natural scene dynamics and appearance. The mechanism introduces two concurrent noise schedules (“clocks”) over separately masked spatial regions, enabling strict adherence to motion in user-specified areas and unconstrained, realistic dynamics elsewhere. Dual-clock denoising underpins Time-to-Move (TTM), a plug-and-play, training-free framework for motion- and appearance-conditioned video generation that requires no model retraining and incurs no runtime overhead (Singer et al., 9 Nov 2025).

1. Motivation and Conceptual Overview

Motion control in pretrained video diffusion models typically requires domain-specific fine-tuning or model retraining, imposing computational and practical constraints. Existing approaches to direct video control, such as SDEdit-style interventions, initiate reverse diffusion from a uniformly noised reference video at a chosen noise level t^*. A single noise level, however, forces a trade-off: low noise enforces reference adherence throughout the frame but freezes scene dynamics, while high noise restores dynamics but allows drift in the user-controlled regions.

Dual-clock denoising circumvents this by regionally blending two denoising schedules during sampling:

  • In motion-specified regions (determined by a user mask), the reference is lightly noised (strong schedule), “locking in” desired motion with high fidelity.
  • In all other regions, heavier noising is applied (weak schedule), allowing the model’s generative prior to synthesize unconstrained, plausible background evolution.

At each reverse diffusion step, masked pixels are directly overridden with the reference’s appropriately noised value, while unmasked pixels proceed with model-predicted denoising. At a predefined cutoff (t_\mathrm{strong}), regional overriding ceases and the entire frame follows the standard denoising process.

2. Mathematical Formulation

Let V^w \in \mathbb{R}^{F \times 3 \times H \times W} denote the user-warped reference video; I \in \mathbb{R}^{3 \times H \times W}, the clean first frame for appearance anchoring; and M \in \{0,1\}^{F \times H \times W}, a binary mask specifying regions for motion control. The variance-preserving forward diffusion process is

q(x_t \mid x_0) = \mathcal{N}\left(x_t;\,\sqrt{\alpha_t}\,x_0,\,(1-\alpha_t)\,I\right)

with \{\alpha_t\}_{t=0}^T a fixed noise schedule (\alpha_0 = 1, \alpha_T \approx 0).
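As a concrete illustration, the forward marginal above can be sampled directly. This is a minimal NumPy sketch; the cosine-squared schedule and toy tensor shapes are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative schedule satisfying alpha_0 = 1 and alpha_T ~ 0.
T = 1000
alpha = np.cos(0.5 * np.pi * np.arange(T + 1) / T) ** 2

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(alpha_t) * x0, (1 - alpha_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha[t]) * x0 + np.sqrt(1.0 - alpha[t]) * eps

x0 = rng.standard_normal((4, 3, 8, 8))   # toy "video": F=4 frames of 3x8x8
x_light = q_sample(x0, 10, rng)          # lightly noised: stays close to x0
x_heavy = q_sample(x0, T // 2, rng)      # heavily noised: signal mostly destroyed
```

The two samples make the strong/weak intuition tangible: the lightly noised draw remains near x_0, while the heavily noised draw is dominated by Gaussian noise.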

Two timesteps t_\mathrm{weak} > t_\mathrm{strong} parameterize the noise levels:

  • Weak clock (t_\mathrm{weak}): applied to background regions, allowing high uncertainty and flexible dynamics.
  • Strong clock (t_\mathrm{strong}): applied to masked regions, enforcing tight motion control.

At initialization (SDEdit-style), noisy input is prepared as

x_{t_\mathrm{weak}} = \sqrt{\alpha_{t_\mathrm{weak}}}\, V^w + \sqrt{1-\alpha_{t_\mathrm{weak}}}\, \varepsilon,\quad \varepsilon \sim \mathcal{N}(0, I)

During the reverse process, for each t = t_\mathrm{weak}, \ldots, 1:

  1. Obtain the predicted denoised frame conditioned on I:

\hat{x}_{t-1}(x_t, t, I) = \mathrm{Denoiser}_\theta(x_t, t, I)

  2. Compute the noised reference at t-1:

x^w_{t-1} = \sqrt{\alpha_{t-1}}\,V^w + \sqrt{1-\alpha_{t-1}}\,\varepsilon

  3. Apply region-wise blending:

x_{t-1} = (1 - M)\odot\hat{x}_{t-1}(x_t, t, I) + M\odot x^w_{t-1}

When t-1 = t_\mathrm{strong}, region-wise overriding stops; for all t \leq t_\mathrm{strong}, standard model denoising is used globally.
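A single blended reverse step can be written out directly. In this sketch the denoiser is a placeholder (the real Denoiser_theta is a pretrained I2V backbone), and the shapes and schedule are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
F, C, H, W = 2, 3, 4, 4
alpha = np.linspace(1.0, 0.0, 51)        # toy schedule: alpha_0 = 1 ... alpha_50 = 0

def toy_denoiser(x_t, t, I_frame):
    """Placeholder for Denoiser_theta(x_t, t, I); a real I2V backbone goes here."""
    return 0.9 * x_t

def dual_clock_step(x_t, t, Vw, I_frame, M):
    """One reverse step implementing the three numbered operations above."""
    x_hat = toy_denoiser(x_t, t, I_frame)                               # step 1
    eps = rng.standard_normal(Vw.shape)
    ref = np.sqrt(alpha[t - 1]) * Vw + np.sqrt(1 - alpha[t - 1]) * eps  # step 2
    M4 = M[:, None]                        # broadcast mask over the channel axis
    return (1 - M4) * x_hat + M4 * ref                                  # step 3

Vw = rng.standard_normal((F, C, H, W))    # user-warped reference video
I_frame = Vw[0]                           # clean first frame (appearance anchor)
M = np.zeros((F, H, W))
M[:, :2, :] = 1.0                         # top half is motion-controlled
x_t = rng.standard_normal((F, C, H, W))
x_prev = dual_clock_step(x_t, t=40, Vw=Vw, I_frame=I_frame, M=M)
```

After the step, masked pixels carry the freshly noised reference while unmasked pixels carry the model's prediction, exactly as in the blending equation.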

3. Integration with Standard DDPM/DDIM Samplers

Dual-clock denoising directly modifies the standard denoiser update in the DDPM/DDIM reverse diffusion process. Classically,

x_{t-1} = \hat{x}_{t-1}(x_t, t)

Dual-clock denoising generalizes this to

x_{t-1} = (1 - M)\odot \hat{x}_{t-1}(x_t, t, I) + M\odot \Bigl(\sqrt{\alpha_{t-1}}\,V^w + \sqrt{1-\alpha_{t-1}}\,\varepsilon\Bigr)

This ensures reference adherence in masked regions, with application-agnostic compatibility for any I2V diffusion backbone.

The complete procedure is summarized by the following pseudocode:

x = sqrt(alpha[t_weak]) * Vw + sqrt(1-alpha[t_weak]) * Normal(0, I)
for t = t_weak downto 1:
    x_hat = D_theta(x, t, I)
    if t > t_strong:
        ref = sqrt(alpha[t-1]) * Vw + sqrt(1-alpha[t-1]) * Normal(0, I)
        x = (1 - M) * x_hat + M * ref
    else:
        x = x_hat
return x  # output is x_0, the final denoised video
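The pseudocode above runs end to end once the model-specific pieces are stubbed out. The following NumPy sketch substitutes a toy denoiser for D_theta (the real one is a pretrained I2V backbone), with illustrative shapes and schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
alpha = np.cos(0.5 * np.pi * np.arange(T + 1) / T) ** 2  # alpha_0 = 1, alpha_T ~ 0

def toy_denoiser(x, t, I_frame):
    """Stand-in for the image-conditioned denoiser D_theta(x, t, I)."""
    return 0.5 * x + 0.5 * I_frame[None]   # pulls every frame toward the anchor

def dual_clock_sample(Vw, I_frame, M, t_weak, t_strong):
    M4 = M[:, None]                        # broadcast mask over the channel axis
    # SDEdit-style initialization at the weak clock.
    x = np.sqrt(alpha[t_weak]) * Vw \
        + np.sqrt(1 - alpha[t_weak]) * rng.standard_normal(Vw.shape)
    for t in range(t_weak, 0, -1):
        x_hat = toy_denoiser(x, t, I_frame)
        if t > t_strong:                   # dual-clock phase: override masked pixels
            ref = np.sqrt(alpha[t - 1]) * Vw \
                  + np.sqrt(1 - alpha[t - 1]) * rng.standard_normal(Vw.shape)
            x = (1 - M4) * x_hat + M4 * ref
        else:                              # past t_strong: standard denoising everywhere
            x = x_hat
    return x                               # x_0, the final denoised video

F, C, H, W = 2, 3, 4, 4
Vw = rng.standard_normal((F, C, H, W))
I_frame = Vw[0]
M = np.zeros((F, H, W))
M[:, :, :2] = 1.0                          # left half is motion-controlled
video = dual_clock_sample(Vw, I_frame, M, t_weak=40, t_strong=10)
```

Swapping `toy_denoiser` for a real backbone's sampling step recovers the TTM inference procedure without touching the model itself.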

4. Role of Image Conditioning for Appearance Preservation

Precise control of motion without sacrificing scene consistency requires conditioning the denoising process not just on text, but on an explicit appearance anchor. TTM employs an I2V diffusion backbone where each denoising step receives both the current noisy frames and a fixed embedding of the clean first frame II. This explicit appearance conditioning preserves color, identity, and high-frequency detail in static regions, mitigating the drift or “washing out” effects that would otherwise be exacerbated in heavily noised areas outside the motion mask.

By anchoring all non-warped regions, appearance conditioning complements the regional noise scheduling of dual-clock denoising, ensuring that only user-specified motion is inserted, while the global visual identity remains tied to the provided II.
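One common way such conditioning is wired in SVD-style I2V backbones is to feed the clean first frame into every denoiser call, e.g. by tiling it across time and concatenating channel-wise. This is a hypothetical sketch of that wiring, not the specific backbone's interface:

```python
import numpy as np

def condition_on_first_frame(x_t, I_frame):
    """Tile the clean first frame across time and concatenate it channel-wise,
    so every denoising step sees the same appearance anchor I."""
    F = x_t.shape[0]
    I_tiled = np.broadcast_to(I_frame[None], (F,) + I_frame.shape)
    return np.concatenate([x_t, I_tiled], axis=1)   # (F, 2C, H, W)

x_t = np.zeros((4, 3, 8, 8))     # noisy frames at some step t
I_frame = np.ones((3, 8, 8))     # clean first frame
cond_input = condition_on_first_frame(x_t, I_frame)
```

Because the anchor is re-injected at every step rather than only at initialization, heavily noised background regions stay tied to the reference appearance throughout sampling.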

5. Empirical Evaluation and Ablation Results

Extensive ablations were conducted on the MC-Bench object-motion benchmark using the SVD backbone. Three metrics were reported:

  • CoTracker Distance (CTD): Measures motion adherence (lower is better)
  • Dynamic Degree: Quantifies naturalness of motion (higher is better)
  • Imaging Quality: Reference-free perceptual measure (higher is better)

Key ablation configurations and benchmarking are summarized below:

Configuration                     | CTD ↓ | Dyn ↑ | Img ↑
Single-clock @ t_\mathrm{weak}    | 27.3  | 0.265 | 0.623
Single-clock @ t_\mathrm{strong}  | 5.53  | 0.353 | 0.620
RePaint-style (override)          | 2.95  | 0.411 | 0.578
Dual-clock (TTM)                  | 7.97  | 0.427 | 0.617

Single-clock approaches demonstrate a sharp trade-off: t_\mathrm{weak} causes significant drift and poor motion adherence, while t_\mathrm{strong} results in unnatural “frozen” video. RePaint-style full-frame overrides yield strict adherence but degrade natural dynamics and imaging quality. The dual-clock scheme best balances motion control and realistic scene evolution: it substantially improves realism (Dynamic Degree 0.427) relative to either single-clock baseline while maintaining reasonable adherence (CTD \approx 8 pixels).

This suggests dual-clock denoising uniquely trades off adherence and generative flexibility, outperforming alternatives in combined motion and realism objectives.

6. Practical Implications and Compatibility

Dual-clock denoising constitutes a lightweight, plug-and-play upgrade to generic I2V diffusion model inference. It does not require backbone architecture modification, fine-tuning, or additional training. The mechanism allows for precise motion control at pixel level, beyond what is achievable with text-only prompting or traditional SDEdit-style strategies. The approach generalizes across video backbones and supports various user-supplied reference manipulations, including cut-and-drag and depth reprojection.
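As an illustration of one such reference manipulation, a cut-and-drag warp can be built by translating a user-selected box across frames while filling the mask along the object's path. This is a simplified sketch: real pipelines typically inpaint the vacated region and may use soft masks, neither of which is shown here:

```python
import numpy as np

def cut_and_drag(image, box, dxdy, F):
    """Build a warped reference video V^w and mask M by dragging `box`
    (y0, y1, x0, x1) by `dxdy` pixels per frame over F frames.
    Note: the vacated source region is left as-is (no inpainting)."""
    C, H, W = image.shape
    y0, y1, x0, x1 = box
    patch = image[:, y0:y1, x0:x1].copy()
    Vw = np.repeat(image[None], F, axis=0)
    M = np.zeros((F, H, W))
    for f in range(F):
        dy, dx = f * dxdy[0], f * dxdy[1]
        Vw[f, :, y0 + dy:y1 + dy, x0 + dx:x1 + dx] = patch  # pasted object
        M[f, y0 + dy:y1 + dy, x0 + dx:x1 + dx] = 1.0        # motion-control mask
    return Vw, M

img = np.zeros((3, 16, 16))
img[:, 2:6, 2:6] = 1.0                     # a bright 4x4 "object"
Vw, M = cut_and_drag(img, box=(2, 6, 2, 6), dxdy=(0, 2), F=4)
```

The resulting (Vw, M) pair is exactly the input format the dual-clock sampler consumes: the mask tracks the dragged object, and everything outside it is left for the weak clock to resynthesize.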

The mechanism integrates robustly with image-anchored conditioning, suggesting a pathway toward generalized controllable video synthesis methods that retain high-fidelity appearance, realistic backgrounds, and interpretable local motion.

7. Summary and Scope

Dual-clock denoising provides finely localized, schedule-aware conditioning in the reverse process of I2V diffusion models. By regionally blending reference-guided denoising with unconstrained generative dynamics, it enables non-destructive, training-free, appearance-preserving, and highly controllable video synthesis. The method achieves a favorable balance between user-intent fidelity and perceptual naturalism, as validated by quantitative and qualitative metrics (Singer et al., 9 Nov 2025). Its generality and computational minimalism broaden the applicability of controllable generative video models in diverse conditioning and editing pipelines.
