SynMotion: Custom Video Motion Synthesis

Updated 1 July 2025
  • SynMotion is a motion-customized video synthesis framework that disentangles subject and motion semantics through a dual-embedding mechanism.
  • It integrates efficient visual adapters within a pre-trained video diffusion generator to achieve refined motion control and enhanced temporal consistency.
  • The framework demonstrates state-of-the-art performance on MotionBench by achieving high motion accuracy, subject fidelity, and robust generalization across diverse scenarios.

SynMotion is a motion-customized video generation framework that enables precise transfer, adaptation, and control of human (and other subject) motions in video synthesis by combining dual-level semantic disambiguation with parameter-efficient visual adaptation. Addressing the limitations of semantic-only, visual-only, and naive concatenative approaches to video motion customization, SynMotion introduces a suite of architectural innovations and training paradigms that promote specificity, diversity, and fidelity of both motion and subject appearance. Its methodology is grounded in a dual-embedding semantic comprehension mechanism, parameter-efficient visual denoising adapters, a subject-prior regularization strategy, and a new benchmark for systematic evaluation.

1. Model Architecture and Component Integration

At its core, SynMotion builds on a pre-trained MM-DiT-based video diffusion generator (using HunyuanVideo as base) and introduces two complementary adaptation pathways:

  • Semantic Pathway: Text prompts structured as "<subject, motion>" are processed using a Multimodal LLM (MLLM). These are decomposed into a subject embedding (e_{sub}) and a motion embedding (e_{mot}), incorporating learnable residuals and a dedicated embedding refiner. The mechanism ensures flexible recombination and discriminative control, allowing learned motion features to be reused with new or arbitrary subjects.
  • Visual Pathway: Specialized, lightweight, trainable motion adapters (low-rank adapters) are inserted into each denoising block within the frozen video backbone. These adapters adjust only a small subset of weights (\tilde{\mathbf{W}}_* = \mathbf{W}_* + \mathbf{B}_*\mathbf{A}_*, where \mathbf{A}_*, \mathbf{B}_* are small trainable matrices), infusing the generative process with enhanced motion realism and temporal smoothness.

The joint pipeline is trained by minimizing

{\mathcal L} = \mathbb{E}_{\mathcal{E}(x),\, \epsilon \sim \mathcal{N}(0,1),\, e_\theta,\, t}\left[\left\|\epsilon - \epsilon_\theta(z_t, e_\theta, t)\right\|_2^2\right],

where z_t is the noisy latent, e_\theta the subject-motion embedding, and \epsilon_\theta the denoiser prediction.
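
A minimal PyTorch-style sketch of this objective is given below, assuming a diffusers-style noise scheduler and a denoiser that accepts the noisy latent, the subject-motion embedding, and the timestep; the function and argument names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def denoising_loss(denoiser, scheduler, latents, embedding):
    """Epsilon-prediction diffusion loss from the equation above; the frozen
    backbone is queried with the noisy latent, the embedding, and the timestep."""
    noise = torch.randn_like(latents)                      # epsilon ~ N(0, 1)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    z_t = scheduler.add_noise(latents, noise, t)           # noisy latent z_t
    eps_pred = denoiser(z_t, timestep=t,
                        encoder_hidden_states=embedding).sample
    return F.mse_loss(eps_pred, noise)                     # ||eps - eps_theta(z_t, e, t)||^2
```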

This architecture enables explicit subject-motion separation, fine-grained motion customization, and scalable adaptation to new scenarios with minimal additional data.

2. Dual-Embedding Semantic Comprehension Mechanism

The dual-embedding mechanism systematically disentangles subject and motion semantics:

  • Input decomposition: Given a prompt like “<subject, motion>”, the MLLM encoder outputs a joint feature that is split into subject (e_{sub}) and motion (e_{mot}) components.
  • Residual enhancement & refinement: Each component is further augmented by a trainable residual (e_{sub}^l, e_{mot}^l) and passed through a zero-convolution (\mathcal{Z}). The subject residual is randomly initialized to foster subject variation, while motion residuals are initialized from phrase embeddings (e.g., "a person claps") to ensure semantic grounding.
  • Refiner module: An embedding refiner (\mathcal{R}) integrates subject and motion representations, supporting bidirectional context.

The composite embedding is: e = [e_{mot} + \mathcal{Z}(e_{mot}^l),\ e_{sub} + \mathcal{Z}(e_{sub}^l)], \qquad e' = e + \mathcal{Z}(\mathcal{R}(e)).
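
A minimal sketch of this composition follows, assuming the MLLM output has already been split into motion and subject token sequences. Implementing the zero-convolution as a zero-initialized linear layer and the refiner as a single transformer layer are illustrative assumptions about the exact module choices.

```python
import torch
import torch.nn as nn

class DualEmbedding(nn.Module):
    """Composite subject/motion embedding with learnable residuals,
    zero-convolutions Z(.), and an embedding refiner R(.) (illustrative shapes)."""
    def __init__(self, dim, n_sub_tokens, n_mot_tokens):
        super().__init__()
        # Learnable residuals: the subject residual is randomly initialized;
        # the paper initializes the motion residual from a phrase embedding
        # (random here for brevity).
        self.res_sub = nn.Parameter(torch.randn(n_sub_tokens, dim) * 0.02)
        self.res_mot = nn.Parameter(torch.randn(n_mot_tokens, dim) * 0.02)
        # Zero-convolutions: zero-initialized projections, so training starts
        # from the unmodified MLLM embeddings.
        self.zero_sub = nn.Linear(dim, dim)
        self.zero_mot = nn.Linear(dim, dim)
        self.zero_ref = nn.Linear(dim, dim)
        for z in (self.zero_sub, self.zero_mot, self.zero_ref):
            nn.init.zeros_(z.weight); nn.init.zeros_(z.bias)
        # Refiner R: a small transformer layer giving bidirectional context
        # (assumed module; dim must be divisible by nhead).
        self.refiner = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)

    def forward(self, e_mot, e_sub):
        # e = [e_mot + Z(e_mot^l), e_sub + Z(e_sub^l)]
        e = torch.cat([e_mot + self.zero_mot(self.res_mot),
                       e_sub + self.zero_sub(self.res_sub)], dim=1)
        # e' = e + Z(R(e))
        return e + self.zero_ref(self.refiner(e))
```

Because the projections start at zero, training begins from the plain MLLM embeddings and only gradually injects the learned subject and motion residuals.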

This mechanism is crucial for avoiding semantic confusion—frequent in earlier approaches that use one embedding for both motion and subject—allowing for more accurate motion transfer across arbitrary subjects and supporting cross-domain generalization in both text-to-video (T2V) and image-to-video (I2V) settings.

3. Parameter-Efficient Visual Motion Adaptation

To address the observed limitations of semantic-only subject-motion transfer—especially inadequate motion fidelity and lack of temporal coherence—SynMotion employs trainable low-rank adapters within the video diffusion backbone:

  • Adapter Structure: In every self-/cross-attention block, for each projection weight \mathbf{W}_* with * \in \{Q, K, V\}, the adapter computes \tilde{\mathbf{W}}_* = \mathbf{W}_* + \mathbf{B}_*\mathbf{A}_*, with \mathbf{A}_* \in \mathbb{R}^{r\times d}, \mathbf{B}_* \in \mathbb{R}^{d\times r}, and r \ll d. All other (original) model weights are frozen.
  • Role: These adapters allow effective, high-fidelity adaptation to the target motion while maintaining subject identity and global content, without catastrophic forgetting.
  • Benefits: They support visually plausible amplitude, timing, and subtlety in motion sequences, yielding better performance in both dynamic and static subject rendering, particularly for rare or complex actions.

This parameter efficiency also permits rapid, scalable adaptation to new subject-motion pairs, supporting practical and industrial use cases.
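
A minimal sketch of such a low-rank adapter wrapped around one frozen attention projection is shown below; the rank and initialization values are illustrative, not values reported by the paper.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen projection W_* (Q, K, or V) with a trainable low-rank
    update B_* A_*, so the effective weight is W_* + B_* A_*."""
    def __init__(self, frozen_linear: nn.Linear, rank: int = 16):
        super().__init__()
        d_out, d_in = frozen_linear.weight.shape
        self.base = frozen_linear
        self.base.requires_grad_(False)                           # original weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)     # A_* in R^{r x d}
        self.B = nn.Parameter(torch.zeros(d_out, rank))           # B_* in R^{d x r}, zero-init

    def forward(self, x):
        # x W^T + x A^T B^T  ==  x (W + B A)^T
        return self.base(x) + (x @ self.A.t()) @ self.B.t()
```

Wrapping the Q/K/V projections of every denoising block this way leaves the pre-trained backbone intact; only the A_* and B_* matrices receive gradients.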

4. Embedding-Specific Alternate Training with Subject Prior Videos

SynMotion introduces an alternately optimized embedding training strategy to balance subject generalization and motion specificity:

  • Subject Prior Video (SPV) Dataset: A curated set of videos combining diverse subjects (animals, humans, objects) with generic motions. The SPV dataset is used to regularize the subject embedding and prevent overfitting to target-specific features.
  • Alternate Training: During training, with probability \alpha the model is trained on target motion samples (updating both e_{mot}^l and e_{sub}^l), and with probability 1-\alpha on SPV videos (updating only e_{sub}^l while freezing e_{mot}^l).
  • Rationale: This division ensures that motion embeddings remain motion-specific, while subject embeddings maintain broader generalization capacity.

A plausible implication is that this mechanism prevents semantic collapse (where the model merges subject and motion roles) and supports robust transfer even for previously unseen or out-of-distribution subjects and motions.
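
A sketch of one alternate-training step under these rules is given below; the value of \alpha, the batch sources, and the attribute names (which reuse the residuals from the earlier DualEmbedding sketch) are assumptions for illustration.

```python
import random

def alternate_step(model, loss_fn, target_batch, spv_batch, optimizer, alpha=0.7):
    """With probability alpha, train on the target motion (both residuals learn);
    otherwise train on a Subject Prior Video and freeze the motion residual.
    `model.embedding.res_mot` is a hypothetical handle to e_mot^l."""
    use_target = random.random() < alpha
    batch = target_batch if use_target else spv_batch
    # The motion residual e_mot^l only receives gradients on target-motion samples.
    model.embedding.res_mot.requires_grad_(use_target)
    loss = loss_fn(model, batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```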

5. MotionBench: A Benchmark for Motion-Customized Video Generation

MotionBench is a rigorously curated dataset and protocol developed to standardize evaluation for motion-customized video generation:

  • Content: Encompasses 16 challenging motion categories, each paired with 6–10 diverse real-world videos. Motions were vetted for difficulty by verifying that existing SOTA models (e.g., HunyuanVideo) fail on them, ensuring a genuine evaluation challenge.
  • Prompt Structure: All queries have the format "<subject, motion>", supporting compositional testing (subject-motion recombination).
  • Purpose: Provides a robust testbed for cross-subject generalization, motion transfer specificity, motion consistency, and visual quality across both T2V and I2V paradigms.

This benchmark is necessary because conventional evaluation datasets did not adequately capture the complexity or compositional requirements of flexible motion transfer.
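
As an illustration of the compositional "<subject, motion>" protocol, recombined evaluation prompts can be enumerated as a simple cross product; the subjects and motions below are hypothetical placeholders, not MotionBench entries.

```python
from itertools import product

# Hypothetical examples only; MotionBench defines its own 16 motion categories.
subjects = ["a corgi", "a robot", "an astronaut"]
motions = ["claps", "waves both hands", "jumps over a box"]

# Cross-subject generalization: every subject is paired with every motion.
eval_prompts = [f"<{s}, {m}>" for s, m in product(subjects, motions)]
print(eval_prompts[0])   # "<a corgi, claps>"
```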

6. Experimental Results and Outperformance

Extensive experiments on MotionBench and in generalized I2V settings demonstrate that SynMotion achieves state-of-the-art (SOTA) results:

  • Motion Accuracy: 68.60% (T2V, MotionBench), significantly higher than all baselines.
  • Subject Accuracy and Consistency: 97.67% and 98.28% (I2V), surpassing other adaptive and subject-centric video models.
  • Dynamic Degree and Imaging Quality: 88.24%, 69.47% (T2V), showing high-quality, temporally coherent outputs.
  • Aesthetics and Generalization: Outperforms models like MotionInversion, DreamBooth, and CogVideoX-I2V in both metrics and generalization, including rare cross-domain subject-motion pairs.

Ablation experiments confirm that each module (dual-embedding, refiner, adapter, alternate training) substantially contributes to overall performance. The model is robust to out-of-distribution queries and supports direct I2V motion transfer, generating dynamic, realistic motion sequences from a single template image.

7. Significance and Position within the Field

SynMotion establishes a comprehensive standard and methodology for motion-specific video synthesis:

| Contribution | Description | Impact |
|---|---|---|
| Dual-embedding decomposition | Disentangles motion and subject semantics | Supports precise transfer and generality |
| Visual adapters | Lightweight, parameter-efficient visual fine-tuning | Enhances fidelity/coherence, fast tuning |
| Alternate embedding training | Regularizes and balances subject-motion specificity | Prevents semantic/visual interference |
| MotionBench | Standardized, realistic, and challenging benchmark | Supports reproducible evaluation |
| SOTA T2V and I2V performance | Quantitative and qualitative advances over baselines | Sets new research/industry benchmark |

This work provides a rigorous and scalable approach for composing, transferring, and evaluating arbitrary subject-motion pairs in video synthesis, and positions semantic-visual adaptation as a key paradigm in motion-customized generative video models.