UniAVGen: Unified Audio-Video Generation
- UniAVGen is a unified audio–video generation framework that employs a symmetric dual Diffusion Transformer architecture to represent and generate cross-modal signals.
- It implements asymmetric cross-modal interaction, face-aware modulation, and modality-aware classifier-free guidance to enhance temporal alignment and semantic consistency.
- The framework achieves high-fidelity synchronization and generation quality with strong sample efficiency, training on only 1.3M paired AV samples versus up to 30.7M for comparable models.
UniAVGen is a unified audio–video generation framework that addresses the persistent challenge of cross-modal synchronization and semantic consistency in open-source generative models. Built on a structurally symmetric dual Diffusion Transformer (DiT) backbone, UniAVGen implements Asymmetric Cross-Modal Interaction for temporally aligned bidirectional attention, Face-Aware Modulation for spatially selective fusion, and Modality-Aware Classifier-Free Guidance to explicitly amplify cross-modal signals during inference. Its joint synthesis paradigm enables a single model to perform joint audio–video generation, cross-modal continuation, video-to-audio dubbing, and audio-driven video synthesis with markedly fewer paired training examples than prior solutions.
1. Dual-Branch Joint Synthesis Architecture
UniAVGen comprises two parallel DiT streams, one dedicated to video and one to audio, built on Wan 2.2–5B (video) and Wan 2.1–1.3B (audio) backbones. The two branches follow a structurally symmetric block design, which lets their intermediate representations be aligned and exchanged through the cross-modal interaction modules described below.
At each diffusion timestep $t$, both branches receive the following inputs (a minimal assembly sketch follows this list):
- a reference latent (from a reference frame or audio segment),
- a conditional latent (enabling continuation and controllability),
- the current noisy latent,
- and modality-specific textual embeddings.
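The paper's exact conditioning interface is not reproduced above; a minimal PyTorch-style sketch of how such a branch input could be assembled (tensor names, ordering, and layout are illustrative assumptions) is:

```python
import torch

def build_branch_input(ref_latent: torch.Tensor,
                       cond_latent: torch.Tensor,
                       noisy_latent: torch.Tensor) -> torch.Tensor:
    """Concatenate reference, conditional, and noisy latents along the
    temporal axis to form one DiT branch input.

    All tensors are assumed to share shape (batch, time, ...); the names and
    layout are illustrative, not the paper's exact interface. Text embeddings
    are injected separately via cross-attention inside every DiT block.
    """
    return torch.cat([ref_latent, cond_latent, noisy_latent], dim=1)
```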
Video Stream
Input video is downsampled to 16 fps and encoded via a pre-trained VAE into latents, together with additional reference and conditional latents. These are concatenated along the temporal axis to form the video branch input.
A umT5-encoded prompt (“desired motion/expression”) is injected into every DiT block via cross-attention. The training objective follows flow matching: the branch regresses the velocity of the noise-to-data interpolation (a standard form of this loss is written out below).
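The paper's exact notation is not reproduced above; a standard conditional flow-matching objective of this form (the symbols $z_0$, $z_1$, $u_\theta$, and conditioning $c$ are notational assumptions) would read:

```latex
% Linear interpolation path z_t = (1 - t) z_0 + t z_1 between Gaussian noise z_0
% and the clean video latent z_1; the DiT u_theta regresses the target velocity
% z_1 - z_0 given the text/reference conditioning c.
\mathcal{L}_{\mathrm{video}} =
  \mathbb{E}_{t,\; z_0 \sim \mathcal{N}(0, I),\; z_1}
  \Bigl\| \, u_\theta\bigl(z_t,\, t,\, c\bigr) - \bigl(z_1 - z_0\bigr) \Bigr\|_2^2
```

The audio branch uses the same construction over its Mel-spectrogram latents.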
Audio Stream
Audio is resampled to 24 kHz and converted into Mel-spectrogram latents using a VAE. Reference and conditional audio are encoded in the same way, and the latents are concatenated to form the audio branch input, mirroring the video stream.
Textual prompts (“the text to be spoken”) are processed by a ConvNeXt stack to yield conditioning features. The audio branch is trained with the same flow-matching objective as the video branch.
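As a concrete illustration of the audio front end, a minimal torchaudio sketch of the resampling and Mel-spectrogram step (the FFT size, hop length, and Mel-bin count are assumed defaults, not the paper's settings):

```python
import torch
import torchaudio

def audio_to_mel(waveform: torch.Tensor, orig_sr: int) -> torch.Tensor:
    """Resample audio to 24 kHz and convert it to a log-Mel spectrogram
    before VAE encoding. Front-end parameters are illustrative assumptions."""
    waveform = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=24_000)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=24_000, n_fft=1024, hop_length=256, n_mels=80
    )(waveform)
    return torch.log(mel.clamp(min=1e-5))  # log compression for numerical stability
```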
Both the video and audio DiT streams are governed by the same standard latent diffusion forward process, in which clean latents are progressively blended with Gaussian noise.
At inference, the learned velocity field is integrated numerically (e.g., with an Euler-type solver) from pure noise to the clean-latent endpoint; the resulting latents are decoded into video or audio by their respective VAE decoders (a sampling sketch follows).
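A minimal sketch of this sampling loop, assuming a noise-to-data time convention from $t=0$ to $t=1$ and a hypothetical `velocity_fn` wrapping one conditioned DiT branch:

```python
import torch

@torch.no_grad()
def sample_latent(velocity_fn, shape, num_steps: int = 50, device: str = "cuda"):
    """Integrate a learned velocity field from noise (t=0) to data (t=1)
    with a fixed-step Euler solver.

    `velocity_fn(z, t)` stands in for one DiT branch plus its conditioning;
    the step count and time convention are illustrative assumptions.
    """
    z = torch.randn(shape, device=device)      # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((shape[0],), i * dt, device=device)
        z = z + dt * velocity_fn(z, t)         # Euler update along the flow
    return z                                   # decode with the modality's VAE
```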
2. Asymmetric Cross-Modal Interaction
UniAVGen’s core synchronization mechanism consists of two modality-specific aligner modules. These modules establish bidirectional, temporally aligned cross-attention between modalities.
Audio→Video Aligner (A2V)
Given video features and audio features, for each video frame the A2V aligner proceeds as follows (a minimal code sketch follows this list):
- Aggregate an audio context window of fixed width centered on the audio tokens temporally aligned with the current frame.
- Perform cross-attention with the frame's video tokens as queries and the windowed audio features as keys and values.
- Aggregate the attended output and add it residually to the frame's video features.
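A minimal PyTorch-style sketch of the A2V aligner, assuming per-frame spatial video tokens and a fixed audio window (tensor layouts, the window size, and module names are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class AudioToVideoAligner(nn.Module):
    """Temporally aligned audio-to-video cross-attention (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 8, window: int = 2):
        super().__init__()
        self.window = window                        # audio tokens gathered per video frame
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)             # zero-init: no cross-modal flow at start
        nn.init.zeros_(self.out.bias)

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # video: (B, Tv, Nv, D) per-frame spatial tokens; audio: (B, Ta, D) audio tokens
        B, Tv, Nv, D = video.shape
        Ta = audio.shape[1]
        out = torch.zeros_like(video)
        for i in range(Tv):
            # audio tokens temporally aligned with video frame i, +/- window
            center = int(round(i * (Ta - 1) / max(Tv - 1, 1)))
            lo, hi = max(0, center - self.window), min(Ta, center + self.window + 1)
            ctx = audio[:, lo:hi]                   # (B, W, D) local audio context
            q = video[:, i]                         # (B, Nv, D) video queries for frame i
            attended, _ = self.attn(q, ctx, ctx)
            out[:, i] = self.out(attended)
        return video + out                          # zero residual at initialization
```

The zero-initialized output projection implements the stabilization trick described after the V2A aligner below.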
Video→Audio Aligner (V2A)
For each audio token, mapped to a fractional position on the video frame axis by linear interpolation:
- Interpolate the video features at that position to obtain a temporally aligned video context.
- Compute cross-attention with the audio token as the query and the interpolated video context as keys and values.
- Aggregate the attended output and add it residually to the audio token's features.
To prevent destabilization after single-modality pretraining, the output projections of both aligners are zero-initialized, so cross-modal information flow starts from a zero residual and is learned gradually during joint training (see the zero-initialized projection in the A2V sketch above; a sketch of the temporal interpolation step follows).
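For the V2A direction, video features must first be resampled to the audio token timeline; a minimal sketch of that interpolation step (mean-pooling each frame's spatial tokens is an assumption made here for brevity):

```python
import torch
import torch.nn.functional as F

def interpolate_video_context(video: torch.Tensor, num_audio_tokens: int) -> torch.Tensor:
    """Resample per-frame video features to the audio token timeline.

    video: (B, Tv, Nv, D) per-frame spatial tokens. Spatial tokens are mean-pooled
    to one vector per frame, then linearly interpolated along time so every audio
    token receives a temporally aligned video context vector.
    """
    frame_feats = video.mean(dim=2)               # (B, Tv, D)
    frame_feats = frame_feats.transpose(1, 2)     # (B, D, Tv) for 1D interpolation
    ctx = F.interpolate(frame_feats, size=num_audio_tokens,
                        mode="linear", align_corners=True)
    return ctx.transpose(1, 2)                    # (B, Ta, D)
```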
3. Face-Aware Modulation
To further boost lip synchronization and semantic coupling—especially in speech settings—Face-Aware Modulation (FAM) adaptively gates cross-modal interaction according to facial saliency.
FAM Mechanism
- Compute a per-layer, per-token face mask from the video features: a learned affine projection followed by a sigmoid yields values in [0, 1] that gate the cross-modal features by elementwise scaling (a code sketch of the mechanism follows this list).
- Supervise the masks against face regions detected by RetinaFace with an auxiliary loss whose weight decays from 0.1 to 0 over the course of training.
- During cross-modal attention:
- A2V: only video tokens inside the predicted face region receive the audio-derived update.
- V2A: only facial video features serve as context when audio tokens attend back to the video.
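A minimal sketch of the gating idea, assuming the mask is predicted per video token by a sigmoid-activated linear head, used to gate the A2V residual, and supervised with a binary cross-entropy term (layer names, shapes, and the BCE choice are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceAwareModulation(nn.Module):
    """Predict a per-token face mask and gate the cross-modal residual with it."""

    def __init__(self, dim: int):
        super().__init__()
        self.mask_head = nn.Linear(dim, 1)          # learned affine projection

    def forward(self, video_tokens, cross_modal_residual, face_target=None, aux_weight=0.1):
        # video_tokens, cross_modal_residual: (B, T, N, D); face_target: (B, T, N) in {0, 1}
        mask = torch.sigmoid(self.mask_head(video_tokens))      # (B, T, N, 1)
        gated = video_tokens + mask * cross_modal_residual      # only face tokens are updated

        aux_loss = None
        if face_target is not None:                 # e.g. RetinaFace boxes rasterized to tokens
            aux_loss = aux_weight * F.binary_cross_entropy(
                mask.squeeze(-1), face_target.float()
            )
        return gated, mask, aux_loss
```

The decaying supervision weight (0.1 to 0) lets the detector bootstrap the mask early in training while leaving the gate free to adapt later.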
A plausible implication is that face-focused modulation efficiently allocates model capacity to the primary driver of audio–video synchronization—oral-facial motion during speech.
4. Modality-Aware Classifier-Free Guidance
Standard classifier-free guidance (CFG) applies a fixed rescaling to conditional and unconditional predictions per modality, potentially muting cross-modal coupling. UniAVGen implements Modality-Aware CFG (MA-CFG) that uses a joint unconditional pass (with cross-modal attention disabled) as a baseline for both streams, explicitly amplifying the cross-modal contribution.
MA-CFG combines three score estimates per denoising step:
- the video unconditional score, computed from the video text prompt alone with cross-modal interaction disabled;
- the audio unconditional score, computed analogously for the audio branch;
- the joint score, computed with full cross-modal interactions enabled.
The final score for each modality adds to its unconditional score a guidance-scaled difference between the joint score and that unconditional score. With separate guidance scales for video and audio, this directly boosts cross-modal information flow and thus strengthens the correlation of emotional and motion cues between audio and video (the update is written out below in standard CFG notation).
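In standard CFG notation (the symbols $\hat{v}$, $s_v$, and $s_a$ are assumptions made here, not the paper's exact formulation), the per-modality update has the form:

```latex
% Modality-aware CFG: the cross-modal contribution is amplified per modality,
% using the interaction-disabled pass as the unconditional baseline.
\hat{v}_{\mathrm{video}} = v_{\mathrm{video}}^{\mathrm{uncond}}
    + s_v \left( v_{\mathrm{video}}^{\mathrm{joint}} - v_{\mathrm{video}}^{\mathrm{uncond}} \right),
\qquad
\hat{v}_{\mathrm{audio}} = v_{\mathrm{audio}}^{\mathrm{uncond}}
    + s_a \left( v_{\mathrm{audio}}^{\mathrm{joint}} - v_{\mathrm{audio}}^{\mathrm{uncond}} \right)
```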
5. Training Regime and Data Efficiency
UniAVGen’s training proceeds in three sequential stages:
- Audio-only pretraining: 160K steps at batch size 256, optimizing only the audio branch's flow-matching loss.
- End-to-end joint training: 30K steps at batch size 32, jointly optimizing the video and audio losses on approximately 1.3M real-human AV samples.
- Multi-task fine-tuning: 10K steps with the same hyperparameters, sampling five tasks (joint generation : joint generation with audio reference : continuation : video→audio : audio→video) in the ratio 4:1:1:2:2; a sampling sketch follows this list.
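A minimal sketch of the stage-3 task sampling, assuming tasks are drawn independently per training example with the stated 4:1:1:2:2 weights (task names are paraphrased from the list above):

```python
import random

# Stage-3 multi-task sampling weights (the 4:1:1:2:2 ratio from the training recipe).
TASKS = {
    "joint_generation": 4,
    "joint_generation_with_audio_reference": 1,
    "continuation": 1,
    "video_to_audio": 2,
    "audio_to_video": 2,
}

def sample_task() -> str:
    """Draw one fine-tuning task per training example according to the ratio."""
    names, weights = zip(*TASKS.items())
    return random.choices(names, weights=weights, k=1)[0]
```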
This regime demonstrates marked sample efficiency: UniAVGen requires only 1.3M paired AV samples, compared with 30.7M for Ovi and 6.4M for UniVerse-1.
6. Empirical Performance and Comparative Results
Experimental evaluation is conducted using AudioBox-Aesthetics (Production Quality PQ, Content Usefulness CU), Whisper-large (WER), VBench (Subject Consistency SC, Dynamic Degree DD, Imaging Quality IQ), SyncNet (Lip Sync LS), and Gemini LLM for Timbre and Emotion Consistency (TC/EC). Key results are summarized in the following table:
| Model | PQ | CU | WER | SC | DD | IQ | LS | TC | EC | Training Samples |
|---|---|---|---|---|---|---|---|---|---|---|
| UniAVGen | 7.00 | 6.62 | 0.151 | 0.973 | 0.410 | 0.779 | 5.95 | 0.832 | 0.573 | 1.3 M |
| Ovi | 6.03 | 6.01 | 0.216 | 0.972 | 0.360 | 0.774 | 6.48 | 0.828 | 0.558 | 30.7 M |
| UniVerse-1 | 4.56 | 4.29 | 0.296 | 0.985 | 0.08 | 0.733 | 1.21 | 0.573 | 0.300 | 6.4 M |
UniAVGen outperforms all open-source joint-generation models in audio quality (PQ, CU) and in timbre and emotion consistency (TC, EC), and remains competitive in lip synchronization (LS), while using an order of magnitude less training data. Qualitative analysis notes that UniAVGen and Ovi yield high-fidelity results for in-distribution human video; where Ovi and UniVerse-1 struggle with stylized or out-of-domain scenarios, UniAVGen maintains alignment and audio–visual coherence.
Ablations indicate:
- Asymmetric, temporally aligned cross-modal interactions (ATI) yield the highest consistency early in training.
- Supervised face-aware masks with a decaying supervision weight substantially improve LS, TC, and EC compared to unsupervised or absent FAM.
- MA-CFG enhances emotional and motion correlation.
- The pretrain–joint–multi-task schedule achieves the fastest and highest convergence in consistency metrics.
7. Limitations and Prospective Directions
Despite its strengths, UniAVGen reveals a number of limitations and opportunities for future improvement:
- All face-awareness is based on 2D spatial masks from supervised detection; occlusions or extreme poses may not be optimally captured.
- The backbone size, while competitive, remains smaller than some proprietary models; further scaling and architectural refinements (dynamic depth, advanced regularization) are needed for continued improvement.
- Current modeling does not exploit multi-speaker scenarios, fine-grained scene context outside facial regions, or longer-form cross-modal temporal dependencies.
A plausible implication is that with further data expansion and semi-supervised mask learning, as well as architectural adaptation for broader genre and speaker coverage, UniAVGen could extend its performance margin and approach the few remaining specialty strengths of closed commercial baselines.