GenSync: A Generalized Talking Head Framework for Audio-driven Multi-Subject Lip-Sync using 3D Gaussian Splatting (2505.01928v1)

Published 3 May 2025 in cs.CV

Abstract: We introduce GenSync, a novel framework for multi-identity lip-synced video synthesis using 3D Gaussian Splatting. Unlike most existing 3D methods that require training a new model for each identity, GenSync learns a unified network that synthesizes lip-synced videos for multiple speakers. By incorporating a Disentanglement Module, our approach separates identity-specific features from audio representations, enabling efficient multi-identity video synthesis. This design reduces computational overhead and achieves 6.8x faster training compared to state-of-the-art models, while maintaining high lip-sync accuracy and visual quality.

Summary

  • The paper introduces GenSync, a framework using 3D Gaussian Splatting for audio-driven lip-sync that can synthesize videos for multiple speakers without requiring separate model training per identity.
  • GenSync employs an identity-aware disentanglement module and a fused spatial-audio attention network to separate identity from audio features, enabling efficient multi-identity synthesis without per-frame 3DMM extraction.
  • Experimental results show GenSync trains approximately 6.8 times faster than baseline methods while achieving comparable or improved synchronization accuracy and demonstrating robustness across different identities and audio distributions.

An Overview of GenSync: A Framework for Audio-Driven Lip-Sync

The paper "GenSync: A Generalized Talking Head Framework for Audio-driven Multi-Subject Lip-Sync using 3D Gaussian Splatting" by Anushka Agarwal, Muhammad Yusuf Hassan, and Talha Chafekar introduces an audio-driven framework that leverages 3D Gaussian Splatting for multi-identity lip-synced video synthesis. GenSync marks a notable advance in talking head generation: it addresses a key limitation of existing methods by synthesizing lip-synced videos for multiple speakers without requiring a separately trained model per identity.

The GenSync Framework

GenSync is built on explicit 3D techniques, sidestepping the constraints of generalized 2D approaches such as GAN- and Transformer-based methods. By employing 3D Gaussian Splatting, the system captures intricate 3D facial geometry, which is crucial for realistic lip-sync. Central to the framework is the Disentanglement Module, which separates identity-specific features from audio representations; this separation enables efficient multi-identity video synthesis and underpins the method's training efficiency and scalability. A minimal sketch of the representation's per-Gaussian parameters follows.
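
For readers unfamiliar with the representation, here is a minimal sketch of the per-Gaussian parameters standard in 3D Gaussian Splatting; the class and field names are illustrative, not GenSync's actual schema.

```python
# Minimal sketch of standard 3D Gaussian Splatting parameters
# (illustrative names, not the paper's schema).
from dataclasses import dataclass
import torch

@dataclass
class GaussianCloud:
    positions: torch.Tensor   # (N, 3) means of the 3D Gaussians
    rotations: torch.Tensor   # (N, 4) unit quaternions (orientation)
    scales: torch.Tensor      # (N, 3) per-axis standard deviations
    opacities: torch.Tensor   # (N, 1) alpha values in [0, 1]
    sh_coeffs: torch.Tensor   # (N, K, 3) spherical-harmonic color coefficients

    def covariances(self) -> torch.Tensor:
        """Assemble per-Gaussian 3x3 covariances as R diag(s^2) R^T."""
        q = torch.nn.functional.normalize(self.rotations, dim=-1)
        w, x, y, z = q.unbind(-1)
        R = torch.stack([
            1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
            2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
            2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
        ], dim=-1).reshape(-1, 3, 3)
        return R @ torch.diag_embed(self.scales ** 2) @ R.transpose(-1, -2)
```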

Methodological Advances

  • Identity-Aware Disentanglement Module: GenSync employs a factorized identity-audio representation, conditioning Gaussian-based facial motion directly on audio embeddings. This eliminates the per-frame 3DMM expression extraction prevalent in previous approaches, improving computational efficiency, particularly for real-time synthesis. The identity and audio embeddings are combined multiplicatively via the Hadamard product, preserving non-linear dependencies while keeping each modality's contribution independent (see the fusion sketch after this list).
  • Fused Spatial-Audio Attention Network: This network computes cross-attention between canonical features and identity-conditioned audio inputs, capturing how individual facial regions respond to audio while maintaining identity-specific influences. The fused spatial-audio embeddings are passed to a Multi-Layer Perceptron (MLP) that predicts deformation offsets (see the attention sketch below).
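
The following is a minimal PyTorch sketch of the Hadamard-product fusion the first bullet describes; the embedding table, projection layer, and dimensions are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class IdentityAudioFusion(nn.Module):
    """Sketch: fuse a learned identity embedding with an audio feature
    via the Hadamard (element-wise) product, as the paper describes.
    Layer sizes and projections are illustrative assumptions."""
    def __init__(self, num_identities: int, audio_dim: int, embed_dim: int):
        super().__init__()
        self.identity_table = nn.Embedding(num_identities, embed_dim)
        self.audio_proj = nn.Linear(audio_dim, embed_dim)

    def forward(self, identity_ids: torch.Tensor, audio_feats: torch.Tensor):
        # identity_ids: (B,) integer IDs; audio_feats: (B, audio_dim)
        ident = self.identity_table(identity_ids)   # (B, embed_dim)
        audio = self.audio_proj(audio_feats)        # (B, embed_dim)
        # Element-wise product keeps the two modalities factorized
        # while still allowing multiplicative interactions.
        return ident * audio                        # (B, embed_dim)
```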

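Similarly, here is a hedged sketch of the fused spatial-audio attention step: canonical per-Gaussian features attend to identity-conditioned audio tokens, and an MLP maps the fused embeddings to per-Gaussian deformation offsets. The module choices and dimensions are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialAudioAttention(nn.Module):
    """Sketch: canonical Gaussian features (queries) cross-attend to
    identity-conditioned audio tokens (keys/values); an MLP predicts
    per-Gaussian deformation offsets from the fused embeddings."""
    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.offset_mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 3),  # per-Gaussian XYZ deformation offset
        )

    def forward(self, canon_feats: torch.Tensor, audio_tokens: torch.Tensor):
        # canon_feats: (B, N, feat_dim) canonical Gaussian features
        # audio_tokens: (B, T, feat_dim) identity-conditioned audio features
        fused, _ = self.attn(query=canon_feats, key=audio_tokens, value=audio_tokens)
        return self.offset_mlp(fused)  # (B, N, 3) offsets for the positions
```
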
Experimental Insights

GenSync delivers several noteworthy results. It matches the baseline GaussianTalker on perceptual metrics such as LPIPS and FID while improving synchronization accuracy, and it trains approximately 6.8 times faster than GaussianTalker, underscoring its computational efficiency. These results are validated on a dataset of publicly available videos using LPIPS, FID, and SyncNet scores; a sketch of how such an evaluation might be assembled with open-source tools follows.
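
The sketch below computes LPIPS (via the lpips package) and accumulates FID statistics (via torchmetrics); SyncNet confidence requires the pretrained SyncNet model and is omitted. This is illustrative, not the authors' evaluation code.

```python
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance

lpips_fn = lpips.LPIPS(net="alex")                           # AlexNet-backed LPIPS
fid = FrechetInceptionDistance(feature=2048, normalize=True)  # accepts floats in [0, 1]

def evaluate_batch(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """pred, gt: (B, 3, H, W) float frames in [0, 1]. Returns batch LPIPS."""
    # LPIPS expects inputs scaled to [-1, 1].
    lpips_score = lpips_fn(pred * 2 - 1, gt * 2 - 1).mean().item()
    # FID accumulates statistics; update over many batches,
    # then call fid.compute() once at the end for the final score.
    fid.update(gt, real=True)
    fid.update(pred, real=False)
    return lpips_score
```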

The framework's robustness is further evidenced by experiments with identity-embedding switching and with novel audio distributions. These tests highlight GenSync's ability to transfer speaking styles across identities and to maintain performance under significant distribution shifts, such as using a female speaker's audio to drive a male speaker's video synthesis. The snippet below makes the identity-switching setup concrete.
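
Illustrative only: reusing the hypothetical IdentityAudioFusion sketch from earlier, the identity-switching test amounts to pairing one speaker's audio features with another speaker's identity embedding. All names and dimensions are assumptions.

```python
import torch

# Speaker A's audio drives speaker B's identity embedding.
fusion = IdentityAudioFusion(num_identities=4, audio_dim=29, embed_dim=64)
audio_a = torch.randn(120, 29)                      # audio features from speaker A
ids_b = torch.full((120,), 1, dtype=torch.long)     # identity ID of speaker B
cond = fusion(ids_b, audio_a)                       # conditioning for B's Gaussians
```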

Implications and Future Directions

Practically, GenSync provides a scalable method for generating high-quality lip-synced talking heads applicable across various domains, from virtual avatars to film production. Theoretically, its disentanglement strategy offers a template for future efforts in multi-subject synthesis, potentially extending to more complex non-rigid motions or interaction scenarios.

Future work could integrate a single-stage training paradigm to further improve computational efficiency. Expanding the framework to a larger set of speaker identities would also improve generalizability, yielding more robust solutions for real-world speech-driven animation. By balancing computational cost with facial-animation accuracy, GenSync lays a solid foundation for advancing the state of the art in multi-identity talking head synthesis.