
BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer (2310.06851v1)

Published 7 Sep 2023 in cs.CV, cs.AI, and cs.GR

Abstract: Automatic gesture synthesis from speech is a topic that has attracted researchers for applications in remote communication, video games and Metaverse. Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training. In this paper, we propose a novel transformer-based framework for automatic 3D body gesture synthesis from speech. To learn the stochastic nature of the body gesture during speech, we propose a variational transformer to effectively model a probabilistic distribution over gestures, which can produce diverse gestures during inference. Furthermore, we introduce a mode positional embedding layer to capture the different motion speeds in different speaking modes. To cope with the scarcity of data, we design an intra-modal pre-training scheme that can learn the complex mapping between the speech and the 3D gesture from a limited amount of data. Our system is trained with either the Trinity speech-gesture dataset or the Talking With Hands 16.2M dataset. The results show that our system can produce more realistic, appropriate, and diverse body gestures compared to existing state-of-the-art approaches.
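The mode positional embedding described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a per-mode lookup table of positional vectors, so frames belonging to different speaking modes (e.g. fast vs. slow speech) receive distinct position codes. All names, shapes, and the NumPy setting are illustrative assumptions.

```python
import numpy as np

class ModePositionalEmbedding:
    """Hypothetical sketch: one learnable positional table per speaking mode."""

    def __init__(self, num_modes, max_len, dim, seed=0):
        rng = np.random.default_rng(seed)
        # one positional table per mode: shape (num_modes, max_len, dim)
        self.tables = rng.normal(0.0, 0.02, size=(num_modes, max_len, dim))

    def __call__(self, frame_features, mode_ids):
        # frame_features: (T, dim) per-frame features
        # mode_ids: (T,) integer speaking-mode label for each frame
        T, _ = frame_features.shape
        pos = np.arange(T)
        # pick, for each frame t, the embedding at position t in that
        # frame's mode-specific table, then add it to the features
        return frame_features + self.tables[mode_ids, pos]

emb = ModePositionalEmbedding(num_modes=2, max_len=100, dim=8)
x = np.zeros((5, 8))
modes = np.array([0, 0, 1, 1, 1])
out = emb(x, modes)
print(out.shape)  # (5, 8)
```

Because `x` is all zeros here, each output row is exactly the positional vector drawn from the table of that frame's mode, which makes the mode-dependence easy to inspect.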

Authors (7)
  1. Kunkun Pang (3 papers)
  2. Dafei Qin (6 papers)
  3. Yingruo Fan (5 papers)
  4. Julian Habekost (1 paper)
  5. Takaaki Shiratori (18 papers)
  6. Junichi Yamagishi (178 papers)
  7. Taku Komura (66 papers)
Citations (15)
