UniMuMo: Unified Text, Music and Motion Generation (2410.04534v1)

Published 6 Oct 2024 in cs.SD, cs.CV, cs.GR, cs.LG, cs.MM, and eess.AS

Abstract: We introduce UniMuMo, a unified multimodal model capable of taking arbitrary text, music, and motion data as input conditions to generate outputs across all three modalities. To address the lack of time-synchronized data, we align unpaired music and motion data based on rhythmic patterns to leverage existing large-scale music-only and motion-only datasets. By converting music, motion, and text into token-based representation, our model bridges these modalities through a unified encoder-decoder transformer architecture. To support multiple generation tasks within a single framework, we introduce several architectural improvements. We propose encoding motion with a music codebook, mapping motion into the same feature space as music. We introduce a music-motion parallel generation scheme that unifies all music and motion generation tasks into a single transformer decoder architecture with a single training task of music-motion joint generation. Moreover, the model is designed by fine-tuning existing pre-trained single-modality models, significantly reducing computational demands. Extensive experiments demonstrate that UniMuMo achieves competitive results on all unidirectional generation benchmarks across music, motion, and text modalities. Quantitative results are available on the project page (https://hanyangclarence.github.io/unimumo_demo/).


Summary

  • The paper introduces a unified model that bridges text, music, and motion via token-based representations, reducing computational loads by reusing pre-trained models.
  • The paper employs joint music-motion tokenization and parallel generation with beat-based synchronization to align unpaired data effectively.
  • The paper demonstrates competitive performance on diverse benchmarks and opens new research avenues for adaptive multimedia applications.

Overview of UniMuMo: Unified Text, Music, and Motion Generation

The paper introduces UniMuMo, a comprehensive multimodal framework capable of synthesizing text, music, and motion in diverse combinations. The model accepts any combination of the three modalities as conditioning input and produces outputs in the text, music, and motion domains within a single system.

Key Contributions

UniMuMo's architecture is centered on a unified encoder-decoder transformer that bridges text, music, and motion through token-based representations. Motion is encoded with a music codebook, mapping it into the same feature space as music and enabling effective cross-modal generation. Because the model is built by fine-tuning existing pre-trained single-modality models, its computational demands are significantly reduced.

A notable contribution is the alignment of unpaired music and motion data based on their rhythmic patterns. This makes existing large-scale music-only and motion-only datasets usable for training despite the scarcity of time-synchronized datasets covering all three modalities; a minimal sketch of the idea follows.
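The summary gives only the high-level idea of rhythm-based alignment, so the sketch below is illustrative rather than the authors' exact procedure. It assumes music as a mono waveform and motion as a (T, J, 3) array of joint positions at a fixed frame rate; the kinematic-beat heuristic, the librosa beat tracker, and the global tempo-ratio retiming are all assumptions made to keep the example self-contained.

```python
# Illustrative sketch of beat-based music-motion alignment (not the paper's exact code).
import numpy as np
import librosa
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

def music_beats(wav: np.ndarray, sr: int) -> np.ndarray:
    """Beat times in seconds, via librosa's beat tracker."""
    _, frames = librosa.beat.beat_track(y=wav, sr=sr)
    return librosa.frames_to_time(frames, sr=sr)

def motion_beats(joints: np.ndarray, fps: float) -> np.ndarray:
    """Kinematic beat times in seconds: local minima of total joint speed,
    a common dance-beat proxy (an assumption here, not the paper's exact rule)."""
    speed = np.linalg.norm(np.diff(joints, axis=0), axis=-1).sum(axis=-1)
    minima, _ = find_peaks(-speed, distance=max(1, int(0.25 * fps)))
    return minima / fps

def align_motion_to_music(joints: np.ndarray, fps: float,
                          wav: np.ndarray, sr: int) -> np.ndarray:
    """Globally retime the motion so its average beat interval matches the music's.
    (A finer-grained per-segment warp is possible; a global ratio keeps the sketch short.)"""
    mb, kb = music_beats(wav, sr), motion_beats(joints, fps)
    if len(mb) < 2 or len(kb) < 2:
        return joints                                     # not enough beats to estimate tempo
    ratio = np.mean(np.diff(mb)) / np.mean(np.diff(kb))   # music inter-beat / motion inter-beat
    t_old = np.arange(len(joints)) / fps * ratio          # stretched original timestamps
    t_new = np.arange(0.0, t_old[-1], 1.0 / fps)          # uniform grid at the original fps
    resample = interp1d(t_old, joints, axis=0, fill_value="extrapolate")
    return resample(t_new)                                # motion retimed to the music tempo
```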

Methodology

The core of UniMuMo lies in its ability to encode music and motion through a shared codebook, facilitating a unified feature space. This is achieved through three primary stages:

  1. Music-Motion Joint Tokenization:
    • Reuses a pre-trained audio tokenizer (Encodec) so that motion sequences are encoded into the same latent space as music, avoiding the cost of training a separate motion autoencoder (a minimal sketch of this idea follows the list).
  2. Music-Motion Parallel Generation:
    • Generates aligned music and motion autoregressively in parallel, each stream conditioned on the other, within a single transformer decoder fine-tuned from a text-to-music model (MusicGen); see the second sketch after the list.
  3. Music-Motion Captioning:
    • Employs a fine-tuned T5 decoder to produce text descriptions from music and motion, with adjustments to its self-attention layers for richer feature extraction; see the third sketch after the list.
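The joint-tokenization step can be made concrete with a minimal PyTorch sketch: a small motion encoder projects pose features into the frozen music tokenizer's latent space, and motion tokens are read off by nearest-neighbour lookup against that codebook. The layer sizes, the single codebook, and the 263-dimensional motion features are illustrative assumptions; the real Encodec tokenizer uses residual vector quantization over several codebooks.

```python
# Minimal sketch of "encode motion with the music codebook" (assumptions noted above).
import torch
import torch.nn as nn

class MotionToMusicTokens(nn.Module):
    def __init__(self, motion_dim: int, latent_dim: int, frozen_codebook: torch.Tensor):
        super().__init__()
        # Temporal conv encoder from motion features to the music latent dimension.
        self.encoder = nn.Sequential(
            nn.Conv1d(motion_dim, 256, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv1d(256, latent_dim, kernel_size=3, padding=1),
        )
        # Codebook borrowed from the pre-trained audio tokenizer; kept frozen.
        self.register_buffer("codebook", frozen_codebook)  # (K, latent_dim)

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (B, T, motion_dim) -> latents: (B, T, latent_dim)
        z = self.encoder(motion.transpose(1, 2)).transpose(1, 2)
        # Nearest codebook entry per frame yields motion tokens in the music vocabulary.
        dists = torch.cdist(z, self.codebook.unsqueeze(0).expand(z.size(0), -1, -1))
        return dists.argmin(dim=-1)  # (B, T) integer token ids

# Dummy usage: 263-dim HumanML3D-style features, 128-dim latents, 1024 codes.
codebook = torch.randn(1024, 128)
tokenizer = MotionToMusicTokens(motion_dim=263, latent_dim=128, frozen_codebook=codebook)
tokens = tokenizer(torch.randn(2, 120, 263))  # -> shape (2, 120)
```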
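The parallel-generation step can likewise be sketched. Because motion is tokenized with the music codebook, music and motion form two parallel token streams over a shared vocabulary, and a single causal decoder predicts one token of each per step while cross-attending to text features. The plain nn.TransformerDecoder, the stream-summing embedding scheme, and the dimensions below are assumptions for illustration; the actual model is fine-tuned from MusicGen's multi-stream decoder.

```python
# Sketch of music-motion parallel generation with a shared-vocabulary decoder.
import torch
import torch.nn as nn

class ParallelMusicMotionDecoder(nn.Module):
    def __init__(self, vocab: int = 2048, d_model: int = 512, n_layers: int = 6):
        super().__init__()
        self.music_emb = nn.Embedding(vocab, d_model)    # music and motion share the
        self.motion_emb = nn.Embedding(vocab, d_model)   # codebook size, hence one vocab
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.music_head = nn.Linear(d_model, vocab)
        self.motion_head = nn.Linear(d_model, vocab)

    def forward(self, music_tok, motion_tok, text_cond):
        # Sum the two parallel streams into one fused embedding per time step.
        x = self.music_emb(music_tok) + self.motion_emb(motion_tok)      # (B, T, d)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.decoder(x, memory=text_cond, tgt_mask=causal)           # text via cross-attention
        return self.music_head(h), self.motion_head(h)                   # two logits per step

# Dummy usage: batch of 2, 50 steps, 77 text-condition vectors of width 512.
model = ParallelMusicMotionDecoder()
music = torch.randint(0, 2048, (2, 50))
motion = torch.randint(0, 2048, (2, 50))
text = torch.randn(2, 77, 512)
music_logits, motion_logits = model(music, motion, text)  # each (2, 50, 2048)
```

Training only this joint-generation task, while masking or dropping one stream's conditioning, is what lets a single decoder cover music-to-motion, motion-to-music, and text-to-both generation.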
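For the captioning step, a hedged sketch using Hugging Face's T5: fused music-motion features are projected to T5's hidden size and supplied as encoder outputs, so the T5 decoder cross-attends to them while generating a caption. The fused-feature shape, the linear projection, and feeding encoder_outputs to generate() are assumptions made for a self-contained example; the paper's self-attention adjustments are not reproduced here.

```python
# Hedged sketch of music-motion captioning with a T5 decoder.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

t5 = T5ForConditionalGeneration.from_pretrained("t5-base")
tok = T5Tokenizer.from_pretrained("t5-base")

# Hypothetical fused music-motion features, e.g. pooled decoder states: (B, T, 512).
fused = torch.randn(1, 100, 512)
project = nn.Linear(512, t5.config.d_model)  # map into T5's hidden size (768 for t5-base)

with torch.no_grad():
    enc = BaseModelOutput(last_hidden_state=project(fused))
    ids = t5.generate(encoder_outputs=enc, max_new_tokens=40)
print(tok.batch_decode(ids, skip_special_tokens=True))
```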

Experimental Findings

The paper reports extensive experimental evaluations, demonstrating competitive performance on unidirectional generation benchmarks across the music, motion, and text modalities. Notably, the beat-based synchronization strategy improves rhythmic alignment between generated music and motion.

Implications and Future Directions

Practically, UniMuMo has significant potential in domains where synchronized multimodal content generation is crucial, such as adaptive entertainment systems and interactive media. Theoretically, this paper provides a robust blueprint for developing integrated multimodal generative models that leverage shared feature spaces across modalities.

Future work may explore further optimization of cross-modal attention mechanisms and extensions toward even broader multimodal integrations. Additionally, the development of comprehensive datasets encompassing music, motion, and text will likely enhance models like UniMuMo, allowing for deeper evaluation and potentially novel applications in real-time generative tasks.

In sum, UniMuMo represents a notable step forward in multimodal generation, offering a unified and efficient approach to synthesizing text, music, and motion within a single cohesive model.