Motion Anything: Any to Motion Generation (2503.06955v2)

Published 10 Mar 2025 in cs.CV

Abstract: Conditional motion generation has been extensively studied in computer vision, yet two critical challenges remain. First, while masked autoregressive methods have recently outperformed diffusion-based approaches, existing masking models lack a mechanism to prioritize dynamic frames and body parts based on given conditions. Second, existing methods for different conditioning modalities often fail to integrate multiple modalities effectively, limiting control and coherence in generated motion. To address these challenges, we propose Motion Anything, a multimodal motion generation framework that introduces an Attention-based Mask Modeling approach, enabling fine-grained spatial and temporal control over key frames and actions. Our model adaptively encodes multimodal conditions, including text and music, improving controllability. Additionally, we introduce Text-Music-Dance (TMD), a new motion dataset consisting of 2,153 pairs of text, music, and dance, making it twice the size of AIST++, thereby filling a critical gap in the community. Extensive experiments demonstrate that Motion Anything surpasses state-of-the-art methods across multiple benchmarks, achieving a 15% improvement in FID on HumanML3D and showing consistent performance gains on AIST++ and TMD. See our project website https://steve-zeyu-zhang.github.io/MotionAnything

Summary

  • The paper introduces Motion Anything, a framework that combines attention-based mask modeling with multimodal condition integration to generate controllable motion from text and music inputs.
  • A significant contribution is the Text-Music-Dance (TMD) dataset of 2,153 text-music-dance triplets, roughly twice the size of AIST++, filling a resource gap in motion generation research.
  • Experimental results show Motion Anything outperforming prior methods, including a 15% improvement (reduction) in Fréchet Inception Distance (FID) on HumanML3D.

Motion Anything: Any to Motion Generation introduces a new framework for generating motion sequences that can be controlled using various types of input, such as text and music. The paper addresses two primary challenges:

  • It improves how key frames and body parts are prioritized during generation via an Attention-based Mask Modeling approach, allowing the model to focus on the dynamic parts of an action in both temporal and spatial processing (a minimal sketch of the idea follows this list).
  • It integrates conditions from multiple modalities more effectively, resulting in generated motions that are both coherent and controllable.
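
While the paper's exact formulation isn't reproduced here, the core idea of attention-guided masking can be sketched in a few lines: rank motion tokens by condition-to-motion cross-attention and mask the highest-scoring ones, so the model must learn to reconstruct the most condition-relevant frames. Everything below (the function name, tensor shapes, and fixed masking ratio) is an illustrative assumption, not the authors' implementation.

```python
import torch

def attention_guided_mask(motion_tokens, cond_tokens, mask_ratio=0.4):
    """Sketch of attention-based mask selection (illustrative, not the
    paper's exact method): score each motion token by how strongly the
    condition (text/music embeddings) attends to it, then mask the
    top-scoring tokens so the generator is trained to reconstruct the
    most condition-relevant frames/body parts.

    motion_tokens: (B, T, D) tokenized motion sequence
    cond_tokens:   (B, S, D) encoded condition (text or music)
    """
    B, T, D = motion_tokens.shape
    # Scaled dot-product scores between every condition token and every
    # motion token: (B, S, T).
    scores = torch.einsum("bsd,btd->bst", cond_tokens, motion_tokens) / D**0.5
    # Normalize over motion tokens, then average over condition tokens
    # to get one relevance value per motion token: (B, T).
    relevance = scores.softmax(dim=-1).mean(dim=1)

    # Mask the top-k most condition-relevant motion tokens.
    k = max(1, int(mask_ratio * T))
    topk = relevance.topk(k, dim=-1).indices  # (B, k)
    mask = torch.zeros(B, T, dtype=torch.bool, device=motion_tokens.device)
    mask.scatter_(1, topk, True)
    return mask  # True = token is masked for reconstruction
```

This contrasts with random masking: here the masking ratio is spent on the tokens the condition cares about most, which is the intuition behind prioritizing dynamic frames and body parts.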

A significant contribution of the work is the introduction of the Text-Music-Dance (TMD) dataset, which contains 2,153 pairs of text, music, and dance. At roughly twice the size of AIST++, it helps bridge gaps in available resources for motion generation research.
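
For concreteness, one can picture a TMD-style sample as a text-music-dance triplet; the schema below is an assumed illustration, since the dataset's actual file format isn't detailed in this summary.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TMDSample:
    """Hypothetical schema for one Text-Music-Dance triplet
    (field names and shapes are illustrative, not the dataset's format)."""
    text: str            # natural-language description of the motion
    music: np.ndarray    # audio waveform or music features, e.g. (T_audio, F)
    dance: np.ndarray    # motion sequence, e.g. (T_frames, J, 3) joint positions

# A TMD-style dataset would then be 2,153 such triplets, pairing each
# dance with both a textual description and its accompanying music.
```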

Experimental results reported in the paper show improvements over previous methods, including a 15% improvement (i.e., reduction) in Fréchet Inception Distance (FID) on the HumanML3D benchmark, as well as consistent gains on AIST++ and the newly introduced TMD dataset.
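
Since the headline number is an FID improvement, it's worth recalling that FID is the Fréchet distance between Gaussians fitted to feature embeddings of real and generated samples, where lower is better. The sketch below is the standard formula applied to pre-extracted feature matrices (the feature extractor is assumed given), not code from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(real_feats, gen_feats):
    """Standard FID between two feature matrices of shape (N, D):
    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}).
    Lower FID means the generated distribution is closer to the real
    one, so a 15% "improvement" is a 15% reduction in this value.
    """
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)

    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny
        covmean = covmean.real     # imaginary parts; discard them
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```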

In summary, the paper presents a comprehensive, multimodal approach to motion generation that leverages advanced attention mechanisms for dynamic motion prioritization and improved integration of text and music cues, along with a robust new dataset to support future research.
