Multi-Granular Body Modeling with Redundancy-Free Spatiotemporal Fusion for Text-Driven Motion Generation (2503.06897v2)
Abstract: Text-to-motion generation sits at the intersection of multimodal learning and computer graphics, and it is gaining momentum because it can simplify content creation for games, animation, robotics, and virtual reality. Most current methods stack spatial and temporal features naively, which introduces redundancy and still misses subtle joint-level cues. We introduce HiSTF Mamba, a framework with three parts: a Dual-Spatial Mamba, a Bi-Temporal Mamba, and a Dynamic Spatiotemporal Fusion Module (DSFM). Dual-Spatial Mamba runs part-based and whole-body models in parallel, capturing both overall coordination and fine-grained joint motion. Bi-Temporal Mamba scans sequences forward and backward to encode short-term details and long-term dependencies. DSFM removes redundant temporal information, extracts complementary cues, and fuses them with spatial features to build a richer spatiotemporal representation. Experiments on the HumanML3D benchmark show that HiSTF Mamba performs well across several metrics, achieving high fidelity and tight semantic alignment between text and motion.
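To make the layout concrete, here is a minimal PyTorch sketch of the three-part structure the abstract describes: parallel part-based and whole-body spatial streams, forward/backward temporal scans, and a gated fusion step. It is a sketch under assumptions, not the paper's implementation: the Mamba SSM blocks are replaced by GRU stand-ins, and the joint grouping, feature sizes, and the gating inside `DSFM` are all hypothetical choices made here for illustration.

```python
# Hypothetical sketch of the HiSTF Mamba layout from the abstract.
# GRUs stand in for Mamba SSM blocks; part grouping and DSFM gating
# are assumptions, not the paper's actual design.
import torch
import torch.nn as nn

# Hypothetical grouping of 22 HumanML3D joints into five body parts.
PARTS = [
    [0, 1, 2, 3],          # torso / root chain
    [4, 5, 6, 7],          # left leg
    [8, 9, 10, 11],        # right leg
    [12, 13, 14, 15, 16],  # left arm + head
    [17, 18, 19, 20, 21],  # right arm
]

class DualSpatialBlock(nn.Module):
    """Part-based and whole-body spatial streams run in parallel."""
    def __init__(self, d_model: int):
        super().__init__()
        self.part_mix = nn.ModuleList(
            nn.Linear(len(p) * d_model, len(p) * d_model) for p in PARTS
        )
        self.whole_mix = nn.Linear(22 * d_model, 22 * d_model)
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, x):  # x: (B, T, J=22, C)
        B, T, J, C = x.shape
        # Whole-body stream: mix all joints jointly per frame.
        whole = self.whole_mix(x.reshape(B, T, J * C)).reshape(B, T, J, C)
        # Part-based stream: mix joints only within each body part.
        part = torch.zeros_like(x)
        for idx, mix in zip(PARTS, self.part_mix):
            chunk = x[:, :, idx, :].reshape(B, T, -1)
            part[:, :, idx, :] = mix(chunk).reshape(B, T, len(idx), C)
        return self.out(torch.cat([whole, part], dim=-1))

class BiTemporalBlock(nn.Module):
    """Forward and backward temporal scans (GRUs standing in for Mamba)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.fwd = nn.GRU(d_model, d_model, batch_first=True)
        self.bwd = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x):  # x: (B*J, T, C), scanned over time per joint
        f, _ = self.fwd(x)
        b, _ = self.bwd(torch.flip(x, dims=[1]))
        return f, torch.flip(b, dims=[1])

class DSFM(nn.Module):
    """Gate out redundant temporal features, then fuse with spatial ones."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, spatial, t_fwd, t_bwd):
        g = self.gate(torch.cat([t_fwd, t_bwd], dim=-1))  # redundancy gate
        temporal = g * t_fwd + (1.0 - g) * t_bwd          # complementary mix
        return self.fuse(torch.cat([spatial, temporal], dim=-1))

class HiSTFLayer(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.spatial = DualSpatialBlock(d_model)
        self.temporal = BiTemporalBlock(d_model)
        self.dsfm = DSFM(d_model)

    def forward(self, x):  # x: (B, T, 22, C)
        B, T, J, C = x.shape
        s = self.spatial(x)
        seq = x.permute(0, 2, 1, 3).reshape(B * J, T, C)
        f, b = self.temporal(seq)
        f = f.reshape(B, J, T, C).permute(0, 2, 1, 3)
        b = b.reshape(B, J, T, C).permute(0, 2, 1, 3)
        return self.dsfm(s, f, b)

if __name__ == "__main__":
    x = torch.randn(2, 32, 22, 64)  # (batch, frames, joints, channels)
    print(HiSTFLayer(64)(x).shape)  # torch.Size([2, 32, 22, 64])
```

The gating in `DSFM` is one plausible reading of "removes redundant temporal information": instead of concatenating both scan directions wholesale, a learned sigmoid gate takes a convex combination of the forward and backward features before fusing with the spatial stream.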