Momentum HumanRig: Parametric Body Model
- Momentum HumanRig (MHR) is a parametric, anatomically motivated human body model that decouples skeletal articulation from external deformations to support robust animation and retargeting.
- It leverages a hybrid corrective system using non-linear MLP-driven pose correctives and artist-defined skinning weights to enhance physical plausibility and realism.
- Empirical evaluations show MHR outperforms models like SMPL-X in reconstruction accuracy and joint articulation, making it well suited to AR/VR graphics and robotics applications.
Momentum HumanRig (MHR) is a parametric, anatomically motivated human body model that enables expressive, physically consistent animation and motion retargeting for both graphics and robotics. Developed by Ferguson et al., MHR explicitly decouples skeletal articulation from external surface deformations, supporting semantic control, non-linear pose correctives, and real-time integration into a broad range of AR/VR, graphics, and robotics pipelines. MHR advances prior art by fusing the skeleton/shape decoupling of ATLAS with a modern corrective-driven rig and production-grade tooling, while enabling robust, temporally consistent motion representations suitable for robot retargeting and physically plausible animation (Ferguson et al., 19 Nov 2025; Tu et al., 25 Dec 2025).
1. Structural Model and Decoupling Paradigm
MHR’s foundational design is the separation of internal skeletal kinematics from external surface geometry. It adopts and extends ATLAS’s paradigm by significantly increasing anatomical coverage—with 127 joints spanning root, spine, limbs, hands, fingers, eyes, jaw—compared to ATLAS’s 77. The external surface deformation is factored into distinct semantic channels: identity (body/head/hand shape), skeletal scale, and fine-scale surface expression (e.g., FACS-style controls), enabling independent, artist-friendly manipulation of each.
MHR formalizes these latent channels as separate coefficient vectors: $\boldsymbol{\beta}$ (identity: body, 20 head, and 5 hand components), $\boldsymbol{\psi}$ (expressions), $\boldsymbol{\sigma}$ (skeleton scale), and $\boldsymbol{\theta}$ (joint pose). The surface mesh is parameterized as

$$V(\boldsymbol{\beta}, \boldsymbol{\psi}, \boldsymbol{\theta}) = \bar{V} + B_S(\boldsymbol{\beta}) + B_E(\boldsymbol{\psi}) + B_P(\boldsymbol{\theta}),$$

where $\bar{V}$ is the neutral template, $B_S(\boldsymbol{\beta})$ and $B_E(\boldsymbol{\psi})$ respectively represent shape and expression offsets, and $B_P(\boldsymbol{\theta})$ introduces pose-dependent correctives.
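A minimal PyTorch sketch of this additive parameterization follows; tensor names and basis shapes are hypothetical, since the paper does not specify an implementation:

```python
import torch

def mhr_surface(v_template, shape_basis, expr_basis, beta, psi, pose_corr):
    """Additive surface model: neutral template plus identity, expression,
    and pose-dependent offsets (hypothetical tensor layout, single subject).

    v_template:  (N, 3) neutral template vertices
    shape_basis: (N, 3, K_beta) identity blendshape basis
    expr_basis:  (N, 3, K_psi)  FACS-style expression basis
    beta, psi:   (K_beta,), (K_psi,) coefficient vectors
    pose_corr:   (N, 3) pose-dependent corrective B_P(theta)
    """
    b_s = torch.einsum('nck,k->nc', shape_basis, beta)  # identity offset B_S(beta)
    b_e = torch.einsum('nck,k->nc', expr_basis, psi)    # expression offset B_E(psi)
    return v_template + b_s + b_e + pose_corr           # rest-pose surface before LBS
```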
MHR further supports multiple levels of detail (LoDs), with artist-driven, hand-edited skinning weights (4 influences per vertex at LoDs 1–4; 8 at LoD 0), facilitating deployment from lightweight interactive settings to high-fidelity offline rendering or simulation (Ferguson et al., 19 Nov 2025).
2. Parameterization, Equations, and Kinematic Chain
The skeleton incorporates 127 joints, each defined by a translation $\mathbf{t}_j \in \mathbb{R}^3$, a rotation $\mathbf{R}_j \in SO(3)$ (Euler-XYZ), and a uniform scale $s_j$, composed with joint-specific pre-rotation and offset matrices. The hierarchical world transform is

$$\mathbf{T}_j^{\mathrm{world}} = \mathbf{T}_{p(j)}^{\mathrm{world}}\,\mathbf{T}_j^{\mathrm{off}}\,\mathbf{T}_j^{\mathrm{pre}}\,\mathbf{T}\!\left(\mathbf{t}_j, \mathbf{R}_j, s_j\right),$$

where $p(j)$ denotes the parent of joint $j$, and all degrees of freedom are packed into a pose vector $\boldsymbol{\theta}$, mapped to per-joint parameter vectors. Linear blend skinning (LBS) applies these transforms to the surface:

$$\mathbf{v}_i' = \sum_j w_{ij}\,\mathbf{T}_j^{\mathrm{world}}\big(\mathbf{T}_j^{\mathrm{bind}}\big)^{-1}\,\mathbf{v}_i,$$

which blends vertex positions using artist-defined weights $w_{ij}$, with each $\mathbf{T}_j^{\mathrm{world}}$ computed from the full joint chain.
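The forward kinematics and skinning above can be sketched in a few lines of PyTorch; the exact matrix composition conventions and function names below are assumptions for illustration, not the library's API:

```python
import torch

def forward_kinematics(local_tf, parents):
    """Compose per-joint local 4x4 transforms down the hierarchy.

    local_tf: (J, 4, 4) local transforms (offset and pre-rotation folded in)
    parents:  length-J list, topologically ordered, parents[0] == -1 (root)
    """
    world = [local_tf[0]]
    for j in range(1, len(parents)):
        world.append(world[parents[j]] @ local_tf[j])  # parent-to-child chaining
    return torch.stack(world)                          # (J, 4, 4)

def lbs(verts, weights, world, inv_bind):
    """Linear blend skinning: v_i' = sum_j w_ij T_j^world (T_j^bind)^-1 v_i.

    verts: (N, 3); weights: (N, J); world, inv_bind: (J, 4, 4)
    """
    v_h = torch.cat([verts, torch.ones(len(verts), 1)], dim=1)  # homogeneous coords
    tf = torch.einsum('nj,jab->nab', weights, world @ inv_bind) # per-vertex blend
    return torch.einsum('nab,nb->na', tf, v_h)[:, :3]
```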
Expressions are represented as 72 semantic FACS-style blendshapes (artist-sculpted), rather than a purely data-driven or PCA face basis. Sparse skeleton scale coefficients $\boldsymbol{\sigma}$ permit explicit control of limb and segment proportions for nuanced anthropometric scaling.
3. Corrective System: Sparse and Non-Linear Pose-Dependent Deformations
LBS is known to produce artifacts at highly articulated joints, most notably "candy-wrapper" twisting. To address this, MHR incorporates a hybrid corrective system: each joint $j$ is assigned a local non-linear corrective $\Delta_j \in \mathbb{R}^{N \times 3}$, where $N$ is the vertex count at a given LoD. These correctives are computed as follows (see the sketch following this list):
- Per-joint and one-ring neighbor 6D rotation deviations are computed.
- Deviations are non-linearly embedded via lightweight multi-layer perceptrons (MLPs): $\mathbf{z}_j = \mathrm{MLP}_j(\mathbf{d}_j)$, where $\mathbf{d}_j$ stacks the joint's 6D deviations.
- A learned, sparsity-regularized per-vertex mask $\mathbf{m}_j$ and decode matrix $\mathbf{W}_j$ produce the final influence:

$$\Delta_j = \mathbf{m}_j \odot (\mathbf{W}_j \mathbf{z}_j),$$

with global correction $\Delta = \sum_j \Delta_j$. Masks are initialized by geodesic proximity to the joint segment, regularized for sparsity ($L_1$ penalty), and further constrained during fitting by terms penalizing point-to-surface distance, keypoint reprojection error, mask non-sparsity, and joint-limit violations (Ferguson et al., 19 Nov 2025).
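A compact PyTorch sketch of one such per-joint corrective module; the layer widths, sigmoid-gated mask parameterization, and class name are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class JointCorrective(nn.Module):
    """Per-joint non-linear corrective: a small MLP embeds 6D rotation
    deviations, a decode matrix maps them to per-vertex offsets, and a
    sparsity-regularized per-vertex mask gates the result (single pose)."""

    def __init__(self, n_verts, in_dim, hidden=32, latent=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent))
        self.decode = nn.Linear(latent, n_verts * 3, bias=False)  # decode matrix W_j
        self.mask_logits = nn.Parameter(torch.zeros(n_verts))     # per-vertex mask m_j

    def forward(self, dev6d):
        z = self.mlp(dev6d)                      # non-linear embedding z_j
        delta = self.decode(z).view(-1, 3)       # raw per-vertex offsets, (N, 3)
        mask = torch.sigmoid(self.mask_logits)   # in (0, 1); L1-penalized in training
        return mask.unsqueeze(1) * delta         # masked corrective Delta_j

# Global correction (assumed): delta = sum of Delta_j over all joints.
```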
4. Implementation, Pipeline Integration, and Performance
MHR is implemented atop the Momentum library, with C++/Python bindings and PyTorch interoperability. The architecture supports robust export (FBX, glTF), native rig parameter serialization, and is GPU-accelerated: LBS executes as a compute shader, and the per-joint MLPs are computationally lightweight, producing negligible runtime overhead.
Weights and correctives are mapped between mesh resolutions using barycentric mapping or subdivision, ensuring consistent deformation across LoDs. On a standard desktop GPU, MHR achieves over 120 fps for full-body, 18,000-vertex animation inclusive of skinning and pose correctives. This runtime profile is suitable for both interactive AR/VR graphics and real-time robotics control (Ferguson et al., 19 Nov 2025).
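A sketch of the barycentric weight transfer between LoDs; the function signature and precomputed closest-triangle correspondences are hypothetical:

```python
import numpy as np

def transfer_skin_weights(w_src, tri_idx, bary):
    """Barycentric transfer of skinning weights from a source LoD to a target LoD.

    w_src:   (N_src, J) per-vertex weights on the source mesh
    tri_idx: (N_tgt, 3) source-vertex indices of the closest source triangle
    bary:    (N_tgt, 3) barycentric coordinates of each target vertex's projection
    """
    w = np.einsum('tk,tkj->tj', bary, w_src[tri_idx])  # blend the 3 corner weights
    return w / w.sum(axis=1, keepdims=True)            # renormalize to sum to 1
```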
5. Motion Retargeting and Physically Plausible Trajectory Recovery
MHR has been leveraged in monocular human motion retargeting pipelines, notably as an intermediate bridge from perception output to humanoid robot control (Tu et al., 25 Dec 2025). Motion captured via visual backbones (e.g., SAM 3D Body) is encoded in a low-dimensional MHR latent stack of identity, skeleton-scale, pose, and expression coefficients.
Identity ($\boldsymbol{\beta}$) and skeleton scale ($\boldsymbol{\sigma}$) are averaged over the sequence and locked, enforcing bone-length and anthropometric consistency. Per-frame pose and expression latents are refined using a sliding-window optimization that penalizes deviation from initial estimates and temporal jitter (using finite joint differences in position, velocity, rotation, and acceleration with joint-dependent smoothness weights), while maintaining boundary coherence.
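A minimal sketch of the finite-difference jitter penalty described above, applied to joint positions; the loss weights and function name are illustrative assumptions:

```python
import torch

def jitter_penalty(joints, w_vel=1.0, w_acc=1.0, joint_w=None):
    """Finite-difference temporal jitter penalty over a sliding window.

    joints:  (T, J, 3) per-frame joint positions from the MHR kinematic chain
    joint_w: (J,) optional joint-dependent smoothness weights
    """
    vel = joints[1:] - joints[:-1]        # first differences, ~velocity (T-1, J, 3)
    acc = vel[1:] - vel[:-1]              # second differences, ~acceleration (T-2, J, 3)
    jw = torch.ones(joints.shape[1]) if joint_w is None else joint_w
    jw = jw.view(1, -1, 1)                # broadcast over time and xyz
    return w_vel * (jw * vel.pow(2)).mean() + w_acc * (jw * acc.pow(2)).mean()
```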
Soft, differentiable foot-ground contact probabilities are computed per foot as a function of foot height and incorporated into a global optimization (Adam) that solves for physically plausible root trajectories in a fixed Z-up world frame. The loss penalizes foot sliding, ground penetration, and unexpected foot height under contact, and encourages temporal root smoothness with camera-motion priors.
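A sketch of the soft contact probability and associated loss terms; the sigmoid form, sharpness `k`, and height threshold `z0` are assumptions standing in for the paper's unspecified formulation:

```python
import torch

def contact_terms(foot_z, foot_xy, k=50.0, z0=0.02):
    """Soft foot-ground contact probability plus sliding/penetration terms.

    foot_z:  (T, 2) left/right foot heights in a Z-up world frame
    foot_xy: (T, 2, 2) left/right foot horizontal positions
    """
    p = torch.sigmoid(k * (z0 - foot_z))              # contact prob., ~1 near ground
    slide = (p[:-1, :, None] *
             (foot_xy[1:] - foot_xy[:-1]).pow(2)).mean()  # motion while in contact
    penetrate = torch.relu(-foot_z).pow(2).mean()     # below-ground penalty
    height = (p * torch.relu(foot_z).pow(2)).mean()   # unexpected height under contact
    return slide, penetrate, height
```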
For retargeting, a two-stage inverse kinematics (IK) solver establishes correspondence between 14 anatomically paired MHR and robot joints, aligns rotation matrices (including gravity alignment and axis conventions), then applies height-normalized scaling. Stage 1 performs end-effector IK via a damped-least-squares Jacobian solve; Stage 2 refines intermediate joints while respecting robot joint limits. All operations maintain end-to-end differentiability (Tu et al., 25 Dec 2025).
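The Stage 1 update can be illustrated with the textbook damped-least-squares step (this is the standard formula, not code from the paper):

```python
import numpy as np

def dls_step(J, err, damping=1e-2):
    """One damped-least-squares IK update: dq = J^T (J J^T + lambda^2 I)^-1 e.

    J:   (3m, n) stacked end-effector Jacobian (m targets, n joint DoFs)
    err: (3m,)   stacked position residuals (target minus current)
    """
    A = J @ J.T + (damping ** 2) * np.eye(J.shape[0])  # damped normal matrix
    return J.T @ np.linalg.solve(A, err)               # joint-angle update dq
```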
6. Empirical Evaluation and Comparative Results
On the 3DBodyTex dataset (200 high-resolution scans, two poses each), MHR compares favorably with existing parametric models. Average vertex-to-surface error (mm) as the number of shape components increases is detailed below:
| Model | 2 comps | 4 comps | 8 comps | 16 comps |
|---|---|---|---|---|
| SMPL | 4.46 | 4.43 | 4.39 | 4.32 |
| SMPL-X | 4.76 | 4.71 | 4.65 | 4.55 |
| MHR | 4.76 | 4.53 | 4.13 | 4.11 |
MHR matches SMPL-X at two shape components, achieves lower reconstruction error at all higher counts, and surpasses SMPL beyond four components, with the greatest improvements at joints with complex articulation (knees, elbows, shoulders). Qualitatively, MHR exhibits more anatomically plausible soft-tissue bulges and resolves twisting artifacts without the "candy-wrapper" effect (Ferguson et al., 19 Nov 2025).
7. Limitations and Future Directions
MHR currently omits explicit modeling of eyeball geometry, teeth, and tongue, and pose correctives/expressions are not yet conditioned on body shape. Planned extensions include shape-conditioned corrective priors (anticipated to enable more personalized surface deformation), integration of soft-tissue and clothing simulation, real-time optimization for mobile AR/VR platforms, and support for stylized character deformation. Eyeball and detailed oral articulation are also identified as priorities for future model releases (Ferguson et al., 19 Nov 2025).