
AMO: Adaptive Motion Optimization for Hyper-Dexterous Humanoid Whole-Body Control (2505.03738v1)

Published 6 May 2025 in cs.RO, cs.AI, and cs.LG

Abstract: Humanoid robots derive much of their dexterity from hyper-dexterous whole-body movements, enabling tasks that require a large operational workspace: such as picking objects off the ground. However, achieving these capabilities on real humanoids remains challenging due to their high degrees of freedom (DoF) and nonlinear dynamics. We propose Adaptive Motion Optimization (AMO), a framework that integrates sim-to-real reinforcement learning (RL) with trajectory optimization for real-time, adaptive whole-body control. To mitigate distribution bias in motion imitation RL, we construct a hybrid AMO dataset and train a network capable of robust, on-demand adaptation to potentially O.O.D. commands. We validate AMO in simulation and on a 29-DoF Unitree G1 humanoid robot, demonstrating superior stability and an expanded workspace compared to strong baselines. Finally, we show that AMO's consistent performance supports autonomous task execution via imitation learning, underscoring the system's versatility and robustness.

Summary

  • The paper presents AMO, a novel framework integrating reinforcement learning with trajectory optimization for adaptive whole-body control of hyper-dexterous humanoid robots using a hybrid dataset.
  • Tested on a 29-DoF humanoid, AMO demonstrated superior stability, an expanded operational workspace, and real-time generalization to out-of-distribution teleoperation commands compared to baselines.
  • AMO's capacity to support autonomous task execution through imitation learning highlights its potential as a versatile framework for both dexterous loco-manipulation and broader autonomous robotic applications.

AMO: Adaptive Motion Optimization for Hyper-Dexterous Humanoid Whole-Body Control

In the paper titled "AMO: Adaptive Motion Optimization for Hyper-Dexterous Humanoid Whole-Body Control," Jialong Li and colleagues present a novel framework designed to address the complexities of whole-body control in humanoid robots. The Adaptive Motion Optimization (AMO) framework integrates reinforcement learning (RL) with trajectory optimization, aiming to facilitate real-time, adaptive motion control for humanoid robots, particularly those with high degrees of freedom and complex dynamics.

The authors highlight the challenges inherent in achieving whole-body dexterity in humanoid robots, including nonlinear dynamics and contact-rich interactions. Addressing these issues, AMO provides an integrated solution that combines robust sim-to-real learning with trajectory optimization techniques. The framework employs a hybrid AMO dataset to counteract the distribution bias that arises in motion-imitation RL when training data consist of kinematically viable trajectories that ignore dynamic constraints. This dataset enables the training of a network capable of adapting to potentially out-of-distribution (O.O.D.) commands, ensuring versatile and robust policy learning.
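
To make the idea concrete, the following is a minimal sketch of how a command-conditioned adaptation network could be trained on such a hybrid dataset. All names, dimensions, and the architecture (e.g., AdaptationNet, the 4-dimensional torso command) are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): a small command-conditioned
# adaptation network regressed on (command, reference-motion) pairs drawn
# from a hybrid dataset. Dimensions are assumptions for clarity.
import torch
import torch.nn as nn

class AdaptationNet(nn.Module):
    """Maps a low-dimensional torso command to whole-body reference targets."""
    def __init__(self, cmd_dim: int = 4, ref_dim: int = 29, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cmd_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, ref_dim),
        )

    def forward(self, cmd: torch.Tensor) -> torch.Tensor:
        return self.net(cmd)

def train_step(model, optimizer, commands, references):
    """One supervised regression step on hybrid-dataset samples.

    commands:   (B, cmd_dim) torso commands, including randomly sampled ones
                so the model sees inputs beyond retargeted human motion.
    references: (B, ref_dim) dynamically feasible joint targets assumed to be
                produced offline by a trajectory optimizer.
    """
    pred = model(commands)
    loss = nn.functional.mse_loss(pred, references)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```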

Numerical results from simulations and real-world experiments validate AMO's effectiveness. The framework was tested on the 29-DoF Unitree G1 humanoid robot, demonstrating superior stability and an expanded operational workspace compared to established baselines such as HOVER and Opt2Skill. The AMO-enabled Unitree G1 could perform complex whole-body movements that required coordinated torso orientation adjustments and reaching across an expanded workspace, tasks that traditional frameworks struggled to handle.

Significant findings include the framework's ability to generalize to O.O.D. teleoperation commands with real-time responsiveness, showcasing its adaptability and robustness. To construct the AMO dataset, the researchers used a hybrid motion synthesis method that fuses retargeted arm trajectories with sampled torso orientations to eliminate kinematic bias, coupled with a dynamics-aware trajectory optimizer that generates feasible reference motions. The resulting dataset is tailored specifically for dexterous loco-manipulation.
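
A rough sketch of what such a hybrid data-generation pipeline might look like is given below. The sampling ranges, the function sample_torso_command, and the placeholder optimize_whole_body are hypothetical stand-ins chosen to illustrate the fusion of retargeted arm motion with independently sampled torso commands; they are not taken from the paper.

```python
# Illustrative data-generation sketch (assumptions, not the released pipeline):
# fuse retargeted arm trajectories with independently sampled torso commands,
# then keep only references a dynamics-aware optimizer deems feasible.
import numpy as np

def sample_torso_command(rng: np.random.Generator) -> np.ndarray:
    """Sample a torso command (height, roll, pitch, yaw) over broad ranges
    to break the kinematic bias of purely retargeted human data."""
    return np.array([
        rng.uniform(0.4, 0.8),    # base height [m] (assumed range)
        rng.uniform(-0.5, 0.5),   # roll  [rad]
        rng.uniform(-1.0, 1.0),   # pitch [rad]
        rng.uniform(-1.0, 1.0),   # yaw   [rad]
    ])

def build_hybrid_dataset(arm_trajectories, optimize_whole_body, n_cmds=16, seed=0):
    """Pair each retargeted arm trajectory with several random torso commands
    and solve for a dynamically feasible whole-body reference.

    `optimize_whole_body(arm_traj, torso_cmd)` is a stand-in for a
    dynamics-aware trajectory optimizer; it is assumed to return a reference
    motion, or None when no feasible solution is found.
    """
    rng = np.random.default_rng(seed)
    dataset = []
    for arm_traj in arm_trajectories:
        for _ in range(n_cmds):
            cmd = sample_torso_command(rng)
            ref = optimize_whole_body(arm_traj, cmd)
            if ref is not None:  # discard infeasible combinations
                dataset.append((cmd, ref))
    return dataset
```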

The paper concludes with discussions on the practical and theoretical implications of AMO, highlighting its capacity to support autonomous task execution through imitation learning. This capability underscores AMO's potential as a versatile whole-body control framework, extending beyond simple locomotion to hyper-dexterous manipulation tasks. Future research directions could explore deeper integration of balance-aware upper-body control mechanisms to enhance the framework's whole-body coordination capabilities.

Overall, AMO represents a significant development in humanoid robot motion control, providing a scalable and adaptable framework that navigates the complexities of high-degree-of-freedom robotic systems. The strong numerical performance and adaptability demonstrated by AMO in real-world settings suggest promising applications across various autonomous robotic platforms.