
Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC (2310.09442v2)

Published 13 Oct 2023 in cs.RO

Abstract: In the context of legged robots, adaptive behavior involves adaptive balancing and adaptive swing foot reflection. While adaptive balancing counteracts perturbations to the robot, adaptive swing foot reflection helps the robot to navigate intricate terrains without foot entrapment. In this paper, we manage to bring both aspects of adaptive behavior to quadruped locomotion by combining RL and MPC while improving the robustness and agility of blind legged locomotion. This integration leverages MPC's strength in predictive capabilities and RL's adeptness in drawing from past experiences. Unlike traditional locomotion controls that separate stance foot control and swing foot trajectory, our innovative approach unifies them, addressing their lack of synchronization. At the heart of our contribution is the synthesis of stance foot control with swing foot reflection, improving agility and robustness in locomotion with adaptive behavior. A hallmark of our approach is robust blind stair climbing through swing foot reflection. Moreover, we intentionally designed the learning module as a general plugin for different robot platforms. We trained the policy and implemented our approach on the Unitree A1 robot, achieving impressive results: a peak turn rate of 8.5 rad/s, a peak running speed of 3 m/s, and steering at a speed of 2.5 m/s. Remarkably, this framework also allows the robot to maintain stable locomotion while bearing an unexpected load of 10 kg, or 83% of its body mass. We further demonstrate the generalizability and robustness of the same policy where it realizes zero-shot transfer to different robot platforms like Go1 and AlienGo robots for load carrying. Code is made available for the use of the research community at https://github.com/DRCL-USC/RL_augmented_MPC.git

Authors (2)
  1. Yiyu Chen (8 papers)
  2. Quan Nguyen (85 papers)
Citations (4)

Summary

Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC

The paper "Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC" presents a framework that integrates Reinforcement Learning (RL) with Model Predictive Control (MPC) to enhance the locomotion capabilities of legged robots, particularly quadrupeds. The hybrid design combines RL's strength in learning from past experience with MPC's predictive control, yielding a more robust and adaptable locomotion system capable of navigating complex and uncertain terrains.
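A minimal sketch of such a hybrid control step is shown below. The paper's actual MPC solver and learned policy are not specified here; `mpc_stance_forces` and `rl_compensation` are hypothetical stand-ins that only illustrate how a learned residual could be summed with model-predictive stance forces while swing reflections pass through separately.

```python
import numpy as np

A1_MASS_KG = 12.0   # approximate Unitree A1 body mass (10 kg payload ~ 83%)
GRAVITY = 9.81

def mpc_stance_forces(gait_schedule):
    """Stand-in for the model-predictive stance-force solver.
    A real implementation would solve a finite-horizon QP over the
    robot's simplified dynamics; here we just split body weight
    evenly across the feet currently in stance."""
    n_stance = int(gait_schedule.sum())
    f_z = A1_MASS_KG * GRAVITY / max(n_stance, 1)
    return np.array([[0.0, 0.0, f_z] if c else [0.0, 0.0, 0.0]
                     for c in gait_schedule])

def rl_compensation(history):
    """Stand-in for the learned plugin: maps a history window of
    commands and feedback to per-foot force compensations and
    swing-foot reflection offsets. Returns zeros here."""
    return np.zeros((4, 3)), np.zeros((4, 3))

def control_step(gait_schedule, history):
    """One unified control step: MPC stance forces plus the learned
    residual, alongside the learned swing reflection offsets."""
    f_mpc = mpc_stance_forces(gait_schedule)
    delta_f, swing_reflex = rl_compensation(history)
    return f_mpc + delta_f, swing_reflex
```

The key design point mirrored here is that stance and swing outputs come from one module call per step, rather than two independently scheduled controllers.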

The versatility of the approach lies in unifying stance foot control and swing foot reflection, a departure from traditional controllers that treat the two separately and leave them unsynchronized. The learning module is deliberately designed as a general plugin, broadening the framework's applicability across robot platforms. It processes history windows of force commands, gait schedules, proprioceptive feedback, and desired velocities, and outputs dynamic compensations and swing foot reflections that counter uncertainties and optimize reactive motions.
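The history-window interface described above can be sketched as follows. All dimensions, the window length, and the single linear map are illustrative assumptions, not the paper's network; the point is the I/O contract: a fixed-length buffer of (force command, gait schedule, proprioception, desired velocity) tuples in, force compensations and swing reflections out.

```python
import numpy as np
from collections import deque

class HistoryPolicy:
    """Hypothetical sketch of the plugin learning module.
    obs_dim = 12 (force cmd) + 4 (gait) + 18 (proprio) + 3 (vel) = 37;
    act_dim = 12 (force compensation) + 12 (swing reflection) = 24."""

    def __init__(self, obs_dim=37, horizon=10, act_dim=24, seed=0):
        self.horizon = horizon
        self.buffer = deque(maxlen=horizon)
        rng = np.random.default_rng(seed)
        # A trained network would replace this random linear map.
        self.W = rng.standard_normal((act_dim, obs_dim * horizon)) * 0.01

    def observe(self, force_cmd, gait_phase, proprio, vel_des):
        """Append one timestep of commands and feedback to the window."""
        self.buffer.append(
            np.concatenate([force_cmd, gait_phase, proprio, vel_des]))

    def act(self):
        """Flatten the (zero-padded) history window and split the output
        into per-foot force compensations and swing reflection offsets."""
        if not self.buffer:
            return np.zeros((4, 3)), np.zeros((4, 3))
        pad = self.horizon - len(self.buffer)
        obs = list(self.buffer) + [np.zeros_like(self.buffer[-1])] * pad
        out = self.W @ np.concatenate(obs)
        return out[:12].reshape(4, 3), out[12:].reshape(4, 3)
```

Because the interface depends only on these generic signals rather than robot-specific state, the same policy can, as the paper reports, transfer across platforms such as Go1 and AlienGo.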

Key Achievements and Numerical Results

The authors provide strong empirical evidence for the RL-augmented MPC framework through a series of high-performance maneuvers and adaptability tests on different platforms and terrains. Notably, the Unitree A1 robot achieved a peak turn rate of 8.5 rad/s, a peak running speed of 3 m/s, and stable steering at 2.5 m/s. The robot also maintained stable locomotion under an unexpected load of 10 kg, equivalent to 83% of its body mass. Adaptability was further demonstrated through successful zero-shot policy transfer to other quadrupedal robots, including the Go1 and AlienGo.

Theoretical and Practical Implications

From a theoretical perspective, the integration of RL and MPC into a unified framework challenges the conventional decoupled approach to legged robot control, offering a pathway for more seamless adaptation to dynamic environments. The paper highlights the significance of synthesizing predictive capabilities and adaptive behavior, thereby paving the way for further research in perceptive and adaptive control for robotics.

Practically, the generalizability of the learning module suggests potential applications in various domains requiring adaptable robotic systems, such as search-and-rescue, agricultural robotics, and autonomous exploration, where environments are unpredictable and diverse.

Future Directions

The results open several avenues for future research. One direction involves extending the framework to include perceptive capabilities, enabling the robot to proactively plan foot placements by integrating environmental cues. Further development could involve enhancing the computational efficiency of real-time control frameworks, potentially broadening the applicability of this approach to a wider range of robotic forms and scales.

Moreover, exploring how this hybrid framework performs under even larger dynamic uncertainties could yield insights into the scalability and robustness of adaptive behavior modules in complex robotic systems. Such efforts would substantially extend the framework's contribution toward intelligent, agile, and resilient robots.

The provision of code and the generalizable nature of the RL component also encourages collaboration and peer engagement, facilitating further advances built upon this research.
