
Robot Parkour Learning (2309.05665v2)

Published 11 Sep 2023 in cs.RO, cs.AI, cs.CV, and cs.LG

Abstract: Parkour is a grand challenge for legged locomotion that requires robots to overcome various obstacles rapidly in complex environments. Existing methods can generate either diverse but blind locomotion skills or vision-based but specialized skills by using reference animal data or complex rewards. However, autonomous parkour requires robots to learn generalizable skills that are both vision-based and diverse to perceive and react to various scenarios. In this work, we propose a system for learning a single end-to-end vision-based parkour policy of diverse parkour skills using a simple reward without any reference motion data. We develop a reinforcement learning method inspired by direct collocation to generate parkour skills, including climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezing through thin slits, and running. We distill these skills into a single vision-based parkour policy and transfer it to a quadrupedal robot using its egocentric depth camera. We demonstrate that our system can empower two different low-cost robots to autonomously select and execute appropriate parkour skills to traverse challenging real-world environments.

Overview of "Robot Parkour Learning"

The paper "Robot Parkour Learning" presents a framework for the autonomous acquisition of diverse parkour skills by low-cost quadrupedal robots, using a vision-based end-to-end learning system. It addresses the limitations of prior methods, which produce either blind but diverse skills or vision-based but specialized ones, by learning generalizable skills that adapt to varied environments without reference motion data. The focus is on leveraging reinforcement learning (RL) to generate robust locomotion skills, enabling agile movements such as climbing high obstacles, leaping over gaps, crawling under barriers, squeezing through tight slits, and running.

Methodology

The authors propose a two-stage reinforcement learning approach inspired by direct collocation. The first stage is RL pre-training with soft dynamics constraints, which allows the robot to penetrate obstacles in simulation; relaxing the physics in this way eases the hard exploration problem, letting the policy discover approximate traversal motions before realistic contact is enforced. The second stage fine-tunes each skill with hard dynamics constraints, refining the behaviors under realistic physics. Each skill is trained with a simple reward function that encourages sustained forward motion while minimizing mechanical energy use.
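The simple reward described above can be sketched as a per-step function combining forward-motion tracking, an energy penalty, and (during the soft-dynamics stage) a penetration penalty. The coefficients and function names below are illustrative assumptions, not the paper's tuned terms:

```python
import numpy as np

def parkour_reward(forward_vel, target_vel, joint_torques, joint_vels,
                   penetration_depths, alpha_energy=0.005, alpha_pen=1.0):
    """Illustrative per-step reward (placeholder coefficients):
    track a forward speed target, penalize mechanical energy, and
    penalize obstacle penetration while soft dynamics are active."""
    r_forward = -abs(forward_vel - target_vel)                        # sustain forward motion
    r_energy = -alpha_energy * np.sum(np.abs(joint_torques * joint_vels))
    r_penetration = -alpha_pen * np.sum(penetration_depths)           # zero once hard dynamics apply
    return r_forward + r_energy + r_penetration
```

Under hard dynamics the penetration depths are all zero, so the last term vanishes and only the motion and energy terms remain.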

To produce a unified policy that can autonomously select and execute parkour skills, the paper uses a distillation process based on DAgger. This technique distills the specialist skill policies into a single vision-based policy that operates across varied terrains using only onboard sensing and computation, demonstrating robust sim-to-real transfer.
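The DAgger-style distillation can be illustrated on a toy problem: the student drives the rollout, every visited state is labeled by the matching specialist teacher, and the student is refit on the aggregated dataset. The linear teachers, the student features, and the dynamics below are all simplified stand-ins for this sketch, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two specialist skill policies, each an affine map of
# the full state; the real system uses privileged-state RL policies.
def teacher_climb(state):                    # specialist for "climb" terrain
    return 2.0 * state[0] + state[1]

def teacher_crawl(state):                    # specialist for "crawl" terrain
    return -1.0 * state[0] + 0.5 * state[1]

def observe(state):
    # Student features: base state plus terrain interactions, so a single
    # linear student can represent both skills in this toy setting.
    s0, s1, t = state
    return np.array([s0, s1, t * s0, t * s1])

class LinearStudent:
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def act(self, obs):
        return float(self.w @ obs)
    def fit(self, data):
        X = np.array([obs for obs, _ in data])
        y = np.array([a for _, a in data])
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

def dagger_distill(student, n_iters=5, horizon=200):
    """DAgger loop: the student's actions drive the rollout, each visited
    state is labeled with the appropriate teacher's action, and the student
    is refit on the aggregated dataset every iteration."""
    dataset = []
    for _ in range(n_iters):
        state = np.array([0.5, -0.3, float(rng.integers(2))])
        for _ in range(horizon):
            teacher = teacher_climb if state[2] == 0 else teacher_crawl
            dataset.append((observe(state), teacher(state)))   # expert label
            a = student.act(observe(state))                    # student action drives dynamics
            state = np.array([np.tanh(state[0] + 0.1 * a) + 0.1 * rng.normal(),
                              np.tanh(state[1] - 0.05 * a) + 0.1 * rng.normal(),
                              float(rng.integers(2))])         # terrain switches randomly
        student.fit(dataset)
    return student
```

Because the student visits states under its own policy rather than the teachers', the aggregated dataset covers the distribution the deployed policy actually encounters, which is the key property DAgger adds over plain behavior cloning.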

Numerical Results and Claims

The system was tested on two low-cost quadrupedal robots, the A1 and Go1, and enabled them to autonomously navigate complex environments. The robots climbed obstacles 1.53 times their height, leaped gaps 1.5 times their length, crawled under barriers 76% of their height, and squeezed through slits narrower than their width. The system is released as an open-source platform intended to encourage further development and deployment of agile locomotion policies in real-world scenarios.

Implications and Future Directions

This research provides both practical and theoretical insights into robot autonomy in unstructured environments. The demonstrated results suggest applications in search and rescue, surveillance, and exploratory missions where robots must independently assess and react to diverse obstacles. The two-stage RL approach reduces the exploration burden of learning complex motor behaviors, offering a foundation for future work on autonomously discovering and mastering new skills through simulated experience. Future developments may integrate advanced 3D vision and graphics technologies to automate the construction of training environments, enabling the acquisition of new skills directly from large-scale real-world data.

Overall, the work advances the understanding of agile robot locomotion and presents a practical, scalable methodology for deploying parkour skills, pushing the boundaries of low-cost robot capabilities in dynamic environments.

Authors (7)
  1. Ziwen Zhuang
  2. Zipeng Fu
  3. Jianren Wang
  4. Christopher Atkeson
  5. Soeren Schwertfeger
  6. Chelsea Finn
  7. Hang Zhao
Citations (110)