
Learning Human-like Locomotion Based on Biological Actuation and Rewards

Published 28 Jan 2024 in cs.GR (arXiv:2401.15664v1)

Abstract: We propose a method for learning a human-like locomotion policy via deep reinforcement learning, based on a human anatomical model, muscle actuation, and biologically inspired rewards, without any inherent control rules or reference motions. Our main ideas are twofold: first, we provide a dense reward based on metabolic energy consumption at every step during the initial stages of learning and transition to a sparse reward as learning progresses; second, we adjust the initial posture of the human model to facilitate the exploration of locomotion. Additionally, we compare and analyze differences in learning outcomes across settings other than the proposed method.
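The dense-to-sparse reward schedule described in the abstract can be sketched as a progress-weighted blend of an energy-based penalty and a sparse task reward. The following is a minimal illustration, not the authors' implementation: the linear blending weight, the sum-of-squared-activations energy model, and the binary success signal are all assumptions.

```python
def metabolic_energy_cost(muscle_activations):
    """Dense per-step penalty: a simple stand-in for a metabolic energy
    model (here, the sum of squared muscle activations)."""
    return sum(a * a for a in muscle_activations)

def locomotion_reward(step, total_steps, muscle_activations, reached_target):
    """Blend a dense energy-based reward with a sparse task reward,
    shifting weight toward the sparse term as training progresses.
    The linear schedule is illustrative; the paper does not specify
    the exact transition."""
    progress = min(step / total_steps, 1.0)        # 0 -> 1 over training
    dense = -metabolic_energy_cost(muscle_activations)
    sparse = 1.0 if reached_target else 0.0        # sparse success signal
    return (1.0 - progress) * dense + progress * sparse
```

Early in training the dense energy term dominates, giving the policy a gradient at every step; late in training only the sparse task signal remains, so the learned gait is no longer shaped by the energy penalty alone.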

