Learning Human-like Locomotion Based on Biological Actuation and Rewards (2401.15664v1)
Abstract: We propose a method of learning a policy for human-like locomotion via deep reinforcement learning based on a human anatomical model, muscle actuation, and biologically inspired rewards, without any inherent control rules or reference motions. Our main ideas are to provide a dense reward based on metabolic energy consumption at every step during the initial stages of learning and then transition to a sparse reward as learning progresses, and to adjust the initial posture of the human model to facilitate the exploration of locomotion. Additionally, we compare and analyze differences in learning outcomes across various settings that deviate from the proposed method.
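The abstract describes a dense-to-sparse reward schedule: a per-step reward derived from metabolic energy consumption early in training, annealed toward a sparse task reward later on. Below is a minimal Python sketch of such a schedule. The exponential energy-to-reward mapping, the linear annealing weight, and all parameter names (`dense_phase_end`, `energy_scale`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def locomotion_reward(metabolic_energy, task_reward, progress,
                      dense_phase_end=0.3, energy_scale=0.01):
    """Blend a dense metabolic-energy reward with a sparse task reward.

    progress: fraction of training completed, in [0, 1].
    The specific blending scheme here is an assumption for illustration.
    """
    # Dense term: reward low metabolic energy consumption at every step,
    # mapped to (0, 1] via an exponential kernel (assumed form).
    dense = np.exp(-energy_scale * metabolic_energy)

    # Annealing weight: 1 at the start of training, decaying linearly to 0
    # once training progress passes `dense_phase_end`.
    w = max(0.0, 1.0 - progress / dense_phase_end)

    # Sparse term: e.g., a bonus given only when a locomotion milestone
    # (such as a forward-distance target) is reached.
    return w * dense + (1.0 - w) * task_reward


if __name__ == "__main__":
    # Early in training (progress = 0.1): the dense energy term dominates.
    print(locomotion_reward(metabolic_energy=50.0, task_reward=0.0, progress=0.1))
    # Late in training (progress = 0.8): only the sparse task reward remains.
    print(locomotion_reward(metabolic_energy=50.0, task_reward=1.0, progress=0.8))
```

In this sketch the dense term gives the policy a learning signal at every step before it can achieve any locomotion milestone, while the annealing hands control over to the sparse objective once exploration has taken off.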