
Reinforcement Learning of CPG-regulated Locomotion Controller for a Soft Snake Robot (2207.04899v2)

Published 11 Jul 2022 in cs.RO and math.DS

Abstract: Intelligent control of soft robots is challenging due to their nonlinear and difficult-to-model dynamics. One promising model-free approach for soft robot control is reinforcement learning (RL). However, model-free RL methods tend to be computationally expensive and data-inefficient, and may not yield natural and smooth locomotion patterns for soft robots. In this work, we develop a bio-inspired, learning-based goal-tracking controller for a soft snake robot. The controller is composed of two modules: an RL module that learns goal-tracking behaviors under the unmodeled and stochastic dynamics of the robot, and a central pattern generator (CPG) built from Matsuoka oscillators that generates stable and diverse locomotion patterns. We theoretically investigate the maneuverability of the Matsuoka CPG's oscillation bias, frequency, and amplitude for steering control, velocity control, and sim-to-real adaptation of the soft snake robot. Based on this analysis, we propose a composition of the RL and CPG modules in which the RL module regulates the tonic inputs to the CPG system given state feedback from the robot, and the output of the CPG module is then transformed into pressure inputs to the pneumatic actuators of the soft snake robot. This design allows the RL agent to naturally learn to entrain the desired locomotion patterns determined by the CPG's maneuverability. We validate the optimality and robustness of the control design in both simulation and real experiments, and perform extensive comparisons with state-of-the-art RL methods to demonstrate the benefits of our bio-inspired control design.
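To make the controller composition concrete, the sketch below implements the core dynamics the abstract references: a two-neuron Matsuoka oscillator (mutually inhibiting neurons with adaptation) driven by tonic inputs u, which in the paper's design would be set by the RL policy's actions. This is a minimal Python sketch assuming Matsuoka's standard formulation; the bias_to_pressures mapping and all parameter values (tau_r, tau_a, beta, a, p_max) are illustrative placeholders, not the authors' implementation.

    import numpy as np

    def matsuoka_step(x, f, u, dt=0.01, tau_r=0.25, tau_a=0.5, beta=2.5, a=2.5):
        """One Euler step of a two-neuron (flexor/extensor) Matsuoka oscillator.

        x: membrane potentials, shape (2,); f: adaptation (fatigue) states, shape (2,).
        u: tonic inputs, shape (2,) -- the quantity the paper's RL policy regulates.
        All parameter values are illustrative assumptions, not the paper's.
        """
        y = np.maximum(x, 0.0)                            # rectified firing rates
        dx = (-x - beta * f - a * y[::-1] + u) / tau_r    # mutual inhibition between the pair
        df = (y - f) / tau_a                              # slow self-adaptation (fatigue)
        return x + dt * dx, f + dt * df

    def bias_to_pressures(x, p_max=10.0):
        """Map the oscillator's bias signal y1 - y2 to a pair of antagonistic
        chamber pressures (hypothetical linear mapping, for illustration only)."""
        bias = max(x[0], 0.0) - max(x[1], 0.0)
        return p_max * max(bias, 0.0), p_max * max(-bias, 0.0)

    # Example loop: a fixed tonic input stands in for the RL policy's action.
    x, f = np.array([0.1, 0.0]), np.zeros(2)
    u = np.array([1.0, 1.0])
    for t in range(3000):
        x, f = matsuoka_step(x, f, u)
        p_left, p_right = bias_to_pressures(x)           # would be sent to the pneumatic valves

In this arrangement, steering and velocity control fall out of how the tonic inputs shift the oscillation's bias, frequency, and amplitude, which is the maneuverability the paper analyzes.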
