Multi-Task Learning of Active Fault-Tolerant Controller for Leg Failures in Quadruped Robots (2402.08996v1)

Published 14 Feb 2024 in cs.RO

Abstract: Electric quadruped robots used in outdoor exploration are susceptible to leg-related electrical or mechanical failures. Unexpected joint power loss and joint locking can immediately pose a falling threat. Typically, controllers lack the capability to actively sense the condition of their own joints and take proactive actions. Maintaining the original motion patterns could lead to disastrous consequences, as the controller may produce irrational output within a short time, further creating the risk of serious physical injury. This paper presents a hierarchical fault-tolerant control scheme employing a multi-task training architecture capable of actively perceiving and overcoming two types of leg joint faults. The architecture trains three joint task policies in parallel, for the healthy, power-loss, and locked-joint scenarios, and introduces a symmetric reflection initialization technique to ensure rapid and stable gait skill transformations. Experiments demonstrate that the control scheme is robust in unexpected scenarios where a single leg experiences concurrent faults in two joints. Furthermore, the policy retains the robot's planar mobility, enabling rough velocity tracking. Finally, zero-shot Sim2Real transfer is achieved on the real-world SOLO8 robot, countering both electrical and mechanical failures.
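The hierarchical scheme described above can be pictured as a fault-perception layer that classifies each joint's condition and a dispatch layer that hands control to the matching task policy. The following is a minimal, hypothetical sketch of that structure; the class names, thresholds, and the rule-based fault detector are assumptions for illustration only (the paper uses learned policies that perceive faults, not hand-written thresholds):

```python
# Hypothetical fault labels mirroring the three scenarios in the abstract:
# healthy joints, joint power loss, and joint locking.
HEALTHY, POWER_LOSS, LOCKED = 0, 1, 2

def detect_joint_fault(cmd_torque, meas_torque, joint_vel,
                       torque_tol=0.05, vel_tol=1e-3):
    """Toy per-joint fault classifier (NOT the paper's learned perception).

    - Power loss: a large torque is commanded but almost none is measured.
    - Locked: torque is transmitted but the joint barely moves.
    """
    if abs(cmd_torque) > torque_tol and abs(meas_torque) < torque_tol:
        return POWER_LOSS
    if abs(meas_torque) > torque_tol and abs(joint_vel) < vel_tol:
        return LOCKED
    return HEALTHY

class HierarchicalFaultTolerantController:
    """Dispatches to one of three task policies based on perceived faults."""

    def __init__(self, policies):
        # policies: dict mapping fault label -> callable(obs) -> action
        self.policies = policies

    def act(self, obs, cmd_torques, meas_torques, joint_vels):
        faults = [detect_joint_fault(c, m, v)
                  for c, m, v in zip(cmd_torques, meas_torques, joint_vels)]
        # Toy priority rule: escalate to the most severe fault present.
        mode = max(faults)
        return self.policies[mode](obs)
```

In the paper the three policies are trained simultaneously (with symmetric reflection initialization to transfer gait skills between legs), so switching between them at a detected fault is fast and stable; this sketch only illustrates the selection structure, not the training.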
