
Adaptive Control Strategy for Quadruped Robots in Actuator Degradation Scenarios (2312.17606v1)

Published 29 Dec 2023 in cs.RO, cs.AI, and cs.LG

Abstract: Quadruped robots are highly adaptable to extreme environments but may still experience faults. Once a fault occurs, the robot must be repaired before returning to its task, reducing its practical usefulness. A prevalent fault is actuator degradation, stemming from factors such as device aging or unexpected operational events. Traditionally, this problem has been addressed with intricate fault-tolerant designs, which demand deep domain expertise from developers and lack generalizability. Learning-based approaches offer effective ways to mitigate these limitations, but a research gap remains in deploying such methods on real-world quadruped robots. This paper introduces a pioneering teacher-student framework rooted in reinforcement learning, named Actuator Degradation Adaptation Transformer (ADAPT), aimed at closing this gap. The framework produces a unified control strategy that enables the robot to sustain locomotion and perform tasks despite sudden joint-actuator faults, relying exclusively on its internal sensors. Empirical evaluations on the Unitree A1 platform validate the deployability and effectiveness of ADAPT on real-world quadruped robots and affirm the robustness and practicality of the approach.
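The abstract describes a teacher-student setup: a privileged teacher policy (which can observe the true actuator condition in simulation) is distilled into a student policy that must act from internal sensing alone. The paper's implementation details are not given here, so the following is a minimal illustrative sketch, not the authors' method: linear policies, a hypothetical per-joint torque-gain model of degradation, and plain behavioral cloning as the distillation objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_actuator_degradation(torque, gain):
    # Hypothetical degradation model: a per-joint torque scaling in [0, 1];
    # gain = 1.0 is a healthy actuator, gain = 0.0 is a fully failed joint.
    return torque * gain

# Toy linear policies. The teacher sees privileged information (the true
# degradation gains); the student sees only the proprioceptive observation.
obs_dim, priv_dim, act_dim = 12, 4, 4
W_teacher = rng.normal(size=(act_dim, obs_dim + priv_dim)) * 0.1
W_student = np.zeros((act_dim, obs_dim))

def teacher_act(obs, gains):
    return W_teacher @ np.concatenate([obs, gains])

def student_act(obs):
    return W_student @ obs

# Distill the teacher into the student with simple behavioral cloning:
# minimize the squared error between student and teacher actions on
# randomly sampled observations and degradation gains.
lr = 0.05
for step in range(2000):
    obs = rng.normal(size=obs_dim)
    gains = rng.uniform(0.0, 1.0, size=priv_dim)  # random per-joint degradation
    target = teacher_act(obs, gains)
    pred = student_act(obs)
    grad = np.outer(pred - target, obs)           # gradient of 0.5 * MSE
    W_student -= lr * grad
```

Because the student cannot observe the gains, it can only recover the observation-dependent part of the teacher's behavior; the paper's actual student instead infers the actuator state from a history of proprioceptive readings (hence the Transformer in ADAPT), which this sketch deliberately omits.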

