
SpaceOctopus: An Octopus-inspired Motion Planning Framework for Multi-arm Space Robot (2403.08219v1)

Published 13 Mar 2024 in cs.RO

Abstract: Space robots play a critical role in autonomous on-orbit maintenance and space-debris removal. Multi-arm space robots can efficiently complete target-capture and base-reorientation tasks thanks to their flexibility and the collaboration between arms. However, the complex coupling between the multiple arms and the free-floating base poses challenges for motion planning. We observe that the octopus elegantly achieves similar goals when grabbing prey and escaping from danger. Inspired by the distributed control of an octopus's limbs, we develop a multi-level decentralized motion planning framework to coordinate the different arms of a space robot. This framework integrates naturally with the multi-agent reinforcement learning (MARL) paradigm. The results indicate that our method outperforms the previous centralized-training approach. Leveraging the flexibility of the decentralized framework, we reassemble policies trained for different tasks, enabling the space robot to complete trajectory planning while adjusting its base attitude without further learning. Furthermore, our experiments confirm the superior robustness of our method under external disturbances, varying base masses, and even the failure of one arm.
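The decentralized idea in the abstract, where each arm is its own agent acting on local observations and trained policies can be recombined across tasks, can be illustrated with a minimal sketch. This is not the paper's implementation: the names (`ArmPolicy`, `plan_step`) and the assumption that each arm observes only its own joint state are illustrative.

```python
# Hypothetical sketch of decentralized per-arm policies (an illustration of
# the general MARL layout described in the abstract, not the paper's code).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ArmPolicy:
    """One agent: maps a local (per-arm) observation to joint commands."""
    name: str
    act: Callable[[List[float]], List[float]]

def plan_step(policies: Dict[str, ArmPolicy],
              local_obs: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Decentralized execution: each arm acts only on its own observation."""
    return {arm: policies[arm].act(local_obs[arm]) for arm in policies}

# Two toy policies standing in for networks trained on different tasks.
# "Reassembly" means mixing them in one team without retraining, mirroring
# the policy-recombination idea from the abstract.
reach = ArmPolicy("reach", lambda obs: [0.1 * x for x in obs])
stabilize = ArmPolicy("stabilize", lambda obs: [-0.05 * x for x in obs])

team = {"arm_0": reach, "arm_1": reach, "arm_2": stabilize, "arm_3": stabilize}
actions = plan_step(team, {arm: [1.0, 2.0] for arm in team})
```

Because each policy only consumes its own arm's observation, swapping one arm's policy (or dropping a failed arm from `team`) leaves the others untouched, which is the structural property behind the robustness and recombination claims.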

Authors (5)
  1. Wenbo Zhao
  2. Shengjie Wang
  3. Yixuan Fan
  4. Yang Gao
  5. Tao Zhang

