
Leveraging Procedural Generation for Learning Autonomous Peg-in-Hole Assembly in Space (2405.01134v1)

Published 2 May 2024 in cs.RO, cs.AI, and cs.LG

Abstract: The ability to autonomously assemble structures is crucial for the development of future space infrastructure. However, the unpredictable conditions of space pose significant challenges for robotic systems, necessitating the development of advanced learning techniques to enable autonomous assembly. In this study, we present a novel approach for learning autonomous peg-in-hole assembly in the context of space robotics. Our focus is on enhancing the generalization and adaptability of autonomous systems through deep reinforcement learning. By integrating procedural generation and domain randomization, we train agents in a highly parallelized simulation environment across a spectrum of diverse scenarios with the aim of acquiring a robust policy. The proposed approach is evaluated using three distinct reinforcement learning algorithms to investigate the trade-offs among various paradigms. We demonstrate the adaptability of our agents to novel scenarios and assembly sequences while emphasizing the potential of leveraging advanced simulation techniques for robot learning in space. Our findings set the stage for future advancements in intelligent robotic systems capable of supporting ambitious space missions and infrastructure development beyond Earth.
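The abstract's combination of procedural generation and domain randomization can be illustrated with a minimal sketch. The class name, parameter names, and sampling ranges below are hypothetical illustrations, not the authors' actual implementation or simulator API; the idea is simply that each training episode draws a fresh peg-in-hole scenario from randomized geometric and physical parameters so the learned policy cannot overfit to a single configuration.

```python
import random
from dataclasses import dataclass


# Hypothetical scenario description; field names and ranges are
# illustrative assumptions, not taken from the paper.
@dataclass
class PegInHoleScenario:
    peg_diameter_mm: float   # procedurally varied peg size
    clearance_mm: float      # gap between peg and hole
    hole_depth_mm: float     # insertion depth target
    friction_coeff: float    # randomized contact friction
    peg_mass_kg: float       # randomized inertial property


def sample_scenario(rng: random.Random) -> PegInHoleScenario:
    """Draw one randomized scenario; ranges are illustrative only."""
    return PegInHoleScenario(
        peg_diameter_mm=rng.uniform(10.0, 40.0),
        clearance_mm=rng.uniform(0.1, 2.0),
        hole_depth_mm=rng.uniform(15.0, 60.0),
        friction_coeff=rng.uniform(0.2, 1.0),
        peg_mass_kg=rng.uniform(0.1, 2.0),
    )


if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed for reproducibility
    # In a parallelized setup, each simulated environment instance
    # would receive its own sampled scenario at every episode reset.
    for scenario in (sample_scenario(rng) for _ in range(4)):
        print(scenario)
```

In a real training loop, each sampled scenario would parameterize the simulator at episode reset, which is what gives the policy exposure to the "spectrum of diverse scenarios" the abstract describes.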

