
DeRi-IGP: Learning to Manipulate Rigid Objects Using Deformable Objects via Iterative Grasp-Pull (2309.04843v4)

Published 9 Sep 2023 in cs.RO

Abstract: Robotic manipulation of rigid objects via deformable linear objects (DLOs) such as ropes is an emerging field of research with applications in various rigid object transportation tasks. The few existing methods in this field suffer from limited robot action and operational space, poor generalization ability, and expensive model-based development. To address these challenges, we propose a universally applicable motion primitive called Iterative Grasp-Pull (IGP). We also introduce a novel vision-based neural policy that learns to parameterize the IGP primitive to manipulate DLOs and transport their attached rigid objects to desired goal locations. Additionally, our decentralized algorithm design allows collaboration among multiple agents to manipulate rigid objects using DLOs. We evaluated the effectiveness of our approach in both simulated and real-world environments on a variety of soft-rigid body manipulation tasks. In the real world, we also demonstrated the effectiveness of our decentralized approach through human-robot collaborative transportation of rigid objects to given goal locations, and showcased the large operational space of the IGP primitive by solving distant object acquisition tasks. Lastly, we compared our approach with several model-based and learning-based baseline methods; the results indicate that our method surpasses the other approaches by a significant margin. Project supplementary material and videos are available at: https://sites.google.com/view/deri-igp/home
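To make the primitive concrete, the sketch below illustrates how an iterative grasp-pull loop driven by a vision-based policy might be structured. This is not the paper's published code: the environment interface (observe, grasp, pull, release, object_position) and the class GraspPullPolicy are illustrative assumptions, and the real DeRi-IGP policy is a learned neural network rather than the placeholder heuristic shown here.

```python
import numpy as np


class GraspPullPolicy:
    """Stand-in for the learned vision-based policy.

    Assumption: the policy maps an image observation and a goal position to the
    IGP parameters -- a grasp point on the DLO and a pull vector. A placeholder
    heuristic replaces the neural network for the purposes of this sketch.
    """

    def predict(self, image: np.ndarray, object_pos: np.ndarray,
                goal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        grasp_point = object_pos.copy()          # placeholder: grasp the DLO near the object
        pull_vector = 0.1 * (goal - object_pos)  # placeholder: small step toward the goal
        return grasp_point, pull_vector


def iterative_grasp_pull(env, policy: GraspPullPolicy, goal: np.ndarray,
                         max_iters: int = 50, tol: float = 0.02) -> bool:
    """Repeat grasp-pull iterations until the rigid object attached to the DLO
    is within `tol` of `goal` or the iteration budget runs out.

    `env` is a hypothetical interface assumed to expose observe(), grasp(point),
    pull(vector), release(), and object_position().
    """
    for _ in range(max_iters):
        image = env.observe()                    # current camera view
        obj_pos = env.object_position()          # rigid object's current position
        if np.linalg.norm(obj_pos - goal) < tol:
            return True                          # goal reached
        grasp_point, pull_vector = policy.predict(image, obj_pos, goal)
        env.grasp(grasp_point)                   # close gripper on the DLO
        env.pull(pull_vector)                    # drag the DLO, moving the attached object
        env.release()                            # open gripper and re-plan next iteration
    return False
```

In the decentralized, multi-agent setting described in the abstract, a natural reading is that each agent runs its own grasp-pull loop on its local observations, so no central controller is required.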
