
Multi Actor-Critic DDPG for Robot Action Space Decomposition: A Framework to Control Large 3D Deformation of Soft Linear Objects

Published 7 Dec 2023 in cs.RO (arXiv:2312.04308v2)

Abstract: Robotic manipulation of deformable linear objects (DLOs) has great potential for applications in diverse fields such as agriculture and industry. A major challenge, however, lies in acquiring accurate deformation models that describe the relationship between robot motion and DLO deformation. Such models are difficult to derive analytically and vary from one DLO to another. Consequently, manipulating DLOs remains difficult, particularly when the task requires large deformations and hence a highly accurate global model. To address these challenges, this paper presents MultiAC6: a new multi actor-critic framework for robot action space decomposition to control large 3D deformations of DLOs. In our approach, two deep reinforcement learning (DRL) agents orient and position a robot gripper to deform a DLO into the desired shape. Unlike previous DRL-based studies, MultiAC6 is able to bridge the sim-to-real gap, achieving large 3D deformations of up to 40 cm in real-world settings. Experimental results also show that MultiAC6 achieves a 66% higher success rate than a single-agent approach. Further experiments demonstrate that MultiAC6 generalizes well, without retraining, to DLOs of different lengths and materials.
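The action-space decomposition described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes two independent DDPG-style deterministic actors, one producing the gripper orientation command and one the translation command, whose outputs are concatenated into a full 6-DoF action. The network is stubbed with a random linear map; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubspaceAgent:
    """One actor controlling a slice of the gripper action space.

    Stands in for a trained DDPG actor network; here it is just a
    random linear map followed by tanh to keep actions bounded.
    """
    def __init__(self, obs_dim: int, act_dim: int):
        self.W = rng.standard_normal((act_dim, obs_dim)) * 0.1

    def act(self, obs: np.ndarray) -> np.ndarray:
        # Deterministic policy (DDPG-style); tanh bounds each
        # component to [-1, 1], to be scaled by the controller.
        return np.tanh(self.W @ obs)

obs_dim = 12  # e.g. tracked DLO feature points + current gripper pose
orient_agent = SubspaceAgent(obs_dim, 3)    # orientation deltas
position_agent = SubspaceAgent(obs_dim, 3)  # translation deltas

obs = rng.standard_normal(obs_dim)
# The two sub-actions are composed into one 6-DoF gripper command.
action = np.concatenate([orient_agent.act(obs), position_agent.act(obs)])
print(action.shape)  # (6,)
```

In the paper's framework each agent is trained with its own actor-critic pair, so the decomposition shapes both acting and learning; the sketch above only shows the acting side of that split.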
