ShakingBot: Dynamic Manipulation for Bagging (2304.04558v3)

Published 7 Apr 2023 in cs.RO

Abstract: Robotic bag manipulation is complex and challenging due to the bag's deformability. Based on a dynamic manipulation strategy, we propose ShakingBot, a new framework for bagging tasks. ShakingBot uses a perception module to identify the key region of a plastic bag from arbitrary initial configurations. Guided by this segmentation, it iteratively executes a novel set of actions, including Bag Adjustment, Dual-arm Shaking, and One-arm Holding, to open the bag. The dynamic action, Dual-arm Shaking, opens the bag effectively without needing to account for its crumpled configuration. We then insert the items and lift the bag for transport. We run our method on a dual-arm robot and achieve a success rate of 21/33 for inserting at least one item across various initial bag configurations. In this work, we demonstrate the performance of dynamic shaking actions compared to quasi-static manipulation in the bagging task, and we show that our method generalizes to variations in the bag's size, pattern, and color.
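The abstract outlines a perceive-act loop: segment the bag's key region, then iterate Bag Adjustment, Dual-arm Shaking, and One-arm Holding until the bag is open, and finally insert items and lift. Below is a minimal Python sketch of that loop; the paper does not publish this API, so every class, function, and parameter name here is hypothetical.

```python
# Hypothetical sketch of the ShakingBot pipeline as described in the abstract.
# None of these names come from the paper; they only illustrate the control flow.
import numpy as np


def segment_key_region(rgb: np.ndarray) -> np.ndarray:
    """Perception module (assumed): return a binary mask of the bag's
    key region (e.g., rim/handles) from an arbitrary initial configuration."""
    raise NotImplementedError  # e.g., a semantic-segmentation network


def bag_is_open(mask: np.ndarray) -> bool:
    """Assumed opening check, e.g., thresholding the detected rim area."""
    return mask.sum() > 5_000  # placeholder threshold


def bagging_episode(camera, robot, items, max_iters: int = 10) -> bool:
    """One episode: open the bag, insert the items, lift for transport."""
    for _ in range(max_iters):
        mask = segment_key_region(camera.capture())
        if bag_is_open(mask):
            break
        robot.bag_adjustment(mask)   # quasi-static: reposition grasps on the key region
        robot.dual_arm_shaking()     # dynamic: shake with both arms to open the bag
        robot.one_arm_holding()      # keep the opening from collapsing
    else:
        return False                 # bag never opened within the iteration budget
    for item in items:
        robot.insert(item)           # place each item through the opening
    robot.lift_bag()                 # lift the filled bag for transport
    return True
```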
