
A Virtual Reality Framework for Human-Robot Collaboration in Cloth Folding (2305.07493v2)

Published 12 May 2023 in cs.RO

Abstract: We present a virtual reality (VR) framework to automate the data collection process in cloth folding tasks. The framework uses skeleton representations to help the user define the folding plans for different classes of garments, allowing for replicating the folding on unseen items of the same class. We evaluate the framework in the context of automating garment folding tasks. A quantitative analysis is performed on 3 classes of garments, demonstrating that the framework reduces the need for intervention by the user. We also compare skeleton representations with RGB and binary images in a classification task on a large dataset of clothing items, motivating the use of the framework for other classes of garments.

