
Can robots mold soft plastic materials by shaping depth images? (2306.09848v1)

Published 16 Jun 2023 in cs.RO

Abstract: Can robots mold soft plastic materials by shaping depth images? The short answer is no: current-day robots can't. In this article, we address the problem of shaping plastic material with an anthropomorphic arm/hand robot, which observes the material with a fixed depth camera. Robots capable of molding could assist humans in many tasks, such as cooking, scooping, or gardening. Yet the problem is complex, due to its high dimensionality at both the perception and control levels. To address it, we design three alternative data-based methods for predicting the effect of robot actions on the material. The robot can then plan the sequence of actions and their positions to mold the material into a desired shape. To make the prediction problem tractable, we rely on two original ideas. First, we prove that under reasonable assumptions, the shaping problem can be mapped from point cloud to depth image space, with many benefits (simpler processing, no need for registration, lower computation time and memory requirements). Second, we design a novel, simple metric for quickly measuring the distance between two depth images. The metric is based on the inherent point cloud representation of depth images, which enables direct and consistent comparison of image pairs through a non-uniform scaling approach, and therefore opens promising perspectives for designing depth image-based robot controllers. We assess our approach in a series of unprecedented experiments, in which a robotic arm/hand molds flour from initial to final shapes, either with its own dataset or by transfer learning from a human dataset. We conclude the article by discussing the limitations of our framework and of current-day hardware, which make human-like robot molding a challenging open research problem.
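The abstract's two key ideas are working in depth image space rather than point cloud space, and comparing two depth images through their underlying point clouds after a non-uniform (per-axis) rescaling. The sketch below is a minimal illustration of that pipeline, not the authors' exact metric: it assumes a pinhole camera with intrinsics fx, fy, cx, cy, two depth images of identical resolution, and a simple per-axis normalization followed by a mean point-wise distance.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into an (H*W) x 3 point cloud
    using a pinhole camera model. The intrinsics here are illustrative assumptions."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def non_uniform_scale(points):
    """Rescale each axis independently to [0, 1] (non-uniform scaling), so clouds
    with different extents become directly comparable."""
    mins = points.min(axis=0)
    extents = points.max(axis=0) - mins
    extents[extents == 0] = 1.0  # guard against flat axes
    return (points - mins) / extents

def depth_image_distance(depth_a, depth_b, fx, fy, cx, cy):
    """Illustrative depth-image distance: back-project both images, apply per-axis
    scaling, then average the point-wise Euclidean distance. Assumes both images
    share resolution and intrinsics; no registration step is needed."""
    pa = non_uniform_scale(depth_to_point_cloud(depth_a, fx, fy, cx, cy))
    pb = non_uniform_scale(depth_to_point_cloud(depth_b, fx, fy, cx, cy))
    return float(np.linalg.norm(pa - pb, axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = 0.5 + 0.05 * rng.random((120, 160))  # synthetic "current shape" depth image
    target = 0.5 + 0.05 * rng.random((120, 160))   # synthetic "desired shape" depth image
    print(depth_image_distance(current, target, fx=525.0, fy=525.0, cx=80.0, cy=60.0))
```

Because corresponding pixels of the two depth images map to corresponding points, such a comparison needs no explicit point cloud registration, which is one of the benefits the abstract attributes to working in depth image space.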
