SliceIt! -- A Dual Simulator Framework for Learning Robot Food Slicing (2404.02569v2)

Published 3 Apr 2024 in cs.RO and cs.AI

Abstract: Cooking robots can enhance the home experience by reducing the burden of daily chores. However, these robots must perform their tasks dexterously and safely in shared human environments, especially when handling dangerous tools such as kitchen knives. This study focuses on enabling a robot to autonomously and safely learn food-cutting tasks. More specifically, our goal is to enable a collaborative or industrial robot arm to perform food-slicing tasks by adapting to varying material properties using compliance control. Our approach uses Reinforcement Learning (RL) to train a robot to compliantly manipulate a knife by reducing the contact forces exerted by the food items and by the cutting board. However, training the robot in the real world can be inefficient, dangerous, and wasteful of food. Therefore, we propose SliceIt!, a framework for safely and efficiently learning robot food-slicing tasks in simulation. Following a real2sim2real approach, our framework consists of collecting a small amount of real food-slicing data, calibrating our dual simulation environment (a high-fidelity cutting simulator and a robotic simulator), learning compliant control policies in the calibrated simulation environment, and finally deploying the policies on the real robot.
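The real2sim2real pipeline in the abstract can be sketched as four stages: log a few real slicing trials, calibrate the simulator against them, learn a compliant policy in the calibrated simulator, and deploy. The following is a minimal illustrative sketch, not the authors' code: the linear spring cutting model, the grid of candidate stiffnesses, the hill-descent "policy search" over a single position gain, and all function names are assumptions made here for illustration.

```python
# Illustrative sketch of the real2sim2real loop described in the abstract.
# The toy spring model and all names are assumptions, not the paper's API.

def sim_contact_force(stiffness, depth):
    # Toy cutting model: contact force grows linearly with knife depth.
    return stiffness * depth

def calibrate(real_trials, candidate_stiffnesses):
    # Stage 2: pick the simulator stiffness whose predicted forces best
    # match the (depth, force) pairs measured on the real robot.
    def sq_error(k):
        return sum((sim_contact_force(k, d) - f) ** 2 for d, f in real_trials)
    return min(candidate_stiffnesses, key=sq_error)

def learn_compliance_gain(stiffness, force_limit, depths):
    # Stage 3 (a stand-in for the RL training step): find the largest
    # position gain that keeps simulated contact forces under the safety
    # limit at every commanded depth.
    for gain in (1.0, 0.8, 0.6, 0.4, 0.2):
        if all(sim_contact_force(stiffness, gain * d) <= force_limit
               for d in depths):
            return gain
    return 0.0  # fully back off if no gain is safe

# Stage 1: a few (depth [m], force [N]) samples, as if logged on the robot.
real_trials = [(0.01, 0.52), (0.02, 1.01), (0.03, 1.49)]

k = calibrate(real_trials, candidate_stiffnesses=[10.0, 50.0, 100.0])
gain = learn_compliance_gain(k, force_limit=1.0,
                             depths=[d for d, _ in real_trials])
# Stage 4 would deploy `gain` inside the real robot's compliance controller.
print(k, gain)
```

In the actual framework, the calibration step tunes a high-fidelity cutting simulator and the policy-learning step is full RL over compliant control actions; this sketch only mirrors the data flow between the stages.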
