MORPHeus: a Multimodal One-armed Robot-assisted Peeling System with Human Users In-the-loop (2404.06570v2)
Abstract: Meal preparation is an important instrumental activity of daily living (IADL). While existing research has explored robotic assistance in meal-preparation tasks such as cutting and cooking, the crucial task of peeling has received less attention. Robot-assisted peeling is conventionally a bimanual task, and deploying two wheelchair-mounted robot arms in the homes of care recipients is difficult due to ergonomic and transfer challenges. This paper introduces a robot-assisted peeling system that uses a single robotic arm and an assistive cutting board, inspired by the way individuals with one functional hand prepare meals. Our system incorporates a multimodal active perception module to determine whether an area on the food is peeled, a human-in-the-loop long-horizon planner that performs task planning while catering to a user's preference for peeling coverage, and a compliant controller to peel the food items. We demonstrate the system on 12 food items representing the extremes of shape, size, skin thickness, surface texture, skin-to-flesh color contrast, and deformability.
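The perceive-plan-act loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the patch representation, the `is_peeled` stand-in (which replaces the real multimodal perception module fusing vision, force, and audio), and the greedy patch selection are all hypothetical simplifications chosen for illustration.

```python
def is_peeled(patch):
    """Stand-in for the multimodal active perception module.

    The real system fuses sensing modalities to classify a surface
    region; here a patch is simply a string label.
    """
    return patch == "peeled"


def plan_and_peel(patches, coverage_pref):
    """Peel patches until the user's coverage preference is satisfied.

    patches: list of "peeled"/"unpeeled" labels for surface regions.
    coverage_pref: fraction of the surface the user wants peeled (0-1).
    Returns the final peeled fraction.
    """
    total = len(patches)
    while sum(is_peeled(p) for p in patches) / total < coverage_pref:
        # Pick the next unpeeled patch; a real long-horizon planner
        # would reason over reachability and user preference instead.
        idx = next(i for i, p in enumerate(patches) if not is_peeled(p))
        # The compliant controller would execute the peeling stroke here.
        patches[idx] = "peeled"
    return sum(is_peeled(p) for p in patches) / total
```

The loop stops as soon as the peeled fraction meets the user's preference, mirroring the paper's point that full coverage is not always the goal: the planner caters to per-user coverage preferences rather than peeling exhaustively.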