World Models for General Surgical Grasping (2405.17940v1)
Abstract: Intelligent vision control systems for surgical robots should adapt to unknown and diverse objects while remaining robust to system disturbances. Previous methods did not meet these requirements, mainly because they rely on pose estimation and feature tracking. We propose "Grasp Anything for Surgery" (GAS), a world-model-based deep reinforcement learning framework that learns a pixel-level visuomotor policy for surgical grasping, improving both generality and robustness. In particular, we propose a novel method that estimates the values and uncertainties of depth pixels in the inaccurate region of a rigid-link object based on an empirical prior of the object's size; the depth and mask images of the task objects are then encoded into a single compact 3-channel image (size: 64x64x3) by dynamically zooming in on the mask regions, minimizing information loss. The learned controller is evaluated extensively both in simulation and on a real robot. Our learned visuomotor policy handles: i) unseen objects, including 5 types of target grasping objects and a robot gripper, in unstructured real-world surgery environments, and ii) disturbances in perception and control. Notably, ours is the first work to achieve a unified surgical control system that grasps diverse surgical objects using different robot grippers on real robots in complex surgery scenes (average success rate: 69%). Our system also demonstrates significant robustness across 6 conditions, including background variation, target disturbance, camera pose variation, kinematic control error, image noise, and re-grasping after the gripped target object drops from the gripper. Videos and code can be found on our project page: https://linhongbin.github.io/gas/.
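The observation-encoding step described in the abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions: the function names (`zoom_to_mask`, `encode_observation`), the margin parameter, the use of OpenCV for resizing, and the channel layout (depth, gripper mask, target mask) are hypothetical placeholders for the idea of dynamically zooming in on the mask regions, not the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV, used here only for resizing (our choice, an assumption)

def zoom_to_mask(image, mask, margin=4, out_size=64):
    """Crop `image` to the bounding box of the binary `mask` (plus a small
    margin) and resize the crop to out_size x out_size ("dynamic zooming")."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:  # empty mask: fall back to the full image
        crop = image
    else:
        y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
        x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
        crop = image[y0:y1, x0:x1]
    return cv2.resize(crop.astype(np.float32), (out_size, out_size),
                      interpolation=cv2.INTER_NEAREST)

def encode_observation(depth, gripper_mask, target_mask, out_size=64):
    """Pack zoomed depth and object masks into a single out_size x out_size x 3
    image, in the spirit of the compact encoding described in the abstract."""
    union_mask = (gripper_mask.astype(bool) | target_mask.astype(bool))
    ch_depth   = zoom_to_mask(depth, union_mask, out_size=out_size)
    ch_gripper = zoom_to_mask(gripper_mask.astype(np.float32), union_mask, out_size=out_size)
    ch_target  = zoom_to_mask(target_mask.astype(np.float32), union_mask, out_size=out_size)
    return np.stack([ch_depth, ch_gripper, ch_target], axis=-1)  # shape (64, 64, 3)
```

In such a scheme, the 64x64x3 image would serve as the visual observation fed to the world-model encoder, with the zoom recomputed at every control step so the task objects stay centered and large in the frame.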
Authors: Hongbin Lin, Bin Li, Chun Wai Wong, Juan Rojas, Xiangyu Chu, Kwok Wai Samuel Au