HemoSet: The First Blood Segmentation Dataset for Automation of Hemostasis Management (2403.16286v2)

Published 24 Mar 2024 in eess.IV and cs.CV

Abstract: Hemorrhaging occurs in surgeries of all types, forcing surgeons to quickly adapt to the visual interference that results from blood rapidly filling the surgical field. Introducing automation into the crucial surgical task of hemostasis management would offload mental and physical tasks from the surgeon and surgical assistants while simultaneously increasing the efficiency and safety of the operation. The first step in automation of hemostasis management is detection of blood in the surgical field. To propel the development of blood detection algorithms in surgeries, we present HemoSet, the first blood segmentation dataset based on bleeding during a live animal robotic surgery. Our dataset features vessel hemorrhage scenarios where turbulent flow leads to abnormal pooling geometries in surgical fields. These pools form under conditions endemic to surgical procedures: uneven, heterogeneous tissue, glossy lighting conditions, and rapid tool movement. We benchmark several state-of-the-art segmentation models and provide insight into the difficulties specific to blood detection. We intend for HemoSet to spur development of autonomous blood suction tools by providing a platform for training and refining blood segmentation models, addressing the precision needed for such robotics.
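Below is a minimal sketch of how one might train a binary blood-segmentation model on an image/mask dataset such as HemoSet. It is illustrative only, not the authors' training pipeline: the directory layout (`images/` and `masks/` folders of PNGs), the `path/to/hemoset` placeholder, the 512x512 resolution, and the U-Net with a ResNet-34 encoder are all assumptions chosen for the example, though U-Net-style models are among the families the paper benchmarks.

```python
import glob
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset

import segmentation_models_pytorch as smp  # third-party library; install separately


class BloodSegDataset(Dataset):
    """Hypothetical layout: root/images/*.png paired with binary masks in root/masks/*.png."""

    def __init__(self, root, size=(512, 512)):
        self.image_paths = sorted(glob.glob(os.path.join(root, "images", "*.png")))
        self.mask_paths = sorted(glob.glob(os.path.join(root, "masks", "*.png")))
        self.size = size

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, i):
        img = Image.open(self.image_paths[i]).convert("RGB").resize(self.size)
        mask = Image.open(self.mask_paths[i]).convert("L").resize(self.size, Image.NEAREST)
        # Scale image to [0, 1] and binarize the mask (blood vs. non-blood).
        x = torch.from_numpy(np.asarray(img).copy()).permute(2, 0, 1).float() / 255.0
        y = (torch.from_numpy(np.asarray(mask).copy()).float() / 255.0 > 0.5).float().unsqueeze(0)
        return x, y


def train(root, epochs=10, lr=1e-4):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Single-class (blood) segmentation head on an ImageNet-pretrained encoder.
    model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                     in_channels=3, classes=1).to(device)
    loader = DataLoader(BloodSegDataset(root), batch_size=4, shuffle=True, num_workers=2)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    for epoch in range(epochs):
        model.train()
        total = 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            total += loss.item() * x.size(0)
        print(f"epoch {epoch}: mean BCE loss {total / len(loader.dataset):.4f}")


if __name__ == "__main__":
    train("path/to/hemoset")  # hypothetical path; substitute the actual dataset location
```

Evaluation for this kind of benchmark is typically reported with overlap metrics such as IoU or Dice computed on the thresholded sigmoid output; the sketch above only prints the training loss for brevity.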
