Dense 3D Reconstruction Through Lidar: A Comparative Study on Ex-vivo Porcine Tissue (2401.10709v1)

Published 19 Jan 2024 in eess.IV, cs.CV, and cs.RO

Abstract: New sensing technologies and more advanced processing algorithms are transforming computer-integrated surgery. While researchers are actively investigating depth sensing and 3D reconstruction for vision-based surgical assistance, it remains difficult to achieve real-time, accurate, and robust 3D representations of the abdominal cavity for minimally invasive surgery. Thus, this work uses quantitative testing on fresh ex-vivo porcine tissue to thoroughly characterize the quality with which a 3D laser-based time-of-flight sensor (lidar) can perform anatomical surface reconstruction. Ground-truth surface shapes are captured with a commercial laser scanner, and the resulting signed error fields are analyzed using rigorous statistical tools. When compared to modern learning-based stereo matching from endoscopic images, time-of-flight sensing demonstrates higher precision, lower processing delay, higher frame rate, and superior robustness against sensor distance and poor illumination. Furthermore, we report on the potential negative effect of near-infrared light penetration on the accuracy of lidar measurements across different tissue samples, identifying a significant measured depth offset for muscle in contrast to fat and liver. Our findings highlight the potential of lidar for intraoperative 3D perception and point toward new methods that combine complementary time-of-flight and spectral imaging.
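
The signed error fields mentioned in the abstract compare each reconstructed lidar point against the laser-scanned ground-truth surface, with the sign distinguishing points measured in front of the surface from points measured beyond it (as near-infrared penetration into tissue would produce). Below is a minimal sketch of one way such a field could be computed, assuming pre-registered point clouds and ground-truth surface normals; the function name, sign convention, and synthetic sphere data are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the authors' code) of computing a signed error
# field between a registered lidar point cloud and a ground-truth surface.
# The sign convention (positive = in front of the surface, negative =
# measured beyond it, as sub-surface NIR penetration would produce) and
# all names below are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def signed_error_field(lidar_pts, gt_pts, gt_normals):
    """Signed nearest-surface distance for each lidar point.

    lidar_pts:  (N, 3) lidar points, already registered to the ground truth
    gt_pts:     (M, 3) surface samples from the ground-truth laser scan
    gt_normals: (M, 3) outward unit normals at those samples
    """
    tree = cKDTree(gt_pts)
    dist, idx = tree.query(lidar_pts)      # unsigned distance to nearest sample
    offset = lidar_pts - gt_pts[idx]       # vector from surface to lidar point
    side = np.einsum("ij,ij->i", offset, gt_normals[idx])
    return np.sign(side) * dist            # negative: beyond the surface

# Synthetic demo: noisy "measurements" of a unit sphere.
rng = np.random.default_rng(0)
dirs = rng.standard_normal((1000, 3))
gt = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # points on the sphere
normals = gt.copy()                                      # outward sphere normals
lidar = gt + 0.01 * rng.standard_normal(gt.shape)
errors = signed_error_field(lidar, gt, normals)
print(f"mean signed error: {errors.mean():+.5f}")        # ~0 for unbiased noise
```

A systematic negative shift of this per-point error distribution for one tissue type (muscle versus fat and liver in the study) is exactly the kind of measured depth offset the abstract attributes to near-infrared light penetration.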
