
DD-VNB: A Depth-based Dual-Loop Framework for Real-time Visually Navigated Bronchoscopy

Published 4 Mar 2024 in cs.CV (arXiv:2403.01683v2)

Abstract: Real-time 6-DOF localization of bronchoscopes is crucial for enhancing intervention quality. However, current vision-based technologies struggle to balance generalization to unseen data against computational speed. In this study, we propose a Depth-based Dual-Loop framework for real-time Visually Navigated Bronchoscopy (DD-VNB) that generalizes across patient cases without the need for retraining. The DD-VNB framework integrates two key modules: depth estimation and dual-loop localization. To address the domain gap among patients, we propose a knowledge-embedded depth estimation network that maps endoscope frames to depth, ensuring generalization by eliminating patient-specific textures. The network embeds view-synthesis knowledge into a cycle adversarial architecture for scale-constrained monocular depth estimation. For real-time performance, our localization module embeds a fast ego-motion estimation network into the loop of depth registration. The ego-motion inference network estimates the pose change of the bronchoscope at high frequency, while depth registration against the pre-operative 3D model provides the absolute pose periodically. Specifically, the relative pose changes are fed into the registration process as the initial guess to boost its accuracy and speed. Experiments on phantom and in-vivo data from patients demonstrate the effectiveness of our framework: 1) monocular depth estimation outperforms SOTA, 2) localization achieves an Absolute Tracking Error (ATE) of 4.7 $\pm$ 3.17 mm on phantom data and 6.49 $\pm$ 3.88 mm on patient data, 3) the system runs at a frame rate approaching video capture speed, and 4) no case-wise network retraining is required. The framework's superior speed and accuracy demonstrate its promising clinical potential for real-time bronchoscopic navigation.
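The dual-loop scheme described above can be sketched as a simple scheduling loop: a fast inner loop accumulates relative pose changes from the ego-motion network every frame, and a slower outer loop periodically replaces the accumulated estimate with an absolute pose from depth registration, warm-started by that estimate. This is a minimal illustration under stated assumptions, not the paper's implementation; `ego_motion_net` and `depth_registration` are hypothetical stubs standing in for the learned network and the registration optimizer.

```python
import numpy as np

def ego_motion_net(prev_frame, frame):
    # Hypothetical stub for the fast ego-motion network: returns the
    # relative 4x4 pose change between consecutive frames. Here it
    # pretends the scope advances 0.5 mm along z per frame.
    T = np.eye(4)
    T[2, 3] = 0.5
    return T

def depth_registration(depth_map, model, init_pose):
    # Hypothetical stub for depth registration against the
    # pre-operative 3D model. The accumulated relative estimate is
    # passed in as the initial guess; a real implementation would
    # iteratively refine it. The stub returns the guess unchanged.
    return init_pose

def dual_loop_localize(frames, depths, model, register_every=5):
    """Dual-loop localization sketch: high-frequency relative pose
    updates, periodic absolute correction via registration."""
    pose = np.eye(4)            # current absolute bronchoscope pose
    poses = []
    for i, frame in enumerate(frames):
        if i > 0:
            # Inner loop (every frame): compose the relative pose change.
            pose = pose @ ego_motion_net(frames[i - 1], frame)
        if i % register_every == 0:
            # Outer loop (periodic): absolute pose from registration,
            # warm-started by the accumulated relative estimate.
            pose = depth_registration(depths[i], model, pose)
        poses.append(pose.copy())
    return poses
```

The key design point the abstract emphasizes is the warm start: feeding the accumulated relative pose into registration narrows the search, which is what lets the slow absolute loop keep up with the fast relative loop.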
