
3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning (2405.10705v1)

Published 17 May 2024 in eess.IV and cs.CV

Abstract: Digital Subtraction Angiography (DSA) is one of the gold standards in vascular disease diagnosis. With the help of a contrast agent, time-resolved 2D DSA images deliver comprehensive insights into blood flow and can be used to reconstruct 3D vessel structures. Current commercial DSA systems typically demand hundreds of scanning views to perform reconstruction, resulting in substantial radiation exposure. However, sparse-view DSA reconstruction, aimed at reducing radiation dosage, remains underexplored in the research community. The dynamic blood flow and the insufficient input of sparse-view DSA images pose significant challenges to the 3D vessel reconstruction task. In this study, we propose a time-agnostic vessel probability field to solve this problem effectively. Our approach, termed vessel probability guided attenuation learning, represents DSA imaging as a complementary weighted combination of static and dynamic attenuation fields, with the weights derived from the vessel probability field. Functioning as a dynamic mask, the vessel probability provides proper gradients for both the static and dynamic fields, adaptive to different scene types. This mechanism facilitates a self-supervised decomposition between static backgrounds and dynamic contrast agent flow, and significantly improves reconstruction quality. Our model is trained by minimizing the disparity between synthesized projections and real captured DSA images. We further employ two training strategies to improve reconstruction quality: (1) coarse-to-fine progressive training to achieve better geometry, and (2) a temporally perturbed rendering loss to enforce temporal consistency. Experimental results demonstrate superior quality in both 3D vessel reconstruction and 2D view synthesis.
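As a concrete illustration of the complementary weighting described in the abstract, the sketch below shows how a time-agnostic vessel probability can blend a static background field with a dynamic contrast-agent field, and how a simple photometric loss compares synthesized projections against captured frames. This is a minimal sketch under assumed names and signatures (mu_static, mu_dynamic, and vessel_prob are hypothetical stand-ins for learned networks), not the authors' implementation:

    import torch

    def blended_attenuation(x, t, mu_static, mu_dynamic, vessel_prob):
        # Complementary weighted combination of static and dynamic attenuation.
        # mu_static(x): attenuation of the static background (time-agnostic)
        # mu_dynamic(x, t): attenuation of the flowing contrast agent
        # vessel_prob(x): time-agnostic vessel probability in [0, 1]
        p = vessel_prob(x)  # acts as a soft dynamic mask
        # Inside vessels (p -> 1) the dynamic field dominates; in the
        # background (p -> 0) the static field does, so each field receives
        # gradients appropriate to its region of the scene.
        return p * mu_dynamic(x, t) + (1.0 - p) * mu_static(x)

    def rendering_loss(rendered_projection, dsa_frame):
        # Training signal: L2 disparity between a synthesized projection
        # and the corresponding captured DSA frame (a common choice; the
        # paper's exact loss may differ).
        return torch.mean((rendered_projection - dsa_frame) ** 2)

Integrating blended_attenuation along each ray yields a synthesized projection, which rendering_loss then compares against the captured DSA image.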
