
Learning Task-Specific Sampling Strategy for Sparse-View CT Reconstruction (2409.01544v1)

Published 3 Sep 2024 in eess.IV and cs.CV

Abstract: Sparse-View Computed Tomography (SVCT) offers low-dose and fast imaging but suffers from severe artifacts. Optimizing the sampling strategy is an essential approach to improving the imaging quality of SVCT. However, current methods typically optimize a universal sampling strategy for all types of scans, overlooking the fact that the optimal strategy may vary depending on the specific scanning task, whether it involves particular body scans (e.g., chest CT scans) or downstream clinical applications (e.g., disease diagnosis). The optimal strategy for one scanning task may not perform as well when applied to other tasks. To address this problem, we propose a deep learning framework that learns task-specific sampling strategies, using a multi-task approach to train a unified reconstruction network while tailoring an optimal sampling strategy for each individual task. Thus, a task-specific sampling strategy can be applied to each type of scan to improve the quality of SVCT imaging and further assist the performance of downstream clinical applications. Extensive experiments across different scanning types validate the effectiveness of task-specific sampling strategies in enhancing imaging quality. Experiments involving downstream tasks verify the clinical value of the learned sampling strategies, as evidenced by notable improvements in downstream task performance. Furthermore, the use of a multi-task framework with a shared reconstruction network facilitates deployment on current imaging devices with switchable task-specific modules, and allows new tasks to be easily integrated without retraining the entire model.
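The abstract's central design — per-task learnable view-selection strategies feeding a single shared reconstruction network — can be sketched at a very high level. This is an illustrative NumPy sketch under our own assumptions, not the paper's implementation: the names (`TaskSampler`, `shared_reconstruct`), the hard top-k view selection, and the stand-in reconstructor are all hypothetical; the actual method trains a deep network with a multi-task objective.

```python
import numpy as np

class TaskSampler:
    """Per-task scores over candidate projection angles (hypothetical).

    In a trained system these scores would be learned parameters; the
    highest-scoring k angles form that task's sparse-view strategy.
    """
    def __init__(self, n_views, n_keep, seed=0):
        rng = np.random.default_rng(seed)
        self.scores = rng.normal(size=n_views)  # stand-in for learned weights
        self.n_keep = n_keep

    def select(self):
        # Indices of the k highest-scoring views, in angular order.
        return np.sort(np.argsort(self.scores)[-self.n_keep:])

def shared_reconstruct(sinogram, kept_views):
    """Stand-in for the shared reconstruction network: a mean over the
    sampled views. A real system would use a learned CNN shared by all
    tasks, with switchable task-specific modules."""
    return sinogram[kept_views].mean(axis=0)

# Full sinogram: 60 candidate views x 32 detector bins (toy data).
sinogram = np.ones((60, 32))

# Two tasks share the reconstructor but keep their own samplers,
# so a new task only adds a sampler instead of retraining everything.
samplers = {"chest": TaskSampler(60, 12, seed=1),
            "head": TaskSampler(60, 12, seed=2)}

for task, sampler in samplers.items():
    views = sampler.select()
    recon = shared_reconstruct(sinogram, views)
    print(task, "uses", len(views), "of 60 views; recon shape", recon.shape)
```

The point of the structure, as the abstract describes, is that the expensive shared component is trained once while each task contributes only a lightweight, switchable sampling module.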

