Enhanced Low-Dose CT Image Reconstruction by Domain and Task Shifting Gaussian Denoisers (2403.03551v3)

Published 6 Mar 2024 in eess.IV, cs.CV, and cs.LG

Abstract: Computed tomography from a low radiation dose (LDCT) is challenging due to high noise in the projection data. Popular approaches for LDCT image reconstruction are two-stage methods, typically consisting of the filtered backprojection (FBP) algorithm followed by a neural network for LDCT image enhancement. Two-stage methods are attractive for their simplicity and potential for computational efficiency, typically requiring only a single FBP and a neural network forward pass for inference. However, the best reconstruction quality is currently achieved by unrolled iterative methods (Learned Primal-Dual and ItNet), which are more complex and thus have a higher computational cost for training and inference. We propose a method combining the simplicity and efficiency of two-stage methods with state-of-the-art reconstruction quality. Our strategy utilizes a neural network pretrained for Gaussian noise removal from natural grayscale images, fine-tuned for LDCT image enhancement. We call this method FBP-DTSGD (Domain and Task Shifted Gaussian Denoisers) as the fine-tuning is a task shift from Gaussian denoising to enhancing LDCT images and a domain shift from natural grayscale to LDCT images. An ablation study with three different pretrained Gaussian denoisers indicates that the performance of FBP-DTSGD does not depend on a specific denoising architecture, suggesting future advancements in Gaussian denoising could benefit the method. The study also shows that pretraining on natural images enhances LDCT reconstruction quality, especially with limited training data. Notably, pretraining involves no additional cost, as existing pretrained models are used. The proposed method currently holds the top mean position in the LoDoPaB-CT challenge.
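The two-stage pipeline described in the abstract (a single FBP reconstruction followed by a Gaussian-denoising network that is fine-tuned for LDCT image enhancement) can be summarized in a minimal PyTorch-style sketch. All names below (GaussianDenoiser, fine_tune, ldct_loader, fbp, the checkpoint file) are hypothetical placeholders for illustration only; they do not reflect the paper's actual network architectures or training configuration, only the idea of reusing natural-image denoising weights and shifting them to the LDCT domain and task.

```python
import torch
import torch.nn as nn

class GaussianDenoiser(nn.Module):
    """Stand-in for a denoising CNN pretrained on natural grayscale images
    (placeholder architecture, not the one used in the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def fine_tune(model, loader, epochs=10, lr=1e-4):
    """Domain and task shift: adapt the pretrained Gaussian denoiser so that
    it maps noisy FBP reconstructions to reference (normal-dose) CT images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for fbp_recon, reference in loader:  # pairs of FBP reconstructions and targets
            opt.zero_grad()
            loss = loss_fn(model(fbp_recon), reference)
            loss.backward()
            opt.step()
    return model

# Hypothetical usage:
# model = GaussianDenoiser()
# model.load_state_dict(torch.load("gaussian_denoiser_pretrained.pt"))  # natural-image pretraining
# model = fine_tune(model, ldct_loader)        # shift to the LDCT enhancement task
# enhanced = model(fbp(sinogram))              # inference: one FBP + one forward pass
```

As the abstract notes, inference then requires only a single FBP reconstruction and one network forward pass, while the pretraining itself adds no cost because existing pretrained denoisers are reused.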

Authors (4)
  1. Tim Selig (1 paper)
  2. Thomas März (15 papers)
  3. Martin Storath (21 papers)
  4. Andreas Weinmann (36 papers)
Citations (2)