An Unsupervised Reconstruction Method For Low-Dose CT Using Deep Generative Regularization Prior (2012.06448v2)

Published 11 Dec 2020 in eess.IV

Abstract: Low-dose CT imaging requires reconstruction from noisy indirect measurements, which can be formulated as an ill-posed linear inverse problem. In addition to the conventional FBP method in CT imaging, recent compressed-sensing-based methods exploit handcrafted priors, which are mostly simplistic and hard to determine. More recently, deep learning (DL) based methods have become popular in the medical imaging field. In CT imaging, DL-based methods try to learn a function that maps low-dose images to normal-dose images. Although the results of these methods are promising, their success mostly depends on the availability of massive high-quality datasets. In this study, we propose a method that does not require any training data or a learning process. Our method exploits the observation that deep convolutional neural networks (CNNs) generate structured patterns more readily than noise; therefore, randomly initialized generative neural networks can serve as suitable priors for regularizing the reconstruction. In the experiments, the proposed method is implemented with different loss function variants. Both analytical CT phantoms and real-world CT images are used with different numbers of views. The conventional FBP method, a popular iterative method (SART), and TV-regularized SART are used in the comparisons. We demonstrate that our method with different loss function variants outperforms the other methods both qualitatively and quantitatively.
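The core idea in the abstract — optimizing the weights of a randomly initialized generator against the measurement-fidelity term alone, with no training data — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the forward operator `A` is a random matrix standing in for a real CT projection operator, and the "generator" is a tiny two-layer network rather than a deep CNN.

```python
import numpy as np

# Toy "generative regularization prior": instead of optimizing the image x
# directly, parametrize it as the output of a randomly initialized generator
# and fit the generator weights to the noisy measurements y = A x + noise.
# A, the network shape, and all sizes are illustrative assumptions.

rng = np.random.default_rng(0)
n_meas, n_pix, n_latent, n_hidden = 40, 64, 16, 32

A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_pix)  # stand-in forward operator
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.normal(size=n_meas)        # noisy "low-dose" data

z = rng.normal(size=n_latent)                          # fixed random latent code
W1 = rng.normal(size=(n_hidden, n_latent)) * 0.1       # random init, never trained
W2 = rng.normal(size=(n_pix, n_hidden)) * 0.1

lr = 0.02
for _ in range(2000):
    h = np.tanh(W1 @ z)              # generator forward pass: x = W2 tanh(W1 z)
    x = W2 @ h
    r = A @ x - y                    # data-fidelity residual
    g_x = A.T @ r                    # gradient of 0.5*||A x - y||^2 w.r.t. x
    g_W2 = np.outer(g_x, h)          # backprop through the output layer
    g_pre = (W2.T @ g_x) * (1 - h**2)
    g_W1 = np.outer(g_pre, z)        # backprop through the hidden layer
    W1 -= lr * g_W1
    W2 -= lr * g_W2

x_hat = W2 @ np.tanh(W1 @ z)         # reconstruction = generator output
print(np.linalg.norm(A @ x_hat - y)) # data misfit shrinks toward the noise level
```

Because the generator weights, not the pixels, are the optimization variables, the network's inductive bias acts as the regularizer; in the paper this role is played by a deep CNN and the fidelity term is combined with different loss function variants.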

Authors (3)
  1. Mehmet Ozan Unal (3 papers)
  2. Metin Ertas (2 papers)
  3. Isa Yildirim (10 papers)
Citations (12)
