
Deep-neural-network based sinogram synthesis for sparse-view CT image reconstruction

Published 2 Mar 2018 in physics.med-ph, cs.CV, and eess.IV | (1803.00694v2)

Abstract: Recently, a number of approaches to low-dose computed tomography (CT) have been developed and deployed in commercial CT scanners. Tube current reduction is perhaps the most actively explored technology, combined with advanced image reconstruction algorithms. Sparse data sampling is another viable option for low-dose CT, and sparse-view CT has been of particular interest among researchers in the CT community. Since analytic image reconstruction algorithms lead to severe image artifacts when applied to sparsely sampled data, various iterative algorithms have been developed for reconstructing images from sparsely view-sampled projection data. However, iterative algorithms take much longer to compute than analytic algorithms, and the resulting images are prone to various artifacts that depend heavily on the reconstruction parameters. Interpolation methods have also been used to fill in the missing data in the sinogram of sparse-view CT, providing synthetic full data for analytic image reconstruction. In this work, we introduce a deep-neural-network-enabled sinogram synthesis method for sparse-view CT and show that it outperforms existing interpolation methods as well as an iterative image reconstruction approach.

Citations (203)

Summary

  • The paper introduces a deep neural network using a residual U-Net to synthesize missing sinogram data in sparse-view CT imaging.
  • It achieves improved reconstruction quality with higher PSNR and SSIM and reduced artifacts compared to traditional interpolation and iterative methods.
  • The research highlights a promising approach to low-dose CT imaging by effectively addressing inherent challenges in sparse-sample reconstruction.

Deep-Neural-Network Based Sinogram Synthesis for Sparse-View CT Image Reconstruction

The paper "Deep-neural-network based sinogram synthesis for sparse-view CT image reconstruction" presents a study that leverages deep neural networks to address the challenge of image reconstruction in CT imaging contexts with sparse-view data. Sparse-view CT has gained interest as a method to reduce radiation dose without compromising diagnostic utility. However, reconstructing images from such limited data constitutes an inherently ill-posed problem when incorporating traditional reconstruction algorithms, leading to artifacts.

Methods and Approach

This research introduces a convolutional neural network (CNN) to synthesize the missing data in the sparse-view sinogram domain, which then allows existing analytic reconstruction algorithms to be applied. The authors employ a residual U-Net architecture that preserves the measured projection values more effectively than linear or directional interpolation techniques and than other CNN-based approaches, improving reconstruction quality.
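
As a rough illustration of this idea, a residual U-Net for sinogram completion might be sketched in PyTorch as below. This is not the authors' exact network: the depth, channel widths, and kernel sizes are assumptions, and the interpolated-sinogram input is one plausible choice. The defining feature is the residual skip from input to output, which lets measured projection values pass through largely unchanged while the network learns only the correction.

```python
# Minimal residual U-Net sketch for sinogram completion (PyTorch).
# Hypothetical configuration; not the architecture reported in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by a ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class ResidualUNet(nn.Module):
    """Maps an initially interpolated sinogram to a full-view sinogram.

    The network predicts a residual that is added back to its input,
    so measured projection values are largely preserved.
    """
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = conv_block(1, ch)
        self.enc2 = conv_block(ch, 2 * ch)
        self.bottleneck = conv_block(2 * ch, 4 * ch)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(4 * ch, 2 * ch, kernel_size=2, stride=2)
        self.dec2 = conv_block(4 * ch, 2 * ch)
        self.up1 = nn.ConvTranspose2d(2 * ch, ch, kernel_size=2, stride=2)
        self.dec1 = conv_block(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return x + self.out(d1)             # residual connection
```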

Training the network involves re-projecting images from real patient CT data into sinograms, with the Adam optimizer guiding the learning process. Unlike traditional interpolation techniques or iterative reconstruction, this approach uses deep learning to complete the sinogram data, thereby reducing the artifacts typically associated with sparse sampling.
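
A hedged sketch of the corresponding training loop is shown below, reusing the ResidualUNet class sketched above. Random tensors stand in for the re-projected patient sinograms so that the snippet runs on its own, and the view-decimation factor, learning rate, and loss are illustrative assumptions rather than the paper's settings.

```python
# Illustrative training loop: synthesize full-view sinograms from
# view-decimated, interpolated inputs. Not the paper's training code.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in data: in the paper these are sinograms re-projected from real
# patient CT images; here random tensors of shape (batch, 1, views, bins)
# are used purely so the loop executes.
full_sinos = torch.rand(2, 1, 360, 256)

def sparsify_and_interpolate(sino, keep_every=4):
    """Keep every `keep_every`-th view, then interpolate back to the full
    number of views, as a simple proxy for the network's input."""
    sparse = sino[:, :, ::keep_every, :]
    return F.interpolate(sparse, size=sino.shape[2:],
                         mode="bilinear", align_corners=False)

model = ResidualUNet(ch=16)                               # small width for the sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) # learning rate is an assumption
loss_fn = torch.nn.MSELoss()

for epoch in range(2):                                    # illustration only
    inputs = sparsify_and_interpolate(full_sinos)
    preds = model(inputs)
    loss = loss_fn(preds, full_sinos)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.6f}")
```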

Experimental Setup

The researchers conducted their experiments using CT images from The Cancer Imaging Archive, re-projected to generate training sinograms. They compared the proposed CNN-based sinogram synthesis against other interpolation techniques and against an iterative reconstruction algorithm based on total variation minimization (POCS-TV). The evaluation used normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM).
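
These metrics can be computed as follows (scikit-image provides PSNR and SSIM; the NRMSE normalization by the reference's dynamic range is one common convention and an assumption here, since the paper's exact normalization is not reproduced).

```python
# Computing NRMSE, PSNR, and SSIM between a reference and an estimate.
# Synthetic arrays are used for illustration.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nrmse(reference, estimate):
    """Root mean square error normalized by the reference's dynamic range."""
    rmse = np.sqrt(np.mean((reference - estimate) ** 2))
    return rmse / (reference.max() - reference.min())

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
estimate = reference + 0.01 * rng.standard_normal((256, 256)).astype(np.float32)

data_range = float(reference.max() - reference.min())
print("NRMSE:", nrmse(reference, estimate))
print("PSNR :", peak_signal_noise_ratio(reference, estimate, data_range=data_range))
print("SSIM :", structural_similarity(reference, estimate, data_range=data_range))
```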

Results and Analysis

The findings show a clear improvement over established methods. The proposed CNN architecture achieved higher PSNR and SSIM, indicating higher-quality synthesized sinograms. In the reconstructed images, the approach produced fewer artifacts and better retention of small structures than both the iterative algorithm and the existing interpolation techniques.

Tables III to V in the paper detail the quantitative results, showing that the proposed method achieves lower errors and higher signal fidelity. The results also show that U-Net based approaches outperform architectures built from plain successive convolutional layers, highlighting the importance of network architecture for complex data such as CT sinograms.

Implications and Future Work

The study advances low-dose CT imaging through the application of deep learning, reducing radiation exposure for patients, a critical concern in medical imaging. The improvements in image quality underline the potential of deep learning to solve complex inverse problems in medical imaging.

Future work could explore these architectures in wider clinical contexts, such as cone-beam CT and multiple fan-beam CT, including irregular angular sampling. Reducing training time by trimming redundant training data without losing performance is identified as another direction, and handling missing detector channels presents a further avenue for exploration.

Conclusion

By integrating a deep neural network into the sinogram synthesis process, this study demonstrates a compelling alternative to traditional interpolation and iterative reconstruction methods. The research highlights the merit of advanced neural network architectures for sparse-view CT imaging, contributing to the ongoing discussion about the role of AI in reducing diagnostic imaging radiation without compromising image quality.
