Cross-Modality Synthesis from CT to PET using FCN and GAN Networks for Improved Automated Lesion Detection (1802.07846v2)

Published 21 Feb 2018 in cs.CV and cs.AI

Abstract: In this work we present a novel system for generation of virtual PET images using CT scans. We combine a fully convolutional network (FCN) with a conditional generative adversarial network (GAN) to generate simulated PET data from given input CT data. The synthesized PET can be used for false-positive reduction in lesion detection solutions. Clinically, such solutions may enable lesion detection and drug treatment evaluation in a CT-only environment, thus reducing the need for the more expensive and radioactive PET/CT scan. Our dataset includes 60 PET/CT scans from Sheba Medical center. We used 23 scans for training and 37 for testing. Different schemes to achieve the synthesized output were qualitatively compared. Quantitative evaluation was conducted using an existing lesion detection software, combining the synthesized PET as a false positive reduction layer for the detection of malignant lesions in the liver. Current results look promising showing a 28% reduction in the average false positive per case from 2.9 to 2.1. The suggested solution is comprehensive and can be expanded to additional body organs, and different modalities.

Citations (194)

Summary

  • The paper proposes synthesizing PET images from CT scans using an integrated FCN and conditional GAN architecture to improve automated lesion detection.
  • Key results show the synthesis method reduced false positives in liver lesion detection by 28%, from 2.9 to 2.1 per case, while maintaining true positive rates.
  • This technique offers a non-invasive and cost-effective alternative to traditional PET/CT imaging, potentially increasing access to advanced diagnostic capabilities.

Cross-Modality Synthesis from CT to PET for Enhanced Lesion Detection

The paper "Cross-Modality Synthesis from CT to PET using FCN and GAN Networks for Improved Automated Lesion Detection" explores a method to virtually generate PET images from CT scans, primarily to aid in lesion detection without resorting to physical PET/CT imaging. This approach integrates Fully Convolutional Networks (FCNs) with Conditional Generative Adversarial Networks (cGANs) to produce simulated PET data, which aims to reduce the dependency on PET scans that are expensive and involve radiation exposure.

The authors employ an architecture that combines an FCN with a cGAN to synthesize PET-like images. The FCN produces a preliminary prediction of the PET image, emphasizing the identification of malignant lesions; a cGAN then refines this initial prediction to improve its realism and accuracy. The two-stage process leverages the strengths of both network types: FCNs process entire images efficiently, while cGANs excel at generating realistic images that minimize the difference from actual PET data.
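
To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline. The layer sizes, module names, and the PatchGAN-style discriminator are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of a two-stage CT -> PET pipeline (layer sizes and
# design details are assumptions, not the paper's exact architecture).
import torch
import torch.nn as nn

class FCN(nn.Module):
    """Stage 1: coarse PET-like prediction from a CT slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # single-channel PET-like output
        )
    def forward(self, ct):
        return self.net(ct)

class Generator(nn.Module):
    """Stage 2: cGAN generator refining the coarse prediction, conditioned on CT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),  # input: CT + coarse PET
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )
    def forward(self, ct, coarse_pet):
        return self.net(torch.cat([ct, coarse_pet], dim=1))

class Discriminator(nn.Module):
    """Judges (CT, PET) pairs as real or synthesized (PatchGAN-style assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )
    def forward(self, ct, pet):
        return self.net(torch.cat([ct, pet], dim=1))

# Forward pass on a dummy CT slice:
fcn, gen, disc = FCN(), Generator(), Discriminator()
ct = torch.randn(1, 1, 256, 256)
coarse = fcn(ct)
synthetic_pet = gen(ct, coarse)
realism_score = disc(ct, synthetic_pet)
```

A full training loop would typically combine the adversarial loss from the discriminator with a reconstruction loss against the real PET, which is consistent with the summary's note that the cGAN minimizes the difference from actual PET images.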

The dataset consists of 60 PET/CT scans, with the data split into a training set of 23 and a test set of 37 scans, focusing on the liver region. The researchers present quantitative evaluations, reporting a 28% reduction in false positives when integrating synthesized PET images with existing lesion detection software. Specifically, false positives per case dropped from 2.9 to 2.1 on average, while maintaining the true positive detection rate. These results indicate an improved capacity for reliable lesion identification using the synthesized PET data, which could be vital in clinical settings.
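
The synthesized PET here acts as a false-positive reduction layer on top of an existing CT-based detector. The paper's exact filtering rule is not reproduced in this summary, but a plausible minimal sketch, assuming candidates are discarded when the synthesized PET shows low mean uptake in the candidate region, is shown below; the threshold value and function name are hypothetical.

```python
# Hypothetical sketch of using synthesized PET as a false-positive filter.
# The uptake threshold and candidate representation are assumptions.
import numpy as np

def filter_candidates(candidates, synthetic_pet, uptake_threshold=2.0):
    """Keep only lesion candidates with high uptake in the synthesized PET.

    candidates: list of binary masks (np.ndarray) from the CT-based detector.
    synthetic_pet: PET-like volume predicted from CT, same shape as each mask.
    """
    kept = []
    for mask in candidates:
        mean_uptake = synthetic_pet[mask.astype(bool)].mean()
        if mean_uptake >= uptake_threshold:  # low uptake -> likely false positive
            kept.append(mask)
    return kept
```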

This work addresses significant clinical challenges: the reduction of unnecessary radiation exposure and the cost associated with PET/CT imaging. Simulated PET images could facilitate more widespread adoption of advanced diagnostic imaging by providing a viable CT-only alternative. This could be especially beneficial in medical centers with limited access to PET/CT facilities.

Future work could expand on this by exploring applications to other anatomical regions and different scanning modalities. Potential developments include refining the network architecture for different lesion types and enhancing the diagnostic value of the synthesized images. Additionally, a broader dataset encompassing diverse pathologies could help generalize the solution to multiple clinical conditions.

In conclusion, this paper presents a robust method for cross-modality image synthesis, emphasizing its practical implications in reducing false positives in automated lesion detection systems. By providing a non-invasive and cost-effective alternative to PET imaging, it lays the groundwork for future advancements in medical imaging and diagnostic accuracy. However, challenges remain in extending and optimizing this model for various clinical applications, warranting further exploration and validation.