
Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT (1806.07777v1)

Published 20 Jun 2018 in cs.CV

Abstract: In medical imaging, a general problem is that it is costly and time consuming to collect high quality data from healthy and diseased subjects. Generative adversarial networks (GANs) is a deep learning method that has been developed for synthesizing data. GANs can thereby be used to generate more realistic training data, to improve classification performance of machine learning algorithms. Another application of GANs is image-to-image translations, e.g. generating magnetic resonance (MR) images from computed tomography (CT) images, which can be used to obtain multimodal datasets from a single modality. Here, we evaluate two unsupervised GAN models (CycleGAN and UNIT) for image-to-image translation of T1- and T2-weighted MR images, by comparing generated synthetic MR images to ground truth images. We also evaluate two supervised models; a modification of CycleGAN and a pure generator model. A small perceptual study was also performed to evaluate how visually realistic the synthesized images are. It is shown that the implemented GAN models can synthesize visually realistic MR images (incorrectly labeled as real by a human). It is also shown that models producing more visually realistic synthetic images not necessarily have better quantitative error measurements, when compared to ground truth data. Code is available at https://github.com/simontomaskarlsson/GAN-MRI

Analysis of Generative Adversarial Networks for Image-to-Image Translation in Multi-Contrast MR Imaging: A Comparative Study of CycleGAN and UNIT

The paper explores the application of Generative Adversarial Networks (GANs) for image-to-image translation, focusing specifically on the conversion between T1- and T2-weighted magnetic resonance (MR) images. This work investigates both supervised and unsupervised methods, providing a comparative analysis of CycleGAN and the UNIT models.

GANs have emerged as a promising tool in medical imaging, especially for generating synthetic datasets that can enhance machine learning tasks. Their use in image-to-image translation can potentially simplify the acquisition of multimodal datasets. This paper evaluates the capability of GANs to produce synthetic MR images that are not only visually realistic but also quantitatively accurate.

Methodology

The authors implemented two unsupervised GAN models—CycleGAN and UNIT—chosen for their ability to learn from unpaired training data, a practical advantage in medical imaging, where paired data is often scarce. Furthermore, two supervised models were introduced: a supervised version of CycleGAN (CycleGAN_s) and a simple generator-based model (Generators_s). These models were implemented in Keras, and the dataset comprised paired T1- and T2-weighted images from 1113 subjects in the Human Connectome Project, split into training and testing sets.
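The core idea that lets CycleGAN train on unpaired data is the cycle-consistency loss: translating a T1 slice to T2 and back should reproduce the original slice. A minimal numpy sketch of that term (not the authors' Keras implementation; the weight `lam=10.0` is the commonly used default, assumed here, not quoted from the paper):

```python
import numpy as np

def cycle_consistency_loss(real, reconstructed, lam=10.0):
    """L1 cycle loss: lam * || G_BA(G_AB(a)) - a ||_1, averaged per pixel."""
    return lam * np.mean(np.abs(real - reconstructed))

# Toy example: a "T1 slice" and an imperfect reconstruction
# after the round trip T1 -> T2 -> T1.
t1 = np.zeros((4, 4))
recon = t1 + 0.1          # every pixel off by 0.1
loss = cycle_consistency_loss(t1, recon)   # 10.0 * 0.1 = 1.0
```

In the full model this term is added to the adversarial losses of both generators, so the networks are penalized for discarding anatomical content even though no paired ground truth is used.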

Evaluation

The evaluation framework included both quantitative metrics—Mean Absolute Error (MAE), Mutual Information (MI), and Peak Signal-to-Noise Ratio (PSNR)—and qualitative assessment via a small perceptual study of the visual realism of the generated images. Interestingly, the evaluation revealed a discrepancy between quantitative performance and visual realism: models with superior quantitative scores did not always produce the most visually realistic images, highlighting the limitations of conventional metrics for gauging visual quality.
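The three quantitative metrics are standard and easy to state concretely. A self-contained numpy sketch (bin count and intensity range are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mae(a, b):
    """Mean Absolute Error between two images."""
    return np.mean(np.abs(a - b))

def psnr(a, b, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB, assuming intensities in [0, max_val]."""
    mse = np.mean((a - b) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

# Example: a synthetic slice off by a constant 0.1 from ground truth
gt = np.zeros((8, 8))
syn = gt + 0.1
print(mae(gt, syn))    # 0.1
print(psnr(gt, syn))   # 20 dB (mse = 0.01)
```

Note that MAE and PSNR reward pixel-wise agreement and therefore favor smooth, averaged outputs, which is exactly the failure mode the perceptual study exposed.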

Results

In terms of numerical performance, the Generators_s model surpassed the others, but this did not translate into visually realistic outputs, which were overly smooth. This underscores the inadequacy of relying solely on quantitative metrics for visual tasks. The CycleGAN and UNIT models performed similarly across the various assessments. Notably, synthesizing T2-weighted images proved harder quantitatively, yet in the perceptual study synthetic T2 images were harder to distinguish from real ones than synthetic T1 images were, potentially due to inherent contrast and intensity properties.

Discussion and Future Directions

The work suggests that while supervised models can minimize traditional error metrics, the adversarial losses in GANs are crucial for achieving visual realism. The paper advocates further exploration of 3D GANs, which may enhance the realism and applicability of synthesized MR volumes. Additionally, designing loss functions that balance visual realism with quantitative fidelity remains an open challenge.
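The tension between error metrics and realism is usually addressed by combining a pixel-wise supervised term with an adversarial term in the generator objective. A minimal sketch of such a combined loss (the least-squares adversarial form and the weight `lam=10.0` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def combined_generator_loss(disc_out_fake, fake, target, lam=10.0):
    """Generator loss = adversarial term + weighted L1 term.

    disc_out_fake: discriminator scores for the generated images
    fake, target:  generated image and its paired ground truth
    """
    # Least-squares adversarial term: push D(fake) toward 1 ("real")
    adv = np.mean((disc_out_fake - 1.0) ** 2)
    # Pixel-wise L1 term toward the paired ground truth
    l1 = np.mean(np.abs(fake - target))
    return adv + lam * l1

# A perfect generator that also fully fools the discriminator has zero loss.
target = np.zeros((4, 4))
loss = combined_generator_loss(np.ones(3), target, target)  # 0.0
```

Tuning `lam` trades the two failure modes against each other: a large weight drives MAE/PSNR down but encourages the over-smooth outputs seen with Generators_s, while a small weight favors sharp, realistic textures at the cost of pixel-wise accuracy.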

This research solidifies the utility of GANs in enhancing medical imaging pipelines, providing valuable insights into their advantages and limitations. Future studies should explore the application of GANs for specific clinical objectives, such as disease classification or segmentation, to validate their efficacy in practical scenarios.

Authors (3)
  1. Per Welander (1 paper)
  2. Simon Karlsson (2 papers)
  3. Anders Eklund (38 papers)
Citations (100)