Cross-modality image synthesis from unpaired data using CycleGAN: Effects of gradient consistency loss and training data size (1803.06629v3)

Published 18 Mar 2018 in cs.CV

Abstract: CT is commonly used in orthopedic procedures. MRI is used along with CT to identify muscle structures and diagnose osteonecrosis due to its superior soft tissue contrast. However, MRI has poor contrast for bone structures. Clearly, it would be helpful if a corresponding CT were available, as bone boundaries are more clearly seen and CT has standardized (i.e., Hounsfield) units. Therefore, we aim at MR-to-CT synthesis. Although the CycleGAN was successfully applied to unpaired CT and MR images of the head, those images do not have as much variation of intensity pairs as do images in the pelvic region, due to the presence of joints and muscles. In this paper, we extended the CycleGAN approach by adding the gradient consistency loss to improve the accuracy at the boundaries. We conducted two experiments. To evaluate image synthesis, we investigated the dependency of image synthesis accuracy on 1) the number of training data and 2) the gradient consistency loss. To demonstrate the applicability of our method, we also investigated segmentation accuracy on synthesized images.

Authors (8)
  1. Yuta Hiasa (6 papers)
  2. Yoshito Otake (24 papers)
  3. Masaki Takao (14 papers)
  4. Takumi Matsuoka (1 paper)
  5. Kazuma Takashima (2 papers)
  6. Jerry L. Prince (58 papers)
  7. Nobuhiko Sugano (14 papers)
  8. Yoshinobu Sato (17 papers)
Citations (188)

Summary

The research work by Yuta Hiasa et al. focuses on synthesizing computed tomography (CT) images from magnetic resonance imaging (MRI) data using the CycleGAN framework. The motivation is that MRI, despite its superior soft-tissue contrast, delineates bone structures poorly, whereas CT depicts bone boundaries precisely and uses standardized Hounsfield units. By enabling the synthesis of CT images from MRI, this work aims to benefit clinical scenarios, especially those in which radiation exposure from CT scans is a concern.

The paper extends the conventional CycleGAN method by introducing a gradient consistency (GC) loss to enhance the delineation at image boundaries, which is crucial given the anatomical variability encountered in the pelvic region. This addresses one of the shortcomings of existing approaches that have been largely focused on the relatively consistent anatomical structures of the head. The paper outlines the methodology, dataset specifics, and experimental validation, providing insights into the impact of GC loss and training data size on synthesis accuracy.

Methodology

The primary contribution is the incorporation of GC loss within the CycleGAN architecture. The CycleGAN, as introduced by Zhu et al., translates images between domains without requiring paired data, making it a suitable choice for MR-to-CT synthesis with unpaired training sets.
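The cycle-consistency idea at the core of CycleGAN can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' implementation: `g_mr2ct` and `g_ct2mr` stand in for the two learned generators, which in practice are neural networks.

```python
import numpy as np

def cycle_consistency_loss(g_mr2ct, g_ct2mr, mr_batch, ct_batch):
    """L1 cycle-consistency loss: translating to the other domain and
    back should reproduce the input, which is what lets CycleGAN train
    on unpaired MR and CT volumes."""
    mr_cycle = g_ct2mr(g_mr2ct(mr_batch))   # MR -> CT -> MR
    ct_cycle = g_mr2ct(g_ct2mr(ct_batch))   # CT -> MR -> CT
    return float(np.mean(np.abs(mr_cycle - mr_batch))
                 + np.mean(np.abs(ct_cycle - ct_batch)))
```

With identity generators the loss is exactly zero; training drives the real generators toward this property while the adversarial losses keep their outputs in the target domain.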

  1. Gradient Consistency Loss: The GC loss is formulated using the gradient correlation between synthesized and real images, inspired by medical image registration techniques. This loss aims to encourage edge alignment, thus preserving structural integrity during image synthesis.
  2. Data Utilization: The paper utilizes a substantial dataset comprising 302 unlabeled MR and 613 unlabeled CT volumes. Additionally, 20 labeled CT volumes with manual segmentations provide a foundation for assessing segmentation tasks on synthesized images.
  3. Network Architecture: The network employs a 2D convolutional neural network with residual blocks for image generation and PatchGAN for discrimination. The objective function combines adversarial, cycle consistency, and GC losses, optimized using the Adam algorithm.
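The gradient consistency term described in point 1 can be sketched as the normalized cross-correlation (NCC) of image gradients, as in the registration literature it draws on. This is a simplified NumPy sketch (finite differences via `np.gradient` rather than the paper's exact gradient operator), and the `1 - GC` form of the penalty is an assumption for illustration.

```python
import numpy as np

def _ncc(a, b, eps=1e-8):
    """Normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def gradient_consistency_loss(real, synth):
    """1 - GC, where GC averages the NCC of the vertical and horizontal
    gradients of the two images; well-aligned edges give a loss near 0."""
    gy_r, gx_r = np.gradient(real)    # gradients along rows, columns
    gy_s, gx_s = np.gradient(synth)
    gc = 0.5 * (_ncc(gx_r, gx_s) + _ncc(gy_r, gy_s))
    return 1.0 - gc
```

Because the term compares gradients rather than intensities, it rewards edge alignment between the synthesized and real images without requiring their absolute intensity scales to match.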

Results and Evaluations

Quantitative evaluations were conducted to ascertain the influence of training data size and the inclusion of GC loss on the accuracy of synthesized images.

  • Image Synthesis Accuracy: The paper reports a decrease in mean absolute error (MAE) and increase in peak signal-to-noise ratio (PSNR) as training data size increased and with the incorporation of GC loss. Synthesized images displayed improved boundary precision, evidenced by smaller differences from ground truth CT scans.
  • Segmentation Task Performance: Utilizing the synthesized CT images, segmentation networks were trained to identify musculoskeletal structures. The results revealed statistically significant enhancements in segmentation accuracy, particularly on the gluteus medius and minimus muscles, when compared with models trained without GC loss or with fewer images.
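The two reported image-quality metrics are standard and easy to state precisely. A minimal sketch, assuming `data_range` is the intensity span of the reference CT (e.g., the Hounsfield-unit range used for evaluation):

```python
import numpy as np

def mae(reference, synthesized):
    """Mean absolute error, in the reference image's units."""
    return float(np.mean(np.abs(reference - synthesized)))

def psnr(reference, synthesized, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range;
    higher PSNR and lower MAE indicate a better synthesis."""
    mse = np.mean((reference - synthesized) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```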

Implications and Future Directions

This research demonstrates the potential of cross-modality image synthesis using enhanced generative models in medical imaging. The integration of GC loss within CycleGAN represents a methodological advancement that can lead to more accurate and reliable translations between imaging modalities, potentially reducing the need for invasive procedures or additional radiation exposure.

Practical implications include improved pre-surgical planning, diagnostic processes, and patient-specific treatment approaches, particularly in orthopedic and musculoskeletal applications. Theoretically, the paper prompts future exploration into cooperative learning frameworks where multiple imaging modalities contribute jointly to enhance diagnostic models, as well as end-to-end systems that seamlessly integrate synthesis with downstream tasks like segmentation or classification.

In conclusion, this work establishes a framework for improving medical image synthesis through principled loss and architecture enhancements, offering a pathway to more personalized and less invasive medical practices. Further research could explore integration with additional imaging modalities and learning strategies to refine synthesis outcomes across broader applications.