DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis (1904.12894v4)
Abstract: Synthesizing MR imaging sequences is highly relevant in clinical practice, as single sequences are often missing or are of poor quality (e.g. due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input, as proprietary information of individual modalities can be synergistic. However, existing methods fail to scale up to multiple non-aligned imaging modalities, facing common drawbacks of complex imaging sequences. We propose a novel, scalable and multi-modal approach called DiamondGAN. Our model is capable of performing flexible non-aligned cross-modality synthesis and data infill, when given multiple modalities or any of their arbitrary subsets, learning structured information in an end-to-end fashion. We synthesize two MRI sequences with clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), reconstructed from three common sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish synthetic DIR images from real ones.
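The abstract does not spell out how a single generator can accept "any arbitrary subset" of input modalities. One common way to realize this is to zero-fill missing sequences and append a per-modality availability mask as extra input channels, so a single network covers all input combinations. Below is a minimal PyTorch sketch of that idea under those assumptions; `MultiModalGenerator`, its layer sizes, and the masking scheme are illustrative and not the authors' actual DiamondGAN architecture.

```python
import torch
import torch.nn as nn
from typing import Dict

class MultiModalGenerator(nn.Module):
    """Illustrative sketch: a generator that accepts any subset of N
    input MRI sequences. Missing sequences are zero-filled, and a
    binary availability mask is concatenated as extra channels, so one
    network handles all 2^N - 1 input combinations. (Hypothetical
    design, not the published DiamondGAN implementation.)"""

    def __init__(self, n_modalities: int = 3, base_ch: int = 64):
        super().__init__()
        self.n_modalities = n_modalities
        in_ch = n_modalities * 2  # image channels + availability-mask channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, 1, 3, padding=1),  # one synthesized target sequence
            nn.Tanh(),
        )

    def forward(self, inputs: Dict[int, torch.Tensor]) -> torch.Tensor:
        # inputs: {modality_index: (B, 1, H, W)} for the available sequences
        any_img = next(iter(inputs.values()))
        B, _, H, W = any_img.shape
        imgs, mask = [], []
        for m in range(self.n_modalities):
            if m in inputs:  # sequence present: pass it through, mask = 1
                imgs.append(inputs[m])
                mask.append(torch.ones(B, 1, H, W, device=any_img.device))
            else:            # sequence missing: zero-fill, mask = 0
                imgs.append(torch.zeros(B, 1, H, W, device=any_img.device))
                mask.append(torch.zeros(B, 1, H, W, device=any_img.device))
        x = torch.cat(imgs + mask, dim=1)  # (B, 2*N, H, W)
        return self.net(x)

# Usage with dummy data: synthesize a target (e.g. DIR) from modalities
# 0 and 2 while modality 1 is missing.
G = MultiModalGenerator(n_modalities=3)
seq_a = torch.randn(2, 1, 128, 128)
seq_b = torch.randn(2, 1, 128, 128)
fake_target = G({0: seq_a, 2: seq_b})  # -> (2, 1, 128, 128)
```

In an adversarial setup, a discriminator would then judge `fake_target` against real target-sequence images; the mask channels let the same generator be trained across all input subsets end-to-end, which matches the scalability claim in the abstract.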
- Hongwei Li
- Johannes C. Paetzold
- Anjany Sekuboyina
- Florian Kofler
- Jianguo Zhang
- Jan S. Kirschke
- Benedikt Wiestler
- Bjoern Menze