Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers (2405.05462v1)

Published 8 May 2024 in q-bio.NC, cs.LG, and eess.IV

Abstract: Generative approaches for cross-modality transformation have recently gained significant attention in neuroimaging. While most previous work has focused on case-control data, the application of generative models to disorder-specific datasets and their ability to preserve diagnostic patterns remain relatively unexplored. Hence, in this study, we investigated the use of a generative adversarial network (GAN) in the context of Alzheimer's disease (AD) to generate functional network connectivity (FNC) and T1-weighted structural magnetic resonance imaging data from each other. We employed a cycle-GAN to synthesize data in an unpaired translation setting and enhanced the translation by integrating weak supervision in cases where paired data were available. Our findings revealed that the model performs well, achieving a structural similarity index measure (SSIM) of $0.89 \pm 0.003$ for T1 images and a correlation of $0.71 \pm 0.004$ for FNCs. Moreover, our qualitative analysis revealed similar patterns between generated and actual data when comparing AD to cognitively normal (CN) individuals. In particular, we observed significantly increased functional connectivity in cerebellar-sensory motor and cerebellar-visual networks and reduced connectivity in cerebellar-subcortical, auditory-sensory motor, sensory motor-visual, and cerebellar-cognitive control networks. Additionally, the T1 images generated by our model showed a similar pattern of atrophy in the hippocampus and other temporal regions of Alzheimer's patients.
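To make the training objective described in the abstract concrete, below is a minimal PyTorch sketch of a cycle-GAN generator loss for FNC-to-T1 and T1-to-FNC translation, with an optional weakly supervised term applied when paired scans are available. The network interfaces, loss weights (`lambda_cyc`, `lambda_sup`), and the choice of a least-squares adversarial loss are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a cycle-GAN generator objective with optional weak
# supervision for paired FNC/T1 samples. Architectures, weights, and tensor
# shapes are assumptions for illustration only.
import torch
import torch.nn as nn

def cycle_gan_generator_loss(G_fnc2t1, G_t12fnc, D_t1, D_fnc,
                             fnc, t1, paired=False,
                             lambda_cyc=10.0, lambda_sup=5.0):
    """Generator loss for one batch of unpaired (or weakly paired) data."""
    l1 = nn.L1Loss()
    mse = nn.MSELoss()  # least-squares GAN loss, a common choice

    # Forward translations between modalities
    fake_t1 = G_fnc2t1(fnc)    # FNC -> synthetic T1
    fake_fnc = G_t12fnc(t1)    # T1 -> synthetic FNC

    # Adversarial terms: generators try to make discriminators predict "real"
    pred_t1 = D_t1(fake_t1)
    pred_fnc = D_fnc(fake_fnc)
    adv = mse(pred_t1, torch.ones_like(pred_t1)) + \
          mse(pred_fnc, torch.ones_like(pred_fnc))

    # Cycle-consistency: translate back and compare with the original input
    rec_fnc = G_t12fnc(fake_t1)
    rec_t1 = G_fnc2t1(fake_fnc)
    cyc = l1(rec_fnc, fnc) + l1(rec_t1, t1)

    loss = adv + lambda_cyc * cyc

    # Weak supervision: when this batch is paired, add a direct reconstruction
    # term between each translation and its true target in the other modality.
    if paired:
        loss = loss + lambda_sup * (l1(fake_t1, t1) + l1(fake_fnc, fnc))

    return loss
```

In such a setup the discriminators would be trained separately with a complementary real/fake objective, and the paired term would apply only to the subset of subjects with both modalities, which is how weak supervision enters an otherwise unpaired cycle-GAN training loop.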

