
An Interpretable Cross-Attentive Multi-modal MRI Fusion Framework for Schizophrenia Diagnosis (2404.00144v1)

Published 29 Mar 2024 in eess.IV and cs.CV

Abstract: Both functional and structural magnetic resonance imaging (fMRI and sMRI) are widely used for the diagnosis of mental disorders. However, combining complementary information from these two modalities is challenging due to their heterogeneity. Many existing methods fall short of capturing the interaction between these modalities, frequently defaulting to a simple combination of latent features. In this paper, we propose a novel Cross-Attentive Multi-modal Fusion framework (CAMF), which aims to capture both intra-modal and inter-modal relationships between fMRI and sMRI, enhancing multi-modal data representation. Specifically, our CAMF framework employs self-attention modules to identify interactions within each modality, while cross-attention modules identify interactions between modalities. Subsequently, our approach optimizes the integration of latent features from both modalities. This approach significantly improves classification accuracy, as demonstrated by our evaluations on two extensive multi-modal brain imaging datasets, where CAMF consistently outperforms existing methods. Furthermore, gradient-guided Score-CAM is applied to interpret the critical functional networks and brain regions involved in schizophrenia. The biomarkers identified by CAMF align with established research, potentially offering new insights into the diagnosis and pathological endophenotypes of schizophrenia.
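The fusion pattern the abstract describes (self-attention within each modality, cross-attention between modalities, then integration of the latent features for classification) can be sketched compactly. The PyTorch block below is an illustrative sketch only: the token counts, embedding size, mean-pooling, and the linear classification head are assumptions for demonstration, not the authors' CAMF implementation.

```python
# Minimal cross-attentive multi-modal fusion sketch (assumed architecture details).
import torch
import torch.nn as nn


class CrossAttentiveFusion(nn.Module):
    def __init__(self, embed_dim: int = 128, num_heads: int = 4, num_classes: int = 2):
        super().__init__()
        # Intra-modal interactions: one self-attention module per modality.
        self.self_attn_fmri = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.self_attn_smri = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Inter-modal interactions: each modality attends to the other.
        self.cross_attn_f2s = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.cross_attn_s2f = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Simple classification head over the fused latent features (assumed).
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, fmri_tokens: torch.Tensor, smri_tokens: torch.Tensor) -> torch.Tensor:
        # fmri_tokens, smri_tokens: (batch, num_tokens, embed_dim) latent features
        # produced by modality-specific encoders (not shown here).
        f, _ = self.self_attn_fmri(fmri_tokens, fmri_tokens, fmri_tokens)
        s, _ = self.self_attn_smri(smri_tokens, smri_tokens, smri_tokens)
        # Cross-attention: fMRI queries attend over sMRI keys/values and vice versa.
        f_cross, _ = self.cross_attn_f2s(f, s, s)
        s_cross, _ = self.cross_attn_s2f(s, f, f)
        # Pool over tokens and concatenate the two modality representations.
        fused = torch.cat([f_cross.mean(dim=1), s_cross.mean(dim=1)], dim=-1)
        return self.classifier(fused)


# Usage example with random tensors standing in for encoded fMRI/sMRI features.
model = CrossAttentiveFusion()
fmri = torch.randn(8, 50, 128)   # e.g., 50 functional-network tokens per subject
smri = torch.randn(8, 90, 128)   # e.g., 90 region-of-interest tokens per subject
logits = model(fmri, smri)       # shape: (8, 2)
```

The key design choice mirrored here is that cross-attention uses one modality's features as queries and the other's as keys and values, so inter-modal dependencies are learned explicitly rather than approximated by simple feature concatenation.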
