
On self-supervised multi-modal representation learning: An application to Alzheimer's disease (2012.13619v2)

Published 25 Dec 2020 in cs.LG

Abstract: Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). However, supervised training is prone to learning from spurious features (shortcut learning) impairing its value in the discovery process. Deep unsupervised and, recently, contrastive self-supervised approaches, not biased to classification, are better candidates for the task. Their multimodal options specifically offer additional regularization via modality interactions. In this paper, we introduce a way to exhaustively consider multimodal architectures for contrastive self-supervised fusion of fMRI and MRI of AD patients and controls. We show that this multimodal fusion results in representations that improve the results of the downstream classification for both modalities. We investigate the fused self-supervised features projected into the brain space and introduce a numerically stable way to do so.
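
To make the core idea concrete, below is a minimal sketch of cross-modal contrastive self-supervised training in the spirit described by the abstract: two modality-specific encoders (one for structural MRI, one for fMRI-derived features) are trained with a symmetric InfoNCE-style objective so that embeddings from the same subject agree across modalities. This is an illustrative assumption, not the paper's exact architecture; the encoder shapes, embedding size, and temperature are made-up placeholders.

```python
# Hypothetical sketch of cross-modal contrastive fusion (InfoNCE-style), not the
# authors' exact method: matching subject pairs across modalities are pulled
# together in a shared embedding space, mismatched pairs are pushed apart.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Toy MLP encoder standing in for a convolutional brain-imaging encoder."""

    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalized embeddings so the dot product is a cosine similarity.
        return F.normalize(self.net(x), dim=-1)


def cross_modal_infonce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE: row i of each modality is the positive for row i of the other."""
    logits = z_a @ z_b.t() / tau                      # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Dummy batch: 16 subjects with flattened sMRI and fMRI feature vectors (made-up sizes).
    smri = torch.randn(16, 512)
    fmri = torch.randn(16, 1024)

    enc_smri = ModalityEncoder(512)
    enc_fmri = ModalityEncoder(1024)

    loss = cross_modal_infonce(enc_smri(smri), enc_fmri(fmri))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

In a setup like this, the downstream AD-vs-control classifier would be trained on the frozen (or fine-tuned) encoder outputs, which is where the abstract reports gains for both modalities.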

Authors (8)
  1. Alex Fedorov (15 papers)
  2. Lei Wu (319 papers)
  3. Tristan Sylvain (20 papers)
  4. Margaux Luck (12 papers)
  5. Thomas P. DeRamus (4 papers)
  6. Dmitry Bleklov (3 papers)
  7. Sergey M. Plis (20 papers)
  8. Vince D. Calhoun (61 papers)
Citations (13)