
Deep Learning-Based Prediction of PET Amyloid Status Using Multi-Contrast MRI (2411.12061v2)

Published 18 Nov 2024 in eess.IV

Abstract: Identifying amyloid-beta positive patients is crucial for determining eligibility for Alzheimer's disease (AD) clinical trials and new disease-modifying treatments, but currently requires PET or CSF sampling. Previous MRI-based deep learning models for predicting amyloid positivity, using only T1w sequences, have shown moderate performance. We trained deep learning models to predict amyloid PET positivity and evaluated whether multi-contrast inputs improve performance. A total of 4,058 exams with multi-contrast MRI and PET-based quantitative amyloid deposition were obtained from three public datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Open Access Series of Imaging Studies 3 (OASIS3), and the Anti-Amyloid Treatment in Asymptomatic Alzheimer's Disease (A4). Two separate EfficientNet models were trained for amyloid positivity prediction: one with only T1w images and the other with both T1w and T2-FLAIR images as network inputs. The area under the curve (AUC), accuracy, sensitivity, and specificity were determined using an internal held-out test set. The trained models were further evaluated using an external test set. In the held-out test sets, the T1w and T1w+T2FLAIR models demonstrated AUCs of 0.62 (95% CI: 0.60, 0.64) and 0.67 (95% CI: 0.64, 0.70) (p = 0.006); accuracies were 61% (95% CI: 60%, 63%) and 64% (95% CI: 62%, 66%) (p = 0.008); sensitivities were 0.88 and 0.71; and specificities were 0.23 and 0.53, respectively. The trained models showed similar performance in the external test set. Performance of the current model on both test sets exceeded that of the publicly available model. In conclusion, the use of multi-contrast MRI, specifically incorporating T2-FLAIR in addition to T1w images, significantly improved the predictive accuracy of PET-determined amyloid status from MRI scans using a deep learning approach.

Summary

  • The paper demonstrates that combining T1w and T2-FLAIR images in a 3D EfficientNet-B3 model significantly improves amyloid prediction (AUC from 0.62 to 0.67).
  • The study validates its approach using diverse datasets from ADNI, OASIS3, A4, and Stanford, enhancing the robustness of its findings.
  • The research highlights the potential of non-invasive, cost-effective multi-contrast MRI for enhancing Alzheimer’s screening and clinical trial recruitment.

Deep Learning-Based Prediction of PET Amyloid Status Using Multi-Contrast MRI

The paper "Deep Learning-Based Prediction of PET Amyloid Status Using Multi-Contrast MRI" presents a study aimed at improving the prediction of amyloid-beta (Aβ) positivity from MRI images using deep learning models. Given the limitations of amyloid PET and CSF sampling for diagnosing Alzheimer's disease, there is a need for more accessible diagnostic tools. Leveraging advances in deep learning and MRI technologies, this work seeks to enhance preclinical identification of Alzheimer's by incorporating multi-contrast MRI data, particularly T1-weighted (T1w) and T2-FLAIR (Fluid-Attenuated Inversion Recovery) images, as inputs for deep learning models.

Methodology

The study utilized publicly available datasets—ADNI, OASIS3, and A4—as well as an independent dataset from Stanford University. The 4,058 included exams span 2010 to 2023, covering a range of imaging technologies and acquisition protocols. Amyloid positivity was determined using predefined centiloid thresholds specific to each dataset, allowing standardization across tracer-specific PET data.
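The per-dataset thresholding step can be sketched as follows. This is a minimal illustration only: the threshold values and dataset keys below are placeholders, not the actual centiloid cutoffs used in the paper.

```python
# Illustrative sketch of dataset-specific centiloid thresholding for
# amyloid positivity. Threshold values are hypothetical placeholders.
THRESHOLDS = {"ADNI": 20.0, "OASIS3": 20.0, "A4": 20.0}

def amyloid_positive(centiloid: float, dataset: str) -> bool:
    """Label an exam amyloid-positive if its centiloid value meets
    the threshold defined for its source dataset."""
    return centiloid >= THRESHOLDS[dataset]
```

Keeping the threshold per dataset lets tracer-specific PET measurements map onto a single binary label for training.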

The network architecture adopted for this paper was a 3D EfficientNet-B3 model, known for balancing network efficiency and performance through its ability to scale depth, width, and resolution. Two versions of the model were designed and compared: one using T1w images alone and another integrating both T1w and T2-FLAIR images. The performance was evaluated using AUC, accuracy, sensitivity, and specificity as the primary metrics.
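The four reported metrics can be computed from predicted scores and ground-truth labels as sketched below, in pure Python. This is not the paper's evaluation code; the rank-sum AUC formulation and the fixed 0.5 decision threshold are illustrative assumptions.

```python
# Minimal sketch of the evaluation metrics named in the paper:
# AUC (rank-sum / Mann-Whitney formulation), accuracy, sensitivity,
# and specificity at an assumed 0.5 probability threshold.

def evaluate(y_true, y_score, threshold=0.5):
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    # AUC = P(score of a positive > score of a negative); ties count 0.5
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for p, y in zip(y_pred, y_true) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(y_pred, y_true) if p == 0 and y == 0)
    return {
        "auc": auc,
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / len(pos),  # true-positive rate
        "specificity": tn / len(neg),  # true-negative rate
    }
```

The sensitivity/specificity trade-off reported in the abstract (0.88/0.23 for T1w alone vs. 0.71/0.53 with T2-FLAIR added) depends on exactly such a thresholding step.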

Results

The analysis demonstrated clear improvements in prediction performance when T2-FLAIR was added to T1w images. The combined model achieved an AUC of 0.67 and accuracy of 64% in the internal held-out test set, outperforming the T1w-only model (AUC of 0.62, accuracy of 61%). This enhancement was consistent across an external test set, where the T1w+T2-FLAIR approach also exceeded the T1w-only model's performance. DeLong’s and McNemar’s tests confirmed the statistical significance of these improvements.
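For the accuracy comparison, McNemar's test operates on the discordant pairs: exams classified correctly by one model but not the other. A minimal sketch of the exact two-sided version is below (DeLong's test, used for comparing the AUCs, is omitted); this is an illustration of the standard test, not the authors' code.

```python
from math import comb

# Exact two-sided McNemar test for paired classifiers on the same exams,
# e.g. T1w-only vs. T1w+T2-FLAIR.
#   b = exams only model A classified correctly
#   c = exams only model B classified correctly
def mcnemar_exact(b: int, c: int) -> float:
    n = b + c
    k = min(b, c)
    # Under the null, discordant pairs split 50/50; sum the binomial tail.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)  # clip, since the center term can be double-counted
```

A strongly lopsided discordant count (e.g. one model uniquely correct far more often) yields a small p-value, consistent with the significant accuracy difference (p = 0.008) reported in the paper.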

The paper underscores the benefit of multi-contrast data, particularly given the moderate prediction levels observed with T1w alone. The effective visualization of white matter hyperintensities (WMH) on T2-FLAIR, which are linked to amyloid deposition, likely contributed to these findings.

Implications and Future Work

This work suggests that integrating volumetric multi-contrast MRI data can offer a non-invasive, cost-effective alternative for Alzheimer's screening, potentially supporting clinical trial recruitment and identification of at-risk individuals. Future research might explore additional MRI contrasts or sequences to further improve prediction accuracy. Moreover, investigating how clinical and demographic data could be optimally combined with imaging data will be valuable, potentially elucidating the biological underpinnings of MRI-detectable amyloid-related changes.

In conclusion, while there is potential for improvement in prediction performance, this work highlights the role of advanced imaging techniques and AI in enhancing Alzheimer's disease diagnosis and may inform future multi-modal diagnostic frameworks.
