Validation of interpretability findings in ASD classification models
Determine whether interpretability outputs from deep learning classifiers trained on resting-state fMRI functional connectivity data for Autism Spectrum Disorder (ASD) classification reflect genuine ASD-related neurobiological characteristics rather than dataset-specific patterns, by establishing validation procedures that test the correspondence between attributed features and known ASD mechanisms.
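One way such a validation procedure could be operationalised is an attribution-stability check: if attributed connectivity features reflect genuine ASD-related signal, the same features should be highlighted by models trained on independent data splits (e.g. different acquisition sites), whereas dataset-specific patterns should not replicate. The sketch below is purely illustrative and not the paper's method: it uses synthetic "connectivity edge" features, a logistic-regression classifier, and scikit-learn's `permutation_importance` as a stand-in for a deep model's attribution method; all edge indices and site simulations are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Illustrative sketch of a cross-site attribution-stability check.
# Edge indices, site simulation, and the attribution method are assumptions,
# not details taken from the cited paper.

rng = np.random.default_rng(0)
N_EDGES = 60          # flattened functional-connectivity features (toy)
SIGNAL = [3, 7, 11]   # edges carrying the "true" diagnostic signal in this toy data

def simulate_site(n, confound_edge):
    """Toy site: labels driven by SIGNAL edges plus one site-specific confound."""
    X = rng.normal(size=(n, N_EDGES))
    logits = X[:, SIGNAL].sum(axis=1) + 1.5 * X[:, confound_edge]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def top_k_attributions(X, y, k=6):
    """Train a classifier and return the indices of its k most important features."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
    return set(np.argsort(imp.importances_mean)[-k:])

# Each site has a different confounding edge; only SIGNAL edges should replicate.
top_a = top_k_attributions(*simulate_site(400, confound_edge=20))
top_b = top_k_attributions(*simulate_site(400, confound_edge=41))
stable = top_a & top_b  # features attributed consistently across both sites

print("site A top edges:", sorted(top_a))
print("site B top edges:", sorted(top_b))
print("stable across sites:", sorted(stable))
```

In this toy setup, the site-specific confound edge ranks highly at one site but drops out of the cross-site intersection, while the shared signal edges persist; the same logic transfers to comparing saliency or attribution maps from deep classifiers trained on different ABIDE-style site partitions.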
References
Moreover, even when studies attempt to incorporate interpretability, they often fail to validate their results, leaving uncertainty about whether the model is truly learning characteristics of Autism or merely recognising patterns specific to the dataset.
— Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data
(2409.15374 - Vidya et al., 19 Sep 2024) in Section 2.3 Feature analysis and interpretability in ASD research