Validation of interpretability findings in ASD classification models

Determine whether interpretability outputs from deep learning classifiers trained on resting-state fMRI functional connectivity data for Autism Spectrum Disorder (ASD) classification reflect genuine ASD-related neurobiological characteristics rather than dataset-specific patterns, by establishing validation procedures that can verify the correspondence between attributed features and ASD mechanisms.

Background

The paper highlights a gap in ASD research: interpretability methods are underused for functional connectivity data, and when used, their findings are rarely validated. Without validation, it is uncertain whether the highlighted features capture ASD-specific neurobiology or merely reflect idiosyncrasies of the dataset.

The authors explicitly state that this lack of validation leaves uncertainty about what models are actually learning, motivating the open problem of developing and applying robust validation frameworks for interpretability in ASD neuroimaging.
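
One concrete form such a validation step could take is a stability check: if attributed features reflect ASD neurobiology rather than dataset idiosyncrasies, they should rank functional connectivity edges similarly across independent data sources. The sketch below is illustrative only and is not the paper's method: the model architecture, feature dimensions, and site splits are hypothetical, the data are simulated, and gradient-based saliency stands in for whatever attribution method a study actually uses.

import numpy as np
import torch
import torch.nn as nn

N_FEATURES = 19900  # upper triangle of a hypothetical 200x200 FC matrix

class FCClassifier(nn.Module):
    """Toy classifier over vectorized functional connectivity features."""
    def __init__(self, n_features=N_FEATURES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):
        return self.net(x)

def edge_attributions(model, x):
    # Mean absolute gradient of the ASD logit (class index 1) per FC edge.
    x = x.clone().requires_grad_(True)
    model(x)[:, 1].sum().backward()
    return x.grad.abs().mean(dim=0)

# Simulated FC feature vectors for two acquisition sites (hypothetical).
rng = np.random.default_rng(0)
site_a = torch.tensor(rng.standard_normal((50, N_FEATURES)), dtype=torch.float32)
site_b = torch.tensor(rng.standard_normal((50, N_FEATURES)), dtype=torch.float32)

model = FCClassifier().eval()  # in practice, a classifier trained on real data

attr_a = edge_attributions(model, site_a)
attr_b = edge_attributions(model, site_b)

# Stability check: Spearman correlation of edge attributions across sites,
# computed as the Pearson correlation of the attribution ranks. A low value
# would suggest the model highlights site- or dataset-specific patterns.
ranks_a = attr_a.argsort().argsort().numpy().astype(float)
ranks_b = attr_b.argsort().argsort().numpy().astype(float)
rho = np.corrcoef(ranks_a, ranks_b)[0, 1]
print(f"Cross-site attribution rank correlation: {rho:.3f}")

Cross-site or cross-dataset agreement is only one axis of validation; comparison of the highlighted regions against findings from the ASD neuroimaging literature would be a complementary check on whether attributions correspond to known ASD mechanisms.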

References

"Moreover, even when studies attempt to incorporate interpretability, they often fail to validate their results, leaving uncertainty about whether the model is truly learning characteristics of Autism or merely recognising patterns specific to the dataset."

Vidya et al., "Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data," arXiv:2409.15374, 19 Sep 2024, Section 2.3 (Feature analysis and interpretability in ASD research).