ProtoEEGNet: An Interpretable Approach for Detecting Interictal Epileptiform Discharges (2312.10056v1)
Abstract: In electroencephalogram (EEG) recordings, the presence of interictal epileptiform discharges (IEDs) serves as a critical biomarker for seizures or seizure-like events. Detecting IEDs can be difficult; even highly trained experts disagree on the same sample. As a result, specialists have turned to machine-learning models for assistance. However, many existing models are black boxes and do not provide human-interpretable reasoning for their decisions. In high-stakes medical applications, interpretable models are critical so that experts can validate a model's reasoning before making important diagnoses. We introduce ProtoEEGNet, a model that achieves state-of-the-art accuracy for IED detection while also providing an interpretable justification for its classifications. Specifically, it can reason that one EEG looks similar to another "prototypical" EEG that is known to contain an IED. ProtoEEGNet can therefore help medical professionals detect IEDs effectively while maintaining a transparent decision-making process.
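To make the "this looks like that" reasoning concrete, below is a minimal sketch of a ProtoPNet-style prototype layer of the kind ProtoEEGNet builds on. The backbone, prototype count, feature dimensionality, input layout, and the log-based similarity are assumptions taken from the original ProtoPNet formulation, not confirmed details of ProtoEEGNet: each learned prototype is compared against every patch of the input's feature map, the closest match is kept, and the resulting similarity scores feed a linear classifier.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Hypothetical ProtoPNet-style prototype scoring layer.

    Shapes and the log-similarity follow the original ProtoPNet;
    ProtoEEGNet's exact configuration may differ.
    """
    def __init__(self, num_prototypes=20, channels=128, num_classes=2):
        super().__init__()
        # Each prototype is a point in the backbone's feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels))
        # Class logits are a linear combination of prototype similarities,
        # so every prediction decomposes into "how much each prototype fired".
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features):
        # features: (B, C, H, W) feature map from a CNN backbone.
        B, C, H, W = features.shape
        patches = features.flatten(2).transpose(1, 2)          # (B, H*W, C)
        # Squared L2 distance from every feature patch to every prototype.
        dists = torch.cdist(patches, self.prototypes.unsqueeze(0)) ** 2
        # Keep the closest patch per prototype (global min-pool over space).
        min_dists = dists.min(dim=1).values                    # (B, P)
        # Map small distances to large, bounded similarity scores.
        sims = torch.log((min_dists + 1) / (min_dists + 1e-4))
        return self.classifier(sims), sims

# Usage with stand-in components (the conv backbone and the
# (batch, 1, EEG channels, time) input layout are illustrative only):
backbone = nn.Conv2d(1, 128, kernel_size=3, padding=1)
x = torch.randn(4, 1, 19, 256)
logits, similarities = PrototypeLayer()(backbone(x))
```

Because the classifier acts only on the similarity vector, each decision can be traced back to the specific prototypical EEG segments that activated, which is the transparency property the abstract claims.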