
Toward the application of XAI methods in EEG-based systems (2210.06554v4)

Published 12 Oct 2022 in cs.LG, cs.AI, and eess.SP

Abstract: An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interfaces (BCI). The non-stationarity of EEG signals can lead to poor generalisation in BCI classification systems used across different sessions, even when the sessions come from the same subject. In this paper, we start from the hypothesis that the Dataset Shift problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the characteristics of the input that are relevant to the classification task. In particular, we present an experimental analysis of the explanations produced by several XAI methods on an ML system trained on a typical EEG dataset for emotion recognition. Results show that many of the relevant components found by XAI methods are shared across sessions and can be used to build a system that generalises better. However, the relevant components of the input signal also appear to be highly dependent on the input itself.
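The core idea of the abstract (use an attribution method to find the input components that matter for classification, then check how many of those components are shared across sessions) can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual pipeline: the linear scorer, the gradient-times-input attribution (a simple stand-in for methods such as LRP or Integrated Gradients), and all variable names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a linear "classifier" over EEG feature vectors.
# For a linear scorer f(x) = w . x + b, the gradient-times-input attribution
# reduces to the elementwise product w * x.
rng = np.random.default_rng(0)
n_features = 16
w = rng.normal(size=n_features)  # assumed pre-trained weights

def relevance(x, w):
    """Gradient-times-input attribution for a linear scorer."""
    return w * x

def top_k_components(x, w, k=4):
    """Indices of the k input components with largest absolute relevance."""
    return set(np.argsort(-np.abs(relevance(x, w)))[:k])

# Two inputs standing in for recordings from different sessions.
x_session1 = rng.normal(size=n_features)
x_session2 = rng.normal(size=n_features)

# Components flagged as relevant in both sessions: candidates for a
# session-robust feature set, per the paper's hypothesis.
shared = top_k_components(x_session1, w) & top_k_components(x_session2, w)
print("components shared across sessions:", sorted(shared))
```

In this toy version, a large overlap in `shared` would correspond to the paper's finding that many relevant components recur across sessions, while a small overlap would reflect the reported input-dependence of the explanations.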
