
Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification (2112.07208v1)

Published 14 Dec 2021 in cs.NE

Abstract: Deep learning frameworks have become increasingly popular in brain-computer interface (BCI) research thanks to their outstanding performance. In terms of the classification model alone, however, they are treated as black boxes: they provide no information about what led them to a particular decision. In other words, we cannot determine whether high performance stems from neuro-physiological factors or simply from noise. Because of this drawback, it is difficult to ensure a level of reliability commensurate with their high performance. In this study, we propose an explainable deep learning model for BCI. Specifically, we aim to classify EEG signals obtained from a motor-imagery (MI) task. In addition, we apply layer-wise relevance propagation (LRP) to the model to interpret why it produced a given classification output. We visualize the LRP output as topographic heatmaps to verify the neuro-physiological factors. Furthermore, we classify EEG in a subject-independent manner to learn robust, generalized EEG features by avoiding subject dependency; this approach also avoids the expense of building training data for each subject. With our proposed model, we obtained generalized heatmap patterns across all subjects. We therefore conclude that the proposed model provides a neuro-physiologically reliable interpretation.
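The abstract's core explainability tool is layer-wise relevance propagation (LRP), which redistributes a network's output score backward through each layer so that every input feature receives a relevance value. As a rough illustration (not the authors' implementation, which operates on a convolutional EEG model), the widely used LRP epsilon rule for a single dense layer can be sketched as follows; the layer sizes and data here are toy placeholders:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of one
    dense layer (z = a @ W + b) using the LRP epsilon rule:
    R_i = sum_j a_i * W_ij / (z_j + eps*sign(z_j)) * R_j."""
    z = a @ W + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # stabilized per-output ratio
    return a * (s @ W.T)                # relevance assigned to each input

# toy layer: 4 hypothetical "EEG features" -> 2 class scores
rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 2))
b = np.zeros(2)

# start the backward pass from the predicted class's score only
z = a @ W + b
R_out = np.where(np.arange(2) == z.argmax(), z, 0.0)

R_in = lrp_epsilon(a, W, b, R_out)
```

Because the epsilon rule (with zero bias and a small stabilizer) approximately conserves relevance, `R_in.sum()` stays close to `R_out.sum()`; in the paper, per-channel relevance values like `R_in` are what get rendered as topographic scalp heatmaps.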

Citations (6)