Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification

14 Dec 2021 · Ji-Seon Bang, Seong-Whan Lee

Deep learning frameworks have become increasingly popular in brain-computer interface (BCI) research thanks to their outstanding performance. However, as classification models they are treated as black boxes: they provide no information about what led them to a particular decision. In other words, we cannot determine whether high performance stems from neurophysiological factors or simply from noise. Because of this limitation, it is difficult to ensure reliability commensurate with their high performance. In this study, we propose an explainable deep learning model for BCI. Specifically, we aim to classify EEG signals obtained from a motor imagery (MI) task. In addition, we applied layer-wise relevance propagation (LRP) to the model to interpret why it produced a given classification output. We visualized the LRP output as topographic heatmaps to verify that the model relies on neurophysiological factors. Furthermore, we classified EEG in a subject-independent manner to learn robust, generalized EEG features and avoid subject dependency; this approach also avoids the expense of building training data for each subject. With the proposed model, we obtained heatmap patterns that generalize across all subjects. We therefore conclude that the proposed model provides a neurophysiologically reliable interpretation.
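The abstract does not include code, so the following is a minimal sketch of the two core ideas, not the authors' implementation. The first block illustrates the standard LRP-epsilon redistribution rule for a single linear layer (the function name `lrp_epsilon`, the NumPy setting, and the shapes are illustrative assumptions; the bias term is omitted for brevity). Applying such steps layer by layer from the output back to the input yields per-channel relevance scores that can be rendered as a scalp topography.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out to one linear layer's inputs
    via the LRP-epsilon rule (illustrative sketch, not the paper's code).

    a      : (n_in,)        input activations of the layer
    W      : (n_in, n_out)  weight matrix
    R_out  : (n_out,)       relevance assigned to the layer's outputs
    returns: (n_in,)        relevance assigned to the layer's inputs
    """
    z = a @ W                                   # pre-activations z_j = sum_i a_i * w_ij
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer prevents division by zero
    s = R_out / z                               # relevance per unit of pre-activation
    return a * (W @ s)                          # R_i = a_i * sum_j w_ij * s_j
```

The subject-independent evaluation described above corresponds to a leave-one-subject-out protocol: train on all subjects but one, test on the held-out subject. Below is a hedged, self-contained sketch using scikit-learn's `LeaveOneGroupOut`, with synthetic toy data and a logistic-regression stand-in for the paper's CNN (all sizes and names are assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 5, 20, 64   # toy sizes, not the paper's
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))                      # synthetic binary MI labels
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # Stand-in classifier; the paper uses a CNN instead.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    held_out = subjects[test_idx][0]
    print(f"subject {held_out}: accuracy = {clf.score(X[test_idx], y[test_idx]):.2f}")
```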
