
Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016 (2004.06286v4)

Published 13 Apr 2020 in cs.HC, cs.LG, and eess.SP

Abstract: A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. The most common non-invasive BCI modality, electroencephalogram (EEG), is sensitive to noise/artifact and suffers between-subject/within-subject non-stationarity. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications -- motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks -- are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.

Authors (3)
  1. Dongrui Wu (94 papers)
  2. Yifan Xu (92 papers)
  3. Bao-Liang Lu (26 papers)
Citations (224)

Summary

Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016

The paper "Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016" by Wu, Xu, and Lu offers a comprehensive survey on the application of transfer learning (TL) in EEG-based brain-computer interfaces (BCIs), covering recent advancements post-2016. This extensive review underlines the necessity of TL to mitigate challenges posed by EEG signal variability, non-stationarity, and the associated high calibration cost for individual users.

A wide spectrum of EEG-based BCI paradigms is assessed in this paper, encompassing motor imagery (MI), event-related potentials (ERP), steady-state visual evoked potentials (SSVEP), affective BCIs (aBCIs), regression tasks, and emerging adversarial attack studies. The authors categorize TL approaches into cross-subject/session, cross-device, and cross-task scenarios, providing detailed insights into each.

Key Findings and Techniques

  1. Cross-Subject/Session Transfer: Cross-subject and cross-session transfer remains the predominant focus, with numerous methods proposed for MI-based BCIs. Riemannian geometry-based approaches such as Riemannian alignment (RA), along with computationally cheaper Euclidean counterparts such as Euclidean alignment (EA), re-center EEG covariance matrices across subjects or sessions so that data from different domains become more comparable, which facilitates model generalization (a minimal EA sketch is given after this list). In addition, deep learning models, including CNNs tailored to EEG signal features, are fine-tuned with minimal target data to improve cross-subject performance.
  2. Cross-Device and Cross-Task Transfer: While cross-device TL is gaining research attention, cross-task TL in EEG-based BCIs remains notably less explored. Approaches such as label alignment (LA) have been developed to enable transfer even when the source and target datasets contain different MI tasks.
  3. Affective BCIs and Regression Problems: The review also covers TL in aBCIs and regression-oriented BCI tasks, a promising but underexplored area. For aBCIs, differential entropy features and deep learning frameworks such as domain adversarial neural networks are used to mitigate cross-subject variability (a gradient-reversal sketch follows this list). For regression tasks, driver drowsiness estimation illustrates TL frameworks such as online weighted adaptation regularization for regression (OwARR).
  4. Adversarial Attacks: Adversarial examples pose new security challenges to EEG-based BCIs. The paper discusses the transferability of adversarial perturbations across different models, which constitutes a practical threat and motivates work on robust, attack-resistant BCI systems (see the FGSM sketch after this list).
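
To make the alignment idea concrete, below is a minimal sketch of the core Euclidean alignment step as commonly implemented: each domain's trials are whitened by the inverse square root of that domain's mean spatial covariance, so every domain ends up with an identity mean covariance. The function name and the NumPy/SciPy implementation are illustrative choices, not code from the reviewed papers.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Euclidean alignment (EA) sketch.

    trials: array of shape (n_trials, n_channels, n_samples) from ONE
    subject/session. After alignment, the arithmetic mean of the trials'
    spatial covariance matrices is the identity, so trials from
    different subjects/sessions become directly comparable.
    """
    # Per-trial spatial covariance matrices
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    # Euclidean (arithmetic) mean covariance for this domain
    ref = covs.mean(axis=0)
    # Whitening transform: inverse matrix square root of the reference
    ref_inv_sqrt = fractional_matrix_power(ref, -0.5)
    # Apply the same transform to every trial in the domain
    return np.stack([ref_inv_sqrt @ x for x in trials])

# Usage sketch: align each subject separately, then pool the data.
# x_src_aligned = euclidean_alignment(x_src)   # source subject
# x_tgt_aligned = euclidean_alignment(x_tgt)   # new (target) subject
```

Riemannian alignment follows a similar recipe but uses the Riemannian mean of the covariance matrices as the reference, which is more faithful to the manifold structure of covariance matrices but costlier to compute.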
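
For the domain adversarial neural networks mentioned under affective BCIs, the central mechanism is a gradient reversal layer: the feature extractor is trained to produce features from which a discriminator cannot tell which subject or domain they came from. The PyTorch sketch below shows that layer and a small domain discriminator; class names and layer sizes are illustrative, not taken from any specific paper in the review.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in
    the backward pass, so the feature extractor learns to fool the
    domain discriminator while the discriminator learns to separate
    domains."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts which subject/domain an EEG feature vector came from."""
    def __init__(self, feat_dim, n_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_domains),
        )

    def forward(self, features):
        # Reverse gradients flowing back into the feature extractor
        return self.net(GradReverse.apply(features, self.lambd))
```

During training, the task loss (e.g., emotion classification) and the domain loss are minimized jointly; the reversed gradient pushes the shared feature extractor toward subject-invariant representations.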
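
On the adversarial side, the transferability concern can be illustrated with the fast gradient sign method (FGSM): a perturbation crafted on one (surrogate) model often remains effective against a separately trained target model. The snippet below is a generic FGSM sketch rather than the specific attack pipelines evaluated in the reviewed papers; model and variable names are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM: move each input in the direction that increases
    the classification loss, within an L-infinity budget of eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Transferability check (sketch): craft on a surrogate model, then
# measure how often the perturbed trials also fool a target model.
# x_adv = fgsm_perturb(surrogate_model, x, y, eps=0.05)
# transfer_rate = (target_model(x_adv).argmax(dim=1) != y).float().mean()
```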

Implications and Future Directions

The paper emphasizes TL's crucial role in making BCIs practical for broader applications by reducing calibration demands and enabling better cross-domain generalization. The surveyed work paves the way for more integrated and resilient EEG-based BCI systems. Future research could expand underdeveloped areas such as cross-task TL and comprehensive defenses against adversarial attacks. Integrating TL with other machine learning paradigms, such as meta-learning, could also yield significant advances in adaptive BCI technologies.