
Inter-subject Contrastive Learning for Subject Adaptive EEG-based Visual Recognition (2202.02901v1)

Published 7 Feb 2022 in eess.SP, cs.AI, and cs.CV

Abstract: This paper tackles the problem of subject adaptive EEG-based visual recognition. Its goal is to accurately predict the categories of visual stimuli based on EEG signals with only a handful of samples for the target subject during training. The key challenge is how to appropriately transfer the knowledge obtained from abundant data of source subjects to the subject of interest. To this end, we introduce a novel method that allows for learning subject-independent representation by increasing the similarity of features sharing the same class but coming from different subjects. With the dedicated sampling principle, our model effectively captures the common knowledge shared across different subjects, thereby achieving promising performance for the target subject even under harsh problem settings with limited data. Specifically, on the EEG-ImageNet40 benchmark, our model records the top-1 / top-3 test accuracy of 72.6% / 91.6% when using only five EEG samples per class for the target subject. Our code is available at https://github.com/DeepBCI/Deep-BCI/tree/master/1_Intelligent_BCI/Inter_Subject_Contrastive_Learning_for_EEG.

Citations (10)

Summary

  • The paper introduces a novel inter-subject contrastive loss that unifies EEG feature learning across different subjects.
  • It employs a dedicated sampling strategy to effectively distinguish positive and negative pairs, significantly improving 5-shot recognition performance.
  • The method achieves 72.6% top-1 accuracy on EEG-ImageNet40, enabling robust subject adaptation with minimal target data for BCI applications.

Inter-subject Contrastive Learning for Subject Adaptive EEG-based Visual Recognition

The paper under examination presents a method for enhancing subject-adaptive visual recognition using EEG signals by integrating an inter-subject contrastive learning approach. The objective is to accurately classify visual stimuli with minimal data samples from a target subject, leveraging abundant data from source subjects.

Key Contributions

  1. Subject-independent Representation: The proposed method aims to develop a robust learning mechanism that captures features invariant to individual subjects. By harmonizing features of the same class from different subjects, the approach facilitates better transferability of knowledge across subjects.
  2. Inter-subject Contrastive Loss: The paper introduces a novel loss function that encourages subject-independent feature learning by increasing the similarity of same-class features drawn from different subjects. This significantly improves the model's adaptability to the target subject compared with existing methods.
  3. Dedicated Sampling Strategy: Because conventional contrastive sampling is ill-suited to the cross-subject setting, the authors design a dedicated sampling principle that properly distinguishes positive from negative pairs, further improving the quality of the subject-independent features.
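
The inter-subject contrastive objective described above can be sketched as follows. This is an illustrative InfoNCE-style implementation, not the authors' exact formulation: it assumes positives are same-class features from a different subject and negatives are features of any other class, and the function name and temperature value are hypothetical choices for the sketch.

```python
import numpy as np

def inter_subject_contrastive_loss(features, labels, subjects, temperature=0.1):
    """Illustrative inter-subject contrastive loss (InfoNCE-style sketch).

    For each anchor, positives are features with the SAME class label but
    from a DIFFERENT subject; features of other classes act as negatives.
    Assumed details (pairing rule, temperature) are not specified in the
    summary and are hypothetical here.
    """
    # L2-normalize so dot products are cosine similarities
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature

    losses = []
    for i in range(len(labels)):
        # positive mask: same class, different subject
        pos = (labels == labels[i]) & (subjects != subjects[i])
        # negative mask: different class (any subject)
        neg = labels != labels[i]
        if not pos.any():
            continue  # anchor has no cross-subject positive
        for j in np.where(pos)[0]:
            # -log softmax of the positive logit against the negatives
            logits = np.concatenate(([sim[i, j]], sim[i, neg]))
            losses.append(-logits[0] + np.log(np.exp(logits).sum()))
    return float(np.mean(losses))
```

Pulling same-class features from different subjects toward each other (while pushing other classes away) is what drives the representation toward subject invariance: the loss is small only when the class structure, not the subject identity, dominates the feature geometry.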

Experimental Results

The method demonstrates strong performance on the EEG-ImageNet40 benchmark, achieving 72.6% top-1 accuracy in the constrained 5-shot setting (five EEG samples per class for the target subject). This improves over preceding models, particularly in data-scarce conditions. Experiments across various settings show that the proposed methodology substantially boosts performance even when target-subject data is minimal.

Implications and Future Directions

The proposed method offers a practical solution for visual recognition tasks involving EEG signals, crucial for real-world applications where collecting large-scale subject-specific data is impractical. This research can extend to broader applications in brain-computer interfaces (BCIs), potentially improving usability in scenarios of personal assistive technologies and cognitive neural prosthetics.

Future research could explore optimizing the model's adaptability to rapidly accommodate new, unseen subjects without retraining. Additionally, expanding this work to encompass diverse neural data types or multimodal integration may unlock further applications within the cognitive computing and BCI landscapes.

This work represents a significant technical contribution to EEG-based visual recognition, presenting a path that combines deep learning with novel loss functions and sampling strategies for effective domain adaptation in neural representations. The ongoing advancements herald promising opportunities for more adaptive and generalizable BCI systems.