
PR-PL: A Novel Transfer Learning Framework with Prototypical Representation based Pairwise Learning for EEG-Based Emotion Recognition (2202.06509v3)

Published 14 Feb 2022 in cs.HC

Abstract: Affective brain-computer interfaces based on electroencephalography (EEG) are an important branch of affective computing. However, individual differences and noisy labels seriously limit the effectiveness and generalizability of EEG-based emotion recognition models. In this paper, we propose a novel transfer learning framework with Prototypical Representation based Pairwise Learning (PR-PL) that learns discriminative and generalized prototypical representations for revealing emotions across individuals, and formulates emotion recognition as pairwise learning to alleviate reliance on precise label information. Extensive experiments are conducted on two benchmark databases under four cross-validation evaluation protocols (cross-subject cross-session, cross-subject within-session, within-subject cross-session, and within-subject within-session). The experimental results demonstrate the superiority of the proposed PR-PL over state-of-the-art methods under all four evaluation protocols, showing the effectiveness and generalizability of PR-PL in dealing with the ambiguity of EEG responses in affective studies. The source code is available at https://github.com/KAZABANA/PR-PL.
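The two ideas the abstract names can be illustrated with a minimal NumPy sketch: class prototypes computed as mean embeddings, and a pairwise objective that only asks whether two samples share a label (rather than what the exact label is). The toy features, the soft prototype assignment, and the binary cross-entropy pairwise loss below are illustrative assumptions for exposition, not the paper's actual encoder or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for learned EEG feature embeddings: 3 emotion classes,
# 20 samples each (PR-PL would produce these with a neural encoder).
n_classes, n_per_class, dim = 3, 20, 8
labels = np.repeat(np.arange(n_classes), n_per_class)
features = rng.normal(size=(n_classes * n_per_class, dim)) + labels[:, None] * 3.0

# Prototypical representation: one prototype per class = mean embedding.
prototypes = np.stack([features[labels == c].mean(axis=0)
                       for c in range(n_classes)])

# Pairwise learning target: 1 if two samples share a label, else 0.
# This weak same/different supervision is what reduces reliance on
# precise per-sample labels.
pair_target = (labels[:, None] == labels[None, :]).astype(float)

# Soft assignment of each sample to prototypes (softmax over -distance),
# then the probability that two samples fall in the same class.
dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
scores = np.exp(-dists)
probs = scores / scores.sum(axis=1, keepdims=True)
pair_pred = probs @ probs.T

# Binary cross-entropy over all sample pairs: the pairwise objective.
eps = 1e-9
bce = -(pair_target * np.log(pair_pred + eps)
        + (1 - pair_target) * np.log(1 - pair_pred + eps)).mean()
print(f"pairwise BCE on toy data: {bce:.4f}")
```

With well-separated toy clusters the pairwise loss is small; in training, minimizing it would pull same-emotion samples toward a shared prototype without ever needing the pairwise target to encode which emotion that is.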

Authors (10)
  1. Rushuang Zhou (5 papers)
  2. Zhiguo Zhang (25 papers)
  3. Hong Fu (6 papers)
  4. Li Zhang (693 papers)
  5. Linling Li (6 papers)
  6. Gan Huang (11 papers)
  7. Yining Dong (7 papers)
  8. Fali Li (10 papers)
  9. Xin Yang (314 papers)
  10. Zhen Liang (31 papers)
Citations (6)
