Domain-Invariant Representation Learning from EEG with Private Encoders (2201.11613v2)

Published 27 Jan 2022 in cs.LG, cs.CV, and cs.HC

Abstract: Deep learning based electroencephalography (EEG) signal processing methods are known to suffer from poor test-time generalization due to changes in data distribution. This becomes a more challenging problem when privacy-preserving representation learning is of interest, such as in clinical settings. To that end, we propose a multi-source learning architecture where we extract domain-invariant representations from dataset-specific private encoders. Our model uses a maximum-mean-discrepancy (MMD) based domain alignment approach to impose domain invariance on the encoded representations, and it outperforms state-of-the-art approaches in EEG-based emotion classification. Furthermore, representations learned in our pipeline preserve domain privacy, as dataset-specific private encoding alleviates the need for conventional, centralized EEG-based deep neural network training with shared parameters.
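The core alignment idea in the abstract can be illustrated with a small sketch: MMD compares the distributions of features produced by different dataset-specific encoders, and a training loss that penalizes a large MMD pushes those feature distributions together. The snippet below is a minimal NumPy illustration, not the authors' implementation; the RBF kernel bandwidth `gamma` and the synthetic "encoded EEG features" are assumptions for demonstration only.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.01):
    # Pairwise RBF (Gaussian) kernel matrix between rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=0.01):
    # Biased estimate of the squared maximum mean discrepancy between
    # the sample distributions of x and y in the kernel's feature space.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
# Hypothetical feature batches from two dataset-specific private encoders.
z_a = rng.normal(0.0, 1.0, size=(128, 16))
z_b = rng.normal(0.0, 1.0, size=(128, 16))  # same distribution as z_a
z_c = rng.normal(3.0, 1.0, size=(128, 16))  # shifted distribution

print(mmd2(z_a, z_b))  # small: representations already aligned
print(mmd2(z_a, z_c))  # larger: distribution shift the loss would penalize
```

In a training pipeline, a term like `mmd2` would be added to the classification loss so that each private encoder maps its own dataset into a shared, domain-invariant feature space without the encoders sharing parameters or raw data.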

Authors (7)
  1. David Bethge
  2. Philipp Hallgarten
  3. Tobias Grosse-Puppendahl
  4. Mohamed Kari
  5. Ralf Mikut
  6. Albrecht Schmidt
  7. Ozan Özdenizci
Citations (23)
