
Stabilizing Subject Transfer in EEG Classification with Divergence Estimation (2310.08762v1)

Published 12 Oct 2023 in cs.LG, cs.AI, cs.HC, eess.SP, and stat.ML

Abstract: Classification models for electroencephalogram (EEG) data show a large decrease in performance when evaluated on unseen test subjects. We reduce this performance decrease using new regularization techniques during model training. We propose several graphical models to describe an EEG classification task. From each model, we identify statistical relationships that should hold true in an idealized training scenario (with infinite data and a globally-optimal model) but that may not hold in practice. We design regularization penalties to enforce these relationships in two stages. First, we identify suitable proxy quantities (divergences such as Mutual Information and Wasserstein-1) that can be used to measure statistical independence and dependence relationships. Second, we provide algorithms to efficiently estimate these quantities during training using secondary neural network models. We conduct extensive computational experiments using a large benchmark EEG dataset, comparing our proposed techniques with a baseline method that uses an adversarial classifier. We find our proposed methods significantly increase balanced accuracy on test subjects and decrease overfitting. The proposed methods exhibit a larger benefit over a greater range of hyperparameters than the baseline method, with only a small computational cost at training time. These benefits are largest when used for a fixed training period, though there is still a significant benefit for a subset of hyperparameters when our techniques are used in conjunction with early stopping regularization.
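To make the divergence-penalty idea concrete: the paper estimates divergences such as Wasserstein-1 with secondary neural network models during training. As a minimal illustration (not the paper's implementation), in one dimension the Wasserstein-1 distance between two equal-size empirical samples has a closed form, the mean absolute difference of their order statistics, which shows the kind of quantity such a penalty measures:

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples.

    In 1-D the optimal transport plan matches order statistics, so W1
    reduces to the mean absolute difference of the sorted samples.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("samples must have equal size in this simple form")
    return float(np.mean(np.abs(x - y)))

# Two samples whose distributions differ by a mean shift of 0.5:
# W1 between them should be close to that shift.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(0.5, 1.0, 10_000)
penalty = wasserstein1_1d(a, b)
```

In higher dimensions no such closed form exists, which is why the paper (following the Kantorovich-Rubinstein dual used in Wasserstein GANs, reference 20) trains a Lipschitz-constrained critic network to estimate the divergence and adds the estimate as a regularization term in the training loss.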

