
ESTformer: Transformer Utilizing Spatiotemporal Dependencies for Electroencephalogram Super-resolution

Published 3 Dec 2023 in eess.SP and cs.LG | (arXiv:2312.10052v2)

Abstract: Towards practical applications of Electroencephalography (EEG), lightweight acquisition devices have garnered significant attention. However, EEG channel selection methods are commonly data-sensitive and cannot establish a unified, sound paradigm for EEG acquisition devices. By reversing this perspective, we reformulated EEG applications as an EEG super-resolution (SR) problem, which, however, suffers from high computation costs, extra interpolation bias, and limited insight into spatiotemporal dependency modelling. To this end, we propose ESTformer, an EEG SR framework that utilises spatiotemporal dependencies based on the Transformer. ESTformer applies positional encoding methods and a multihead self-attention mechanism to the space and time dimensions, which can learn spatial structural correlations and temporal functional variations. With a fixed mask strategy, ESTformer adopts a mask token to upsample low-resolution (LR) EEG data, avoiding the disturbance introduced by mathematical interpolation methods. On this basis, we designed various Transformer blocks to construct a spatial interpolation module (SIM) and a temporal reconstruction module (TRM). Finally, ESTformer cascades the SIM and TRM to capture and model the spatiotemporal dependencies for EEG SR with fidelity. Extensive experimental results on two EEG datasets show the effectiveness of ESTformer against previous state-of-the-art methods, demonstrating the versatility of the Transformer for EEG SR tasks. The superiority of the SR data was verified in EEG-based person identification and emotion recognition tasks, achieving a 2% to 38% improvement over the LR data at different sampling scales.
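The abstract's pipeline (mask tokens placed at missing channels, spatial attention across channels, then temporal attention across time steps) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `estformer_sketch` and all parameter names are hypothetical, the attention is single-head with random weights rather than the paper's multihead Transformer blocks, and the SIM/TRM are each reduced to one attention layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product self-attention; x: (n_tokens, d).
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def estformer_sketch(lr_eeg, lr_index, n_hr_channels, mask_token, params):
    """Place observed LR channels at their HR montage positions, fill the
    missing channels with a shared mask token, then attend across channels
    (spatial interpolation) and across time steps (temporal reconstruction)."""
    x = np.tile(mask_token, (n_hr_channels, 1))       # (C_hr, T): all mask tokens
    x[lr_index] = lr_eeg                              # keep the observed channels
    x = x + params["pos_space"]                       # spatial positional encoding
    x = x + self_attention(x, *params["spatial"])     # SIM: tokens = channels
    y = x.T + params["pos_time"]                      # temporal positional encoding
    y = y + self_attention(y, *params["temporal"])    # TRM: tokens = time steps
    return y.T                                        # (C_hr, T) reconstructed EEG

# Toy run: 4 observed channels out of an 8-channel HR montage, 16 time samples.
C_HR, C_LR, T = 8, 4, 16
params = {
    "spatial":  [rng.standard_normal((T, T)) * 0.1 for _ in range(3)],
    "temporal": [rng.standard_normal((C_HR, C_HR)) * 0.1 for _ in range(3)],
    "pos_space": rng.standard_normal((C_HR, T)) * 0.01,
    "pos_time":  rng.standard_normal((T, C_HR)) * 0.01,
}
lr = rng.standard_normal((C_LR, T))
sr = estformer_sketch(lr, np.array([0, 2, 4, 6]), C_HR, np.zeros(T), params)
print(sr.shape)  # (8, 16)
```

The design choice being illustrated is the one the abstract emphasises: missing channels are initialised with a learned token rather than with interpolated values, so the network's attention layers, not a fixed interpolation formula, decide how observed channels inform the reconstruction.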
