Reconstructing Visual Stimulus Images from EEG Signals Based on Deep Visual Representation Model (2403.06532v1)

Published 11 Mar 2024 in eess.IV, cs.CV, and q-bio.NC

Abstract: Reconstructing visual stimulus images is a significant task in neural decoding, and to date most studies have used functional magnetic resonance imaging (fMRI) as the signal source. However, fMRI-based image reconstruction methods are difficult to apply widely because of the complexity and high cost of the acquisition equipment. Given the low cost and easy portability of electroencephalogram (EEG) acquisition equipment, we propose a novel image reconstruction method based on EEG signals. First, to ensure that visual stimulus images remain highly recognizable when presented in rapid succession, we build a visual stimulus image dataset and obtain a corresponding EEG dataset through a signal collection experiment. Second, we propose a deep visual representation model (DVRM), consisting of a primary encoder and a subordinate decoder, to reconstruct the visual stimuli. The encoder is designed around residual-in-residual dense blocks to learn the distribution characteristics linking EEG signals and visual stimulus images, while the decoder is a deep neural network that reconstructs the visual stimulus image from the learned deep visual representation. The DVRM can fit the deep, multiview visual features of humans in a natural state, making the reconstructed images more precise. Finally, we evaluate the quality of the images the DVRM generates on our EEG dataset. The results show that the DVRM performs well at learning deep visual representations from EEG signals and at generating reconstructed images that are realistic and closely resemble the originals.
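
The abstract describes the DVRM only at a high level. The sketch below is one plausible PyTorch rendering of that layout: a primary encoder built from residual-in-residual dense blocks (RRDBs) operating on multichannel EEG time series, followed by a subordinate decoder that maps the learned representation to an image. All channel counts, kernel sizes, the EEG input shape, and the latent dimension are illustrative assumptions, not values from the paper.

    # Minimal DVRM-style sketch (assumed layout, not the authors' code).
    import torch
    import torch.nn as nn

    class DenseBlock(nn.Module):
        """Five 1-D convolutions with dense connections and residual scaling."""
        def __init__(self, ch=64, growth=32):
            super().__init__()
            # Each conv sees the block input plus all earlier growth maps.
            self.convs = nn.ModuleList(
                nn.Conv1d(ch + i * growth, growth if i < 4 else ch,
                          kernel_size=3, padding=1)
                for i in range(5)
            )
            self.act = nn.LeakyReLU(0.2, inplace=True)

        def forward(self, x):
            feats = [x]
            for i, conv in enumerate(self.convs):
                out = conv(torch.cat(feats, dim=1))
                if i < 4:
                    out = self.act(out)
                    feats.append(out)
            return x + 0.2 * out  # local residual with scaling

    class RRDB(nn.Module):
        """Residual-in-residual dense block: three dense blocks plus a skip."""
        def __init__(self, ch=64):
            super().__init__()
            self.blocks = nn.Sequential(*(DenseBlock(ch) for _ in range(3)))

        def forward(self, x):
            return x + 0.2 * self.blocks(x)

    class EEGEncoder(nn.Module):
        """Primary encoder: EEG (channels x samples) -> deep visual representation."""
        def __init__(self, eeg_ch=32, ch=64, latent_dim=128, n_blocks=4):
            super().__init__()
            self.head = nn.Conv1d(eeg_ch, ch, kernel_size=3, padding=1)
            self.body = nn.Sequential(*(RRDB(ch) for _ in range(n_blocks)))
            self.pool = nn.AdaptiveAvgPool1d(1)
            self.fc = nn.Linear(ch, latent_dim)

        def forward(self, eeg):
            h = self.body(self.head(eeg))
            return self.fc(self.pool(h).squeeze(-1))

    class ImageDecoder(nn.Module):
        """Subordinate decoder: representation -> reconstructed stimulus image."""
        def __init__(self, latent_dim=128):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 256 * 4 * 4)
            self.net = nn.Sequential(  # upsample 4x4 -> 64x64
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, z):
            return self.net(self.fc(z).view(-1, 256, 4, 4))

    # Shape check on random data: batch of 8 trials, 32 EEG channels, 440 samples.
    eeg = torch.randn(8, 32, 440)
    z = EEGEncoder()(eeg)
    img = ImageDecoder()(z)  # -> (8, 3, 64, 64)

The 1-D convolutions reflect the temporal structure of EEG, and the 0.2 residual scaling follows the ESRGAN-style RRDB convention; both are design guesses consistent with the abstract rather than details confirmed by the paper.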

Authors (5)
  1. Hongguang Pan
  2. Zhuoyi Li
  3. Yunpeng Fu
  4. Xuebin Qin
  5. Jianchen Hu