EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for Hand Gestures Recognition (2310.03754v1)
Abstract: Myoelectric control is an area of electromyography of increasing interest, particularly in applications such as Hand Gesture Recognition (HGR) for bionic prostheses. Current work focuses on pattern recognition using Machine Learning and, more recently, Deep Learning methods. Despite achieving good results on sparse sEMG signals, the latter models typically require large datasets and long training times. Furthermore, due to the stochastic nature of sEMG signals, traditional models fail to generalize to atypical or noisy samples. In this paper, we propose EMGTFNet, a Vision Transformer (ViT) based architecture with a Fuzzy Neural Block (FNB), to perform Hand Gesture Recognition from surface electromyography (sEMG) signals. The proposed EMGTFNet architecture can accurately classify a variety of hand gestures without data augmentation, transfer learning, or a significant increase in the number of network parameters. The accuracy of the proposed model is tested on the publicly available NinaPro database, which comprises 49 different hand gestures. Experiments yield an average test accuracy of 83.57% ± 3.5% using a 200 ms window size and only 56,793 trainable parameters. Our results outperform the ViT without FNB, demonstrating that including the FNB improves performance. The proposed EMGTFNet framework thus shows significant potential for practical application in prosthetic control.
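The abstract mentions classifying sEMG over a 200 ms window and routing features through a Fuzzy Neural Block. A minimal NumPy sketch of those two ingredients is shown below; the sampling rate, window step, and the Gaussian-membership form of the fuzzy layer are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def segment_semg(signal, fs=2000, window_ms=200, step_ms=100):
    """Split a (samples, channels) sEMG recording into fixed-length windows.

    The 200 ms window length follows the abstract; fs and step_ms are
    illustrative assumptions. Returns (n_windows, win_samples, channels).
    """
    win = int(fs * window_ms / 1000)
    step = int(fs * step_ms / 1000)
    n = (signal.shape[0] - win) // step + 1
    return np.stack([signal[i * step : i * step + win] for i in range(n)])

def gaussian_membership(x, centers, sigma=1.0):
    """Toy Gaussian fuzzy-membership layer (hypothetical stand-in for
    the paper's Fuzzy Neural Block): maps each feature value to its
    degree of membership in a set of fuzzy clusters.

    x: (..., features); centers: (clusters,) -> (..., features, clusters)
    """
    d = x[..., None] - centers
    return np.exp(-0.5 * (d / sigma) ** 2)
```

For example, 2 s of 12-channel sEMG at the assumed 2 kHz rate yields 19 overlapping 200 ms windows, each of which could then be tokenized for the transformer encoder, with fuzzy memberships computed per feature.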