
Face-GPS: A Comprehensive Technique for Quantifying Facial Muscle Dynamics in Videos (2401.05625v1)

Published 11 Jan 2024 in cs.CV

Abstract: We introduce a novel method that combines differential geometry, kernel smoothing, and spectral analysis to quantify facial muscle activity from widely accessible video recordings, such as those captured on personal smartphones. Our approach emphasizes practicality and accessibility. It has significant potential for applications in national security and plastic surgery. Additionally, it offers remote diagnosis and monitoring for medical conditions such as stroke, Bell's palsy, and acoustic neuroma. Moreover, it is adept at detecting and classifying emotions, from the overt to the subtle. The proposed face muscle analysis technique is an explainable alternative to deep learning methods and a non-invasive substitute for facial electromyography (fEMG).
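The abstract describes a pipeline of kernel smoothing followed by spectral analysis of facial motion signals extracted from video. The sketch below is an illustration of that general idea, not the paper's implementation: it assumes a 1-D landmark displacement trajectory is already available, smooths it with a Gaussian (Nadaraya-Watson) kernel, and recovers the dominant motion frequency from the FFT magnitude spectrum. The function names, bandwidth, and synthetic 2 Hz "twitch" signal are all hypothetical choices for demonstration.

```python
import numpy as np

def gaussian_kernel_smooth(signal, fps, bandwidth_s=0.1):
    """Smooth a 1-D trajectory with a Gaussian (Nadaraya-Watson) kernel in time."""
    t = np.arange(len(signal)) / fps
    d = t[:, None] - t[None, :]            # pairwise time differences
    w = np.exp(-0.5 * (d / bandwidth_s) ** 2)
    return (w @ signal) / w.sum(axis=1)    # weighted average per frame

def dominant_frequency(signal, fps):
    """Return the strongest non-DC frequency (Hz) in the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spec)]

# Synthetic example: a 2 Hz oscillation sampled at 30 fps with additive noise,
# standing in for a landmark displacement trace from video.
rng = np.random.default_rng(0)
fps = 30
t = np.arange(300) / fps
trajectory = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)

smoothed = gaussian_kernel_smooth(trajectory, fps)
print(dominant_frequency(smoothed, fps))   # recovers the ~2 Hz component
```

In practice the trajectories would come from tracked facial landmarks (e.g. an optical-flow or face-mesh tracker) rather than a synthetic sine, and the smoothing bandwidth would be tuned to the motion of interest.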
