MeciFace: Mechanomyography and Inertial Fusion-based Glasses for Edge Real-Time Recognition of Facial and Eating Activities (2306.13674v3)

Published 19 Jun 2023 in cs.CV, cs.LG, eess.IV, and eess.SP

Abstract: The increasing prevalence of stress-related eating behaviors and their impact on overall health highlights the importance of effective and ubiquitous monitoring systems. In this paper, we present MeciFace, an innovative wearable technology designed to monitor facial expressions and eating activities in real-time on-the-edge (RTE). MeciFace aims to provide a low-power, privacy-conscious, and highly accurate tool for promoting healthy eating behaviors and stress management. We employ lightweight convolutional neural networks as backbone models for facial expression and eating monitoring scenarios. The MeciFace system ensures efficient data processing with a tiny memory footprint, ranging from 11 KB to 19 KB. During RTE evaluation, the system achieves an F1-score of < 86% for facial expression recognition and 94% for eating/drinking monitoring, for the RTE of unseen users (user-independent case).
