
SoK: Behind the Accuracy of Complex Human Activity Recognition Using Deep Learning (2405.00712v2)

Published 25 Apr 2024 in eess.SP and cs.LG

Abstract: Human Activity Recognition (HAR) is a well-studied field with research dating back to the 1980s. Over time, HAR technologies have evolved significantly: from manual feature extraction, rule-based algorithms, and simple machine learning models to powerful deep learning models, and from a single sensor type to a diverse array of sensing modalities. The scope has also expanded from recognising a limited set of activities to encompassing a larger variety of both simple and complex activities. However, many challenges still hinder progress in complex activity recognition with modern deep learning methods. In this paper, we comprehensively systematise the factors leading to inaccuracy in complex HAR, such as data variety and model capacity. Among the many sensor types, we give particular attention to wearable sensors and cameras due to their prevalence. Through this Systematisation of Knowledge (SoK) paper, readers can gain a solid understanding of the development history and existing challenges of HAR, different categorisations of activities, obstacles in deep learning-based complex HAR that impact accuracy, and potential research directions.
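To make the abstract's notion of "deep learning-based HAR on wearable sensors" concrete, the sketch below shows the typical input pipeline such models consume: a continuous tri-axial accelerometer stream is split into fixed-length windows, passed through 1-D temporal convolutions, and mapped to activity logits. This is a minimal illustrative sketch with made-up names, sizes, and random weights — not the paper's own model or any specific method it surveys.

```python
import numpy as np

def sliding_windows(signal, win_len=128, step=64):
    """Split a (T, 3) accelerometer stream into overlapping fixed-size windows."""
    return np.stack([signal[i:i + win_len]
                     for i in range(0, len(signal) - win_len + 1, step)])

def conv1d_features(windows, kernels):
    """Valid 1-D convolution over time (summed across the 3 axes),
    then ReLU and global average pooling, one feature per filter."""
    n, t, c = windows.shape
    k = kernels.shape[1]                              # kernel width
    feats = []
    for w in windows:
        maps = []
        for kern in kernels:                          # each kern: (k, 3)
            out = np.array([np.sum(w[i:i + k] * kern)
                            for i in range(t - k + 1)])
            maps.append(np.maximum(out, 0.0).mean())  # ReLU + GAP
        feats.append(maps)
    return np.array(feats)

rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 3))        # fake raw sensor stream (T, axes)
wins = sliding_windows(stream)             # -> (14, 128, 3) windows
kernels = rng.normal(size=(8, 5, 3))       # 8 random filters of width 5
features = conv1d_features(wins, kernels)  # -> (14, 8) pooled features
logits = features @ rng.normal(size=(8, 4))  # linear head, 4 activity classes
pred = logits.argmax(axis=1)               # predicted class per window
print(wins.shape, features.shape, pred.shape)
```

In real HAR systems the random kernels and linear head would of course be trained end to end; the point here is only the window-then-convolve structure that nearly all sensor-based deep HAR models share.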

Authors (2)
  1. Duc-Anh Nguyen
  2. Nhien-An Le-Khac
Citations (1)