
Combining Public Human Activity Recognition Datasets to Mitigate Labeled Data Scarcity (2306.13735v1)

Published 23 Jun 2023 in cs.CV and cs.LG

Abstract: The use of supervised learning for Human Activity Recognition (HAR) on mobile devices leads to strong classification performance. Such an approach, however, requires large amounts of labeled data, both for the initial training of the models and for their customization on specific clients (whose data often differ greatly from the training data). Obtaining such data is impractical due to the costs, intrusiveness, and time-consuming nature of data annotation. Moreover, even with the help of a significant amount of labeled data, model deployment on heterogeneous clients faces difficulties in generalizing well on unseen data. Other domains, like Computer Vision or Natural Language Processing, have proposed the notion of pre-trained models, leveraging large corpora, to reduce the need for annotated data and better manage heterogeneity. This promising approach has not been implemented in the HAR domain so far because of the lack of public datasets of sufficient size. In this paper, we propose a novel strategy to combine publicly available datasets with the goal of learning a generalized HAR model that can be fine-tuned using a limited amount of labeled data on an unseen target domain. Our experimental evaluation, which includes experimenting with different state-of-the-art neural network architectures, shows that combining public datasets can significantly reduce the number of labeled samples required to achieve satisfactory performance on an unseen target domain.
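The core of the proposed strategy is merging heterogeneous public HAR datasets into one pre-training pool, which requires harmonizing their label vocabularies, windowing the inertial signals, and normalizing away per-dataset sensor differences. The sketch below illustrates these steps under stated assumptions; the dataset names, label mappings, and window parameters are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical label vocabularies of two public HAR datasets;
# the names and mappings are illustrative, not the paper's actual datasets.
LABEL_MAP = {
    "datasetA": {"walk": "walking", "jog": "running", "sit": "sitting"},
    "datasetB": {"walking": "walking", "running": "running", "still": "sitting"},
}

def window(signal, size, step):
    """Slice a (T, channels) signal into fixed-size overlapping windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def combine(datasets, size=128, step=64):
    """Merge several datasets into one pool with a shared label vocabulary.

    datasets: {name: (signal array of shape (T, channels), raw label str)}
    Returns (X, y) where X stacks all windows and y holds unified labels.
    """
    X, y = [], []
    for name, (signal, label) in datasets.items():
        w = window(signal, size, step)
        # Per-dataset standardization mitigates sensor heterogeneity
        # (different devices, ranges, and calibrations).
        w = (w - w.mean()) / (w.std() + 1e-8)
        X.append(w)
        y += [LABEL_MAP[name][label]] * len(w)
    return np.concatenate(X), np.array(y)
```

A model pre-trained on the combined pool would then be fine-tuned on a small labeled sample from the unseen target domain, as the abstract describes; that stage is architecture-specific (the paper experiments with several networks) and is omitted here.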

Authors (6)
  1. Riccardo Presotto
  2. Sannara Ek
  3. Gabriele Civitarese
  4. François Portet
  5. Philippe Lalanda
  6. Claudio Bettini
Citations (3)

