HARMamba: Efficient and Lightweight Wearable Sensor Human Activity Recognition Based on Bidirectional Mamba (2403.20183v3)
Abstract: Wearable sensor-based human activity recognition (HAR) is a critical research domain in activity perception. However, achieving high efficiency and long-sequence recognition remains a challenge. Although temporal deep learning models such as CNNs, RNNs, and Transformers have been studied extensively, their large parameter counts often impose significant computational and memory burdens, making them less suitable for resource-constrained mobile health applications. This study introduces HARMamba, a lightweight and versatile HAR architecture that combines a selective bidirectional state space model with hardware-aware design. To reduce real-time resource consumption in practical deployments, HARMamba employs linear recurrence and parameter discretization, allowing it to focus selectively on relevant input sequences while efficiently fusing scan and recomputation operations. The model processes each sensor data stream in an independent channel, divides each channel into patches, appends a classification token to the end of the sequence, and uses position embeddings to encode sequence order. The patch sequence is then processed by HARMamba blocks, and a classification head outputs the activity category. The HARMamba block is the fundamental component of the architecture, enabling the effective capture of more discriminative activity-sequence features. HARMamba outperforms contemporary state-of-the-art frameworks, delivering comparable or better accuracy while significantly reducing computational and memory demands. Its effectiveness has been extensively validated on four publicly available datasets, namely PAMAP2, WISDM, UNIMIB SHAR, and UCI, on which HARMamba achieves F1 scores of 99.74%, 99.20%, 88.23%, and 97.01%, respectively.
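To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of a HARMamba-style forward pass: per-channel patching, a classification token appended to the sequence, position embeddings, a stack of sequence-mixing blocks, and a classification head. All module and parameter names (`PatchEmbed`, `HARMambaSketch`, `patch_len`, etc.) are illustrative assumptions, not the authors' released code, and the selective bidirectional state space block is replaced by a simple bidirectional stand-in so the sketch stays self-contained and runnable.

```python
# Illustrative sketch only: the real HARMamba uses selective bidirectional
# state space (Mamba) blocks; a bidirectional GRU stands in here so the
# example runs with plain PyTorch and no external SSM dependency.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split each sensor channel's time series into patches and embed them."""
    def __init__(self, patch_len: int, embed_dim: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, embed_dim)

    def forward(self, x):                       # x: (batch, channels, time)
        b, c, t = x.shape
        # Non-overlapping patches, processed independently per channel.
        x = x.reshape(b * c, t // self.patch_len, self.patch_len)
        return self.proj(x)                     # (batch*channels, num_patches, embed_dim)


class BiBlockStandIn(nn.Module):
    """Placeholder for the bidirectional selective-SSM (HARMamba) block."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mixer = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)

    def forward(self, x):
        out, _ = self.mixer(self.norm(x))
        return x + out                          # residual connection


class HARMambaSketch(nn.Module):
    def __init__(self, patch_len=16, embed_dim=64, depth=4,
                 num_classes=12, num_patches=8):
        super().__init__()
        self.embed = PatchEmbed(patch_len, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # One extra position slot for the classification token at the sequence end.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        self.blocks = nn.ModuleList([BiBlockStandIn(embed_dim) for _ in range(depth)])
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        b, c, _ = x.shape
        tokens = self.embed(x)                  # (batch*channels, num_patches, dim)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([tokens, cls], dim=1) + self.pos_embed
        for blk in self.blocks:
            tokens = blk(tokens)
        cls_out = tokens[:, -1]                 # classification token (last position)
        cls_out = cls_out.reshape(b, c, -1).mean(dim=1)  # fuse channel streams
        return self.head(cls_out)               # activity logits


if __name__ == "__main__":
    dummy = torch.randn(2, 3, 128)              # e.g. 3-axis accelerometer, 128-sample window
    logits = HARMambaSketch()(dummy)
    print(logits.shape)                         # torch.Size([2, 12])
```

The channel fusion step (mean over per-channel classification tokens) is one plausible reading of "independent channels"; the paper may fuse channel streams differently.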
Authors: Shuangjian Li, Tao Zhu, Furong Duan, Liming Chen, Huansheng Ning, Yaping Wan, Christopher Nugent