
FedBChain: A Blockchain-enabled Federated Learning Framework for Improving DeepConvLSTM with Comparative Strategy Insights (2407.21282v2)

Published 31 Jul 2024 in cs.LG and cs.HC

Abstract: Recent research in Human Activity Recognition has shown that prediction performance can be improved by reducing the number of LSTM layers. However, this enhancement has only been demonstrated on monolithic architectures; once training is distributed at large scale, data security and privacy must be reconsidered, and the effect on prediction performance is unknown. In this paper, we introduce FedBChain, a novel framework that integrates the federated learning paradigm with a modified DeepConvLSTM architecture using a single LSTM layer. The framework runs comparative tests of prediction performance on three real-world datasets, combining three hidden-layer sizes (128, 256, and 512) with five federated learning strategies. The results show that our architecture achieves significant improvements in Precision, Recall, and F1-score over centralized training on all datasets, for all hidden-layer sizes and all strategies: FedAvg improves by 4.54% on average, FedProx by 4.57%, FedTrimmedAvg by 4.35%, Krum by 4.18%, and FedAvgM by 4.46%. These results indicate that FedBChain not only improves prediction performance but also safeguards the security and privacy of user data during training, unlike centralized training methods. The code for our experiments is publicly available (https://github.com/Glen909/FedBChain).
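To make the model side of the abstract concrete, the sketch below shows what a DeepConvLSTM-style network reduced to a single LSTM layer might look like. It is a minimal illustration assuming a PyTorch implementation; the convolutional filter counts, kernel size, class count, and example input shape are placeholder assumptions, not values taken from the paper or its repository. Only the single LSTM layer and the configurable hidden-unit count (128/256/512 in the experiments) reflect the abstract.

```python
# Minimal sketch, assuming PyTorch. Filter counts, kernel size, and the
# example input shape are illustrative placeholders, not the paper's values.
import torch
import torch.nn as nn

class SingleLSTMDeepConvLSTM(nn.Module):
    def __init__(self, n_channels: int, n_classes: int,
                 hidden_units: int = 128,   # swept over 128/256/512 in the paper
                 conv_filters: int = 64, kernel_size: int = 5):
        super().__init__()
        # Stacked 1-D convolutions extract local temporal features from the
        # raw sensor channels (layer count and sizes are assumptions).
        self.convs = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size), nn.ReLU(),
        )
        # A single LSTM layer, matching the modification described in the abstract.
        self.lstm = nn.LSTM(conv_filters, hidden_units, num_layers=1, batch_first=True)
        self.classifier = nn.Linear(hidden_units, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        feats = self.convs(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(feats)
        # Classify from the last time step of the LSTM output.
        return self.classifier(out[:, -1, :])

model = SingleLSTMDeepConvLSTM(n_channels=9, n_classes=6, hidden_units=256)
logits = model(torch.randn(8, 128, 9))  # (batch, window length, sensor channels)
print(logits.shape)                     # torch.Size([8, 6])
```

The five strategies compared (FedAvg, FedProx, FedTrimmedAvg, Krum, FedAvgM) are standard federated aggregation rules. As a reference point, the following framework-agnostic sketch shows the FedAvg baseline, i.e. a dataset-size-weighted average of client model parameters; the function name and interface are hypothetical.

```python
# Framework-agnostic sketch of the FedAvg baseline aggregation step.
# The function name and interface are hypothetical illustrations.
from typing import Dict, List
import torch

def fedavg_aggregate(client_states: List[Dict[str, torch.Tensor]],
                     client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Average client parameters, weighted by each client's local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * state[name]
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }
```

The other strategies vary this step or the client objective: FedTrimmedAvg trims extreme client updates before averaging, Krum keeps the update closest to its peers, FedAvgM adds server-side momentum to the aggregated update, and FedProx adds a proximal term to each client's local training objective.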

