Secure Decentralized Learning with Blockchain (2310.07079v2)

Published 10 Oct 2023 in cs.CR and cs.LG

Abstract: Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices, which preserves data privacy and optimizes communication efficiency. To avoid the single-point-of-failure problem in FL, decentralized federated learning (DFL) has been proposed, using peer-to-peer communication for model aggregation; it is considered an attractive solution for machine learning tasks on distributed personal devices. However, this process is vulnerable to attackers who share false models and data: a group of malicious clients can degrade model performance by carrying out a poisoning attack. In addition, in DFL, clients often lack incentives to contribute their computing power to model training. In this paper, we propose Blockchain-based Decentralized Federated Learning (BDFL), which leverages a blockchain for decentralized model verification and auditing. BDFL includes an auditor committee for model verification, an incentive mechanism to encourage client participation, a reputation model to evaluate the trustworthiness of clients, and a protocol suite for dynamic network updates. Evaluation results show that, with the reputation mechanism, BDFL achieves fast model convergence and high accuracy on real datasets even when 30% of the clients in the system are malicious.
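The abstract does not spell out how the reputation model enters aggregation. As an illustrative sketch only (the function name, weighting scheme, and example values below are assumptions, not the authors' actual mechanism), one common way a reputation score can limit poisoning is to weight each client's update by its reputation during averaging, so that clients whose reputation has been driven to zero contribute nothing:

```python
import numpy as np

def reputation_weighted_aggregate(updates, reputations):
    """Aggregate client model updates, weighting each by its reputation.

    updates: list of 1-D np.ndarray parameter vectors, one per client
    reputations: list of non-negative reputation scores, one per client
    """
    reps = np.asarray(reputations, dtype=float)
    if reps.sum() == 0:
        raise ValueError("at least one client must have positive reputation")
    weights = reps / reps.sum()          # normalize to a convex combination
    stacked = np.stack(updates)          # shape: (num_clients, num_params)
    return weights @ stacked             # reputation-weighted average

# A poisoned update from a zero-reputation client is ignored:
honest = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
poisoned = np.array([100.0, -100.0])
agg = reputation_weighted_aggregate(honest + [poisoned], [1.0, 1.0, 0.0])
# agg is the average of the two honest updates: [2.0, 2.0]
```

In a DFL setting each peer would run such an aggregation locally over the updates it receives, using reputations maintained on-chain; the hypothetical example above only shows the weighting idea, not BDFL's auditor-committee verification step.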

Authors (3)
  1. Xiaoxue Zhang (18 papers)
  2. Yifan Hua (6 papers)
  3. Chen Qian (226 papers)
Citations (2)
