Chu-ko-nu: A Reliable, Efficient, and Anonymously Authentication-Enabled Realization for Multi-Round Secure Aggregation in Federated Learning (2402.15111v2)

Published 23 Feb 2024 in cs.CR, cs.DC, and cs.LG

Abstract: Secure aggregation enables federated learning (FL) to train collaboratively on clients' local gradient updates without exposing raw data. However, existing secure aggregation schemes inevitably perform an expensive fresh setup per round, because each client must establish fresh input-independent secrets in every round. The most recent work, Flamingo (S&P 2023), designed a share-transfer-based reusable secret key that lets the server perform multiple consecutive rounds of aggregation. Nevertheless, its share transfer mechanism succeeds only with probability P, which limits reliability. To tackle these problems, we propose Chu-ko-nu, a more reliable and anonymously authenticated scheme for multi-round secure aggregation. Specifically, for share transfer, Chu-ko-nu breaks the probability-P barrier by adding a redistribution process for the secret key components (the sum of all components is the secret key), thereby ensuring that the secret key remains reusable. Building on this reusable secret key, Chu-ko-nu can efficiently perform consecutive aggregation in subsequent rounds. Furthermore, to address the client identity authentication and privacy protection issues that most approaches ignore, Chu-ko-nu introduces a zero-knowledge proof-based authentication mechanism. It allows clients to participate anonymously in FL training and enables the server to authenticate clients effectively in the presence of various attacks. Rigorous security proofs and extensive experiments demonstrate that Chu-ko-nu provides reliable and anonymously authenticated aggregation for FL at low aggregation cost, with at least a 21.02% reduction compared to state-of-the-art schemes.
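
The core mechanism the abstract describes, secret key components that always sum to a reusable secret key and can be redistributed between rounds, can be illustrated with a small sketch. The following Python toy is an assumption-based illustration, not the Chu-ko-nu protocol itself: the prime modulus, client count, and function names (split_key, redistribute) are invented for the example, and it omits the share-transfer, masking, and authentication machinery of the real scheme.

# Minimal sketch (not the paper's protocol): additive key components whose
# sum equals the secret key, plus a redistribution step that re-randomizes
# the components without changing their sum. Modulus and names are
# illustrative assumptions.
import secrets

P = 2**61 - 1  # prime modulus, chosen arbitrarily for the sketch


def split_key(secret_key: int, n_clients: int) -> list[int]:
    """Split a secret key into n additive components (mod P)."""
    parts = [secrets.randbelow(P) for _ in range(n_clients - 1)]
    parts.append((secret_key - sum(parts)) % P)
    return parts


def redistribute(parts: list[int]) -> list[int]:
    """Re-randomize the components so each client holds a fresh value
    while the sum (i.e. the secret key) is unchanged."""
    deltas = [secrets.randbelow(P) for _ in range(len(parts))]
    offset = sum(deltas) % P
    # Cancel the total offset so the deltas sum to zero mod P.
    deltas[0] = (deltas[0] - offset) % P
    return [(p + d) % P for p, d in zip(parts, deltas)]


if __name__ == "__main__":
    key = secrets.randbelow(P)
    shares = split_key(key, n_clients=5)
    assert sum(shares) % P == key

    # Every individual component changes after redistribution ...
    new_shares = redistribute(shares)
    # ... but their sum, and hence the reusable secret key, does not.
    assert sum(new_shares) % P == key
    print("secret key reconstructed identically before and after redistribution")

The invariant checked by the assertions is the reusability property the abstract refers to: individual components change value between rounds, but their sum, and therefore the secret key used for aggregation, is preserved.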

References (47)
  1. Deep leakage from gradients. Advances in neural information processing systems, 32, 2019.
  2. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pages 1322–1333, 2015.
  3. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), pages 3–18. IEEE, 2017.
  4. Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE symposium on security and privacy (SP), pages 691–706. IEEE, 2019.
  5. Secure single-server aggregation with (poly) logarithmic overhead. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pages 1253–1269, 2020.
  6. Lightsecagg: a lightweight and versatile design for secure aggregation in federated learning. Proceedings of Machine Learning and Systems, 4:694–720, 2022.
  7. Privacy-preserving aggregation in federated learning: A survey. IEEE Transactions on Big Data, pages 1–20, 2022.
  8. Verifynet: Secure and verifiable federated learning. IEEE Transactions on Information Forensics and Security, 15:911–926, 2020.
  9. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning. Cryptology ePrint Archive, 2023.
  10. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE Transactions on Knowledge and Data Engineering, 35(4):3347–3366, 2023.
  11. A survey on security and privacy of federated learning. Future Generation Computer Systems, 115:619–640, 2021.
  12. Privacy threat and defense for federated learning with non-i.i.d. data in aiot. IEEE Transactions on Industrial Informatics, 18(2):1310–1321, 2022.
  13. Fedcomm: A privacy-enhanced and efficient authentication protocol for federated learning in vehicular ad-hoc networks. IEEE Transactions on Information Forensics and Security, 19:777–792, 2024.
  14. Towards verifiable federated learning. In Lud De Raedt, editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 5686–5693. International Joint Conferences on Artificial Intelligence Organization, 7 2022. Survey Track.
  15. Anonymous and privacy-preserving federated learning with industrial big data. IEEE Transactions on Industrial Informatics, 17(9):6314–6323, 2021.
  16. Federated learning approach to protect healthcare data over big data scenario. Sustainability, 14(5):2500, 2022.
  17. Lightweight techniques for private heavy hitters. In 2021 IEEE Symposium on Security and Privacy (SP), pages 762–776, 2021.
  18. Ibex: Privacy-preserving ad conversion tracking and bidding. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 3223–3237, 2022.
  19. Prio: Private, robust, and scalable computation of aggregate statistics. In 14th USENIX symposium on networked systems design and implementation (NSDI 17), pages 259–282, 2017.
  20. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017.
  21. Oort: Efficient federated learning via guided participant selection. In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), pages 19–35, 2021.
  22. Versa: Verifiable secure aggregation for cross-device federated learning. IEEE Transactions on Dependable and Secure Computing, 2021.
  23. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175–1191, 2017.
  24. Swiftagg: Communication-efficient and dropout-resistant secure aggregation for federated learning with worst-case security guarantees. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 103–108. IEEE, 2022.
  25. Turbo-aggregate: Breaking the quadratic aggregation barrier in secure federated learning. IEEE Journal on Selected Areas in Information Theory, 2(1):479–489, 2021.
  26. Client selection in federated learning: Principles, challenges, and opportunities. IEEE Internet of Things Journal, 10(24):21811–21819, 2023.
  27. Fedgp: Correlation-based active client selection strategy for heterogeneous federated learning. arXiv preprint arXiv:2103.13822, 2021.
  28. Pyramidfl: A fine-grained client selection framework for efficient federated learning. In Proceedings of the 28th Annual International Conference on Mobile Computing And Networking, pages 158–171, 2022.
  29. Eiffel: Efficient and fair scheduling in adaptive federated learning. IEEE Transactions on Parallel and Distributed Systems, 33(12):4282–4294, 2022.
  30. NIKE-based fast privacy-preserving high-dimensional data aggregation for mobile devices. IEEE Transactions on Dependable and Secure Computing, pages 142–149, 2018.
  31. Lsfl: A lightweight and secure federated learning scheme for edge computing. IEEE Transactions on Information Forensics and Security, 18:365–379, 2023.
  32. Efficient verifiable protocol for privacy-preserving aggregation in federated learning. IEEE Transactions on Information Forensics and Security, 18:2977–2990, 2023.
  33. Flashe: Additively symmetric homomorphic encryption for cross-silo federated learning. arXiv preprint arXiv:2109.00675, 2021.
  34. Aggregation service for federated learning: An efficient, secure, and more resilient realization. IEEE Transactions on Dependable and Secure Computing, 20(2):988–1001, 2023.
  35. Feverless: Fast and secure vertical federated learning based on xgboost for decentralized labels. IEEE Transactions on Big Data, pages 1–15, 2022.
  36. Securing secure aggregation: Mitigating multi-round privacy leakage in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 9864–9873, 2023.
  37. A generic federated recommendation framework via fake marks and secret sharing. ACM Transactions on Information Systems, 41(2):1–37, 2022.
  38. Anonymous and efficient authentication scheme for privacy-preserving distributed learning. IEEE Transactions on Information Forensics and Security, 17:2227–2240, 2022.
  39. A multi-shuffler framework to establish mutual confidence for secure federated learning. IEEE Transactions on Dependable and Secure Computing, 20(5):4230–4244, 2023.
  40. Fedtor: An anonymous framework of federated learning in internet of things. IEEE Internet of Things Journal, 9(19):18620–18631, 2022.
  41. Privacy-preserved federated learning for autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 23(7):8423–8434, 2022.
  42. Federated learning-based collaborative authentication protocol for shared data in social iov. IEEE Sensors Journal, 22(7):7385–7398, 2022.
  43. A practical cross-device federated learning framework over 5g networks. IEEE Wireless Communications, 29(6):128–134, 2022.
  44. An efficient and privacy-preserving federated learning scheme for flying ad hoc networks. In ICC 2022 - IEEE International Conference on Communications, pages 1–6, 2022.
  45. Fedtracker: Furnishing ownership verification and traceability for federated learning model. arXiv preprint arXiv:2211.07160, 2022.
  46. Blockchain-based decentralized and lightweight anonymous authentication for federated learning. IEEE Transactions on Vehicular Technology, 72(9):12075–12086, 2023.
  47. Paul Feldman. A practical scheme for non-interactive verifiable secret sharing. In 28th Annual Symposium on Foundations of Computer Science (sfcs 1987), pages 427–438, 1987.