Private Aggregation in Hierarchical Wireless Federated Learning with Partial and Full Collusion (2306.14088v2)

Published 25 Jun 2023 in cs.LG, cs.CR, cs.DC, and stat.ML

Abstract: In federated learning, a federator coordinates the training of a model, e.g., a neural network, on privately owned data held by several participating clients. The model is trained with gradient descent, a popular iterative optimization procedure: every client computes partial gradients based on its local data and sends them to the federator, which aggregates the results and updates the model. Privacy of the clients' data is a major concern; indeed, it has been shown that observing the partial gradients can be enough to reveal the clients' data. Existing literature focuses on private aggregation schemes that tackle this privacy problem in settings where all clients are connected to each other and to the federator. In this paper, we consider a hierarchical wireless system architecture in which the clients are connected to base stations, and the base stations are connected to the federator either directly or through relays. We examine settings with and without relays, and derive fundamental limits on the communication cost under information-theoretic privacy with different collusion assumptions. We introduce suitable private aggregation schemes tailored to these settings whose communication costs are within multiplicative factors of the derived bounds.
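To make the aggregation step concrete, below is a minimal, self-contained sketch of the generic private-aggregation primitive the abstract refers to: clients mask their partial gradients with pairwise random values over a finite field so the masks cancel in the sum, and the federator learns only the aggregate. This illustrates the flat (non-hierarchical) baseline in the spirit of classical secure aggregation, not the paper's hierarchical scheme with base stations and relays; the field size Q and all function names here are illustrative assumptions.

```python
import random

# Illustrative sketch (not the paper's hierarchical scheme): one-shot private
# aggregation via pairwise additive masks over a prime field. Each pair of
# clients (i, j) shares a fresh random mask vector; client i adds it and
# client j subtracts it, so all masks cancel in the sum and the federator
# observes nothing but the aggregate gradient.

Q = 2**31 - 1  # prime field size (assumption for this sketch)

def mask_gradients(client_grads):
    """client_grads: list of per-client gradient vectors (ints mod Q)."""
    n, dim = len(client_grads), len(client_grads[0])
    masked = [list(g) for g in client_grads]
    for i in range(n):
        for j in range(i + 1, n):
            pairwise = [random.randrange(Q) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] = (masked[i][k] + pairwise[k]) % Q
                masked[j][k] = (masked[j][k] - pairwise[k]) % Q
    return masked

def aggregate(masked_grads):
    """Federator sums the masked vectors; the pairwise masks cancel mod Q."""
    dim = len(masked_grads[0])
    return [sum(g[k] for g in masked_grads) % Q for k in range(dim)]

# Example: three clients with 2-dimensional partial gradients.
grads = [[3, 5], [10, 1], [7, 7]]
assert aggregate(mask_gradients(grads)) == [20, 13]
```

In the paper's hierarchical setting, the communication cost of distributing such masks (or secret shares) changes because clients only talk to their base station, and base stations reach the federator directly or through relays; the derived bounds quantify how cheap this can be made under different collusion assumptions.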
