
A Theoretical Analysis of Efficiency Constrained Utility-Privacy Bi-Objective Optimization in Federated Learning (2312.16554v2)

Published 27 Dec 2023 in cs.LG and cs.AI

Abstract: Federated learning (FL) enables multiple clients to collaboratively learn a shared model without sharing their individual data. Concerns about utility, privacy, and training efficiency in FL have garnered significant research attention. Differential privacy has emerged as a prevalent technique in FL, safeguarding the privacy of individual user data while impacting utility and training efficiency. Within Differential Privacy Federated Learning (DPFL), previous studies have primarily focused on the utility-privacy trade-off, neglecting training efficiency, which is crucial for timely completion. Moreover, differential privacy achieves privacy by introducing controlled randomness (noise) on selected clients in each communication round. Previous work has mainly examined the impact of noise level ($\sigma$) and communication rounds ($T$) on the privacy-utility dynamic, overlooking other influential factors like the sample ratio ($q$, the proportion of selected clients). This paper systematically formulates an efficiency-constrained utility-privacy bi-objective optimization problem in DPFL, focusing on $\sigma$, $T$, and $q$. We provide a comprehensive theoretical analysis, yielding analytical solutions for the Pareto front. Extensive empirical experiments verify the validity and efficacy of our analysis, offering valuable guidance for low-cost parameter design in DPFL.
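
To make the three knobs the abstract analyzes concrete, here is a minimal NumPy sketch of Gaussian-mechanism DP-FedAvg in the spirit of the setting described above: each of $T$ rounds samples a fraction $q$ of clients, clips each sampled client's update, and adds Gaussian noise scaled by $\sigma$ before averaging. This is an illustrative sketch only, not the paper's implementation; the function names (`dp_fedavg`, `client_grads_fn`), the clipping norm `clip_norm`, the server learning rate `lr`, and the toy client update are all assumptions introduced here for clarity.

```python
import numpy as np

def dp_fedavg(client_grads_fn, num_clients, dim, sigma=1.0, T=100, q=0.1,
              clip_norm=1.0, lr=0.1, seed=0):
    """Sketch of client-sampled DP-FedAvg (assumed form, not the paper's code).

    Each round: sample a q-fraction of clients, clip each client's update to
    L2 norm clip_norm, sum the clipped updates, add Gaussian noise with
    standard deviation sigma * clip_norm, and apply the noisy average.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)                       # global model parameters
    m = max(1, int(q * num_clients))        # number of clients sampled per round
    for _ in range(T):                      # T communication rounds
        sampled = rng.choice(num_clients, size=m, replace=False)
        total = np.zeros(dim)
        for c in sampled:
            g = client_grads_fn(c, w)       # client's local update
            norm = np.linalg.norm(g)
            total += g * min(1.0, clip_norm / max(norm, 1e-12))  # per-client clipping
        noise = rng.normal(0.0, sigma * clip_norm, size=dim)     # Gaussian mechanism
        w -= lr * (total + noise) / m       # noisy averaged update
    return w

# Toy usage (hypothetical): each client holds a target vector, and its
# "gradient" pulls the global model toward that target.
targets = np.random.default_rng(1).normal(size=(50, 10))
grads = lambda c, w: w - targets[c]
w_final = dp_fedavg(grads, num_clients=50, dim=10, sigma=0.5, T=200, q=0.2)
```

The sketch makes the trade-off surface visible: raising $\sigma$ strengthens privacy but corrupts every update (utility), raising $T$ improves convergence but consumes privacy budget and wall-clock time (efficiency), and $q$ controls both per-round cost and the amplification-by-subsampling effect, which is why the paper treats all three jointly rather than tuning $\sigma$ and $T$ alone.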
