A Communication-efficient Local Differentially Private Algorithm in Federated Optimization

Published 4 Apr 2023 in cs.MA, cs.CR, cs.DC, cs.SY, and eess.SY | arXiv:2304.01510v2

Abstract: Federated optimization, in which agents in a network collaborate with a central server to minimize the social cost over the network without exchanging information among themselves, has attracted significant interest from the research community. In this setting, agents demand resources based on their local computations. Because optimization parameters such as states, constraints, or objective functions are exchanged with a central server, an adversary may infer sensitive information about the agents. We develop a differentially private additive-increase and multiplicative-decrease (AIMD) algorithm that allocates multiple divisible, shared, heterogeneous resources to agents in a network. The algorithm provides a differential privacy guarantee to each agent. It requires no inter-agent communication, and agents need not share their cost functions or their derivatives with other agents or with the central server; they share only their allocation states with the server, which tracks the aggregate consumption of each resource. The algorithm incurs very little communication overhead: for m heterogeneous resources in the system, the asymptotic upper bound on the communication complexity is O(m) bits per time step, and if the algorithm converges in K time steps, the total communication complexity is O(mK) bits. The algorithm can find applications in several areas, including smart cities, smart energy systems, and resource management in sixth-generation (6G) wireless networks with privacy guarantees. We present experimental results demonstrating the efficacy of the algorithm, together with empirical analyses of the trade-off between privacy and algorithm efficiency.
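The mechanism the abstract describes can be illustrated with a minimal single-resource sketch: agents additively increase their allocation states until the server's aggregate-consumption signal indicates overload, then multiplicatively back off, and each agent perturbs its reported state with Laplace noise before sending it to the server. This is an illustrative assumption of how such a loop might look, not the paper's actual algorithm; all parameter names and values (`alpha`, `beta`, `eps`, `sensitivity`, the capacity) are hypothetical.

```python
import math
import random


def laplace(scale):
    """Sample Laplace(0, scale): the difference of two i.i.d.
    exponentials with mean `scale` is Laplace-distributed."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def aimd_allocate(n_agents=5, capacity=10.0, alpha=0.1, beta=0.5,
                  eps=1.0, sensitivity=0.1, steps=500, seed=0):
    """Sketch of a privacy-aware AIMD loop for one divisible resource.

    Each step, agents report their allocation states perturbed with
    Laplace noise (a local-DP-style report); the server compares the
    aggregate of the noisy reports against capacity and broadcasts a
    single overload bit. Agents increase additively while there is
    spare capacity and decrease multiplicatively on overload.
    """
    random.seed(seed)
    x = [0.0] * n_agents  # true allocation states, kept by the agents
    for _ in range(steps):
        # Agents send noisy states; noise scale follows the usual
        # Laplace-mechanism recipe (sensitivity / eps).
        reports = [xi + laplace(sensitivity / eps) for xi in x]
        overloaded = sum(reports) > capacity  # server's broadcast bit
        for i in range(n_agents):
            if overloaded:
                x[i] *= beta       # multiplicative decrease
            else:
                x[i] += alpha      # additive increase
    return x


alloc = aimd_allocate()
```

In this sketch the per-step uplink is one noisy scalar per agent and the downlink is one bit, which is the flavor of low communication overhead the abstract refers to; with m resources, each agent would carry one such state per resource, giving the O(m)-bits-per-step picture.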
