
Differentially Private Distributed Stochastic Optimization with Time-Varying Sample Sizes (2310.11892v2)

Published 18 Oct 2023 in eess.SY and cs.SY

Abstract: Differentially private distributed stochastic optimization has attracted growing attention due to the urgent need for privacy protection in distributed stochastic optimization. In this paper, two-time-scale stochastic approximation-type algorithms for differentially private distributed stochastic optimization with time-varying sample sizes are proposed, using gradient-perturbation and output-perturbation methods. For both the gradient- and output-perturbation cases, the convergence of the algorithm and differential privacy with a finite cumulative privacy budget $\varepsilon$ over an infinite number of iterations are established simultaneously, which differs substantially from existing works. The time-varying sample-size method enhances the privacy level and is what makes the finite cumulative privacy budget achievable. By properly choosing a Lyapunov function, the algorithm attains almost-sure and mean-square convergence even when the added privacy noises have increasing variance. Furthermore, the mean-square convergence rates of the algorithm are rigorously derived, showing how the added privacy noise affects the convergence rate. Finally, numerical examples, including distributed training on a benchmark machine-learning dataset, demonstrate the efficiency and advantages of the algorithms.
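To make the gradient-perturbation idea concrete, the sketch below illustrates a distributed stochastic gradient scheme with a two-time-scale step size, a growing (time-varying) mini-batch, and privacy noise whose variance increases over time. All concrete choices here (ring network, quadratic local objectives, the specific step-size, batch-size, and noise schedules) are illustrative assumptions and are not taken from the paper, whose exact algorithm, weight matrix, and parameter conditions differ.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): gradient-perturbation
# differential privacy for distributed stochastic optimization with
# time-varying (increasing) sample sizes over a fixed ring graph.

rng = np.random.default_rng(0)

N = 5          # number of agents
d = 2          # decision-variable dimension
T = 2000       # iterations

# Each agent i minimizes E[0.5 * ||x - (theta_i + sample noise)||^2];
# the network-wide optimum is the average of the theta_i.
theta = rng.normal(size=(N, d))
x_opt = theta.mean(axis=0)

# Doubly stochastic weight matrix for a ring graph (Metropolis-style weights).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros((N, d))  # local estimates

for k in range(1, T + 1):
    # Two-time-scale gains: the consensus gain decays more slowly than the
    # gradient gain (rates chosen purely for illustration).
    gamma_k = 1.0 / k ** 0.6     # consensus step size
    alpha_k = 1.0 / k ** 0.9     # gradient step size

    # Time-varying sample size: the mini-batch grows with k, which shrinks the
    # sensitivity of the averaged gradient and lets the per-step privacy cost
    # decrease fast enough for a finite cumulative budget.
    batch_k = int(np.ceil(k ** 0.5))

    # Privacy noise with (for illustration) slowly increasing standard deviation.
    sigma_k = 0.5 * k ** 0.1

    x_new = np.empty_like(x)
    for i in range(N):
        # Sampled gradient of the local quadratic objective over batch_k samples.
        samples = theta[i] + rng.normal(scale=1.0, size=(batch_k, d))
        grad = (x[i] - samples).mean(axis=0)

        # Gradient perturbation: add privacy noise directly to the local gradient.
        noisy_grad = grad + rng.normal(scale=sigma_k, size=d)

        # Consensus step plus perturbed-gradient step.
        consensus = sum(W[i, j] * (x[j] - x[i]) for j in range(N))
        x_new[i] = x[i] + gamma_k * consensus - alpha_k * noisy_grad
    x = x_new

print("distance to optimum per agent:", np.linalg.norm(x - x_opt, axis=1))
```

Running the sketch, the local estimates approach the network-wide optimum despite the growing noise variance, which mirrors the qualitative point of the abstract: increasing sample sizes can offset increasing privacy noise so that convergence and a finite cumulative privacy budget hold together.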
