From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning (2302.12559v3)
Abstract: We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to design and analyze new private optimization algorithms in a flexible manner. Focusing on the widely used Alternating Direction Method of Multipliers (ADMM), we use our general framework to derive novel private ADMM algorithms for centralized, federated, and fully decentralized learning. For these three algorithms, we establish strong privacy guarantees by leveraging privacy amplification by iteration and by subsampling. Finally, we provide utility guarantees using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations.
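To make the fixed-point viewpoint concrete, here is a minimal sketch (not the paper's implementation) of a noisy fixed-point iteration of the Krasnosel'skii–Mann form x_{k+1} = x_k + λ (T(x_k) + ξ_k − x_k) with Gaussian noise ξ_k. Instantiating the operator T as a clipped gradient step yields a DP-SGD-like update, in line with the abstract's claim that the framework recovers private gradient-based methods. The function names, step size λ, learning rate, clipping threshold, and noise scale below are illustrative assumptions, not the paper's exact algorithms or privacy calibration.

```python
# Sketch of a noisy Krasnosel'skii-Mann fixed-point iteration:
#     x_{k+1} = x_k + lam * (T(x_k) + xi_k - x_k),  xi_k ~ N(0, sigma^2 I).
# All parameter values are illustrative placeholders.
import numpy as np

def noisy_fixed_point(T, x0, num_iters, lam=0.5, sigma=1.0, rng=None):
    """Run the noisy averaged iteration driven by the operator T."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        noise = rng.normal(scale=sigma, size=x.shape)  # Gaussian perturbation of T(x)
        x = x + lam * (T(x) + noise - x)
    return x

def clipped_gradient_operator(grad_fn, eta=0.1, clip=1.0):
    """T(x) = x - eta * clip(grad(x)); with added noise this mimics a DP-SGD step."""
    def T(x):
        g = grad_fn(x)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # norm clipping
        return x - eta * g
    return T

# Illustrative usage: noisy minimization of a simple quadratic f(x) = ||x - 3||^2.
grad = lambda x: 2.0 * (x - 3.0)
x_final = noisy_fixed_point(clipped_gradient_operator(grad), x0=np.zeros(2),
                            num_iters=200, lam=1.0, sigma=0.1)
```

With λ = 1 the update reduces to x_{k+1} = T(x_k) + ξ_k, i.e., a noisy clipped gradient step; smaller λ gives an under-relaxed (averaged) iteration of the kind studied in the fixed-point literature, and other choices of T (e.g., ADMM-type operators) fit the same template.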