MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods (2403.02589v1)
Abstract: Gradient-type distributed optimization methods have blossomed into one of the most important tools for solving minimization learning tasks over a networked agent system. However, performing only one gradient update per iteration makes it difficult to achieve a substantive acceleration of convergence. In this paper, we propose an accelerated framework named MUSIC that allows each agent to perform multiple local updates and a single combination in each iteration. More importantly, we embed both inexact and exact distributed optimization methods into this framework, thereby developing two new algorithms that exhibit accelerated linear convergence and high communication efficiency. Our rigorous convergence analysis reveals the sources of steady-state errors arising from inexact policies and offers effective solutions. Numerical results based on synthetic and real datasets corroborate both our theoretical motivations and analysis, as well as the resulting performance advantages.
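The core mechanism described in the abstract, several local updates followed by a single combination (communication) step per outer iteration, can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the paper's exact MUSIC recursion: it takes a plain decentralized gradient step as the local update and a doubly stochastic mixing matrix `W` for the combination, and all function and variable names are hypothetical.

```python
import numpy as np

def multi_update_single_combination(x, W, grads, step_size, local_steps):
    """One outer iteration: several local updates, then one combination.

    x           : (n_agents, dim) array of current local iterates
    W           : (n_agents, n_agents) doubly stochastic mixing matrix
    grads       : grads[i](x_i) returns agent i's local gradient at x_i
    step_size   : step size for each local gradient update
    local_steps : number of local updates performed before communicating
    """
    y = x.copy()
    # Each agent runs several gradient steps on its own local objective,
    # with no communication in between.
    for _ in range(local_steps):
        for i in range(y.shape[0]):
            y[i] = y[i] - step_size * grads[i](y[i])
    # A single combination step: each agent mixes with its neighbors.
    return W @ y

# Toy usage: 4 agents with quadratic objectives f_i(x) = 0.5 * ||x - b_i||^2,
# so the network-wide minimizer is the mean of the b_i (here 2.5).
b = np.array([[1.0], [2.0], [3.0], [4.0]])
grads = [lambda xi, bi=bi: xi - bi for bi in b]
W = np.full((4, 4), 0.25)  # fully connected graph, uniform weights
x = np.zeros((4, 1))
for _ in range(100):
    x = multi_update_single_combination(x, W, grads, step_size=0.1, local_steps=5)
print(x.round(3))  # each row approaches the consensus minimizer 2.5
```

In this toy setup the communication cost per outer iteration stays fixed while the amount of local progress grows with `local_steps`, which is the intuition behind the communication-efficiency claim; on sparser graphs, plain local updates of this kind can leave a steady-state error, the issue the paper's inexact/exact analysis addresses.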