Distributed Conjugate Gradient Method via Conjugate Direction Tracking (2309.12235v2)
Abstract: We present a distributed conjugate gradient method for distributed optimization problems, where each agent computes an optimal solution of the problem locally, without any central computation or coordination, while communicating with its immediate, one-hop neighbors over a communication network. Each agent updates its local problem variable using an estimate of the average conjugate direction across the network, computed via a dynamic consensus approach. Our algorithm enables the agents to use uncoordinated step-sizes, and we prove convergence of each agent's local variable to the optimal solution of the aggregate optimization problem without requiring decreasing step-sizes. In addition, we demonstrate the efficacy of our algorithm on distributed state estimation problems and their robust counterparts, comparing its performance with existing distributed first-order optimization methods.
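The abstract's key ingredients — local conjugate directions, a dynamic-consensus tracker of the network-average direction, and uncoordinated step-sizes — can be illustrated with a minimal sketch. This is not the paper's algorithm: the least-squares costs, the ring network, the mixing matrix `W`, the Polak–Ribière(+) momentum rule, and the specific step-sizes are all illustrative assumptions chosen to make the sketch self-contained and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem (not from the paper): agent i privately holds
# f_i(x) = 0.5 * ||A_i x - b_i||^2, and the network minimizes sum_i f_i.
n, p = 4, 2
A = [rng.standard_normal((5, p)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Optimal solution of the aggregate problem, for reference.
H = sum(Ai.T @ Ai for Ai in A)
c = sum(Ai.T @ bi for Ai, bi in zip(A, b))
x_star = np.linalg.solve(H, c)

# Ring network with a doubly stochastic mixing matrix (an assumption).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Uncoordinated (agent-specific) constant step-sizes.
alphas = np.array([0.020, 0.025, 0.030, 0.022])

x = np.zeros((n, p))                                  # local variables
g = np.stack([grad(i, x[i]) for i in range(n)])       # local gradients
d = -g.copy()                                         # local conjugate directions
y = d.copy()                                          # tracker of the average direction

for _ in range(3000):
    # Consensus on the variables plus a step along the tracked
    # estimate of the network-average conjugate direction.
    x = W @ x + alphas[:, None] * y
    g_new = np.stack([grad(i, x[i]) for i in range(n)])
    # Polak-Ribiere(+) momentum, clipped at zero for stability
    # (an illustrative choice of conjugacy coefficient).
    beta = np.array([max(0.0, g_new[i] @ (g_new[i] - g[i])
                         / max(g[i] @ g[i], 1e-12)) for i in range(n)])
    d_new = -g_new + beta[:, None] * d
    # Dynamic consensus: y preserves sum_i y_i = sum_i d_i, so each
    # y_i tracks the network-average conjugate direction.
    y = W @ y + d_new - d
    g, d = g_new, d_new

err = max(np.linalg.norm(x[i] - x_star) for i in range(n))
print(err)  # distance of the worst agent from the aggregate optimum
```

With `beta = 0` the sketch reduces to a standard gradient-tracking scheme; the clipped momentum term is what injects the conjugate-direction flavor the abstract describes.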