
Distributed Quasi-Newton Method for Multi-Agent Optimization (2402.06778v2)

Published 9 Feb 2024 in math.OC, cs.MA, cs.SY, and eess.SY

Abstract: We present a distributed quasi-Newton (DQN) method, which enables a group of agents to compute an optimal solution of a separable multi-agent optimization problem locally using an approximation of the curvature of the aggregate objective function. Each agent computes a descent direction from its local estimate of the aggregate Hessian, obtained from quasi-Newton approximation schemes using the gradient of its local objective function. Moreover, we introduce a distributed quasi-Newton method for equality-constrained optimization (EC-DQN), where each agent takes Karush-Kuhn-Tucker-like update steps to compute an optimal solution. In our algorithms, each agent communicates with its one-hop neighbors over a peer-to-peer communication network to compute a common solution. We prove convergence of our algorithms to a stationary point of the optimization problem. In addition, we demonstrate the competitive empirical convergence of our algorithm in both well-conditioned and ill-conditioned optimization problems, in terms of the computation time and communication cost incurred by each agent for convergence, compared to existing distributed first-order and second-order methods. Particularly, in ill-conditioned problems, our algorithms achieve a faster computation time for convergence, while requiring a lower communication cost, across a range of communication networks with different degrees of connectedness.
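The abstract outlines the mechanism only at a high level. Below is a minimal Python sketch of a distributed quasi-Newton iteration in this spirit, not the paper's exact DQN or EC-DQN update rules: each agent mixes its iterate and a tracked gradient with its one-hop neighbors, then takes a descent step scaled by a locally maintained curvature estimate. The quadratic local objectives, ring communication graph, mixing weights, step size, and BFGS-style inverse-Hessian update are illustrative assumptions introduced for this example.

```python
# Sketch of a distributed quasi-Newton loop (illustrative, not the paper's DQN):
# consensus mixing + dynamic average consensus for gradients + a local
# BFGS-style inverse-Hessian approximation used to scale the descent step.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 4

# Separable quadratic objective: f(x) = sum_i 0.5 x^T A_i x + b_i^T x (assumed data).
A = [np.diag(rng.uniform(1.0, 10.0, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(dim) for _ in range(n_agents)]
grad = lambda i, x: A[i] @ x + b[i]

# Doubly stochastic mixing matrix for a ring graph (each agent talks to 2 neighbors).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3

x = [np.zeros(dim) for _ in range(n_agents)]   # local iterates
g = [grad(i, x[i]) for i in range(n_agents)]   # gradient-tracking variables
B = [np.eye(dim) for _ in range(n_agents)]     # local inverse-Hessian estimates
alpha = 0.1                                    # step size (illustrative)

for _ in range(200):
    # Consensus step: average iterates and tracked gradients over one-hop neighbors.
    x_mix = [sum(W[i, j] * x[j] for j in range(n_agents)) for i in range(n_agents)]
    g_mix = [sum(W[i, j] * g[j] for j in range(n_agents)) for i in range(n_agents)]
    for i in range(n_agents):
        x_new = x_mix[i] - alpha * B[i] @ g_mix[i]          # quasi-Newton descent step
        g_new = g_mix[i] + grad(i, x_new) - grad(i, x[i])   # dynamic average consensus
        # BFGS-style update of the local inverse-Hessian approximation.
        s, y = x_new - x[i], g_new - g[i]
        if s @ y > 1e-10:
            rho = 1.0 / (s @ y)
            V = np.eye(dim) - rho * np.outer(s, y)
            B[i] = V @ B[i] @ V.T + rho * np.outer(s, s)
        x[i], g[i] = x_new, g_new

# Centralized solution of the aggregate quadratic, for reference.
x_star = -np.linalg.solve(sum(A), sum(b))
print("max agent error:", max(np.linalg.norm(xi - x_star) for xi in x))
```

The gradient-tracking variable converges to the average gradient of the aggregate objective, so the secant pairs (s, y) let each agent approximate the curvature of the aggregate function from purely local and one-hop information, which is the general idea the abstract describes.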

Authors (2)
  1. Ola Shorinwa (16 papers)
  2. Mac Schwager (89 papers)
Citations (3)
