
Linear Convergent Distributed Nash Equilibrium Seeking with Compression (2211.07849v2)

Published 15 Nov 2022 in cs.MA, cs.SY, and eess.SY

Abstract: Information compression techniques are widely employed to reduce communication cost over peer-to-peer links. In this paper, we investigate distributed Nash equilibrium (NE) seeking problems in a class of non-cooperative games over directed graphs with information compression. To improve communication efficiency, a compressed distributed NE seeking (C-DNES) algorithm is proposed to obtain an NE of such games, where the differences between decision vectors and their estimates are compressed. The proposed algorithm is compatible with a general class of compression operators, including both unbiased and biased compressors. Moreover, our approach only requires the adjacency matrix of the directed graph to be row-stochastic, in contrast to past works that relied on balancedness or specific global network parameters. It is shown that C-DNES not only inherits the advantages of conventional distributed NE seeking algorithms, achieving a linear convergence rate for games with restricted strongly monotone mappings, but also saves communication cost in terms of transmitted bits. Finally, numerical simulations illustrate that C-DNES reduces communication cost by an order of magnitude under different compressors.
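
The communication-saving idea described in the abstract is that each player transmits only a compressed version of the difference (the "innovation") between its decision vector and a local estimate that its neighbors maintain. Below is a minimal, illustrative Python sketch of that idea and of the two compressor classes mentioned above (biased and unbiased); it is not the paper's C-DNES algorithm, and the function names (`topk_compress`, `rand_quantize`, `communicate`) and the single sender/receiver setup are assumptions made purely for illustration.

```python
import numpy as np

def topk_compress(x, k):
    """Biased compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_quantize(x, levels=4, rng=None):
    """Unbiased stochastic quantizer (QSGD-style): each entry is rounded to a
    grid scaled by the vector norm, with randomized rounding so E[Q(x)] = x."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(x)
    if norm == 0:
        return np.zeros_like(x)
    scaled = np.abs(x) / norm * levels
    lower = np.floor(scaled)
    prob = scaled - lower
    quantized = lower + (rng.random(x.shape) < prob)
    return norm * np.sign(x) * quantized / levels

def communicate(x_i, h_i, compressor):
    """Innovation compression (illustrative): instead of sending the full
    decision vector x_i, send the compressed difference between x_i and the
    local estimate h_i that the neighbors already track, then update h_i the
    same way on both sides."""
    q = compressor(x_i - h_i)   # only the compressed innovation is transmitted
    h_i_next = h_i + q          # sender and receivers update the estimate identically
    return q, h_i_next

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, h = rng.normal(size=10), np.zeros(10)

    # Repeated rounds with a biased top-k compressor: the estimate h tracks x.
    for _ in range(20):
        q, h = communicate(x, h, lambda v: topk_compress(v, 3))
    print("estimate error after 20 rounds:", np.linalg.norm(x - h))

    # Unbiasedness check for the stochastic quantizer: the empirical mean of
    # many compressed copies of x should be close to x itself.
    mean_q = np.mean([rand_quantize(x, levels=4, rng=rng) for _ in range(5000)], axis=0)
    print("quantizer bias (should be small):", np.linalg.norm(mean_q - x))
```

Even in this toy setting, the design choice is visible: compressing the innovation rather than the raw decision vector lets the transmitted signal shrink as the local estimate converges, which is what allows aggressive compressors to coexist with exact (linear) convergence in the paper's analysis.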
