Graphon Mean Field Games with a Representative Player: Analysis and Learning Algorithm

Published 8 May 2024 in math.OC, cs.AI, cs.GT, cs.LG, and stat.ML | arXiv:2405.08005v2

Abstract: We propose a discrete-time graphon game formulation on continuous state and action spaces, using a representative player to study stochastic games with heterogeneous interactions among agents. This formulation offers both philosophical and mathematical advantages over the widely adopted formulation based on a continuum of players. We prove existence and uniqueness of the graphon equilibrium under mild assumptions, and show that this equilibrium can be used to construct an approximate solution for finite-player games on networks, which are challenging to analyze and solve due to the curse of dimensionality. An online oracle-free learning algorithm is developed to compute the equilibrium numerically, and a sample complexity analysis is provided for its convergence.
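As a rough illustration of the fixed-point structure behind such a graphon equilibrium (not the paper's model or its oracle-free algorithm), one can discretize the label space and iterate a damped best-response map. Every modeling choice in the sketch below is a hypothetical assumption: the min graphon, the quadratic reward, and plain value iteration with a known transition kernel (i.e., an oracle, which the paper's online algorithm avoids).

```python
import numpy as np

# Hedged sketch: damped fixed-point iteration for a discretized graphon
# mean field game with a representative player per label. All modeling
# choices (graphon, dynamics, reward) are illustrative assumptions.

n_labels, n_states, n_actions = 8, 5, 3
rng = np.random.default_rng(0)

labels = (np.arange(n_labels) + 0.5) / n_labels
W = np.minimum.outer(labels, labels)          # example graphon W(x,y)=min(x,y)

# Transition kernel P[a, s, s'], shared by all labels (assumption).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

states = np.linspace(-1.0, 1.0, n_states)
actions = np.linspace(-1.0, 1.0, n_actions)

def aggregate(mu):
    """Graphon aggregate z(x) ~ (1/n) sum_y W(x,y) E_{mu_y}[s]."""
    mean_state = mu @ states                  # shape (n_labels,)
    return W @ mean_state / n_labels

def best_response(z_x, beta=0.95, iters=200):
    """Value iteration for the representative player at one label;
    the reward penalizes deviation from the aggregate and control effort."""
    r = -(states[:, None] - z_x) ** 2 - 0.1 * actions[None, :] ** 2
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r + beta * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                   # greedy action per state

def stationary_dist(policy):
    """Stationary distribution of the policy-controlled chain."""
    Ppi = P[policy, np.arange(n_states), :]   # (n_states, n_states)
    mu = np.full(n_states, 1.0 / n_states)
    for _ in range(500):
        mu = mu @ Ppi
    return mu

# Damped fixed-point iteration over the label-indexed mean field.
mu = np.full((n_labels, n_states), 1.0 / n_states)
for _ in range(50):
    z = aggregate(mu)
    mu_new = np.array([stationary_dist(best_response(z[i]))
                       for i in range(n_labels)])
    mu = 0.5 * mu + 0.5 * mu_new              # damping for stability

print(np.abs(aggregate(mu) - z).max())        # residual of the fixed point
```

The contrast with the paper is the oracle: this sketch uses the full kernel `P` inside value iteration, whereas the proposed algorithm learns the equilibrium online from samples, without such an oracle, and comes with a sample complexity guarantee.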

