Logit-Q Dynamics for Efficient Learning in Stochastic Teams (2302.09806v3)
Abstract: We present a new family of logit-Q dynamics for efficient learning in stochastic games by combining log-linear learning (also known as logit dynamics) for the repeated play of normal-form games with Q-learning for unknown Markov decision processes within the auxiliary stage-game framework. In this framework, we view stochastic games as agents repeatedly playing a stage game associated with the current state of the underlying game, where the agents' Q-functions determine the payoffs of these stage games. We show that the presented logit-Q dynamics reach a (near) efficient equilibrium in stochastic teams with unknown dynamics, and we quantify the approximation error. We also show the rationality of the logit-Q dynamics against agents following pure stationary strategies and, beyond stochastic teams, the convergence of the dynamics in stochastic games where the stage payoffs induce potential games yet only a single agent controls the state transitions. The key idea is to approximate the dynamics with a fictional scenario in which the Q-function estimates remain stationary over epochs whose lengths grow at a sufficiently slow rate. We then couple the dynamics in the main and fictional scenarios to show that the two scenarios become increasingly similar across epochs due to the vanishing step size and growing epoch lengths.
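To make the auxiliary stage-game framework concrete, below is a minimal Python sketch of logit-Q-style dynamics for a two-agent team. Within each epoch, one randomly chosen agent revises its action via a logit (log-linear) response to its own Q-estimates, which serve as the payoffs of the stage game at the current state; both agents then update their Q-estimates with a Q-learning step whose size vanishes across epochs. The environment in `env_step`, the epoch schedule, the revision protocol, and the max-based continuation value are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_choice(payoffs, temp):
    """Log-linear (logit) choice: P(a) proportional to exp(payoff(a) / temp)."""
    z = (payoffs - payoffs.max()) / temp  # shift for numerical stability
    p = np.exp(z)
    return rng.choice(len(payoffs), p=p / p.sum())

# Hypothetical two-agent team environment; sizes, rewards, and transitions
# are placeholders, not the paper's setup.
n_states, n_actions, gamma, temp = 3, 2, 0.95, 0.1

Q = [np.zeros((n_states, n_actions)) for _ in range(2)]  # per-agent local Q-estimates
a = [0, 0]                                               # current joint action
s = 0                                                    # current state

def env_step(s, a):
    r = float(a[0] == a[1])           # common team payoff: coordinate to earn 1
    return r, rng.integers(n_states)  # placeholder transition kernel

for epoch in range(1, 100):
    T = 5 * epoch          # epoch lengths grow at a slow rate, as in the paper
    alpha = 1.0 / epoch    # vanishing step size across epochs
    for _ in range(T):
        # Log-linear learning on the stage game at state s: one randomly chosen
        # agent revises its action via a logit response to its own Q-estimates
        # (the stage payoffs); the other agent repeats its previous action.
        i = rng.integers(2)
        a[i] = logit_choice(Q[i][s], temp)
        r, s_next = env_step(s, a)
        for j in range(2):
            # Q-learning update; the max continuation value is a simplifying
            # assumption, not necessarily the paper's exact rule.
            Q[j][s, a[j]] += alpha * (r + gamma * Q[j][s_next].max() - Q[j][s, a[j]])
        s = s_next
```

The growing epoch lengths and vanishing step size mirror the paper's key idea: within an epoch, the Q-estimates change slowly enough that the play approximates log-linear learning in a (nearly) fixed stage game.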
- Muhammed O. Sayin
- Onur Unlu
- Ahmed Said Donmez