Actor-Critic Policy Optimization in Partially Observable Multiagent Environments (1810.09026v5)
Abstract: Optimization of parameterized policies for reinforcement learning (RL) is an important and challenging problem in artificial intelligence. Among the most common approaches are algorithms based on gradient ascent of a score function representing discounted return. In this paper, we examine the role of these policy gradient and actor-critic algorithms in partially observable multiagent environments. We show several candidate policy update rules and relate them to a foundation of regret minimization and multiagent learning techniques for the one-shot and tabular cases, leading to previously unknown convergence guarantees. We apply our method to model-free multiagent reinforcement learning in adversarial sequential decision problems (zero-sum imperfect information games), using RL-style function approximation. We evaluate on commonly used benchmark poker domains, showing performance against fixed policies and empirical convergence to approximate Nash equilibria in self-play at rates similar to or better than a baseline model-free algorithm for zero-sum games, without any domain-specific state space reductions.
- Sriram Srinivasan
- Marc Lanctot
- Vinicius Zambaldi
- Julien Perolat
- Karl Tuyls
- Michael Bowling
- Remi Munos
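
As a rough illustration of the general setup the abstract refers to (gradient ascent on a return estimate with an actor and a critic), the sketch below shows a minimal tabular one-step actor-critic update with a softmax policy. It is not the paper's algorithm: the toy two-state environment, step sizes, and variable names are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): tabular one-step actor-critic
# with a softmax policy, i.e. gradient ascent on a return estimate via the
# score function. The toy 2-state MDP and all hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))   # policy logits (actor)
values = np.zeros(n_states)               # state-value estimates (critic)
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.95

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def step(state, action):
    # Toy dynamics: action 0 keeps the state, action 1 flips it;
    # reward 1 for landing in state 1, else 0 (purely illustrative).
    next_state = state if action == 0 else 1 - state
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state = 0
for t in range(5000):
    probs = softmax(theta[state])
    action = rng.choice(n_actions, p=probs)
    next_state, reward = step(state, action)

    # Critic: one-step TD error used as the advantage estimate.
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha_critic * td_error

    # Actor: policy-gradient ascent; for a softmax policy,
    # grad log pi(a|s) = one_hot(a) - pi(.|s).
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta[state] += alpha_actor * td_error * grad_log_pi

    state = next_state

print("learned policy:", np.array([softmax(theta[s]) for s in range(n_states)]))
```

In the multiagent, partially observable setting studied in the paper, the update rules differ from this single-agent sketch and are instead related to regret-minimization techniques; the sketch only fixes intuition for the actor-critic machinery being adapted.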