
Actor-Critic Policy Optimization in Partially Observable Multiagent Environments (1810.09026v5)

Published 21 Oct 2018 in cs.LG, cs.AI, cs.GT, cs.MA, and stat.ML

Abstract: Optimization of parameterized policies for reinforcement learning (RL) is an important and challenging problem in artificial intelligence. Among the most common approaches are algorithms based on gradient ascent of a score function representing discounted return. In this paper, we examine the role of these policy gradient and actor-critic algorithms in partially observable multiagent environments. We show several candidate policy update rules and relate them to a foundation of regret minimization and multiagent learning techniques for the one-shot and tabular cases, leading to previously unknown convergence guarantees. We apply our method to model-free multiagent reinforcement learning in adversarial sequential decision problems (zero-sum imperfect information games), using RL-style function approximation. We evaluate on commonly used benchmark Poker domains, showing performance against fixed policies and empirical convergence to approximate Nash equilibria in self-play with rates similar to or better than a baseline model-free algorithm for zero-sum games, without any domain-specific state space reductions.
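The policy update rules studied in the paper build on the standard actor-critic template: a critic estimates value, and the actor's parameters are moved along an estimate of the policy gradient weighted by an advantage signal. As a point of reference only, below is a minimal tabular sketch of that generic single-agent update; the function names, hyperparameters, and toy transitions are hypothetical, and this is not the paper's multiagent update rules or their regret-minimization-based variants.

```python
# Minimal sketch of a generic tabular advantage actor-critic (TD(0)) update.
# Illustrative of the standard single-agent form only, not the paper's method.
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def actor_critic_step(theta, v, s, a, r, s_next, done,
                      alpha_actor=0.1, alpha_critic=0.1, gamma=0.99):
    """One TD(0) actor-critic update.
    theta: (num_states, num_actions) policy logits (the actor)
    v:     (num_states,) state-value estimates (the critic)
    """
    # Critic: the one-step TD error doubles as an advantage estimate.
    target = r + (0.0 if done else gamma * v[s_next])
    td_error = target - v[s]
    v[s] += alpha_critic * td_error

    # Actor: policy-gradient step on the softmax logits;
    # grad of log pi(a|s) w.r.t. the logits is one_hot(a) - pi(.|s).
    pi = softmax(theta[s])
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log_pi
    return theta, v

# Tiny usage example on made-up transitions (illustrative only).
rng = np.random.default_rng(0)
num_states, num_actions = 4, 2
theta = np.zeros((num_states, num_actions))
v = np.zeros(num_states)
for _ in range(100):
    s = rng.integers(num_states)
    a = rng.choice(num_actions, p=softmax(theta[s]))
    r, s_next, done = rng.normal(), rng.integers(num_states), False
    theta, v = actor_critic_step(theta, v, s, a, r, s_next, done)
```

The paper's contribution is to analyze how update rules of this general shape behave when multiple agents learn simultaneously under partial observability, connecting them to regret minimization to obtain convergence guarantees in the one-shot and tabular cases.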

Authors (7)
  1. Sriram Srinivasan (23 papers)
  2. Marc Lanctot (60 papers)
  3. Vinicius Zambaldi (13 papers)
  4. Julien Perolat (37 papers)
  5. Karl Tuyls (58 papers)
  6. Michael Bowling (67 papers)
  7. Remi Munos (45 papers)
Citations (147)