Policy Distillation and Value Matching in Multiagent Reinforcement Learning (1903.06592v1)

Published 15 Mar 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Multiagent reinforcement learning (MARL) algorithms have been demonstrated on complex tasks that require the coordination of a team of multiple agents to complete. Existing works have focused on sharing information between agents via centralized critics to stabilize learning or through communication to increase performance, but do not generally look at how information can be shared between agents to address the curse of dimensionality in MARL. We posit that a multiagent problem can be decomposed into a multi-task problem where each agent explores a subset of the state space instead of exploring the entire state space. This paper introduces a multiagent actor-critic algorithm and method for combining knowledge from homogeneous agents through distillation and value-matching that outperforms policy distillation alone and allows further learning in both discrete and continuous action spaces.
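The abstract describes the method only at a high level. As a rough illustration of the two ingredients it names, here is a minimal sketch of a combined distillation step: a KL policy-distillation term that pulls a student policy toward each homogeneous teacher on that teacher's own visited states, plus a value-matching term that keeps the distilled critic consistent with the teacher critics so learning can continue afterward. This is a sketch under assumptions, not the paper's implementation: it assumes PyTorch, discrete actions (logit-producing actors), and all names (distill_with_value_matching, value_coef, the teacher/student modules) are hypothetical.

```python
import torch
import torch.nn.functional as F

def distill_with_value_matching(student_actor, student_critic,
                                teacher_actors, teacher_critics,
                                batches, value_coef=1.0):
    """One combined distillation step: a KL policy-distillation term plus
    an MSE value-matching term, summed over homogeneous teacher agents.
    Names and signature are illustrative, not taken from the paper."""
    loss = 0.0
    for teacher_pi, teacher_v, states in zip(teacher_actors,
                                             teacher_critics, batches):
        with torch.no_grad():
            target_logits = teacher_pi(states)   # teacher action logits
            target_values = teacher_v(states)    # teacher value estimates
        # Policy distillation: match the student's action distribution
        # to each teacher's on that teacher's own visited states.
        kl = F.kl_div(F.log_softmax(student_actor(states), dim=-1),
                      F.softmax(target_logits, dim=-1),
                      reduction="batchmean")
        # Value matching: keep the distilled critic consistent with the
        # teachers so that learning can resume from the distilled policy.
        v_loss = F.mse_loss(student_critic(states), target_values)
        loss = loss + kl + value_coef * v_loss
    return loss
```

Summing per-teacher losses over each teacher's own state batch reflects the abstract's decomposition idea: each agent explores only a subset of the state space, and distillation merges those subsets into one policy.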

Authors (4)
  1. Samir Wadhwania (3 papers)
  2. Dong-Ki Kim (21 papers)
  3. Shayegan Omidshafiei (34 papers)
  4. Jonathan P. How (159 papers)
Citations (24)
