Consolidation via Policy Information Regularization in Deep RL for Multi-Agent Games (2011.11517v1)
Abstract: This paper introduces an information-theoretic constraint on learned policy complexity in the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) reinforcement learning algorithm. Prior work with a related approach in continuous control suggests that this method favors policies that are more robust to changing environment dynamics. The multi-agent game setting naturally demands this type of robustness, because other agents' policies change throughout learning, making the environment nonstationary. For this reason, we compare recent methods in continual learning to our approach, termed Capacity-Limited MADDPG. Results from experiments in multi-agent cooperative and competitive tasks demonstrate that the capacity-limited approach is a promising candidate for improving learning performance in these environments.
- Tyler Malloy (7 papers)
- Tim Klinger (23 papers)
- Miao Liu (98 papers)
- Matthew Riemer (32 papers)
- Gerald Tesauro (29 papers)
- Chris R. Sims (3 papers)
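To make the core idea concrete, below is a minimal sketch of what a capacity-limited actor update could look like for a single MADDPG agent. The paper's exact mutual-information estimator is not given in the abstract, so this sketch makes assumptions: a Gaussian policy head and a KL-to-standard-normal penalty, whose expectation over observations upper-bounds the policy's mutual information I(O; A). The names `CAPACITY_BETA`, `GaussianActor`, `capacity_limited_actor_loss`, and the `centralized_critic` signature are illustrative, not the authors' implementation.

```python
# Sketch of a capacity-limited MADDPG actor objective (assumptions noted above).
import torch
import torch.nn as nn

CAPACITY_BETA = 0.01  # hypothetical trade-off weight on the information penalty


class GaussianActor(nn.Module):
    """Stochastic actor: maps a local observation to a Gaussian over actions."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.mu(h), self.log_std(h).clamp(-5.0, 2.0)


def kl_to_standard_normal(mu: torch.Tensor, log_std: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, sigma^2 I) || N(0, I) ); averaged over observations this
    upper-bounds the mutual information between observations and actions."""
    var = (2.0 * log_std).exp()
    return 0.5 * (var + mu.pow(2) - 1.0 - 2.0 * log_std).sum(dim=-1)


def capacity_limited_actor_loss(actor, centralized_critic, obs_i, joint_obs, other_actions):
    """MADDPG-style actor objective with an information-capacity penalty:
    maximize Q_i(x, a_i, a_-i) - beta * KL(pi(.|o_i) || N(0, I))."""
    mu, log_std = actor(obs_i)
    # Reparameterized sample so the critic's gradient flows into the actor.
    action_i = mu + log_std.exp() * torch.randn_like(mu)
    q_value = centralized_critic(joint_obs, action_i, other_actions)
    info_penalty = kl_to_standard_normal(mu, log_std)
    return (-q_value + CAPACITY_BETA * info_penalty).mean()
```

Under these assumptions, increasing `CAPACITY_BETA` pushes the policy toward a simple, observation-independent prior, trading expected return for lower policy complexity; the abstract's claim is that this trade-off buys robustness to the nonstationarity induced by other learning agents.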