Inducing Cooperative behaviour in Sequential-Social dilemmas through Multi-Agent Reinforcement Learning using Status-Quo Loss (2001.05458v2)

Published 15 Jan 2020 in cs.AI, cs.GT, and cs.LG

Abstract: In social dilemma situations, individual rationality leads to sub-optimal group outcomes. Several human engagements can be modeled as sequential (multi-step) social dilemmas. However, in contrast to humans, Deep Reinforcement Learning agents trained to optimize individual rewards in sequential social dilemmas converge to selfish, mutually harmful behavior. We introduce a status-quo loss (SQLoss) that encourages an agent to stick to the status quo, rather than repeatedly changing its policy. We show how agents trained with SQLoss evolve cooperative behavior in several social dilemma matrix games. To work with social dilemma games that have visual input, we propose GameDistill. GameDistill uses self-supervision and clustering to automatically extract cooperative and selfish policies from a social dilemma game. We combine GameDistill and SQLoss to show how agents evolve socially desirable cooperative behavior in the Coin Game.
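The abstract does not spell out the form of SQLoss, but the core idea (penalize deviating from the previous action, weighted by an imagined return from holding it) can be sketched as follows. The function name, the `kappa`/`gamma` parameters, and the exact weighting are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def status_quo_loss(logits, prev_action, kappa=3, gamma=0.96, imagined_reward=1.0):
    """Hypothetical sketch of an SQLoss-style auxiliary term.

    Encourages the policy to keep taking `prev_action` by maximizing the
    log-probability of repeating it, weighted by the discounted return the
    agent imagines collecting over `kappa` repeated (status-quo) steps.
    All names and the weighting scheme here are assumptions for illustration.
    """
    # Softmax over action logits (numerically stabilized).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Imagined discounted return from holding the status quo for kappa steps.
    imagined_return = sum(gamma**t * imagined_reward for t in range(kappa))
    # Negative log-likelihood of repeating the previous action, scaled by it.
    return -imagined_return * np.log(probs[prev_action])

# Toy usage: two-action matrix game; the previous action was index 0.
loss = status_quo_loss(np.array([2.0, 0.5]), prev_action=0)
```

In training, a term like this would be added to the usual policy-gradient objective, so that switching strategies carries an extra cost relative to repeating the last action.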

Authors (7)
  1. Pinkesh Badjatiya
  2. Mausoom Sarkar
  3. Abhishek Sinha
  4. Siddharth Singh
  5. Nikaash Puri
  6. Jayakumar Subramanian
  7. Balaji Krishnamurthy
Citations (2)