Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas (2402.17270v2)

Published 27 Feb 2024 in cs.AI, cs.GT, cs.HC, cs.LG, and cs.MA

Abstract: The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines, including computer science and social science. Recent advancements in AI have significantly reshaped this field, offering fresh insights into understanding and enhancing cooperation. This survey examines three key areas at the intersection of AI and cooperation in social dilemmas. First, focusing on multi-agent cooperation, we review the intrinsic and external motivations that support cooperation among rational agents, and the methods employed to develop effective strategies against diverse opponents. Second, looking into human-agent cooperation, we discuss the current AI algorithms for cooperating with humans and the human biases towards AI agents. Third, we review the emergent field of leveraging AI agents to enhance cooperation among humans. We conclude by discussing future research avenues, such as using LLMs, establishing unified theoretical frameworks, revisiting existing theories of human cooperation, and exploring multiple real-world applications.

The paper "Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas" provides a comprehensive examination of the evolving role of AI in fostering cooperation within social dilemmas, an area of study traditionally explored in disciplines such as computer science and social science. The survey focuses on three principal domains:

  1. Multi-Agent Cooperation: The paper explores how AI has been employed to facilitate cooperation among rational agents. It highlights both intrinsic motivations (such as shared goals) and external motivations (such as rewards) that are crucial for cooperation. The survey also discusses various strategies agents use to cope with and outperform diverse opponents, stressing the development of robust algorithms for strategic interaction.
  2. Human-Agent Cooperation: The research explores the interaction between AI and humans, detailing current AI algorithms designed for seamless human-agent collaboration. It also addresses human biases against AI partners, which can hinder effective cooperation. The authors argue that understanding and mitigating these biases is vital for improving human-agent interactions.
  3. Enhancing Human Cooperation Through AI: This emergent area investigates how AI can be used to amplify cooperative behaviors among humans. The paper suggests that AI agents can mediate and improve human interactions by identifying cooperation opportunities and encouraging collaborative actions.

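The canonical setting behind much of this work is the iterated prisoner's dilemma. As a minimal sketch (not taken from the paper itself), the toy simulation below uses the standard payoff values (T=5, R=3, P=1, S=0) and the classic tit-for-tat strategy to show how reciprocity sustains cooperation while limiting exploitation by a defector; the strategy and function names are illustrative choices, not the survey's notation.

```python
# Illustrative iterated prisoner's dilemma, the canonical social dilemma
# surveyed in the paper. Payoffs use the standard T=5, R=3, P=1, S=0 values.

PAYOFF = {  # (my_move, their_move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each side records the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Mutual reciprocity earns the cooperative payoff every round,
# while reciprocity against a pure defector concedes only the first round.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Much of the multi-agent work reviewed in the survey can be read as learning or shaping strategies in richer, sequential versions of exactly this kind of game.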
The authors conclude by identifying future research directions, including leveraging LLMs to process and understand complex social interactions. They also propose developing unified theoretical frameworks to better integrate existing theories of human cooperation with AI advancements. Lastly, the paper emphasizes exploring practical applications in various domains, suggesting that such integration could yield significant socio-economic benefits.

The survey underscores the transformative impact of AI on studying cooperation in social dilemmas and highlights the potential of AI to bridge gaps between theoretical research and real-world applications.

Authors (6)
  1. Hao Guo
  2. Chunjiang Mu
  3. Yang Chen
  4. Chen Shen
  5. Shuyue Hu
  6. Zhen Wang
Citations (2)