Evolution of cooperation in the public goods game with Q-learning (2407.19851v1)

Published 29 Jul 2024 in q-bio.PE, cond-mat.stat-mech, and nlin.AO

Abstract: The recent paradigm shift from imitation learning to reinforcement learning (RL) has proven productive in understanding human behaviors. In the RL paradigm, individuals search for optimal strategies through interaction with the environment, which means that gathering, processing, and utilizing information from their surroundings is crucial. However, existing studies typically focus on pairwise games such as the prisoner's dilemma and employ a self-regarding setup, where individuals play against a single opponent based solely on their own strategies, neglecting environmental information. In this work, we investigate the evolution of cooperation in a multiplayer game, the public goods game, using the Q-learning algorithm and leveraging environmental information. Specifically, players make decisions based on the cooperation information in their neighborhood. Our results show that cooperation is more likely to emerge than under imitation learning with the Fermi rule. Of particular interest is an anomalous non-monotonic dependence that is revealed when voluntary participation is further introduced. An analysis of the Q-table explains the mechanisms behind the evolution of cooperation. Our findings indicate the fundamental role of environmental information in the RL paradigm for understanding the evolution of cooperation, and human behaviors in general.
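
For readers who want to experiment with the general setup described in the abstract, below is a minimal illustrative sketch of Q-learning agents playing the spatial public goods game, with each player's state given by the number of cooperating neighbors. All parameter values (lattice size L, synergy factor r, learning rate alpha, discount factor gamma, exploration rate epsilon) and implementation details are assumptions chosen for illustration, not taken from the paper, and the voluntary-participation variant is not included.

```python
import numpy as np

# Illustrative sketch only: Q-learning on a periodic square lattice playing
# the public goods game. State = number of cooperating von Neumann neighbors
# (0..4); actions: 0 = defect, 1 = cooperate. Parameters are assumed values.
L = 10          # lattice side length (assumed, kept small for a quick demo)
r = 3.5         # synergy/enhancement factor of the public goods game (assumed)
alpha = 0.1     # Q-learning rate (assumed)
gamma = 0.9     # discount factor (assumed)
epsilon = 0.02  # exploration rate (assumed)
steps = 2000

rng = np.random.default_rng(0)
Q = np.zeros((L, L, 5, 2))                    # one Q-table per player
action = rng.integers(0, 2, size=(L, L))      # random initial strategies

def neighbors(i, j):
    """Von Neumann neighbors with periodic boundaries."""
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def state_of(i, j, acts):
    """Number of cooperating neighbors, in 0..4."""
    return sum(acts[x] for x in neighbors(i, j))

def payoff(i, j, acts):
    """Accumulated payoff of player (i, j) over the five groups it belongs to."""
    total = 0.0
    for ci, cj in [(i, j)] + neighbors(i, j):
        group = [(ci, cj)] + neighbors(ci, cj)
        n_coop = sum(acts[x] for x in group)
        total += r * n_coop / len(group) - acts[i, j]  # cost 1 per group if cooperating
    return total

for t in range(steps):
    states = np.array([[state_of(i, j, action) for j in range(L)] for i in range(L)])

    # Epsilon-greedy action selection from each player's own Q-table
    greedy = Q[np.arange(L)[:, None], np.arange(L), states].argmax(axis=-1)
    explore = rng.random((L, L)) < epsilon
    new_action = np.where(explore, rng.integers(0, 2, size=(L, L)), greedy)

    rewards = np.array([[payoff(i, j, new_action) for j in range(L)] for i in range(L)])
    new_states = np.array([[state_of(i, j, new_action) for j in range(L)] for i in range(L)])

    # Standard Q-learning update for every player
    for i in range(L):
        for j in range(L):
            s, a, s2 = states[i, j], new_action[i, j], new_states[i, j]
            Q[i, j, s, a] += alpha * (rewards[i, j] + gamma * Q[i, j, s2].max() - Q[i, j, s, a])

    action = new_action

print("final cooperation fraction:", action.mean())
```

Encoding the state as the local number of cooperators is one simple way to give agents access to the neighborhood information emphasized in the abstract; other state definitions or payoff schemes would change the quantitative outcome.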

Citations (1)
