Optimal coordination of resources: A solution from reinforcement learning (2312.14970v2)
Abstract: Efficient allocation is important in nature and human society, where individuals frequently compete for limited resources. The Minority Game (MG) is perhaps the simplest toy model to address this issue. However, most previous solutions assume that strategies are given a priori and remain static, failing to capture their adaptive nature. Here, we introduce the reinforcement learning (RL) paradigm into the MG, where individuals dynamically adjust their decisions based on accumulated experience and expected rewards. We find that this RL framework achieves optimal resource coordination when individuals balance the exploitation of experience with random exploration. An imbalance between the two, however, leads to suboptimal partial coordination or even anti-coordination. Our mechanistic analysis reveals a symmetry breaking in action preferences at the optimum, offering a fresh solution to the MG and new insights into the resource allocation problem.
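To make the exploitation-exploration balance described above concrete, the following is a minimal sketch of independent, stateless Q-learning agents playing the Minority Game with epsilon-greedy action selection. The parameters `ALPHA`, `GAMMA`, and `EPSILON`, as well as the stateless formulation, are illustrative assumptions and not necessarily the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 101          # odd number of agents, so a strict minority always exists
STEPS = 5000     # number of rounds
ALPHA = 0.1      # learning rate: weight given to new experience (assumed)
GAMMA = 0.9      # discount factor for expected future reward (assumed)
EPSILON = 0.05   # probability of random exploration (assumed)

# One Q-value per agent per action (two actions: side 0 or side 1).
# A stateless simplification; the paper's agents may condition on history.
Q = np.zeros((N, 2))

for t in range(STEPS):
    # Epsilon-greedy choice: exploit learned preferences, explore at random.
    greedy = Q.argmax(axis=1)
    explore = rng.random(N) < EPSILON
    actions = np.where(explore, rng.integers(0, 2, N), greedy)

    # Agents on the minority side win (reward 1); the majority gets nothing.
    counts = np.bincount(actions, minlength=2)
    minority = counts.argmin()
    rewards = (actions == minority).astype(float)

    # Standard Q-learning update (stateless, so the bootstrap term is max_a Q).
    idx = np.arange(N)
    Q[idx, actions] += ALPHA * (rewards + GAMMA * Q.max(axis=1) - Q[idx, actions])

# How evenly the learned greedy preferences split the population; a split
# close to N/2 vs. N/2 indicates good coordination.
print("greedy-action split:", np.bincount(Q.argmax(axis=1), minlength=2))
```

In this toy setup, lowering `EPSILON` toward zero makes agents rely purely on accumulated experience, while raising it makes their play essentially random; the abstract's claim is that only a balanced mix of the two yields optimal coordination.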