Priority Based Synchronization for Faster Learning in Games (2209.02617v1)
Abstract: Learning in games has been widely used to solve many cooperative multi-agent problems such as coverage control, consensus, self-reconfiguration, or vehicle-target assignment. One standard approach in this domain is to formulate the problem as a potential game and to use an algorithm such as log-linear learning to achieve stochastic stability of the globally optimal configurations. Standard versions of such learning algorithms are asynchronous, i.e., only one agent updates its action in each round of the learning process. To enable faster learning, we propose a synchronization strategy based on decentralized random prioritization of agents, which allows multiple agents to change their actions simultaneously whenever they do not affect each other's utilities or feasible action sets. We show that the proposed approach can be integrated into any standard asynchronous learning algorithm to improve the convergence speed while maintaining the limiting behavior (e.g., the set of stochastically stable configurations). We support our theoretical results with simulations in a coverage control scenario.
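One way to read the proposed rule: in each round, every agent independently draws a random priority, and an agent updates only if it outranks all agents whose utilities or feasible action sets its move could affect; all other agents repeat their current action. The sketch below illustrates one such synchronized round layered on log-linear learning. It is a minimal illustration under assumed interfaces (`neighbors`, `utility`, `feasible`, and the temperature `tau` are hypothetical names, not the paper's notation).

```python
import math
import random

def synchronized_round(agents, actions, neighbors, utility, feasible, tau=0.1):
    """One round of priority-based synchronized log-linear learning (sketch).

    Assumed interfaces (illustrative, not from the paper):
      neighbors[i]        -- agents whose utility/feasible set agent i's move affects
      utility(i, a, acts) -- agent i's utility for action a given joint profile acts
      feasible(i, acts)   -- agent i's currently feasible actions
      tau                 -- log-linear learning temperature
    """
    # Each agent draws its priority independently (decentralized, no coordinator).
    priority = {i: random.random() for i in agents}
    new_actions = dict(actions)
    for i in agents:
        # Agent i updates only if it outranks every agent it interacts with,
        # so all simultaneous updaters are mutually non-interfering.
        if all(priority[i] > priority[j] for j in neighbors[i]):
            candidates = list(feasible(i, actions))
            # Log-linear (softmax) choice over the feasible actions.
            weights = [math.exp(utility(i, a, actions) / tau) for a in candidates]
            new_actions[i] = random.choices(candidates, weights=weights)[0]
    return new_actions
```

Since continuous priorities are almost surely distinct, the agent with the globally highest priority always updates, so each synchronized round performs at least as much progress as one asynchronous step while letting non-interacting agents move in parallel.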