Learning in Time-Varying Monotone Network Games with Dynamic Populations (2408.06253v1)
Abstract: In this paper, we present a framework for multi-agent learning in a nonstationary dynamic network environment. More specifically, we examine projected gradient play in smooth monotone repeated network games in which the agents' participation and connectivity vary over time. We model this changing system with a stochastic network that takes a new independent realization at each repetition. We show that the strategy profile learned by the agents through projected gradient dynamics over the sequence of network realizations converges, almost surely and in the mean-square sense, to a Nash equilibrium of the game in which players minimize their expected cost. We then show that, with high probability, the learned strategy profile is an almost-Nash equilibrium of the game played by the agents at each stage of the repeated game. Using these two results, we derive non-asymptotic bounds on the regret incurred by the agents.
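To make the dynamics concrete, below is a minimal sketch of projected gradient play over independently redrawn networks. It is not the paper's model: the quadratic cost, the Erdős–Rényi network, the box action set, and all parameter values (`beta`, `eta`, `p`) are illustrative assumptions chosen so the pseudo-gradient is strongly monotone and the expected game has Nash equilibrium at zero.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10      # number of agents
T = 5000    # repetitions of the game
eta = 0.05  # gradient step size (assumption)
p = 0.5     # edge probability of the random network (assumption)

# Hypothetical quadratic network-game cost (not the paper's model):
#   J_i(x; A) = 0.5 * x_i^2 + beta * x_i * sum_j A[i, j] * x_j,
# whose pseudo-gradient is strongly monotone for small enough beta.
beta = 0.05

def sample_network():
    """Draw an independent Erdos-Renyi adjacency matrix (symmetric, no self-loops)."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

def partial_gradients(x, A):
    """Gradient of each agent's cost with respect to its own action."""
    return x + beta * A @ x

x = rng.uniform(-1.0, 1.0, size=n)  # initial strategy profile
for t in range(T):
    A_t = sample_network()           # new independent network realization
    grad = partial_gradients(x, A_t)
    # Projected gradient step onto the action set [-1, 1] for each agent.
    x = np.clip(x - eta * grad, -1.0, 1.0)

# For this cost, the Nash equilibrium of the expected game is x* = 0,
# so the iterates should shrink toward zero.
print("final profile:", np.round(x, 4))
```

Varying participation, as in the paper's setting, could be sketched in the same loop by zeroing out the rows and columns of `A_t` corresponding to inactive agents and freezing their actions for that round.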