Finite Approximations for Mean Field Type Multi-Agent Control and Their Near Optimality (2211.09633v3)
Abstract: We study a multi-agent mean field type control problem in discrete time where the agents aim to find a socially optimal strategy and where the state and action spaces of the agents are assumed to be continuous. The agents are only weakly coupled through the distribution of their state variables. The problem in its original form can be formulated as a classical Markov decision process (MDP); however, this formulation suffers from several practical difficulties. In this work, we attempt to overcome the curse of dimensionality, the coordination complexity between the agents, and the necessity of perfect feedback collection from all the agents (which may be difficult for large populations). We provide several approximations: we establish the near optimality of the state and action space discretization of the agents under standard regularity assumptions by constructing and studying the measure-valued MDP counterpart for both finite and infinite population settings. Considering the infinite population problem is a well-known approach for mean-field type models, since it yields symmetric policies for the agents, which simplifies coordination. However, the optimality analysis is harder, as the state space of the measure-valued infinite population MDP remains continuous (even after space discretization of the agents). Therefore, as a final step, we provide further approximations for the infinite population problem by focusing on smaller-sized sub-population distributions.
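To make the lifting step concrete, here is a minimal illustrative sketch, not taken from the paper: continuous agent states are quantized onto a finite grid, and the resulting empirical distribution over grid cells becomes the (finite-dimensional) state of the measure-valued MDP. The uniform grid on [0, 1), the bin count, and the function names are hypothetical choices for illustration, not the paper's construction.

```python
import numpy as np

def quantize(states: np.ndarray, num_bins: int) -> np.ndarray:
    """Map each continuous state in [0, 1) to the index of its grid cell.
    Assumes a uniform grid; this is an illustrative discretization only."""
    return np.minimum((states * num_bins).astype(int), num_bins - 1)

def empirical_measure(bin_indices: np.ndarray, num_bins: int) -> np.ndarray:
    """Empirical distribution of the population over the finite grid.
    This vector in the simplex is the state of the lifted, measure-valued MDP."""
    counts = np.bincount(bin_indices, minlength=num_bins)
    return counts / counts.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_agents, num_bins = 1000, 10          # hypothetical population and grid sizes
    x = rng.uniform(0.0, 1.0, size=num_agents)  # continuous agent states
    mu = empirical_measure(quantize(x, num_bins), num_bins)
    print(mu)  # finite-dimensional measure-valued state
```

As the population grows, this empirical measure approaches its mean-field limit, which is why the infinite population problem admits symmetric policies driven by the distribution alone.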