Distributed Stochastic Search for Multi-Agent Model Predictive Control (2510.18211v1)
Abstract: Many real-world multi-agent systems exhibit nonlinear dynamics and complex inter-agent interactions. As these systems grow in scale, the main challenges are achieving scalability and handling nonconvexity. To address these challenges, this paper presents a distributed sampling-based optimization framework for multi-agent model predictive control (MPC). We first introduce stochastic search, a generalized sampling-based optimization method, as an effective approach to solving nonconvex MPC problems owing to its exploration capabilities. However, optimizing a multi-agent system in a centralized fashion is not scalable, since the computational complexity grows intractably with the number of agents. To achieve scalability, we formulate a distributed MPC problem and employ the alternating direction method of multipliers (ADMM) to solve it in a distributed manner. In multi-robot navigation simulations, the proposed method shows a remarkable capability to navigate through nonconvex environments, outperforming a distributed optimization baseline based on the interior point optimizer (IPOPT). In a 64-agent multi-car formation task with a challenging configuration, our method achieves 100% task completion with zero collisions, whereas distributed IPOPT fails to find a feasible solution.
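To make the approach described in the abstract concrete, below is a minimal sketch of how one agent's sampling-based (stochastic search) MPC update might be coupled to an ADMM consensus penalty. The single-integrator dynamics, cost weights, consensus variable `z`, dual variable `lam`, and penalty parameter `rho` are all illustrative assumptions, not the paper's actual formulation or implementation.

```python
# Hedged sketch: one stochastic-search (cross-entropy style) MPC iteration for a
# single agent, scored with an ADMM augmented-Lagrangian coupling term.
# All dynamics, costs, and parameter values here are assumptions for illustration.
import numpy as np

def rollout(x0, U, dt=0.1):
    """Propagate assumed 2D single-integrator dynamics over the control sequence U."""
    X = [x0]
    for u in U:
        X.append(X[-1] + dt * u)
    return np.array(X)  # shape (H+1, 2)

def stage_cost(X, U, goal, obstacles, radius=0.5):
    """Goal-tracking and control-effort cost plus a soft penalty for entering
    circular obstacle regions (a simple stand-in for a nonconvex environment)."""
    c = np.sum((X[-1] - goal) ** 2) + 1e-2 * np.sum(U ** 2)
    for obs in obstacles:
        d = np.linalg.norm(X - obs, axis=1)
        c += 1e3 * np.sum(np.maximum(0.0, radius - d))
    return c

def stochastic_search_step(x0, mean_U, goal, obstacles, z, lam, rho,
                           n_samples=256, sigma=0.5, elite_frac=0.1):
    """One iteration: sample control sequences around the current mean, score each
    rollout with the MPC cost plus ADMM terms, and refit the mean from the elites."""
    H, m = mean_U.shape
    samples = mean_U + sigma * np.random.randn(n_samples, H, m)
    costs = np.empty(n_samples)
    for k in range(n_samples):
        X = rollout(x0, samples[k])
        # ADMM augmented-Lagrangian terms couple this agent's trajectory X to the
        # neighborhood consensus variable z (with dual variable lam, penalty rho).
        admm = np.sum(lam * (X - z)) + 0.5 * rho * np.sum((X - z) ** 2)
        costs[k] = stage_cost(X, samples[k], goal, obstacles) + admm
    elite = samples[np.argsort(costs)[: max(1, int(elite_frac * n_samples))]]
    return elite.mean(axis=0)  # updated sampling mean for the next iteration
```

In a usage loop, each agent would run several such iterations per MPC step, exchange trajectories with neighbors to update the consensus variable `z` and dual `lam`, and then apply the first control of the resulting `mean_U`; the exact number of iterations, consensus structure, and update rules in the paper may differ from this sketch.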