
Distributed Model Predictive Controller

Updated 11 December 2025
  • Distributed Model Predictive Controller is a decentralized control strategy where local controllers solve optimization problems and exchange information to achieve global objectives.
  • It employs techniques like ADMM, consensus, and tube-based methods to ensure scalability, robustness, and recursive feasibility despite dynamic and coupling constraints.
  • Applications include multi-agent robotics, smart energy systems, and wind farms, where local coordination enhances real-time performance and overall system efficiency.

A distributed model predictive controller (DMPC) is a control strategy in which a global system, often consisting of spatially or functionally separated subsystems, is coordinated via multiple MPC controllers that each optimize local or localized objectives, exchanging information with their neighbors or collaborators. Each controller solves an optimization problem at every time step, potentially accounting for dynamic coupling, state and input constraints, and cooperative or competitive objectives. The essential feature is that computation and communication are distributed or decentralized, eschewing central coordination in favor of scalability, robustness, privacy, and reduced computational burden.

1. Mathematical Formulation and Core Principles

Consider a network of $m$ agents or subsystems. Agent $i$ is represented by (possibly nonlinear) discrete-time dynamics:

$$x_i(t+1) = f_i\bigl(x_i(t), u_i(t)\bigr), \quad x_i \in X_i, \ u_i \in U_i,$$

with local constraints $Z_i \subset X_i \times U_i$, neighboring set $\mathcal{N}_i$, coupling constraints (e.g., collision avoidance, resource limits, communication topology), and possibly additional global or network-wide constraints.

Each agent at time $t$ solves an optimization problem of the form:

$$\begin{aligned}
\min_{u_i(\cdot)}\ & J_i^{\mathrm{tr}}\bigl(x_i(t), u_i, r_{T,i}\bigr) + W_i^c\bigl(y_{T,i}, y_{T,\mathcal{N}_i}\bigr) + \lambda(N)\, V_i^\Delta\bigl(y_{T,i}, y_{T,i}^{\mathrm{pr}}\bigr), \\
\text{s.t. } & \bigl(x_{i,u_i}(k), u_i(k)\bigr) \in Z_i, \quad 0 \leq k \leq N, \\
& \bigl(x_{i,u_i}(k), x_{\mathcal{N}_i,u_{\mathcal{N}_i}}(k)\bigr) \in \mathcal{C}_i, \\
& x_{i,u_i}(N) \in \mathcal{X}_i^f, \quad \bigl(y_{T,i}, y_{T,\mathcal{N}_i}\bigr) \in \mathcal{Y}_{T,i}.
\end{aligned}$$

Here, $r_{T,i}$ is an artificial reference or periodic trajectory, $W_i^c$ is a cooperative penalty coupling to neighbors, and $V_i^\Delta$ penalizes rapid or non-smooth adaptation of the reference trajectory. The cooperation cost may encode objectives like formation keeping, tracking, consensus, or more sophisticated team missions (Köhler et al., 31 Mar 2025, Köhler et al., 2023).

Local optimization is carried out using state, constraint, and coupling data acquired only from the agent’s neighborhood; global feasibility and stability are guaranteed by careful terminal set selection, invariance conditions, and cooperative cost design. Closed-loop execution typically involves the application of the first element of the optimal control sequence and the update and communication of the reference or prediction trajectories.
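
To make the structure above concrete, here is a minimal sketch of one agent's local problem, assuming linear dynamics, quadratic costs, and a simplified terminal condition; it is written with cvxpy, and the names (y_art for the artificial reference, neighbor_refs for the neighbors' communicated references) and the specific penalty weights are illustrative assumptions rather than the formulation of any cited work.

```python
# Illustrative local DMPC step for one agent: track an artificial reference
# y_art, penalize disagreement of y_art with the neighbors' references
# (cooperative term W_i^c), and penalize changes of y_art between time steps
# (the V_i^Delta term). Linear dynamics and box input limits are assumed.
import numpy as np
import cvxpy as cp

def solve_local_ocp(A, B, x0, neighbor_refs, y_prev=None, N=10, lam=1.0):
    n, m = B.shape
    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    y_art = cp.Variable(n)                      # artificial (negotiated) reference

    cost = 0
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k] - y_art) + 0.1 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.norm(u[:, k], "inf") <= 1.0]          # local input constraint
    for y_j in neighbor_refs:                   # cooperative penalty toward neighbors
        cost += cp.sum_squares(y_art - y_j)
    if y_prev is not None:                      # penalize fast reference adaptation
        cost += lam * cp.sum_squares(y_art - y_prev)
    constr += [x[:, N] == y_art]                # simplified terminal condition

    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value, y_art.value           # apply first input, broadcast y_art
```

In closed loop, each agent would apply the returned first input and communicate the updated artificial reference to its neighbors before the next sampling instant, mirroring the execution pattern described above.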

2. Distributed Optimization, Communication, and Scalability

DMPC algorithms exploit problem structure (sparsity, locality, separability) to enable scalable computation and communication. Several architectural patterns and distributed optimization techniques are prevalent:

  • Neighbor-to-Neighbor Communication: Each agent transmits only local variables or trajectories to its immediate neighbors, matching the system’s physical or logical interaction graph. For example, in multi-agent robotic systems with connectivity constraints, agents exchange predicted positions and reference trajectories (Carron et al., 2023).
  • Parallelization: Subsystems solve their local optimization problems simultaneously, exchanging the necessary coordination information at each iteration or time step. This parallelized execution is critical for large-scale systems and enables significant computational acceleration (Wiltz et al., 2021, Scheur et al., 2020).
  • ADMM and Consensus-Based Methods: Alternating Direction Method of Multipliers (ADMM) and related consensus schemes decompose the global DMPC problem into local subproblems, enforcing consistency among shared variables via dual updates and proximal terms. This enables closed-form or low-dimensional local QP or SDP solves even with complex global structure (Alonso et al., 2022, Alonso et al., 2019, Alonso et al., 2021); a minimal consensus-ADMM sketch follows this list.
  • Primal-Dual and Dual Decomposition: When system constraints involve global variables (e.g., networked resource or connectivity constraints), distributed primal-dual strategies or dual decomposition can relax and distribute the coupling via Lagrange multipliers, leading to scalable iterative algorithms (Su et al., 2019, Lefebure et al., 2021).
  • Tube-Based and Robust Approaches: Robust distributed MPC schemes employ tube-based approaches with invariant set tightening to manage coupling uncertainties and maintain constraint satisfaction under bounded disturbances, leveraging parallel computation of tubes and invariance conditions (Hernandez et al., 2016, Alonso et al., 2021); a simple constraint-tightening sketch appears at the end of this section.
  • Asynchronous Execution and Event-Triggering: Some architectures allow subsystems to update at their own event-driven schedules, adapting prediction horizons and constraints to optimize performance-communication tradeoffs (Chen et al., 17 May 2024).
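
To illustrate the ADMM pattern referenced in the list above, the following is a minimal global-consensus sketch in which each agent privately holds a quadratic cost over a single shared variable; the closed-form local update, averaging step, and scaled dual update are the standard consensus-ADMM iterations, and all numbers are purely illustrative.

```python
# Consensus ADMM sketch: agent i minimizes w_i/2 * (z - a_i)^2 over a shared
# variable z; agreement is reached by alternating parallel local solves,
# averaging of the local proposals, and dual (scaled multiplier) updates.
import numpy as np

def consensus_admm(a, w, rho=1.0, iters=100):
    a, w = np.asarray(a, float), np.asarray(w, float)
    x = np.zeros(len(a))        # local copies of the shared variable
    u = np.zeros(len(a))        # scaled dual variables
    z = 0.0                     # consensus value
    for _ in range(iters):
        x = (w * a + rho * (z - u)) / (w + rho)   # closed-form local solves (parallel)
        z = np.mean(x + u)                        # consensus (averaging) step
        u = u + x - z                             # dual update penalizes disagreement
    return z

# converges to the weighted average argmin_z sum_i w_i/2 (z - a_i)^2
print(consensus_admm(a=[1.0, 2.0, 4.0], w=[1.0, 1.0, 2.0]))   # ≈ 2.75
```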

The computational and communication complexity per agent typically depends only on the number of neighbors and the prediction horizon, not on the overall system size, enabling scalability to large networks (Alonso et al., 2021).
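
As a complement to the scalability discussion, the constraint-tightening idea behind the tube-based schemes listed above can be sketched as follows; the infinity-norm error bound and the fixed ancillary gain K are simplifying assumptions, whereas the cited works use rigorous set-based (invariant-set) computations.

```python
# Sketch of tube-style constraint tightening: nominal state bounds are shrunk
# at each prediction step by a bound on the deviation e(k) of the true state
# from the nominal one, where e(k+1) = (A + B K) e(k) + w and ||w||_inf <= w_inf.
import numpy as np

def tightened_bounds(A, B, K, w_inf, x_max, N):
    A_K = A + B @ K                     # error dynamics under the ancillary feedback
    M = np.eye(A.shape[0])              # holds A_K**k
    margin, bounds = 0.0, []
    for k in range(N):
        bounds.append(x_max - margin)   # tightened state bound at step k
        margin += np.linalg.norm(M, ord=np.inf) * w_inf   # add ||A_K^k||_inf * w_inf
        M = A_K @ M
    return np.array(bounds)
```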

3. Coupling, Cooperative Objectives, and Coordination Mechanisms

Distributed MPC frameworks support various forms of subsystem interaction, including:

  • State/Constraint Coupling: Subsystem dynamics may be coupled through states (e.g., interconnected masses, wind farm wake interaction (Scheur et al., 2020)), or through constraints (e.g., joint state/input limits, collision avoidance, connectivity maintenance).
  • Cooperative Objective Encoding: Cooperative behavior is shaped by artificial references, penalty terms, or explicit constraints. Artificial references or trajectories (periodic or otherwise) serve as intermediate targets generated within the DMPC loop and are incrementally aligned to approach the joint cooperative objective (e.g., periodic formation, synchronized orbits, narrow-passage traversal) (Köhler et al., 31 Mar 2025, Köhler et al., 2023).
  • Global Constraints via Local Penalties: Global properties (e.g., communication connectivity via algebraic graph constraints on the Fiedler eigenvalue (Carron et al., 2023), aggregate power-tracking in wind farms (Scheur et al., 2020), or multi-building energy balance (Lefebure et al., 2021)) are embedded as locally computable penalties or constraints, enforced in the distributed optimization.
  • Goal Coordination via Augmented Lagrangian: Coupling variables between neighbors are coordinated via dual variables and consistency penalties, forming an augmented Lagrangian framework that allows for scalable and robust convergence (Eini et al., 2019).
  • Switching and Nonconvexity: Distributed MPC for hybrid or PWA systems uses switching-ADMM-like algorithms to efficiently manage mode-dependent nonconvexities by alternately convexifying over locally detected active regions, rather than direct MIQP solves (Mallick et al., 25 Apr 2024).

The choice of cooperative objective and coordination strategy directly determines the system’s emergent behavior, the speed and reliability of convergence, and the tradeoff between local autonomy and global performance.
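
As one concrete coordination ingredient, the connectivity-maintenance constraint mentioned above amounts to keeping the Fiedler eigenvalue (the second-smallest eigenvalue of the weighted graph Laplacian) above a threshold; the distance-based edge weights in the sketch below are an illustrative assumption, not the formulation used in the cited reference.

```python
# The Fiedler eigenvalue lambda_2 of the graph Laplacian is positive iff the
# communication graph is connected, so a constraint lambda_2 >= eps inside the
# DMPC problem keeps the team connected.
import numpy as np

def fiedler_eigenvalue(positions, comm_range=2.0):
    positions = np.asarray(positions, float)
    m = len(positions)
    W = np.zeros((m, m))                              # weighted adjacency matrix
    for i in range(m):
        for j in range(i + 1, m):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < comm_range:
                W[i, j] = W[j, i] = comm_range - d    # weight decays with distance
    L = np.diag(W.sum(axis=1)) - W                    # graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]          # lambda_2: algebraic connectivity

print(fiedler_eigenvalue([[0, 0], [1.5, 0], [3.0, 0]]))   # > 0: connected chain
print(fiedler_eigenvalue([[0, 0], [1.5, 0], [9.0, 0]]))   # ≈ 0: last agent disconnected
```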

4. Stability, Recursive Feasibility, and Theoretical Guarantees

Rigorous guarantees on closed-loop behavior are central to DMPC.

  • Recursive Feasibility: Most DMPC schemes guarantee that, if the local (distributed) OCP is feasible at time $t = 0$, then shifted or appropriately constructed candidate solutions remain feasible at all subsequent times, despite dynamic coupling, reference adaptation, and constraint tightening (Köhler et al., 31 Mar 2025, Alonso et al., 2022, Alonso et al., 2021); a shifted-candidate sketch follows this list. Tube-based tightening and carefully designed invariant sets are key for robustness (Hernandez et al., 2016, Darivianakis et al., 2018).
  • Lyapunov-Based Stability: Stability proofs commonly use Lyapunov or contractive cost function arguments, combining local stage costs, terminal costs, and cooperative or reference-change penalties to establish asymptotic or even exponential convergence to the desired set (e.g., consensus set, cooperation set, equilibrium) (Köhler et al., 31 Mar 2025, Carron et al., 2023, Eini et al., 2019).
  • Turnpike and Performance Bounds: For dynamic cooperation scenarios, transient performance bounds and “turnpike” results quantify the time spent outside neighborhoods of the target set as a function of initial conditions and cost parameters (Köhler et al., 31 Mar 2025).
  • Duality and Convergence of Distributed Algorithms: ADMM and primal-dual algorithms used in DMPC rely on convexity, boundedness, Slater-type conditions, or regularization for convergence; guarantees are established for primal and dual convergence, performance gaps, and residuals (Su et al., 2019, Alonso et al., 2022).
  • Handling Nonconvexity: For nonconvex problems (PWA, hybrid), stability and recursive feasibility are obtained via local convexification and terminal Lyapunov arguments, provided region-switching is properly managed and local invariance/controllability assumptions hold (Mallick et al., 25 Apr 2024).
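
The recursive-feasibility bullet above rests on the standard shifted-candidate construction; a minimal sketch, assuming a linear terminal control law u = K x and a terminal set that is invariant under it, is given below (the matrices and the gain K are illustrative placeholders).

```python
# Build the candidate input sequence for time t+1 from the optimizer of time t:
# drop the first input, then append the terminal controller applied to the
# predicted terminal state. If the terminal set is invariant under u = K x and
# satisfies the constraints, the candidate is feasible: recursive feasibility.
import numpy as np

def shifted_candidate(u_prev, x_terminal_prev, A, B, K):
    """u_prev: (N, m) previously optimal inputs; x_terminal_prev: predicted
    terminal state x(N|t). Returns a feasible warm start for time t+1."""
    u_terminal = K @ x_terminal_prev                     # terminal control law
    u_candidate = np.vstack([u_prev[1:], u_terminal])    # shift and append
    x_terminal_next = A @ x_terminal_prev + B @ u_terminal
    return u_candidate, x_terminal_next                  # stays in the terminal set
```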

5. Practical Implementations and Applications

DMPC is deployed in a broad variety of technological domains:

  • Robotics, Multi-Agent Systems: Cooperative control for formation, rendezvous, deadlock avoidance, or connectivity maintenance in multi-robot, UAV, and satellite constellations. Artificial periodic references and local tracking enable self-organizing collective motion (Köhler et al., 31 Mar 2025, Köhler et al., 2023, Carron et al., 2023).
  • Energy Systems and Buildings: Temperature and energy management in smart buildings and energy hubs, using dual decomposition and local coordination to manage comfort vs. energy consumption at large scale, with proven practical energy and computational savings (Lefebure et al., 2021, Eini et al., 2019).
  • Wind Farms and Renewable Grid Support: Active power control in large wind farms uses DMPC for reference tracking, grid services, and inter-turbine wake modeling to maintain scalability and real-time feasibility (Scheur et al., 2020).
  • Piecewise Affine and Hybrid Systems: Distributed control of automobile platoons, hybrid vehicles, or switched industrial plants, leveraging switching-ADMM to handle nonlinearity and mode constraints (Mallick et al., 25 Apr 2024).
  • Asynchronous and Event-Triggered Control: Adaptive prediction horizon and communication event-triggering reduce computational and communication loads under performance constraints for large, asynchronous networks (Chen et al., 17 May 2024).

Implementation success depends on methods for local prediction and disturbance compensation, efficient protocol design for neighbor-to-neighbor exchange (e.g., ZigBee mesh in IoT building setups), local computation platforms (e.g., embedded processors), and robust handling of communication failures or asynchrony.

6. Design Tradeoffs, Limitations, and Research Challenges

Several key tradeoffs and limitations define the current state and ongoing research in DMPC:

  • Conservatism vs. Feasibility: Constraint tightening and robustly invariant terminal sets guarantee recursive feasibility and robustness, but typically introduce conservatism, shrinking the region of attraction or increasing local problem complexity. This challenge motivates adaptive or learning-based terminal ingredients (Darivianakis et al., 2018, Stürz et al., 2020).
  • Scalability vs. Coupling: Scalability is achieved when each agent’s local problem dimension is independent of global network size, but strong coupling or global constraints can limit this unless handled via locality-enforcing techniques or scalable primal-dual decomposition (Alonso et al., 2021, Alonso et al., 2022).
  • Nonconvexity and Real-Time Solvability: Systems with hybrid, PWA, or nonlinear dynamics present fundamental nonconvexity; distributed MIQP solving is computationally challenging, and algorithms such as switching-ADMM or local convexification are needed to avoid intractability (Mallick et al., 25 Apr 2024).
  • Asynchronous and Stochastic Execution: Real-world implementations must manage network latency, link failures, or asynchrony; techniques include self-triggered update mechanisms and adaptive predictive coordination (Chen et al., 17 May 2024).
  • Distributed System Identification: For unknown systems, data-driven SLS-based DMPC uses trajectory data to parameterize local control policies and system responses, enabling distributed design and implementation with rigorous guarantees (Alonso et al., 2021).
  • Performance Gap and Suboptimality: In some instances, performance of DMPC approaches remains close to centralized MPC, but certain adverse scenarios, nonconvexities, or communication failures can lead to degraded global performance or suboptimal outcomes (Lefebure et al., 2021, Mallick et al., 25 Apr 2024).

7. Representative Algorithms and Techniques

The landscape of DMPC includes several influential algorithmic frameworks, summarized in the table:

Approach/Framework | Key Features/Assumptions | Reference Examples
SLS-based DLMPC | System Level Synthesis, locality, ADMM | (Alonso et al., 2019, Alonso et al., 2021, Alonso et al., 2022)
Tube-based robust DMPC | Nested tube MPC, invariant sets | (Hernandez et al., 2016)
Dual decomposition (augmented) | Coupling via duals, distributed QP/MIQP | (Lefebure et al., 2021, Eini et al., 2019, Stürz et al., 2020)
Primal-dual gradient / consensus | Global coupling, Laplacian consensus | (Su et al., 2019)
Switching-ADMM for PWA systems | Piecewise-affine consensus via QP-only solves | (Mallick et al., 25 Apr 2024)
Data-driven SLS DMPC | Identification from local data, SLS | (Alonso et al., 2021)
Asynchronous / self-triggered DMPC | Adaptive horizon, asynchronous updates | (Chen et al., 17 May 2024)
Artificial-trajectory-based cooperation | Emergent cooperation via DMPC tracking artificial references | (Köhler et al., 2023, Köhler et al., 31 Mar 2025)

By careful selection and synthesis of techniques from this toolbox, DMPC can be tailored to address performance, robustness, and feasibility for a broad array of networked and distributed control problems.
