Decentralized Control Structures

Updated 21 November 2025
  • A decentralized control structure is an architecture in which local controllers make decisions based on limited, local information without a central coordinator, promoting scalability and fault tolerance.
  • Various implementations such as one-hop, partially nested, and hierarchical architectures use techniques like Riccati recursions, multi-parametric programming, and decentralized MPC to optimize performance.
  • While enhancing robustness and privacy, decentralized designs inherently involve trade-offs in global performance due to information constraints and limited inter-controller communication.

A decentralized control structure refers to any architecture or policy class in which local controllers (agents, subsystems) make decisions based on restricted, typically local, information—without reliance on a central coordinator granting real-time access to all global measurements or states. Decentralized control design is of fundamental importance for large-scale, networked, or distributed systems, where physical limitations or explicit architectural choices prevent the use of centralized controllers. Decentralization may be enforced for reasons of scalability, communication, robustness, privacy, or fault tolerance, and is rigorously characterized via explicit information constraints, coupling structures, or decomposition of decision-making authority.

1. Problem Formulations and Taxonomy

Decentralized control structures are usually defined by (i) a global system model comprising multiple physically or functionally partitioned subsystems, and (ii) an information architecture specifying the measurements, states, or messages available to each controller at each decision epoch. In both deterministic and stochastic settings, the information structure can take several canonical forms:

  • Completely decentralized: Each controller knows only its own local measurements or states and acts independently.
  • Partially nested: Whenever one controller's action influences another controller's observations, the influenced controller also has access to the influencing controller's information, as dictated by the plant's coupling structure (the Ho & Chu 1972 paradigm).
  • Partial history sharing: Controllers periodically or asynchronously share selected subsets of their observations and control actions with others (Nayyar et al., 2012, Mahajan et al., 2014).
  • One-hop or k-hop structures: Each controller accesses states in a prescribed neighborhood—e.g., its subsystem and immediate neighbors (topological decentralization) (Jafari et al., 2018).
  • Hierarchical and multi-level: Hybrid structures with distinct local and group-level coordination layers, e.g., low-level groups coordinated by local aggregators or supervisors (Komenda et al., 2014, Kaza et al., 28 Jun 2025).

Decentralization can be task-enforced (as in distributed Model Predictive Control (Riverso et al., 2013), decentralized LQR/LQG (Asghari et al., 2016, Ye et al., 2021)), or can arise as the optimal structure in certain distributionally symmetric coordination problems (Madjidian et al., 2013). Architectures are further classified by the allowed policy class (linear, affine, piecewise-affine, dynamic output feedback, etc.) and the extent of inter-controller communication during or after the control design.

2. Canonical Decentralized Control Structures

Several canonical architectures and structural results for decentralized controllers are well-established in the literature.

2.1. Diagonal-plus-rank-one Coordination

For large-scale homogeneous linear systems with global center-of-mass type constraints, the optimal controller is the sum of local uncoordinated LQR laws plus a global low-rank (rank-one) averaging term. Explicitly, for $v$ agents with continuous-time dynamics

$$\dot{x}_i = A x_i + B u_i, \qquad i = 1, \ldots, v,$$

and local quadratic costs, under a global coordination constraint, the optimal decentralized law for agent $i$ is (Madjidian et al., 2013)

$$u_i(t) = F_a x_i(t) + M_i (F - F_a)\,\bar{x}(t), \qquad \bar{x}(t) = \sum_{j=1}^{v} M_j x_j(t).$$

This controller uses only the local state $x_i(t)$ and the global average $\bar{x}(t)$. Offline synthesis requires solving only two $n \times n$ Riccati equations, and the complexity is independent of $v$.
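
A minimal numerical sketch of this structure is given below. It assumes the two gains $F_a$ and $F$ come from standard continuous-time AREs with a local cost $(Q, R)$ and a coordination-weighted cost $(Q_c, R)$, and treats the weights $M_i$ as scalars; the exact cost matrices in Madjidian et al. (2013) may differ.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def rank_one_coordination_gains(A, B, Q, R, Q_coord):
    """Two n x n AREs: one for the uncoordinated local gain F_a, one for
    the coordination-direction gain F (cost matrices are placeholders,
    not necessarily those of the cited paper)."""
    P_a = solve_continuous_are(A, B, Q, R)
    F_a = -np.linalg.solve(R, B.T @ P_a)        # local LQR gain
    P = solve_continuous_are(A, B, Q_coord, R)
    F = -np.linalg.solve(R, B.T @ P)            # coordinated gain
    return F_a, F

def decentralized_input(F_a, F, M_i, x_i, x_bar):
    """u_i(t) = F_a x_i(t) + M_i (F - F_a) x_bar(t): purely local
    feedback plus a rank-one correction driven by the average state."""
    return F_a @ x_i + M_i * ((F - F_a) @ x_bar)
```

Only the two $n \times n$ AREs are solved offline; online, each agent needs its own state and the (for example, consensus-computed) average $\bar{x}$.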

2.2. One-Hop Decentralization for Network Flow

For finite-horizon optimal control in flow or transport networks with local supply/demand constraints, explicit multi-parametric programming yields the globally optimal feedback in piecewise-affine form. Relaxing constraints involving non-neighboring cells (dropping non-local variables from each controller's policy) results in a decentralized controller at each node that depends only on its own state and those of its downstream neighbors. Under certain monotonicity/positivity conditions, this one-hop decentralized law is globally optimal, and otherwise incurs only a small performance loss even in networks of hundreds of nodes (Jafari et al., 2018).
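
The resulting one-hop law can be implemented as a lookup over precomputed polyhedral regions. The sketch below is illustrative: the region data structure `pieces`, scalar cell states, and tolerance are assumptions rather than the mp-LP output format of the cited work. Each node evaluates an affine law on the vector of its own state and its downstream neighbors' states.

```python
import numpy as np

def one_hop_control(x, downstream, pieces, tol=1e-9):
    """x: dict node -> local state (scalar occupancy/flow, assumed);
    downstream: dict node -> list of downstream neighbor ids;
    pieces: dict node -> list of (H, h, K, k), where H z <= h defines a
    polyhedral region and u_i = K z + k is the affine law on
    z = [x_i, x_j for j downstream of i]."""
    u = {}
    for i, x_i in x.items():
        z = np.array([x_i] + [x[j] for j in downstream[i]])
        for H, h, K, k in pieces[i]:
            if np.all(H @ z <= h + tol):      # first active region wins
                u[i] = K @ z + k
                break
        else:
            raise ValueError(f"node {i}: state outside all stored regions")
    return u
```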

2.3. Partial History Sharing and Sufficient Statistic/Coordinator Reformulation

General decentralized stochastic control problems with arbitrary partial history sharing can always be reformulated as a centralized POMDP from the "common information" (shared memory) viewpoint (Nayyar et al., 2012). The fictitious coordinator issues partial prescriptions (maps from local information to actions), with the optimal decentralized policy for agent ii taking the form

$$A_t^i = \hat{g}_t^i\left(Y_t^i, M_t^i, \Pi_t\right)$$

where $(Y_t^i, M_t^i)$ are the agent's private observation and memory, and $\Pi_t$ denotes the belief over global state and local memories given the shared history.
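
The pseudocode-style sketch below shows how a single decision epoch decomposes under this reformulation; the names (`prescription_rule`, `update_belief`, `shared_increment`) are illustrative placeholders, not an API from the cited papers.

```python
def decision_epoch(agents, common_belief, prescription_rule,
                   update_belief, shared_increment):
    """One step of the coordinator viewpoint: prescriptions are chosen
    from common information only; each agent evaluates its prescription
    on private data, so A_t^i = g_t^i(Y_t^i, M_t^i, Pi_t)."""
    actions = {}
    for i, (y_i, m_i) in agents.items():
        gamma_i = prescription_rule(i, common_belief)   # depends only on Pi_t
        actions[i] = gamma_i(y_i, m_i)                  # private evaluation
    # every agent can update Pi_t identically from the newly shared data
    new_belief = update_belief(common_belief, shared_increment)
    return actions, new_belief
```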

2.4. Substitutable Actions and Action-Decomposable Optimality

In the decentralized LQG setting, when the system and cost are such that controllers can "substitute for each other" in open-loop, the optimal decentralized law is linear, memoryless, and achieves centralized performance. In such systems, each controller $i$ applies

$$u_t^i = \Lambda^i K_t^i x_t^i$$

where $K_t^i$ is the block of the centralized gain, and $\Lambda^i$ is a substitution mapping. This holds regardless of whether the information structure is partially nested or quadratically invariant (Asghari et al., 2016).
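
A minimal sketch of the resulting implementation follows, assuming the centralized gain has already been computed and that `blocks[i]` gives agent $i$'s (row, column) index slices; these names are illustrative.

```python
import numpy as np

def substitutable_input(K_central, Lambda, blocks, i, x_i):
    """u_t^i = Lambda^i K_t^i x_t^i: slice agent i's block out of the
    centralized gain and apply the substitution map Lambda^i.
    Only the local state x_t^i is needed online."""
    rows, cols = blocks[i]                 # index slices for agent i
    K_i = K_central[rows, cols]            # block of the centralized gain
    return Lambda[i] @ K_i @ x_i
```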

3. Mathematical Properties and Implementation

3.1. Scalability

Decentralized controllers typically have local computational/implementation complexity that scales with the dimension of the local subsystem or neighborhood, not the full network. The complexity of synthesizing optimal decentralized controllers may grow with the graph size for general, non-separable cost or hard coupling, but for structures described above—tube-based MPC (Riverso et al., 2013), low-rank coordination (Madjidian et al., 2013), or one-hop flow (Jafari et al., 2018)—the per-agent control law and required online computation are fixed-size, independent of the global network.

3.2. Communication

  • Purely decentralized architectures require no real-time exchange beyond local sensing/actuation.
  • Low-rank coordination or mean-field-like terms may require distributed averaging, which can be achieved via lightweight consensus/aggregation protocols (Madjidian et al., 2013); a minimal consensus sketch is given after this list.
  • One-hop or k-hop structures necessitate communication only with local neighbors (Jafari et al., 2018).
  • Supervisory/hierarchical architectures involve controllers at multiple layers, where inter-group communication is limited to group boundaries (Komenda et al., 2014).
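
As referenced in the second bullet above, the sketch below runs a simple unweighted average-consensus iteration over an undirected, connected neighbor graph; the uniform weights and step-size bound are standard assumptions and are not taken from the cited paper, whose averaging term is weighted by the $M_j$.

```python
import numpy as np

def consensus_average(x, neighbors, alpha=0.1, iters=200):
    """x: (v, n) array of local states; neighbors: dict i -> list of j.
    With a symmetric graph and alpha < 1/max_degree, every row of the
    result converges to the network-wide average of the initial states."""
    z = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        z_next = z.copy()
        for i, nbrs in neighbors.items():
            for j in nbrs:
                z_next[i] += alpha * (z[j] - z[i])   # diffuse toward neighbors
        z = z_next
    return z
```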

3.3. Structural Limitations and Performance

Decentralization introduces hard constraints on achievable closed-loop performance due to either information limits or enforced local policies. Nevertheless, for some topologies and under suitable structural assumptions (partial nestedness, substitutable actions, monotonicity), optimal decentralized controllers can attain centralized-optimal performance (Asghari et al., 2016, Jafari et al., 2018). When strong coupling or nonclassical information structures are present, there may be an unavoidable (and quantifiable) performance gap which can sometimes be lower-bounded via convex information relaxations (Lin et al., 2017).

4. Approaches to Synthesis and Optimization

The design of decentralized controllers is highly sensitive to the plant dynamics and information structure.

  • Convex and tractable subclasses: For partially nested information, the decentralized LQR/LQG problem can be solved via coupled Riccati recursions (Ye et al., 2021, Mahajan et al., 2014); a single-recursion building block is sketched after this list.
  • Multi-parametric programming: For flow/network systems with piecewise-affine constraints/costs, multi-parametric LP yields explicit closed-form local policies—global optimum subject to monotonicity (Jafari et al., 2018).
  • Assume–Guarantee Contracts: For general non-classical structures, robust decentralized synthesis can be formulated as a joint optimization over affine policy parameters and contract sets constraining coupling signals, leading to implementable controllers via SDP relaxations (Lin et al., 2020).
  • Reinforcement learning: For articulated robots or large-scale multi-agent systems, decentralized controllers can be learned as distributed policies (e.g., A3C applied per robot body segment), leveraging modularity and local observability (Sartoretti et al., 2019).
  • Plug-and-play architectures: Decentralized robust MPC with tube-based invariant sets enables local controllers to be synthesized/retuned automatically as new subsystems are attached or removed, guaranteeing overall stability and constraint satisfaction (Riverso et al., 2013).
  • Hierarchical and supervisory synthesis: Decomposition into local, group, and global supervisors enables scalable synthesis for large discrete-event systems and multi-level automata, provided conditions of two-level conditional controllability/decomposability are satisfied (Komenda et al., 2014).
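
For the first bullet, the sketch below shows the standard finite-horizon discrete-time Riccati recursion that serves as the building block; the coupled recursions in the cited works run several such recursions on subsystem-level data and are not reproduced here.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for the cost
    sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T; returns the
    time-varying gains K_t such that u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return list(reversed(gains))   # gains[t] is K_t for t = 0, ..., T-1
```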

5. Performance, Limitations, and Extensions

Decentralized architectures are inherently robust to failures, allow for scalable synthesis and real-time implementation, and often have strong modularity and "plug-and-play" properties (Riverso et al., 2013, Chandan, 2021). However, they typically exhibit a trade-off between achievable global performance and the degree of decentralization; fully decentralized systems may be suboptimal compared to centralized or hierarchical designs, especially under strong subsystem coupling (Chandan, 2021).

Hybrid structures and thresholding policies have been shown to interpolate between pure decentralization and hierarchical centralization, balancing operator cognizance, response latency, and system scalability (Kelly et al., 5 Aug 2024). In cyber-physical systems, hierarchical decentralized stochastic control with budget-constrained resource allocation can achieve full optimality under certain monotonicity and reward assumptions, but otherwise incurs a bounded loss compared to the centralized solution (Kaza et al., 28 Jun 2025).

Decentralized estimation and control in leader–follower or generally asymmetric systems provide further examples: exploiting one-sided information structures allows for exact optimal decompositions and tractable feedback design, even in the presence of forward-backward stochastic couplings (Luo et al., 18 Sep 2025).

6. Advanced Topics and Open Questions

  • Partial Nestedness vs. Non-classical Structures: The stark contrast in complexity and convexity between partially nested and nonclassical patterns underlies most algorithmic boundaries (Lin et al., 2017, Lin et al., 2020).
  • Information Structure Limitations: Not all no-signaling or local strategies are implementable by purely decentralized agents; certain statistical/randomized protocols may require active or mediated common randomness, not passive local information (Dhingra et al., 20 Jan 2024).
  • Scalability to Very Large Systems: For systems with tens or hundreds of agents, only those decentralized architectures whose implementation and online execution complexity do not scale with network size are practical (Madjidian et al., 2013, Jafari et al., 2018).
  • Learning and Adaptation: End-to-end sample complexity for learning decentralized controllers under partially nested information structures matches centralized learning rates, provided sample-based identification and decentralized synthesis are tightly integrated (Ye et al., 2021).
  • Hybrid/Adaptive Architectures: Empirical studies in swarm robotics and building control indicate that hybrid architectures—allowing local adaptation of control structure based on online metrics or event density—can achieve near-centralized performance while preserving robustness (Kelly et al., 5 Aug 2024, Chandan, 2021).

Decentralized control structure design remains a vibrant area of research, with ongoing developments in information theory, optimization, learning, and networked cyber-physical system applications. Rigorous characterization of structural optimality, efficient and scalable synthesis algorithms, and robustness to communication or modeling uncertainty are core themes in the contemporary literature.
