Distributed ARX Estimation Techniques
- Distributed ARX estimation is a framework enabling sensor networks to collaboratively identify unknown ARX model orders and parameters using local and neighboring data.
- Techniques integrate local information criteria with recursive least squares and consensus diffusion to ensure strong convergence and order consistency.
- The approach is robust to noise and weak individual sensor excitation, making it valuable for adaptive signal processing and decentralized control applications.
Distributed ARX estimation addresses the collaborative identification of both model order and parameters for autoregressive systems with exogenous inputs (ARX) in multi-agent sensor networks. This problem is fundamental in scenarios where networked agents must learn the dynamics of an unknown stochastic system using only local and neighboring information, and where the model complexity (orders) is also unknown. Modern distributed ARX estimation schemes integrate local statistical model selection, recursive least squares (RLS), information diffusion, and cooperative excitation concepts to achieve strong convergence guarantees under minimal stochastic assumptions, without requiring global data centralization or independent input processes (Gan et al., 2021; Kar et al., 2013).
1. ARX Model Structures and Distributed Observation Setting
In the prototypical distributed ARX context, each of $n$ sensors (agents) $i = 1, \dots, n$ observes, at discrete time $t$,

$$y_i(t+1) = \sum_{k=1}^{p_0} a_k\, y_i(t+1-k) + \sum_{k=1}^{q_0} b_k\, u_i(t+1-k) + w_i(t+1),$$

where $p_0$ and $q_0$ are the (unknown) orders of the autoregressive and exogenous-input components, $\{a_k\}$ and $\{b_k\}$ are the unknown system parameters, and $w_i(t+1)$ is zero-mean observation noise. The problem is to jointly estimate both $(p_0, q_0)$ and $\theta = [a_1, \dots, a_{p_0}, b_1, \dots, b_{q_0}]^\top$ in a distributed fashion, leveraging the inter-agent communication graph for cooperation.
Model compactness for arbitrary candidate orders $(p, q)$ is achieved by defining the regression vector

$$\varphi_i^{(p,q)}(t) = \big[y_i(t), \dots, y_i(t+1-p),\; u_i(t), \dots, u_i(t+1-q)\big]^\top,$$

such that, at the true orders,

$$y_i(t+1) = \theta^\top \varphi_i^{(p_0,q_0)}(t) + w_i(t+1).$$
Each agent maintains local estimates for candidate orders and parameters, and exchanges information with its neighborhood $\mathcal{N}_i$ as specified by the network topology (Gan et al., 2021).
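To make the data model concrete, the following is a minimal sketch of the ARX recursion and the stacked regression vector; the helper names `simulate_arx` and `regressor` are illustrative, not from the cited papers, and observations are assumed scalar.

```python
import numpy as np

def regressor(y, u, t, p, q):
    """Stacked regression vector phi^{(p,q)}(t) = [y(t),...,y(t+1-p), u(t),...,u(t+1-q)]."""
    return np.concatenate([y[t - np.arange(p)], u[t - np.arange(q)]])

def simulate_arx(a, b, T, noise_std=0.1, rng=None):
    """Simulate y(t+1) = sum_k a_k y(t+1-k) + sum_k b_k u(t+1-k) + w(t+1)
    for one node, with an i.i.d. Gaussian exogenous input u."""
    rng = np.random.default_rng(rng)
    p, q = len(a), len(b)
    theta = np.concatenate([a, b])      # unknown parameter vector
    u = rng.standard_normal(T)          # exogenous input sequence
    y = np.zeros(T)
    for t in range(max(p, q), T - 1):
        y[t + 1] = theta @ regressor(y, u, t, p, q) + noise_std * rng.standard_normal()
    return y, u
```

In a network, each agent $i$ would run such a recursion with its own input and noise realizations while sharing a common $\theta$.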
2. Local Information Criteria for Distributed Order Selection
Selection of the correct ARX order pair is realized via a distributed Local Information Criterion (LIC) framework. At each time $t$, agent $i$ computes, for each candidate pair $(p, q)$,

$$L_i(p, q, t) = \sigma_i(p, q, t) + (p + q)\, a_t,$$

with

$$\sigma_i(p, q, t) = \sum_{j \in \mathcal{N}_i} w_{ij} \sum_{k=0}^{t-1} \big(y_j(k+1) - \theta_i^\top(p, q, k)\, \varphi_j^{(p,q)}(k)\big)^2,$$

where $\{w_{ij}\}$ are neighbor weights and $\{a_t\}$ is a non-decreasing penalty sequence, typically $a_t = \log t$. The first term accumulates squared prediction errors (locally and from neighbors), while the penalty controls model complexity, suppressing overfitting as $t$ increases. The current model order estimate at node $i$ is

$$(\hat p_i(t), \hat q_i(t)) = \operatorname*{arg\,min}_{(p,q) \in \{1,\dots,p^*\} \times \{1,\dots,q^*\}} L_i(p, q, t),$$

where $p^*$ and $q^*$ are known upper bounds, or—if unknown—are replaced by an expanding search set (Gan et al., 2021).
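The LIC selection step can be sketched as follows. For simplicity this sketch refits a batch least-squares estimate per candidate order instead of the paper's recursive estimates, and uses a `log t` penalty; `lic_order_select` and its arguments are illustrative names.

```python
import numpy as np
from itertools import product

def lic_order_select(data, p_max, q_max, t):
    """Pick (p, q) minimizing accumulated squared prediction error + (p+q)*log(t).
    `data` is a list of (y, u) trajectories from the node and its neighbors."""
    best, best_pq = np.inf, (1, 1)
    for p, q in product(range(1, p_max + 1), range(1, q_max + 1)):
        sse = 0.0
        for y, u in data:
            # regression matrix for candidate order (p, q)
            rows = [np.concatenate([y[k - np.arange(p)], u[k - np.arange(q)]])
                    for k in range(max(p, q), t)]
            Phi = np.array(rows)
            target = y[max(p, q) + 1 : t + 1]
            theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
            sse += np.sum((target - Phi @ theta) ** 2)
        lic = sse + (p + q) * np.log(t)   # penalize model complexity
        if lic < best:
            best, best_pq = lic, (p, q)
    return best_pq
```

The penalty term is what separates correct from over-parameterized orders: extra parameters reduce the residual sum only marginally, by less than the $a_t$ increment, once enough data has accumulated.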
3. Distributed Recursive Least Squares and Information Diffusion
Given an order selection $(\hat p_i(t), \hat q_i(t))$, each agent implements a consensus-type distributed Recursive Least Squares (RLS) algorithm. The "adaptation" step at sensor $i$ reads

$$\bar\theta_i(t+1) = \theta_i(t) + \frac{P_i(t)\,\varphi_i(t)}{1 + \varphi_i^\top(t)\, P_i(t)\, \varphi_i(t)}\,\big(y_i(t+1) - \varphi_i^\top(t)\,\theta_i(t)\big),$$

$$\bar P_i(t+1) = P_i(t) - \frac{P_i(t)\,\varphi_i(t)\,\varphi_i^\top(t)\,P_i(t)}{1 + \varphi_i^\top(t)\, P_i(t)\, \varphi_i(t)},$$

followed by a "diffusion" (consensus) step:

$$\theta_i(t+1) = \sum_{j \in \mathcal{N}_i} w_{ij}\, \bar\theta_j(t+1).$$

Alternatively, gradient-form stochastic approximation updates or consensus+innovations laws can be used, as in the general distributed exponential family estimation framework (Kar et al., 2013). Here, the update at agent $i$ is

$$x_i(t+1) = x_i(t) - \beta_t \sum_{j \in \mathcal{N}_i} \big(x_i(t) - x_j(t)\big) + \alpha_t\, K_i(t)\,\big(z_i(t) - h_i(x_i(t))\big),$$

with innovation stepsize $\alpha_t$, consensus stepsize $\beta_t$, and adaptive gain $K_i(t)$.
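The adaptation-then-diffusion structure can be sketched as below. Note one simplification relative to the literature: the covariance-like matrices $P_i$ are combined by the same convex weights as the parameter estimates, whereas some schemes instead combine information matrices $P_i^{-1}$; the function name `diffusion_rls` is illustrative.

```python
import numpy as np

def diffusion_rls(Phis, Ys, W, d, lam0=100.0):
    """Run n agents through RLS adaptation followed by a diffusion step.
    Phis[i][t]: regressor of agent i at time t; Ys[i][t]: observation y_i(t+1).
    W: n x n row-stochastic combination matrix (W[i, j] > 0 iff j neighbors i)."""
    n, T = len(Phis), len(Ys[0])
    theta = [np.zeros(d) for _ in range(n)]
    P = [lam0 * np.eye(d) for _ in range(n)]
    for t in range(T):
        theta_bar, P_bar = [], []
        for i in range(n):                      # local adaptation
            phi, y = Phis[i][t], Ys[i][t]
            denom = 1.0 + phi @ P[i] @ phi
            g = P[i] @ phi / denom              # RLS gain vector
            theta_bar.append(theta[i] + g * (y - phi @ theta[i]))
            P_bar.append(P[i] - np.outer(g, phi @ P[i]))
        for i in range(n):                      # diffusion over neighbors
            theta[i] = sum(W[i, j] * theta_bar[j] for j in range(n))
            P[i] = sum(W[i, j] * P_bar[j] for j in range(n))
    return theta
```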
4. Cooperative Excitation and Global Identifiability
The cooperative excitation condition is devised to ensure identifiability of system orders and parameters even under regressors that are correlated and/or nonstationary, i.e., weakening classical persistent excitation. Formally, for the maximally over-parameterized setting $(p^*, q^*)$, the condition requires

$$\frac{\log \lambda_{\max}(t)}{\lambda_{\min}(t)} \longrightarrow 0 \quad \text{as } t \to \infty,$$

almost surely, with $\lambda_{\max}(t)$ and $\lambda_{\min}(t)$ defined as the extreme eigenvalues of the network-accumulated information matrix

$$I + \sum_{j=1}^{n} \sum_{k=0}^{t} \varphi_j^{(p^*,q^*)}(k)\, \varphi_j^{(p^*,q^*)\top}(k).$$
Collective network excitation, even in the presence of individually weak sensors, guarantees the statistical growth of the covariance matrices in all directions, ensuring convergence of both order and parameter estimates (Gan et al., 2021).
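The benefit of pooling excitation across nodes is easy to demonstrate numerically: in the toy check below (illustrative helper `excitation_check`), each individual sensor's regressors span only one direction, so every local information matrix is singular, yet the network-summed matrix is well conditioned.

```python
import numpy as np

def excitation_check(regressors_per_node):
    """Compare per-node vs. network-wide minimum eigenvalues of the
    accumulated information matrices sum_t phi(t) phi(t)^T."""
    infos = [sum(np.outer(phi, phi) for phi in phis) for phis in regressors_per_node]
    local_min = [np.linalg.eigvalsh(M)[0] for M in infos]   # eigvalsh sorts ascending
    network_min = np.linalg.eigvalsh(sum(infos))[0]
    return local_min, network_min
```

Here no single node satisfies persistent excitation, but the cooperative condition can still hold because the summed covariance grows in all directions.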
5. Statistical Guarantees and Convergence Theory
Under the martingale difference noise model and graph connectivity, the following consistency results are established:
- Order Consistency: $(\hat p_i(t), \hat q_i(t)) \to (p_0, q_0)$ as $t \to \infty$, almost surely for all $i$ (Theorem 3.1).
- Parameter Consistency: $\theta_i(t) \to \theta$ as $t \to \infty$, almost surely for all $i$ (Theorem 3.2).
Proof strategies combine martingale convergence arguments, stochastic Lyapunov techniques for RLS-type updates, and careful analysis of the local information criteria under correct and incorrect model orders. The double-array martingale limit theorem is crucial for establishing convergence when the model order itself is time-varying (Gan et al., 2021).
For fixed-order estimation, the consensus+innovations estimator achieves asymptotic efficiency (the inverse centralized Fisher information) under global observability and mean connectivity of the network. The estimate at each node $i$ attains

$$\sqrt{t}\,\big(x_i(t) - \theta\big) \xrightarrow{\;d\;} \mathcal{N}\big(0, \bar I^{-1}(\theta)\big),$$

with centralized Fisher information $\bar I(\theta) = \sum_{j=1}^{n} I_j(\theta)$ (Kar et al., 2013).
6. Order and Parameter Estimation Without Upper Bounds
When prior upper bounds $(p^*, q^*)$ are unavailable, the order search space is incrementally enlarged, e.g., to $\{1, \dots, m_t\}$ with $m_t \uparrow \infty$ slowly. A nested minimization is applied:
- For each $q \in \{1, \dots, m_t\}$, run the diffusion-RLS at order $(m_t, q)$ and compute $L_i(m_t, q, t)$.
- Select $\hat q_i(t) = \arg\min_q L_i(m_t, q, t)$, then minimize $L_i(p, \hat q_i(t), t)$ over $p \in \{1, \dots, m_t\}$.
- Rerun RLS at the chosen order $(\hat p_i(t), \hat q_i(t))$.

A modified cooperative excitation condition and double-array martingale arguments yield that $(\hat p_i(t), \hat q_i(t)) \to (p_0, q_0)$ and ultimately $\theta_i(t) \to \theta$ almost surely (Gan et al., 2021).
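The nested search over an expanding set can be sketched compactly. The criterion values would come from the diffusion-RLS residuals in practice; here `lic` is an abstract callable, and the growth schedule $m_t \approx \log t$ is an assumed example, not a prescription from the cited work.

```python
import math

def nested_order_search(lic, t):
    """Nested minimization over the expanding search set {1,...,m_t}, m_t ~ log t.
    `lic(p, q)` returns the local information criterion value at candidate order (p, q)."""
    m_t = max(1, math.ceil(math.log(t)))
    # Step 1: fix p at the edge of the search set, minimize over q ...
    q_hat = min(range(1, m_t + 1), key=lambda q: lic(m_t, q))
    # Step 2: ... then minimize over p with q_hat fixed
    p_hat = min(range(1, m_t + 1), key=lambda p: lic(p, q_hat))
    return p_hat, q_hat
```

Splitting the joint $(p, q)$ search into two one-dimensional passes keeps the per-step cost linear in $m_t$ rather than quadratic.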
7. Applications, Practical Considerations, and Extensions
Distributed ARX estimation is robust to stochastic feedback and correlated input scenarios, as it does not require independence or stationarity of the regression process. The cooperative excitation framework enables the network to succeed even when individual nodes fail to satisfy classical persistent excitation, highlighting the advantage of sensor cooperation.
Potential extensions of the distributed ARX estimation paradigm include:
- Distributed ARMAX (inclusion of moving-average terms),
- Time-varying parameter ARX models,
- Nonlinear or kernelized ARX estimators (adapting the LIC penalty and local recurrence structures accordingly).
For the distributed consensus+innovations method, innovation stepsizes $\alpha_t = a/(t+1)$ and consensus weights $\beta_t = b/(t+1)^{\tau}$ with $\tau \in (0, 1)$, so that $\beta_t / \alpha_t \to \infty$, are recommended for achieving optimal rates and covariance properties. Adaptive gain tuning via Fisher information consensus is practical when sensor models are heterogeneous (Kar et al., 2013).
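A minimal sketch of a consensus+innovations recursion with these stepsize schedules, for the simplest case of a common scalar parameter observed in noise (linear observation model, gain $K_i(t) = 1$); the function name and constants are illustrative.

```python
import numpy as np

def consensus_innovations(obs, A, alpha0=1.0, beta0=0.2, tau=0.5):
    """Consensus+innovations estimation of a common scalar parameter.
    obs[t, i] = z_i(t), a noisy observation of theta at node i;
    A: symmetric adjacency matrix of the communication graph.
    Stepsizes: alpha_t = alpha0/(t+1) (innovation), beta_t = beta0/(t+1)**tau (consensus)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian encodes the consensus term
    x = np.zeros(n)                     # one estimate per node
    for t in range(obs.shape[0]):
        alpha = alpha0 / (t + 1)
        beta = beta0 / (t + 1) ** tau
        # x_i <- x_i - beta * sum_j (x_i - x_j) + alpha * (z_i(t) - x_i)
        x = x - beta * (L @ x) + alpha * (obs[t] - x)
    return x
```

Since $\tau < 1$, the consensus weight decays more slowly than the innovation weight, so disagreement between nodes is averaged out on a faster timescale than the parameter is learned.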
Summary Table of Key Elements in Distributed ARX Estimation
| Component | Key Equation/Concept | Reference |
|---|---|---|
| ARX model (node $i$) | $y_i(t+1) = \theta^\top \varphi_i^{(p_0,q_0)}(t) + w_i(t+1)$ | (Gan et al., 2021) |
| Local Information Criterion | $L_i(p,q,t) = \sigma_i(p,q,t) + (p+q)\,a_t$ | (Gan et al., 2021) |
| Distributed RLS Update | Adaptation + diffusion (consensus) | (Gan et al., 2021) |
| Consensus+Innovations | Innovation stepsize $\alpha_t$ + consensus stepsize $\beta_t$ | (Kar et al., 2013) |
| Cooperative Excitation | $\log\lambda_{\max}(t) / \lambda_{\min}(t) \to 0$ a.s. | (Gan et al., 2021) |
| Statistical Guarantees | Strong consistency, asymptotic efficiency | (Gan et al., 2021; Kar et al., 2013) |
Distributed ARX estimation presents a unified framework for decentralized system identification in networked environments with unknown dynamics and is substantiated by rigorous convergence analysis, with broad applicability to adaptive signal processing, control, and sensor networks.